Hi; this is one last arm pullreq before the end of the year.
Mostly minor cleanups, and also implementation of the
FEAT_XS architectural feature.

thanks
-- PMM

The following changes since commit 8032c78e556cd0baec111740a6c636863f9bd7c8:

  Merge tag 'firmware-20241216-pull-request' of https://gitlab.com/kraxel/qemu into staging (2024-12-16 14:20:33 -0500)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241217

for you to fetch changes up to e91254250acb8570bd7b8a8f89d30e6d18291d02:

  tests/functional: update sbsa-ref firmware used in test (2024-12-17 15:21:06 +0000)

----------------------------------------------------------------
target-arm queue:
 * remove a line of redundant code
 * convert various TCG helper fns to use 'fpst' alias
 * Use float_status in helper_fcvtx_f64_to_f32
 * Use float_status in helper_vfp_fcvt{ds,sd}
 * Implement FEAT_XS
 * hw/intc/arm_gicv3_its: Zero initialize local DTEntry etc structs
 * tests/functional: update sbsa-ref firmware used in test

----------------------------------------------------------------
Denis Rastyogin (1):
      target/arm: remove redundant code

Manos Pitsidianakis (3):
      target/arm: Add decodetree entry for DSB nXS variant
      target/arm: Enable FEAT_XS for the max cpu
      tests/tcg/aarch64: add system test for FEAT_XS

Marcin Juszkiewicz (1):
      tests/functional: update sbsa-ref firmware used in test

Peter Maydell (4):
      target/arm: Implement fine-grained-trap handling for FEAT_XS
      target/arm: Add ARM_CP_ADD_TLBI_NXS type flag for NXS insns
      target/arm: Add ARM_CP_ADD_TLBI_NXS type flag to TLBI insns
      hw/intc/arm_gicv3_its: Zero initialize local DTEntry etc structs

Richard Henderson (10):
      target/arm: Convert vfp_helper.c to fpst alias
      target/arm: Convert helper-a64.c to fpst alias
      target/arm: Convert vec_helper.c to fpst alias
      target/arm: Convert neon_helper.c to fpst alias
      target/arm: Convert sve_helper.c to fpst alias
      target/arm: Convert sme_helper.c to fpst alias
      target/arm: Convert vec_helper.c to use env alias
      target/arm: Convert neon_helper.c to use env alias
      target/arm: Use float_status in helper_fcvtx_f64_to_f32
      target/arm: Use float_status in helper_vfp_fcvt{ds,sd}

 docs/system/arm/emulation.rst            |   1 +
 target/arm/cpregs.h                      |  80 ++--
 target/arm/cpu-features.h                |   5 +
 target/arm/helper.h                      | 638 +++++++++++++++----------------
 target/arm/tcg/helper-a64.h              | 116 +++---
 target/arm/tcg/helper-sme.h              |   4 +-
 target/arm/tcg/helper-sve.h              | 426 ++++++++++-----------
 target/arm/tcg/a64.decode                |   3 +
 hw/intc/arm_gicv3_its.c                  |  44 +--
 target/arm/helper.c                      |  30 +-
 target/arm/tcg/cpu64.c                   |   1 +
 target/arm/tcg/helper-a64.c              | 101 ++---
 target/arm/tcg/neon_helper.c             |  27 +-
 target/arm/tcg/op_helper.c               |  11 +-
 target/arm/tcg/sme_helper.c              |   8 +-
 target/arm/tcg/sve_helper.c              |  96 ++---
 target/arm/tcg/tlb-insns.c               | 202 ++++++----
 target/arm/tcg/translate-a64.c           |  26 +-
 target/arm/tcg/translate-vfp.c           |   4 +-
 target/arm/tcg/vec_helper.c              |  81 ++--
 target/arm/vfp_helper.c                  | 130 +++----
 tests/tcg/aarch64/system/feat-xs.c       |  27 ++
 tests/functional/test_aarch64_sbsaref.py |  20 +-
 23 files changed, 1083 insertions(+), 998 deletions(-)
 create mode 100644 tests/tcg/aarch64/system/feat-xs.c

From: Denis Rastyogin <gerben@altlinux.org>

This call is redundant as it only retrieves a value that is not used further.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Signed-off-by: Denis Rastyogin <gerben@altlinux.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241212120618.518369-1-gerben@altlinux.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/vfp_helper.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rintd)(float64 x, void *fp_status)
 
     ret = float64_round_to_int(x, fp_status);
 
-    new_flags = get_float_exception_flags(fp_status);
-
     /* Suppress any inexact exceptions the conversion produced */
     if (!(old_flags & float_flag_inexact)) {
         new_flags = get_float_exception_flags(fp_status);
-- 
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241206031224.78525-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h     | 268 ++++++++++++++++++++--------------------
 target/arm/vfp_helper.c | 120 ++++++++----------
 2 files changed, 186 insertions(+), 202 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(probe_access, TCG_CALL_NO_WG, void, env, tl, i32, i32, i32)
 DEF_HELPER_1(vfp_get_fpscr, i32, env)
 DEF_HELPER_2(vfp_set_fpscr, void, env, i32)
 
-DEF_HELPER_3(vfp_addh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_adds, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_addd, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_subh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_subs, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_subd, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_mulh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_muls, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_muld, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_divh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_divs, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_divd, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_maxh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_maxs, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_maxd, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_minh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_mins, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_mind, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_maxnumh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_maxnums, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
-DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
-DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
-DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
-DEF_HELPER_2(vfp_sqrth, f16, f16, ptr)
-DEF_HELPER_2(vfp_sqrts, f32, f32, ptr)
-DEF_HELPER_2(vfp_sqrtd, f64, f64, ptr)
+DEF_HELPER_3(vfp_addh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_adds, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_addd, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_subh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_subs, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_subd, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_mulh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_muls, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_muld, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_divh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_divs, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_divd, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_maxh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_maxs, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_maxd, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_minh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_mins, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_mind, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_maxnumh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_maxnums, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, fpst)
+DEF_HELPER_3(vfp_minnumh, f16, f16, f16, fpst)
+DEF_HELPER_3(vfp_minnums, f32, f32, f32, fpst)
+DEF_HELPER_3(vfp_minnumd, f64, f64, f64, fpst)
+DEF_HELPER_2(vfp_sqrth, f16, f16, fpst)
+DEF_HELPER_2(vfp_sqrts, f32, f32, fpst)
+DEF_HELPER_2(vfp_sqrtd, f64, f64, fpst)
 DEF_HELPER_3(vfp_cmph, void, f16, f16, env)
 DEF_HELPER_3(vfp_cmps, void, f32, f32, env)
 DEF_HELPER_3(vfp_cmpd, void, f64, f64, env)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
 
 DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
 DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
-DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
-DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, ptr)
+DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, fpst)
+DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, fpst)
 
-DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
-DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
-DEF_HELPER_2(vfp_uitod, f64, i32, ptr)
-DEF_HELPER_2(vfp_sitoh, f16, i32, ptr)
-DEF_HELPER_2(vfp_sitos, f32, i32, ptr)
-DEF_HELPER_2(vfp_sitod, f64, i32, ptr)
+DEF_HELPER_2(vfp_uitoh, f16, i32, fpst)
+DEF_HELPER_2(vfp_uitos, f32, i32, fpst)
+DEF_HELPER_2(vfp_uitod, f64, i32, fpst)
+DEF_HELPER_2(vfp_sitoh, f16, i32, fpst)
+DEF_HELPER_2(vfp_sitos, f32, i32, fpst)
+DEF_HELPER_2(vfp_sitod, f64, i32, fpst)
 
-DEF_HELPER_2(vfp_touih, i32, f16, ptr)
-DEF_HELPER_2(vfp_touis, i32, f32, ptr)
-DEF_HELPER_2(vfp_touid, i32, f64, ptr)
-DEF_HELPER_2(vfp_touizh, i32, f16, ptr)
-DEF_HELPER_2(vfp_touizs, i32, f32, ptr)
-DEF_HELPER_2(vfp_touizd, i32, f64, ptr)
-DEF_HELPER_2(vfp_tosih, s32, f16, ptr)
-DEF_HELPER_2(vfp_tosis, s32, f32, ptr)
-DEF_HELPER_2(vfp_tosid, s32, f64, ptr)
-DEF_HELPER_2(vfp_tosizh, s32, f16, ptr)
-DEF_HELPER_2(vfp_tosizs, s32, f32, ptr)
-DEF_HELPER_2(vfp_tosizd, s32, f64, ptr)
+DEF_HELPER_2(vfp_touih, i32, f16, fpst)
+DEF_HELPER_2(vfp_touis, i32, f32, fpst)
+DEF_HELPER_2(vfp_touid, i32, f64, fpst)
+DEF_HELPER_2(vfp_touizh, i32, f16, fpst)
+DEF_HELPER_2(vfp_touizs, i32, f32, fpst)
+DEF_HELPER_2(vfp_touizd, i32, f64, fpst)
+DEF_HELPER_2(vfp_tosih, s32, f16, fpst)
+DEF_HELPER_2(vfp_tosis, s32, f32, fpst)
+DEF_HELPER_2(vfp_tosid, s32, f64, fpst)
+DEF_HELPER_2(vfp_tosizh, s32, f16, fpst)
+DEF_HELPER_2(vfp_tosizs, s32, f32, fpst)
+DEF_HELPER_2(vfp_tosizd, s32, f64, fpst)
 
-DEF_HELPER_3(vfp_toshh_round_to_zero, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toslh_round_to_zero, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_touhh_round_to_zero, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toulh_round_to_zero, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toshs_round_to_zero, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_tosls_round_to_zero, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_touhs_round_to_zero, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_touls_round_to_zero, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_toshd_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tosld_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tosqd_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_touhd_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tould_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_touqd_round_to_zero, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_touhh, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toshh, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toulh, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_toslh, i32, f16, i32, ptr)
-DEF_HELPER_3(vfp_touqh, i64, f16, i32, ptr)
-DEF_HELPER_3(vfp_tosqh, i64, f16, i32, ptr)
-DEF_HELPER_3(vfp_toshs, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_tosls, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_tosqs, i64, f32, i32, ptr)
-DEF_HELPER_3(vfp_touhs, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_touls, i32, f32, i32, ptr)
-DEF_HELPER_3(vfp_touqs, i64, f32, i32, ptr)
-DEF_HELPER_3(vfp_toshd, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tosld, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tosqd, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_touhd, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_tould, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_touqd, i64, f64, i32, ptr)
-DEF_HELPER_3(vfp_shtos, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_sltos, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_sqtos, f32, i64, i32, ptr)
-DEF_HELPER_3(vfp_uhtos, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_ultos, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_uqtos, f32, i64, i32, ptr)
-DEF_HELPER_3(vfp_shtod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_sltod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_sqtod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_uhtod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_ultod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_uqtod, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_shtoh, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_uhtoh, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_sltoh, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_ultoh, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_sqtoh, f16, i64, i32, ptr)
-DEF_HELPER_3(vfp_uqtoh, f16, i64, i32, ptr)
+DEF_HELPER_3(vfp_toshh_round_to_zero, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toslh_round_to_zero, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_touhh_round_to_zero, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toulh_round_to_zero, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toshs_round_to_zero, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_tosls_round_to_zero, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_touhs_round_to_zero, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_touls_round_to_zero, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_toshd_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tosld_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tosqd_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_touhd_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tould_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_touqd_round_to_zero, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_touhh, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toshh, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toulh, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_toslh, i32, f16, i32, fpst)
+DEF_HELPER_3(vfp_touqh, i64, f16, i32, fpst)
+DEF_HELPER_3(vfp_tosqh, i64, f16, i32, fpst)
+DEF_HELPER_3(vfp_toshs, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_tosls, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_tosqs, i64, f32, i32, fpst)
+DEF_HELPER_3(vfp_touhs, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_touls, i32, f32, i32, fpst)
+DEF_HELPER_3(vfp_touqs, i64, f32, i32, fpst)
+DEF_HELPER_3(vfp_toshd, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tosld, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tosqd, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_touhd, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_tould, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_touqd, i64, f64, i32, fpst)
+DEF_HELPER_3(vfp_shtos, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_sltos, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_sqtos, f32, i64, i32, fpst)
+DEF_HELPER_3(vfp_uhtos, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_ultos, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_uqtos, f32, i64, i32, fpst)
+DEF_HELPER_3(vfp_shtod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_sltod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_sqtod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_uhtod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_ultod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_uqtod, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_shtoh, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_uhtoh, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_sltoh, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_ultoh, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_sqtoh, f16, i64, i32, fpst)
+DEF_HELPER_3(vfp_uqtoh, f16, i64, i32, fpst)
 
-DEF_HELPER_3(vfp_shtos_round_to_nearest, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_sltos_round_to_nearest, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_uhtos_round_to_nearest, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_ultos_round_to_nearest, f32, i32, i32, ptr)
-DEF_HELPER_3(vfp_shtod_round_to_nearest, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_sltod_round_to_nearest, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_uhtod_round_to_nearest, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_ultod_round_to_nearest, f64, i64, i32, ptr)
-DEF_HELPER_3(vfp_shtoh_round_to_nearest, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_uhtoh_round_to_nearest, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_sltoh_round_to_nearest, f16, i32, i32, ptr)
-DEF_HELPER_3(vfp_ultoh_round_to_nearest, f16, i32, i32, ptr)
+DEF_HELPER_3(vfp_shtos_round_to_nearest, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_sltos_round_to_nearest, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_uhtos_round_to_nearest, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_ultos_round_to_nearest, f32, i32, i32, fpst)
+DEF_HELPER_3(vfp_shtod_round_to_nearest, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_sltod_round_to_nearest, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_uhtod_round_to_nearest, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_ultod_round_to_nearest, f64, i64, i32, fpst)
+DEF_HELPER_3(vfp_shtoh_round_to_nearest, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_uhtoh_round_to_nearest, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_sltoh_round_to_nearest, f16, i32, i32, fpst)
+DEF_HELPER_3(vfp_ultoh_round_to_nearest, f16, i32, i32, fpst)
 
-DEF_HELPER_FLAGS_2(set_rmode, TCG_CALL_NO_RWG, i32, i32, ptr)
+DEF_HELPER_FLAGS_2(set_rmode, TCG_CALL_NO_RWG, i32, i32, fpst)
 
-DEF_HELPER_FLAGS_3(vfp_fcvt_f16_to_f32, TCG_CALL_NO_RWG, f32, f16, ptr, i32)
-DEF_HELPER_FLAGS_3(vfp_fcvt_f32_to_f16, TCG_CALL_NO_RWG, f16, f32, ptr, i32)
-DEF_HELPER_FLAGS_3(vfp_fcvt_f16_to_f64, TCG_CALL_NO_RWG, f64, f16, ptr, i32)
-DEF_HELPER_FLAGS_3(vfp_fcvt_f64_to_f16, TCG_CALL_NO_RWG, f16, f64, ptr, i32)
+DEF_HELPER_FLAGS_3(vfp_fcvt_f16_to_f32, TCG_CALL_NO_RWG, f32, f16, fpst, i32)
+DEF_HELPER_FLAGS_3(vfp_fcvt_f32_to_f16, TCG_CALL_NO_RWG, f16, f32, fpst, i32)
+DEF_HELPER_FLAGS_3(vfp_fcvt_f16_to_f64, TCG_CALL_NO_RWG, f64, f16, fpst, i32)
+DEF_HELPER_FLAGS_3(vfp_fcvt_f64_to_f16, TCG_CALL_NO_RWG, f16, f64, fpst, i32)
 
-DEF_HELPER_4(vfp_muladdd, f64, f64, f64, f64, ptr)
-DEF_HELPER_4(vfp_muladds, f32, f32, f32, f32, ptr)
-DEF_HELPER_4(vfp_muladdh, f16, f16, f16, f16, ptr)
+DEF_HELPER_4(vfp_muladdd, f64, f64, f64, f64, fpst)
+DEF_HELPER_4(vfp_muladds, f32, f32, f32, f32, fpst)
+DEF_HELPER_4(vfp_muladdh, f16, f16, f16, f16, fpst)
 
-DEF_HELPER_FLAGS_2(recpe_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
-DEF_HELPER_FLAGS_2(recpe_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(recpe_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
-DEF_HELPER_FLAGS_2(rsqrte_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
-DEF_HELPER_FLAGS_2(rsqrte_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(rsqrte_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
+DEF_HELPER_FLAGS_2(recpe_f16, TCG_CALL_NO_RWG, f16, f16, fpst)
+DEF_HELPER_FLAGS_2(recpe_f32, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(recpe_f64, TCG_CALL_NO_RWG, f64, f64, fpst)
+DEF_HELPER_FLAGS_2(rsqrte_f16, TCG_CALL_NO_RWG, f16, f16, fpst)
+DEF_HELPER_FLAGS_2(rsqrte_f32, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(rsqrte_f64, TCG_CALL_NO_RWG, f64, f64, fpst)
 DEF_HELPER_FLAGS_1(recpe_u32, TCG_CALL_NO_RWG, i32, i32)
 DEF_HELPER_FLAGS_1(rsqrte_u32, TCG_CALL_NO_RWG, i32, i32)
 DEF_HELPER_FLAGS_4(neon_tbl, TCG_CALL_NO_RWG, i64, env, i32, i64, i64)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(shr_cc, i32, env, i32, i32)
 DEF_HELPER_3(sar_cc, i32, env, i32, i32)
 DEF_HELPER_3(ror_cc, i32, env, i32, i32)
 
-DEF_HELPER_FLAGS_2(rinth_exact, TCG_CALL_NO_RWG, f16, f16, ptr)
-DEF_HELPER_FLAGS_2(rints_exact, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(rintd_exact, TCG_CALL_NO_RWG, f64, f64, ptr)
-DEF_HELPER_FLAGS_2(rinth, TCG_CALL_NO_RWG, f16, f16, ptr)
-DEF_HELPER_FLAGS_2(rints, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(rintd, TCG_CALL_NO_RWG, f64, f64, ptr)
+DEF_HELPER_FLAGS_2(rinth_exact, TCG_CALL_NO_RWG, f16, f16, fpst)
+DEF_HELPER_FLAGS_2(rints_exact, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(rintd_exact, TCG_CALL_NO_RWG, f64, f64, fpst)
+DEF_HELPER_FLAGS_2(rinth, TCG_CALL_NO_RWG, f16, f16, fpst)
+DEF_HELPER_FLAGS_2(rints, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(rintd, TCG_CALL_NO_RWG, f64, f64, fpst)
 
 DEF_HELPER_FLAGS_2(vjcvt, TCG_CALL_NO_RWG, i32, f64, env)
-DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, ptr)
+DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, fpst)
 
 DEF_HELPER_FLAGS_3(check_hcr_el2_trap, TCG_CALL_NO_WG, void, env, i32, i32)
 
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a32, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a64, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_2(frint32_s, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, ptr)
-DEF_HELPER_FLAGS_2(frint32_d, TCG_CALL_NO_RWG, f64, f64, ptr)
-DEF_HELPER_FLAGS_2(frint64_d, TCG_CALL_NO_RWG, f64, f64, ptr)
+DEF_HELPER_FLAGS_2(frint32_s, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, fpst)
+DEF_HELPER_FLAGS_2(frint32_d, TCG_CALL_NO_RWG, f64, f64, fpst)
+DEF_HELPER_FLAGS_2(frint64_d, TCG_CALL_NO_RWG, f64, f64, fpst)
 
 DEF_HELPER_FLAGS_3(gvec_ceq0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(gvec_ceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val)
 #define VFP_HELPER(name, p) HELPER(glue(glue(vfp_,name),p))
 
 #define VFP_BINOP(name) \
-dh_ctype_f16 VFP_HELPER(name, h)(dh_ctype_f16 a, dh_ctype_f16 b, void *fpstp) \
+dh_ctype_f16 VFP_HELPER(name, h)(dh_ctype_f16 a, dh_ctype_f16 b, float_status *fpst) \
 { \
-    float_status *fpst = fpstp; \
     return float16_ ## name(a, b, fpst); \
 } \
-float32 VFP_HELPER(name, s)(float32 a, float32 b, void *fpstp) \
+float32 VFP_HELPER(name, s)(float32 a, float32 b, float_status *fpst) \
 { \
-    float_status *fpst = fpstp; \
     return float32_ ## name(a, b, fpst); \
 } \
-float64 VFP_HELPER(name, d)(float64 a, float64 b, void *fpstp) \
+float64 VFP_HELPER(name, d)(float64 a, float64 b, float_status *fpst) \
 { \
-    float_status *fpst = fpstp; \
     return float64_ ## name(a, b, fpst); \
 }
 VFP_BINOP(add)
@@ -XXX,XX +XXX,XX @@ VFP_BINOP(minnum)
 VFP_BINOP(maxnum)
 #undef VFP_BINOP
-dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, void *fpstp)
354
+dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, float_status *fpst)
355
{
356
- return float16_sqrt(a, fpstp);
357
+ return float16_sqrt(a, fpst);
358
}
359
360
-float32 VFP_HELPER(sqrt, s)(float32 a, void *fpstp)
361
+float32 VFP_HELPER(sqrt, s)(float32 a, float_status *fpst)
362
{
363
- return float32_sqrt(a, fpstp);
364
+ return float32_sqrt(a, fpst);
365
}
366
367
-float64 VFP_HELPER(sqrt, d)(float64 a, void *fpstp)
368
+float64 VFP_HELPER(sqrt, d)(float64 a, float_status *fpst)
369
{
370
- return float64_sqrt(a, fpstp);
371
+ return float64_sqrt(a, fpst);
372
}
373
374
static void softfloat_to_vfp_compare(CPUARMState *env, FloatRelation cmp)
375
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64, float64, fp_status)
376
/* Integer to float and float to integer conversions */
377
378
#define CONV_ITOF(name, ftype, fsz, sign) \
379
-ftype HELPER(name)(uint32_t x, void *fpstp) \
380
+ftype HELPER(name)(uint32_t x, float_status *fpst) \
381
{ \
382
- float_status *fpst = fpstp; \
383
return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
384
}
385
386
#define CONV_FTOI(name, ftype, fsz, sign, round) \
387
-sign##int32_t HELPER(name)(ftype x, void *fpstp) \
388
+sign##int32_t HELPER(name)(ftype x, float_status *fpst) \
389
{ \
390
- float_status *fpst = fpstp; \
391
if (float##fsz##_is_any_nan(x)) { \
392
float_raise(float_flag_invalid, fpst); \
393
return 0; \
394
@@ -XXX,XX +XXX,XX @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
395
return float64_to_float32(x, &env->vfp.fp_status);
396
}
397
398
-uint32_t HELPER(bfcvt)(float32 x, void *status)
399
+uint32_t HELPER(bfcvt)(float32 x, float_status *status)
400
{
401
return float32_to_bfloat16(x, status);
402
}
403
404
-uint32_t HELPER(bfcvt_pair)(uint64_t pair, void *status)
405
+uint32_t HELPER(bfcvt_pair)(uint64_t pair, float_status *status)
406
{
407
bfloat16 lo = float32_to_bfloat16(extract64(pair, 0, 32), status);
408
bfloat16 hi = float32_to_bfloat16(extract64(pair, 32, 32), status);
409
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(bfcvt_pair)(uint64_t pair, void *status)
410
 */
411
#define VFP_CONV_FIX_FLOAT(name, p, fsz, ftype, isz, itype) \
412
ftype HELPER(vfp_##name##to##p)(uint##isz##_t x, uint32_t shift, \
413
- void *fpstp) \
414
-{ return itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); }
415
+ float_status *fpst) \
416
+{ return itype##_to_##float##fsz##_scalbn(x, -shift, fpst); }
417

418
#define VFP_CONV_FIX_FLOAT_ROUND(name, p, fsz, ftype, isz, itype) \
419
ftype HELPER(vfp_##name##to##p##_round_to_nearest)(uint##isz##_t x, \
420
uint32_t shift, \
421
- void *fpstp) \
422
+ float_status *fpst) \
423
{ \
424
ftype ret; \
425
- float_status *fpst = fpstp; \
FloatRoundMode oldmode = fpst->float_rounding_mode; \
427
fpst->float_rounding_mode = float_round_nearest_even; \
428
- ret = itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); \
429
+ ret = itype##_to_##float##fsz##_scalbn(x, -shift, fpst); \
430
fpst->float_rounding_mode = oldmode; \
431
return ret; \
432
}
433
434
#define VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, ftype, isz, itype, ROUND, suff) \
435
uint##isz##_t HELPER(vfp_to##name##p##suff)(ftype x, uint32_t shift, \
436
- void *fpst) \
437
+ float_status *fpst) \
438
{ \
439
if (unlikely(float##fsz##_is_any_nan(x))) { \
440
float_raise(float_flag_invalid, fpst); \
441
@@ -XXX,XX +XXX,XX @@ VFP_CONV_FLOAT_FIX_ROUND(uq, d, 64, float64, 64, uint64,
442
/* Set the current fp rounding mode and return the old one.
443
* The argument is a softfloat float_round_ value.
444
*/
445
-uint32_t HELPER(set_rmode)(uint32_t rmode, void *fpstp)
446
+uint32_t HELPER(set_rmode)(uint32_t rmode, float_status *fp_status)
447
{
448
- float_status *fp_status = fpstp;
449
-
450
uint32_t prev_rmode = get_float_rounding_mode(fp_status);
451
set_float_rounding_mode(rmode, fp_status);
452
453
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_rmode)(uint32_t rmode, void *fpstp)
454
}
455
456
/* Half precision conversions. */
457
-float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
458
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, float_status *fpst,
459
+ uint32_t ahp_mode)
460
{
461
/* Squash FZ16 to 0 for the duration of conversion. In this case,
462
* it would affect flushing input denormals.
463
*/
464
- float_status *fpst = fpstp;
465
bool save = get_flush_inputs_to_zero(fpst);
466
set_flush_inputs_to_zero(false, fpst);
467
float32 r = float16_to_float32(a, !ahp_mode, fpst);
468
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
469
return r;
470
}
471
472
-uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
473
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, float_status *fpst,
474
+ uint32_t ahp_mode)
475
{
476
/* Squash FZ16 to 0 for the duration of conversion. In this case,
477
* it would affect flushing output denormals.
478
*/
479
- float_status *fpst = fpstp;
480
bool save = get_flush_to_zero(fpst);
481
set_flush_to_zero(false, fpst);
482
float16 r = float32_to_float16(a, !ahp_mode, fpst);
483
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
484
return r;
485
}
486
487
-float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
488
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, float_status *fpst,
489
+ uint32_t ahp_mode)
490
{
491
/* Squash FZ16 to 0 for the duration of conversion. In this case,
492
* it would affect flushing input denormals.
493
*/
494
- float_status *fpst = fpstp;
495
bool save = get_flush_inputs_to_zero(fpst);
496
set_flush_inputs_to_zero(false, fpst);
497
float64 r = float16_to_float64(a, !ahp_mode, fpst);
498
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
499
return r;
500
}
501
502
-uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
503
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, float_status *fpst,
504
+ uint32_t ahp_mode)
505
{
506
/* Squash FZ16 to 0 for the duration of conversion. In this case,
507
* it would affect flushing output denormals.
508
*/
509
- float_status *fpst = fpstp;
510
bool save = get_flush_to_zero(fpst);
511
set_flush_to_zero(false, fpst);
512
float16 r = float64_to_float16(a, !ahp_mode, fpst);
513
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
514
}
515
}
516
517
-uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
518
+uint32_t HELPER(recpe_f16)(uint32_t input, float_status *fpst)
519
{
520
- float_status *fpst = fpstp;
521
float16 f16 = float16_squash_input_denormal(input, fpst);
522
uint32_t f16_val = float16_val(f16);
523
uint32_t f16_sign = float16_is_neg(f16);
524
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
525
return make_float16(f16_val);
526
}
527
528
-float32 HELPER(recpe_f32)(float32 input, void *fpstp)
529
+float32 HELPER(recpe_f32)(float32 input, float_status *fpst)
530
{
531
- float_status *fpst = fpstp;
532
float32 f32 = float32_squash_input_denormal(input, fpst);
533
uint32_t f32_val = float32_val(f32);
534
bool f32_sign = float32_is_neg(f32);
535
@@ -XXX,XX +XXX,XX @@ float32 HELPER(recpe_f32)(float32 input, void *fpstp)
536
return make_float32(f32_val);
537
}
538
539
-float64 HELPER(recpe_f64)(float64 input, void *fpstp)
540
+float64 HELPER(recpe_f64)(float64 input, float_status *fpst)
541
{
542
- float_status *fpst = fpstp;
543
float64 f64 = float64_squash_input_denormal(input, fpst);
544
uint64_t f64_val = float64_val(f64);
545
bool f64_sign = float64_is_neg(f64);
546
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
547
return extract64(estimate, 0, 8) << 44;
548
}
549
550
-uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
551
+uint32_t HELPER(rsqrte_f16)(uint32_t input, float_status *s)
552
{
553
- float_status *s = fpstp;
554
float16 f16 = float16_squash_input_denormal(input, s);
555
uint16_t val = float16_val(f16);
556
bool f16_sign = float16_is_neg(f16);
557
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
558
if (float16_is_signaling_nan(f16, s)) {
559
float_raise(float_flag_invalid, s);
560
if (!s->default_nan_mode) {
561
- nan = float16_silence_nan(f16, fpstp);
562
+ nan = float16_silence_nan(f16, s);
563
}
564
}
565
if (s->default_nan_mode) {
566
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
567
return make_float16(val);
568
}
569

570
-float32 HELPER(rsqrte_f32)(float32 input, void *fpstp)
571
+float32 HELPER(rsqrte_f32)(float32 input, float_status *s)
572
{
573
- float_status *s = fpstp;
574
float32 f32 = float32_squash_input_denormal(input, s);
575
uint32_t val = float32_val(f32);
576
uint32_t f32_sign = float32_is_neg(f32);
577
@@ -XXX,XX +XXX,XX @@ float32 HELPER(rsqrte_f32)(float32 input, void *fpstp)
578
if (float32_is_signaling_nan(f32, s)) {
579
float_raise(float_flag_invalid, s);
580
if (!s->default_nan_mode) {
581
- nan = float32_silence_nan(f32, fpstp);
582
+ nan = float32_silence_nan(f32, s);
583
}
584
}
585
if (s->default_nan_mode) {
586
@@ -XXX,XX +XXX,XX @@ float32 HELPER(rsqrte_f32)(float32 input, void *fpstp)
587
return make_float32(val);
588
}
589

590
-float64 HELPER(rsqrte_f64)(float64 input, void *fpstp)
591
+float64 HELPER(rsqrte_f64)(float64 input, float_status *s)
592
{
593
- float_status *s = fpstp;
594
float64 f64 = float64_squash_input_denormal(input, s);
595
uint64_t val = float64_val(f64);
596
bool f64_sign = float64_is_neg(f64);
597
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrte_f64)(float64 input, void *fpstp)
598
if (float64_is_signaling_nan(f64, s)) {
599
float_raise(float_flag_invalid, s);
600
if (!s->default_nan_mode) {
601
- nan = float64_silence_nan(f64, fpstp);
602
+ nan = float64_silence_nan(f64, s);
603
}
604
}
605
if (s->default_nan_mode) {
606
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrte_u32)(uint32_t a)
607
608
/* VFPv4 fused multiply-accumulate */
609
dh_ctype_f16 VFP_HELPER(muladd, h)(dh_ctype_f16 a, dh_ctype_f16 b,
610
- dh_ctype_f16 c, void *fpstp)
611
+ dh_ctype_f16 c, float_status *fpst)
612
{
613
- float_status *fpst = fpstp;
614
return float16_muladd(a, b, c, 0, fpst);
615
}
616
617
-float32 VFP_HELPER(muladd, s)(float32 a, float32 b, float32 c, void *fpstp)
618
+float32 VFP_HELPER(muladd, s)(float32 a, float32 b, float32 c,
619
+ float_status *fpst)
620
{
621
- float_status *fpst = fpstp;
622
return float32_muladd(a, b, c, 0, fpst);
623
}
624
625
-float64 VFP_HELPER(muladd, d)(float64 a, float64 b, float64 c, void *fpstp)
626
+float64 VFP_HELPER(muladd, d)(float64 a, float64 b, float64 c,
627
+ float_status *fpst)
628
{
629
- float_status *fpst = fpstp;
630
return float64_muladd(a, b, c, 0, fpst);
631
}
632
633
/* ARMv8 round to integral */
634
-dh_ctype_f16 HELPER(rinth_exact)(dh_ctype_f16 x, void *fp_status)
635
+dh_ctype_f16 HELPER(rinth_exact)(dh_ctype_f16 x, float_status *fp_status)
636
{
637
return float16_round_to_int(x, fp_status);
638
}
639
640
-float32 HELPER(rints_exact)(float32 x, void *fp_status)
641
+float32 HELPER(rints_exact)(float32 x, float_status *fp_status)
642
{
643
return float32_round_to_int(x, fp_status);
644
}
645
646
-float64 HELPER(rintd_exact)(float64 x, void *fp_status)
647
+float64 HELPER(rintd_exact)(float64 x, float_status *fp_status)
648
{
649
return float64_round_to_int(x, fp_status);
650
}
651
652
-dh_ctype_f16 HELPER(rinth)(dh_ctype_f16 x, void *fp_status)
653
+dh_ctype_f16 HELPER(rinth)(dh_ctype_f16 x, float_status *fp_status)
654
{
655
int old_flags = get_float_exception_flags(fp_status), new_flags;
656
float16 ret;
657
@@ -XXX,XX +XXX,XX @@ dh_ctype_f16 HELPER(rinth)(dh_ctype_f16 x, void *fp_status)
658
return ret;
659
}
660
661
-float32 HELPER(rints)(float32 x, void *fp_status)
662
+float32 HELPER(rints)(float32 x, float_status *fp_status)
663
{
664
int old_flags = get_float_exception_flags(fp_status), new_flags;
665
float32 ret;
666
@@ -XXX,XX +XXX,XX @@ float32 HELPER(rints)(float32 x, void *fp_status)
667
return ret;
668
}
669
670
-float64 HELPER(rintd)(float64 x, void *fp_status)
671
+float64 HELPER(rintd)(float64 x, float_status *fp_status)
672
{
673
int old_flags = get_float_exception_flags(fp_status), new_flags;
674
float64 ret;
675
@@ -XXX,XX +XXX,XX @@ const FloatRoundMode arm_rmode_to_sf_map[] = {
676
* Implement float64 to int32_t conversion without saturation;
677
* the result is supplied modulo 2^32.
678
*/
679
-uint64_t HELPER(fjcvtzs)(float64 value, void *vstatus)
680
+uint64_t HELPER(fjcvtzs)(float64 value, float_status *status)
681
{
682
- float_status *status = vstatus;
683
uint32_t frac, e_old, e_new;
684
bool inexact;
685
686
@@ -XXX,XX +XXX,XX @@ static float32 frint_s(float32 f, float_status *fpst, int intsize)
687
return (0x100u + 126u + intsize) << 23;
688
}
689
690
-float32 HELPER(frint32_s)(float32 f, void *fpst)
691
+float32 HELPER(frint32_s)(float32 f, float_status *fpst)
692
{
693
return frint_s(f, fpst, 32);
694
}
695
696
-float32 HELPER(frint64_s)(float32 f, void *fpst)
697
+float32 HELPER(frint64_s)(float32 f, float_status *fpst)
698
{
699
return frint_s(f, fpst, 64);
700
}
701
@@ -XXX,XX +XXX,XX @@ static float64 frint_d(float64 f, float_status *fpst, int intsize)
702
return (uint64_t)(0x800 + 1022 + intsize) << 52;
703
}
704
705
-float64 HELPER(frint32_d)(float64 f, void *fpst)
706
+float64 HELPER(frint32_d)(float64 f, float_status *fpst)
707
{
708
return frint_d(f, fpst, 32);
709
}
710
711
-float64 HELPER(frint64_d)(float64 f, void *fpst)
712
+float64 HELPER(frint64_d)(float64 f, float_status *fpst)
713
{
714
return frint_d(f, fpst, 64);
715
}
716
--
717
2.34.1
718

719

1
From: Richard Henderson <richard.henderson@linaro.org>
2

3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Message-id: 20241206031224.78525-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/helper-a64.h | 94 +++++++++++++++++------------------
9
target/arm/tcg/helper-a64.c | 98 +++++++++++++------------------------
10
2 files changed, 80 insertions(+), 112 deletions(-)
11

12
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/helper-a64.h
15
+++ b/target/arm/tcg/helper-a64.h
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(msr_i_spsel, void, env, i32)
17
 DEF_HELPER_2(msr_i_daifset, void, env, i32)
18
 DEF_HELPER_2(msr_i_daifclear, void, env, i32)
19
 DEF_HELPER_1(msr_set_allint_el1, void, env)
20
-DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
21
-DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
22
-DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
23
-DEF_HELPER_3(vfp_cmpes_a64, i64, f32, f32, ptr)
24
-DEF_HELPER_3(vfp_cmpd_a64, i64, f64, f64, ptr)
25
-DEF_HELPER_3(vfp_cmped_a64, i64, f64, f64, ptr)
26
+DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, fpst)
27
+DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, fpst)
28
+DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, fpst)
29
+DEF_HELPER_3(vfp_cmpes_a64, i64, f32, f32, fpst)
30
+DEF_HELPER_3(vfp_cmpd_a64, i64, f64, f64, fpst)
31
+DEF_HELPER_3(vfp_cmped_a64, i64, f64, f64, fpst)
32
 DEF_HELPER_FLAGS_4(simd_tblx, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
-DEF_HELPER_FLAGS_3(vfp_mulxs, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
34
-DEF_HELPER_FLAGS_3(vfp_mulxd, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
35
-DEF_HELPER_FLAGS_3(neon_ceq_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
36
-DEF_HELPER_FLAGS_3(neon_cge_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
37
-DEF_HELPER_FLAGS_3(neon_cgt_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
38
-DEF_HELPER_FLAGS_3(recpsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
39
-DEF_HELPER_FLAGS_3(recpsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
40
-DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
41
-DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
42
-DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
43
-DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
44
-DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
45
-DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
46
-DEF_HELPER_FLAGS_2(frecpx_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
47
+DEF_HELPER_FLAGS_3(vfp_mulxs, TCG_CALL_NO_RWG, f32, f32, f32, fpst)
48
+DEF_HELPER_FLAGS_3(vfp_mulxd, TCG_CALL_NO_RWG, f64, f64, f64, fpst)
49
+DEF_HELPER_FLAGS_3(neon_ceq_f64, TCG_CALL_NO_RWG, i64, i64, i64, fpst)
50
+DEF_HELPER_FLAGS_3(neon_cge_f64, TCG_CALL_NO_RWG, i64, i64, i64, fpst)
51
+DEF_HELPER_FLAGS_3(neon_cgt_f64, TCG_CALL_NO_RWG, i64, i64, i64, fpst)
52
+DEF_HELPER_FLAGS_3(recpsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
53
+DEF_HELPER_FLAGS_3(recpsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, fpst)
54
+DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, fpst)
55
+DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
56
+DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, fpst)
57
+DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, fpst)
58
+DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, fpst)
59
+DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, fpst)
60
+DEF_HELPER_FLAGS_2(frecpx_f16, TCG_CALL_NO_RWG, f16, f16, fpst)
61
 DEF_HELPER_FLAGS_2(fcvtx_f64_to_f32, TCG_CALL_NO_RWG, f32, f64, env)
62
 DEF_HELPER_FLAGS_3(crc32_64, TCG_CALL_NO_RWG_SE, i64, i64, i64, i32)
63
 DEF_HELPER_FLAGS_3(crc32c_64, TCG_CALL_NO_RWG_SE, i64, i64, i64, i32)
64
-DEF_HELPER_FLAGS_3(advsimd_maxh, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
65
-DEF_HELPER_FLAGS_3(advsimd_minh, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
66
-DEF_HELPER_FLAGS_3(advsimd_maxnumh, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
67
-DEF_HELPER_FLAGS_3(advsimd_minnumh, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
68
-DEF_HELPER_3(advsimd_addh, f16, f16, f16, ptr)
69
-DEF_HELPER_3(advsimd_subh, f16, f16, f16, ptr)
70
-DEF_HELPER_3(advsimd_mulh, f16, f16, f16, ptr)
71
-DEF_HELPER_3(advsimd_divh, f16, f16, f16, ptr)
72
-DEF_HELPER_3(advsimd_ceq_f16, i32, f16, f16, ptr)
73
-DEF_HELPER_3(advsimd_cge_f16, i32, f16, f16, ptr)
74
-DEF_HELPER_3(advsimd_cgt_f16, i32, f16, f16, ptr)
-DEF_HELPER_3(advsimd_acge_f16, i32, f16, f16, ptr)
76
-DEF_HELPER_3(advsimd_acgt_f16, i32, f16, f16, ptr)
77
-DEF_HELPER_3(advsimd_mulxh, f16, f16, f16, ptr)
78
-DEF_HELPER_4(advsimd_muladdh, f16, f16, f16, f16, ptr)
79
-DEF_HELPER_3(advsimd_add2h, i32, i32, i32, ptr)
80
-DEF_HELPER_3(advsimd_sub2h, i32, i32, i32, ptr)
81
-DEF_HELPER_3(advsimd_mul2h, i32, i32, i32, ptr)
82
-DEF_HELPER_3(advsimd_div2h, i32, i32, i32, ptr)
83
-DEF_HELPER_3(advsimd_max2h, i32, i32, i32, ptr)
84
-DEF_HELPER_3(advsimd_min2h, i32, i32, i32, ptr)
85
-DEF_HELPER_3(advsimd_maxnum2h, i32, i32, i32, ptr)
86
-DEF_HELPER_3(advsimd_minnum2h, i32, i32, i32, ptr)
87
-DEF_HELPER_3(advsimd_mulx2h, i32, i32, i32, ptr)
88
-DEF_HELPER_4(advsimd_muladd2h, i32, i32, i32, i32, ptr)
89
-DEF_HELPER_2(advsimd_rinth_exact, f16, f16, ptr)
90
-DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
91
+DEF_HELPER_FLAGS_3(advsimd_maxh, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
92
+DEF_HELPER_FLAGS_3(advsimd_minh, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
93
+DEF_HELPER_FLAGS_3(advsimd_maxnumh, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
94
+DEF_HELPER_FLAGS_3(advsimd_minnumh, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
95
+DEF_HELPER_3(advsimd_addh, f16, f16, f16, fpst)
96
+DEF_HELPER_3(advsimd_subh, f16, f16, f16, fpst)
97
+DEF_HELPER_3(advsimd_mulh, f16, f16, f16, fpst)
98
+DEF_HELPER_3(advsimd_divh, f16, f16, f16, fpst)
99
+DEF_HELPER_3(advsimd_ceq_f16, i32, f16, f16, fpst)
100
+DEF_HELPER_3(advsimd_cge_f16, i32, f16, f16, fpst)
101
+DEF_HELPER_3(advsimd_cgt_f16, i32, f16, f16, fpst)
102
+DEF_HELPER_3(advsimd_acge_f16, i32, f16, f16, fpst)
103
+DEF_HELPER_3(advsimd_acgt_f16, i32, f16, f16, fpst)
104
+DEF_HELPER_3(advsimd_mulxh, f16, f16, f16, fpst)
105
+DEF_HELPER_4(advsimd_muladdh, f16, f16, f16, f16, fpst)
106
+DEF_HELPER_3(advsimd_add2h, i32, i32, i32, fpst)
107
+DEF_HELPER_3(advsimd_sub2h, i32, i32, i32, fpst)
108
+DEF_HELPER_3(advsimd_mul2h, i32, i32, i32, fpst)
109
+DEF_HELPER_3(advsimd_div2h, i32, i32, i32, fpst)
110
+DEF_HELPER_3(advsimd_max2h, i32, i32, i32, fpst)
111
+DEF_HELPER_3(advsimd_min2h, i32, i32, i32, fpst)
112
+DEF_HELPER_3(advsimd_maxnum2h, i32, i32, i32, fpst)
113
+DEF_HELPER_3(advsimd_minnum2h, i32, i32, i32, fpst)
114
+DEF_HELPER_3(advsimd_mulx2h, i32, i32, i32, fpst)
115
+DEF_HELPER_4(advsimd_muladd2h, i32, i32, i32, i32, fpst)
116
+DEF_HELPER_2(advsimd_rinth_exact, f16, f16, fpst)
117
+DEF_HELPER_2(advsimd_rinth, f16, f16, fpst)
118
119
DEF_HELPER_2(exception_return, void, env, i64)
120
DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
121
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/tcg/helper-a64.c
124
+++ b/target/arm/tcg/helper-a64.c
125
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
126
     return flags;
127
 }
128

129
-uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
130
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, float_status *fp_status)
131
 {
132
     return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
133
 }
134

135
-uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
136
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, float_status *fp_status)
137
 {
138
     return float_rel_to_flags(float16_compare(x, y, fp_status));
139
 }
140

141
-uint64_t HELPER(vfp_cmps_a64)(float32 x, float32 y, void *fp_status)
142
+uint64_t HELPER(vfp_cmps_a64)(float32 x, float32 y, float_status *fp_status)
144
return float_rel_to_flags(float32_compare_quiet(x, y, fp_status));
145
}
146
147
-uint64_t HELPER(vfp_cmpes_a64)(float32 x, float32 y, void *fp_status)
148
+uint64_t HELPER(vfp_cmpes_a64)(float32 x, float32 y, float_status *fp_status)
149
{
150
return float_rel_to_flags(float32_compare(x, y, fp_status));
151
}
152
153
-uint64_t HELPER(vfp_cmpd_a64)(float64 x, float64 y, void *fp_status)
154
+uint64_t HELPER(vfp_cmpd_a64)(float64 x, float64 y, float_status *fp_status)
155
{
156
return float_rel_to_flags(float64_compare_quiet(x, y, fp_status));
157
}
158
159
-uint64_t HELPER(vfp_cmped_a64)(float64 x, float64 y, void *fp_status)
+uint64_t HELPER(vfp_cmped_a64)(float64 x, float64 y, float_status *fp_status)
{
    return float_rel_to_flags(float64_compare(x, y, fp_status));
}

-float32 HELPER(vfp_mulxs)(float32 a, float32 b, void *fpstp)
+float32 HELPER(vfp_mulxs)(float32 a, float32 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float32_squash_input_denormal(a, fpst);
    b = float32_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_mulxs)(float32 a, float32 b, void *fpstp)
    return float32_mul(a, b, fpst);
}

-float64 HELPER(vfp_mulxd)(float64 a, float64 b, void *fpstp)
+float64 HELPER(vfp_mulxd)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float64_squash_input_denormal(a, fpst);
    b = float64_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_mulxd)(float64 a, float64 b, void *fpstp)
}

/* 64bit/double versions of the neon float compare functions */
-uint64_t HELPER(neon_ceq_f64)(float64 a, float64 b, void *fpstp)
+uint64_t HELPER(neon_ceq_f64)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    return -float64_eq_quiet(a, b, fpst);
}

-uint64_t HELPER(neon_cge_f64)(float64 a, float64 b, void *fpstp)
+uint64_t HELPER(neon_cge_f64)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    return -float64_le(b, a, fpst);
}

-uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
+uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    return -float64_lt(b, a, fpst);
}

@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
 * multiply-add-and-halve.
 */

-uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float16_squash_input_denormal(a, fpst);
    b = float16_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
    return float16_muladd(a, b, float16_two, 0, fpst);
}

-float32 HELPER(recpsf_f32)(float32 a, float32 b, void *fpstp)
+float32 HELPER(recpsf_f32)(float32 a, float32 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float32_squash_input_denormal(a, fpst);
    b = float32_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float32 HELPER(recpsf_f32)(float32 a, float32 b, void *fpstp)
    return float32_muladd(a, b, float32_two, 0, fpst);
}

-float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
+float64 HELPER(recpsf_f64)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float64_squash_input_denormal(a, fpst);
    b = float64_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
    return float64_muladd(a, b, float64_two, 0, fpst);
}

-uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float16_squash_input_denormal(a, fpst);
    b = float16_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
    return float16_muladd(a, b, float16_three, float_muladd_halve_result, fpst);
}

-float32 HELPER(rsqrtsf_f32)(float32 a, float32 b, void *fpstp)
+float32 HELPER(rsqrtsf_f32)(float32 a, float32 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float32_squash_input_denormal(a, fpst);
    b = float32_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float32 HELPER(rsqrtsf_f32)(float32 a, float32 b, void *fpstp)
    return float32_muladd(a, b, float32_three, float_muladd_halve_result, fpst);
}

-float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
+float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float64_squash_input_denormal(a, fpst);
    b = float64_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
}

/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
-uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
+uint32_t HELPER(frecpx_f16)(uint32_t a, float_status *fpst)
{
-    float_status *fpst = fpstp;
    uint16_t val16, sbit;
    int16_t exp;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
    }
}

-float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
+float32 HELPER(frecpx_f32)(float32 a, float_status *fpst)
{
-    float_status *fpst = fpstp;
    uint32_t val32, sbit;
    int32_t exp;

@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
    }
}

-float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
+float64 HELPER(frecpx_f64)(float64 a, float_status *fpst)
{
-    float_status *fpst = fpstp;
    uint64_t val64, sbit;
    int64_t exp;

@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(crc32c_64)(uint64_t acc, uint64_t val, uint32_t bytes)
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))

#define ADVSIMD_HALFOP(name) \
-uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, float_status *fpst) \
{ \
-    float_status *fpst = fpstp; \
    return float16_ ## name(a, b, fpst); \
}

@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(minnum)
ADVSIMD_HALFOP(maxnum)

#define ADVSIMD_TWOHALFOP(name) \
-uint32_t ADVSIMD_HELPER(name, 2h)(uint32_t two_a, uint32_t two_b, void *fpstp) \
+uint32_t ADVSIMD_HELPER(name, 2h)(uint32_t two_a, uint32_t two_b, \
+                                  float_status *fpst) \
{ \
    float16 a1, a2, b1, b2; \
    uint32_t r1, r2; \
-    float_status *fpst = fpstp; \
    a1 = extract32(two_a, 0, 16); \
    a2 = extract32(two_a, 16, 16); \
    b1 = extract32(two_b, 0, 16); \
@@ -XXX,XX +XXX,XX @@ ADVSIMD_TWOHALFOP(minnum)
ADVSIMD_TWOHALFOP(maxnum)

/* Data processing - scalar floating-point and advanced SIMD */
-static float16 float16_mulx(float16 a, float16 b, void *fpstp)
+static float16 float16_mulx(float16 a, float16 b, float_status *fpst)
{
-    float_status *fpst = fpstp;
-
    a = float16_squash_input_denormal(a, fpst);
    b = float16_squash_input_denormal(b, fpst);

@@ -XXX,XX +XXX,XX @@ ADVSIMD_TWOHALFOP(mulx)

/* fused multiply-accumulate */
uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
-                                 void *fpstp)
+                                 float_status *fpst)
{
-    float_status *fpst = fpstp;
    return float16_muladd(a, b, c, 0, fpst);
}

uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
-                                  uint32_t two_c, void *fpstp)
+                                  uint32_t two_c, float_status *fpst)
{
-    float_status *fpst = fpstp;
    float16 a1, a2, b1, b2, c1, c2;
    uint32_t r1, r2;
    a1 = extract32(two_a, 0, 16);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,

#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0

-uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    int compare = float16_compare_quiet(a, b, fpst);
    return ADVSIMD_CMPRES(compare == float_relation_equal);
}

-uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    int compare = float16_compare(a, b, fpst);
    return ADVSIMD_CMPRES(compare == float_relation_greater ||
                          compare == float_relation_equal);
}

-uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    int compare = float16_compare(a, b, fpst);
    return ADVSIMD_CMPRES(compare == float_relation_greater);
}

-uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    float16 f0 = float16_abs(a);
    float16 f1 = float16_abs(b);
    int compare = float16_compare(f0, f1, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
                          compare == float_relation_equal);
}

-uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, float_status *fpst)
{
-    float_status *fpst = fpstp;
    float16 f0 = float16_abs(a);
    float16 f1 = float16_abs(b);
    int compare = float16_compare(f0, f1, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
}

/* round to integral */
-uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, float_status *fp_status)
{
    return float16_round_to_int(x, fp_status);
}

-uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
+uint32_t HELPER(advsimd_rinth)(uint32_t x, float_status *fp_status)
{
    int old_flags = get_float_exception_flags(fp_status), new_flags;
    float16 ret;
--
2.34.1

    }
+    reset_target_data = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
+    flags &= ~PAGE_RESET;

    for (addr = start, len = end - start;
         len != 0;
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
            p->first_tb) {
            tb_invalidate_phys_page(addr, 0);
        }
+        if (reset_target_data && p->target_data) {
+            g_free(p->target_data);
+            p->target_data = NULL;
+        }
        p->flags = flags;
    }
}

+void *page_get_target_data(target_ulong address)
+{
+    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
+    return p ? p->target_data : NULL;
+}
+
+void *page_alloc_target_data(target_ulong address, size_t size)
+{
+    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
+    void *ret = NULL;
+
+    if (p->flags & PAGE_VALID) {
+        ret = p->target_data;
+        if (!ret) {
+            p->target_data = ret = g_malloc0(size);
+        }
+    }
+    return ret;
+}
+
int page_check_range(target_ulong start, target_ulong len, int flags)
{
    PageDesc *p;
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
        }
    }
 the_end1:
+    page_flags |= PAGE_RESET;
    page_set_flags(start, start + len, page_flags);
 the_end:
    trace_target_mmap_complete(start);
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
            new_addr = h2g(host_addr);
            prot = page_get_flags(old_addr);
            page_set_flags(old_addr, old_addr + old_size, 0);
-            page_set_flags(new_addr, new_addr + new_size, prot | PAGE_VALID);
+            page_set_flags(new_addr, new_addr + new_size,
+                           prot | PAGE_VALID | PAGE_RESET);
        }
    tb_invalidate_phys_range(new_addr, new_addr + new_size);
    mmap_unlock();
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
    raddr=h2g((unsigned long)host_raddr);

    page_set_flags(raddr, raddr + shm_info.shm_segsz,
-                   PAGE_VALID | PAGE_READ |
-                   ((shmflg & SHM_RDONLY)? 0 : PAGE_WRITE));
+                   PAGE_VALID | PAGE_RESET | PAGE_READ |
+                   (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));

    for (i = 0; i < N_SHM_REGIONS; i++) {
        if (!shm_regions[i].in_use) {
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         * Note that this must match useronly_clean_ptr.
         */
        env->cp15.tcr_el[1].raw_tcr = (1ULL << 37);
+
+        /* Enable MTE */
+        if (cpu_isar_feature(aa64_mte, cpu)) {
+            /* Enable tag access, but leave TCF0 as No Effect (0). */
+            env->cp15.sctlr_el[1] |= SCTLR_ATA0;
+            /*
+             * Exclude all tags, so that tag 0 is always used.
+             * This corresponds to Linux current->thread.gcr_incl = 0.
+             *
+             * Set RRND, so that helper_irg() will generate a seed later.
+             * Here in cpu_reset(), the crypto subsystem has not yet been
+             * initialized.
+             */
+            env->cp15.gcr_el1 = 0x1ffff;
+        }
 #else
        /* Reset into the highest available EL */
        if (arm_feature(env, ARM_FEATURE_EL3)) {

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241206031224.78525-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h         | 284 ++++++++++++++++++------------------
 target/arm/tcg/helper-a64.h |  18 +--
 target/arm/tcg/helper-sve.h |  12 +-
 target/arm/tcg/vec_helper.c |  60 ++++----
 4 files changed, 183 insertions(+), 191 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_usdot_idx_b, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(gvec_fcmlah, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(gvec_fcmlas, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
41
- void, ptr, ptr, ptr, ptr, ptr, i32)
42
+ void, ptr, ptr, ptr, ptr, fpst, i32)
43
DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
44
- void, ptr, ptr, ptr, ptr, ptr, i32)
45
+ void, ptr, ptr, ptr, ptr, fpst, i32)
46
47
-DEF_HELPER_FLAGS_4(gvec_sstoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
48
-DEF_HELPER_FLAGS_4(gvec_sitos, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
-DEF_HELPER_FLAGS_4(gvec_ustoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
50
-DEF_HELPER_FLAGS_4(gvec_uitos, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
51
-DEF_HELPER_FLAGS_4(gvec_tosszh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
52
-DEF_HELPER_FLAGS_4(gvec_tosizs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
53
-DEF_HELPER_FLAGS_4(gvec_touszh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
54
-DEF_HELPER_FLAGS_4(gvec_touizs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_4(gvec_sstoh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
56
+DEF_HELPER_FLAGS_4(gvec_sitos, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
57
+DEF_HELPER_FLAGS_4(gvec_ustoh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
58
+DEF_HELPER_FLAGS_4(gvec_uitos, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
59
+DEF_HELPER_FLAGS_4(gvec_tosszh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
60
+DEF_HELPER_FLAGS_4(gvec_tosizs, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
61
+DEF_HELPER_FLAGS_4(gvec_touszh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
62
+DEF_HELPER_FLAGS_4(gvec_touizs, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
63
64
-DEF_HELPER_FLAGS_4(gvec_vcvt_sf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
65
-DEF_HELPER_FLAGS_4(gvec_vcvt_uf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
66
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
67
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
68
+DEF_HELPER_FLAGS_4(gvec_vcvt_sf, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
69
+DEF_HELPER_FLAGS_4(gvec_vcvt_uf, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
70
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fs, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
71
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fu, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
72
73
-DEF_HELPER_FLAGS_4(gvec_vcvt_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
74
-DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
75
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
76
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
77
+DEF_HELPER_FLAGS_4(gvec_vcvt_sh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
78
+DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
79
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
80
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
81
82
-DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
83
-DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
84
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
85
-DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
86
+DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
87
+DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
88
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
89
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
90
91
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
92
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
93
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
94
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
95
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
96
-DEF_HELPER_FLAGS_4(gvec_vcvt_rm_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
97
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sd, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
98
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ud, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
99
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
100
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
101
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
102
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_uh, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
103
104
-DEF_HELPER_FLAGS_4(gvec_vrint_rm_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
105
-DEF_HELPER_FLAGS_4(gvec_vrint_rm_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
106
+DEF_HELPER_FLAGS_4(gvec_vrint_rm_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
107
+DEF_HELPER_FLAGS_4(gvec_vrint_rm_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
108
109
-DEF_HELPER_FLAGS_4(gvec_vrintx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
110
-DEF_HELPER_FLAGS_4(gvec_vrintx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
111
+DEF_HELPER_FLAGS_4(gvec_vrintx_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
112
+DEF_HELPER_FLAGS_4(gvec_vrintx_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
113
114
-DEF_HELPER_FLAGS_4(gvec_frecpe_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
115
-DEF_HELPER_FLAGS_4(gvec_frecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
116
-DEF_HELPER_FLAGS_4(gvec_frecpe_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
117
+DEF_HELPER_FLAGS_4(gvec_frecpe_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
118
+DEF_HELPER_FLAGS_4(gvec_frecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
119
+DEF_HELPER_FLAGS_4(gvec_frecpe_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
120
121
-DEF_HELPER_FLAGS_4(gvec_frsqrte_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
122
-DEF_HELPER_FLAGS_4(gvec_frsqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
123
-DEF_HELPER_FLAGS_4(gvec_frsqrte_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
124
+DEF_HELPER_FLAGS_4(gvec_frsqrte_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
125
+DEF_HELPER_FLAGS_4(gvec_frsqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
126
+DEF_HELPER_FLAGS_4(gvec_frsqrte_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
127
128
-DEF_HELPER_FLAGS_4(gvec_fcgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
129
-DEF_HELPER_FLAGS_4(gvec_fcgt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
130
-DEF_HELPER_FLAGS_4(gvec_fcgt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
131
+DEF_HELPER_FLAGS_4(gvec_fcgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
132
+DEF_HELPER_FLAGS_4(gvec_fcgt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
133
+DEF_HELPER_FLAGS_4(gvec_fcgt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
134
135
-DEF_HELPER_FLAGS_4(gvec_fcge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
136
-DEF_HELPER_FLAGS_4(gvec_fcge0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
137
-DEF_HELPER_FLAGS_4(gvec_fcge0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
138
+DEF_HELPER_FLAGS_4(gvec_fcge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
139
+DEF_HELPER_FLAGS_4(gvec_fcge0_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
140
+DEF_HELPER_FLAGS_4(gvec_fcge0_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
141
142
-DEF_HELPER_FLAGS_4(gvec_fceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
143
-DEF_HELPER_FLAGS_4(gvec_fceq0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
144
-DEF_HELPER_FLAGS_4(gvec_fceq0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
145
+DEF_HELPER_FLAGS_4(gvec_fceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
146
+DEF_HELPER_FLAGS_4(gvec_fceq0_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
147
+DEF_HELPER_FLAGS_4(gvec_fceq0_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
148
149
-DEF_HELPER_FLAGS_4(gvec_fcle0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
150
-DEF_HELPER_FLAGS_4(gvec_fcle0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
151
-DEF_HELPER_FLAGS_4(gvec_fcle0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
152
+DEF_HELPER_FLAGS_4(gvec_fcle0_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
153
+DEF_HELPER_FLAGS_4(gvec_fcle0_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
154
+DEF_HELPER_FLAGS_4(gvec_fcle0_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
155
156
-DEF_HELPER_FLAGS_4(gvec_fclt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
157
-DEF_HELPER_FLAGS_4(gvec_fclt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
158
-DEF_HELPER_FLAGS_4(gvec_fclt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
159
+DEF_HELPER_FLAGS_4(gvec_fclt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
160
+DEF_HELPER_FLAGS_4(gvec_fclt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
161
+DEF_HELPER_FLAGS_4(gvec_fclt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, fpst, i32)
162
163
-DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
164
-DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
165
-DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
166
+DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
167
+DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
168
+DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
169
170
-DEF_HELPER_FLAGS_5(gvec_fsub_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
171
-DEF_HELPER_FLAGS_5(gvec_fsub_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
172
-DEF_HELPER_FLAGS_5(gvec_fsub_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
173
+DEF_HELPER_FLAGS_5(gvec_fsub_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
174
+DEF_HELPER_FLAGS_5(gvec_fsub_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
175
+DEF_HELPER_FLAGS_5(gvec_fsub_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
176
177
-DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
178
-DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
179
-DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
180
+DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
181
+DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
182
+DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
183
184
-DEF_HELPER_FLAGS_5(gvec_fabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
185
-DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
186
-DEF_HELPER_FLAGS_5(gvec_fabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
187
+DEF_HELPER_FLAGS_5(gvec_fabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
188
+DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
189
+DEF_HELPER_FLAGS_5(gvec_fabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
190
191
-DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
192
-DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
193
-DEF_HELPER_FLAGS_5(gvec_fceq_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
194
+DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
195
+DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
196
+DEF_HELPER_FLAGS_5(gvec_fceq_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
197
198
-DEF_HELPER_FLAGS_5(gvec_fcge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
199
-DEF_HELPER_FLAGS_5(gvec_fcge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
200
-DEF_HELPER_FLAGS_5(gvec_fcge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
201
+DEF_HELPER_FLAGS_5(gvec_fcge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
202
+DEF_HELPER_FLAGS_5(gvec_fcge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
203
+DEF_HELPER_FLAGS_5(gvec_fcge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
204
205
-DEF_HELPER_FLAGS_5(gvec_fcgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
206
-DEF_HELPER_FLAGS_5(gvec_fcgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
207
-DEF_HELPER_FLAGS_5(gvec_fcgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
208
+DEF_HELPER_FLAGS_5(gvec_fcgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
209
+DEF_HELPER_FLAGS_5(gvec_fcgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
210
+DEF_HELPER_FLAGS_5(gvec_fcgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
211
212
-DEF_HELPER_FLAGS_5(gvec_facge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
213
-DEF_HELPER_FLAGS_5(gvec_facge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
214
-DEF_HELPER_FLAGS_5(gvec_facge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
215
+DEF_HELPER_FLAGS_5(gvec_facge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
216
+DEF_HELPER_FLAGS_5(gvec_facge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
217
+DEF_HELPER_FLAGS_5(gvec_facge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
218
219
-DEF_HELPER_FLAGS_5(gvec_facgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
220
-DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
221
-DEF_HELPER_FLAGS_5(gvec_facgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
222
+DEF_HELPER_FLAGS_5(gvec_facgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
223
+DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
224
+DEF_HELPER_FLAGS_5(gvec_facgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
225
226
-DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
227
-DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
228
-DEF_HELPER_FLAGS_5(gvec_fmax_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
229
+DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
230
+DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
231
+DEF_HELPER_FLAGS_5(gvec_fmax_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
232
233
-DEF_HELPER_FLAGS_5(gvec_fmin_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
234
-DEF_HELPER_FLAGS_5(gvec_fmin_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
235
-DEF_HELPER_FLAGS_5(gvec_fmin_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
236
+DEF_HELPER_FLAGS_5(gvec_fmin_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
237
+DEF_HELPER_FLAGS_5(gvec_fmin_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
238
+DEF_HELPER_FLAGS_5(gvec_fmin_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
239
240
-DEF_HELPER_FLAGS_5(gvec_fmaxnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
241
-DEF_HELPER_FLAGS_5(gvec_fmaxnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
242
-DEF_HELPER_FLAGS_5(gvec_fmaxnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
243
+DEF_HELPER_FLAGS_5(gvec_fmaxnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
244
+DEF_HELPER_FLAGS_5(gvec_fmaxnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
245
+DEF_HELPER_FLAGS_5(gvec_fmaxnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
246
247
-DEF_HELPER_FLAGS_5(gvec_fminnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
248
-DEF_HELPER_FLAGS_5(gvec_fminnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
249
-DEF_HELPER_FLAGS_5(gvec_fminnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
250
+DEF_HELPER_FLAGS_5(gvec_fminnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
251
+DEF_HELPER_FLAGS_5(gvec_fminnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
252
+DEF_HELPER_FLAGS_5(gvec_fminnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
253
254
-DEF_HELPER_FLAGS_5(gvec_recps_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
255
-DEF_HELPER_FLAGS_5(gvec_recps_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
256
+DEF_HELPER_FLAGS_5(gvec_recps_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
257
+DEF_HELPER_FLAGS_5(gvec_recps_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
258
259
-DEF_HELPER_FLAGS_5(gvec_rsqrts_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
260
-DEF_HELPER_FLAGS_5(gvec_rsqrts_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
261
+DEF_HELPER_FLAGS_5(gvec_rsqrts_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
262
+DEF_HELPER_FLAGS_5(gvec_rsqrts_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
263
264
-DEF_HELPER_FLAGS_5(gvec_fmla_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
265
-DEF_HELPER_FLAGS_5(gvec_fmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
266
+DEF_HELPER_FLAGS_5(gvec_fmla_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
267
+DEF_HELPER_FLAGS_5(gvec_fmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fmls_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmls_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_vfma_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_vfma_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_vfma_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfma_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_vfma_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_vfma_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_vfms_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_vfms_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_vfms_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfms_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_vfms_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_vfms_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_fmul_idx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_fmul_idx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_fmul_idx_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_fmla_nf_idx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_fmla_nf_idx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_fmls_nf_idx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_fmls_nf_idx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_6(gvec_fmla_idx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_6(gvec_fmla_idx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_6(gvec_fmla_idx_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_uqadd_b, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmmla, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, env, i32)

 DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_sclamp_b, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
 void, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fmaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmaxp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fminp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fmaxnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmaxnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmaxnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_4(gvec_addp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
 DEF_HELPER_FLAGS_1(guarded_page_check, TCG_CALL_NO_WG, void, env)
 DEF_HELPER_FLAGS_2(guarded_page_br, TCG_CALL_NO_RWG, void, env, tl)

-DEF_HELPER_FLAGS_5(gvec_fdiv_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fdiv_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fdiv_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(gvec_fmulx_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmulx_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fmulx_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
diff --git a/target/arm/tcg/helper-sve.h b/target/arm/tcg/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-sve.h
+++ b/target/arm/tcg/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)

 DEF_HELPER_FLAGS_5(gvec_recps_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_recps_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_recps_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_5(gvec_rsqrts_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

 DEF_HELPER_FLAGS_4(sve_faddv_h, TCG_CALL_NO_RWG,
 i64, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_DOT_IDX(gvec_sdot_idx_h, int64_t, int16_t, int16_t, H8)
 DO_DOT_IDX(gvec_udot_idx_h, uint64_t, uint16_t, uint16_t, H8)

 void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float16 *d = vd;
 float16 *n = vn;
 float16 *m = vm;
- float_status *fpst = vfpst;
 uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = neg_real ^ 1;
 uintptr_t i;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
 }

 void HELPER(gvec_fcadds)(void *vd, void *vn, void *vm,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float32 *d = vd;
 float32 *n = vn;
 float32 *m = vm;
- float_status *fpst = vfpst;
 uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = neg_real ^ 1;
 uintptr_t i;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcadds)(void *vd, void *vn, void *vm,
 }

 void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float64 *d = vd;
 float64 *n = vn;
 float64 *m = vm;
- float_status *fpst = vfpst;
 uint64_t neg_real = extract64(desc, SIMD_DATA_SHIFT, 1);
 uint64_t neg_imag = neg_real ^ 1;
 uintptr_t i;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
 }

 void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm, void *va,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float16 *d = vd, *n = vn, *m = vm, *a = va;
- float_status *fpst = vfpst;
 intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
 uint32_t neg_real = flip ^ neg_imag;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm, void *va,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float16 *d = vd, *n = vn, *m = vm, *a = va;
- float_status *fpst = vfpst;
 intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
 intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm, void *va,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float32 *d = vd, *n = vn, *m = vm, *a = va;
- float_status *fpst = vfpst;
 intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
 uint32_t neg_real = flip ^ neg_imag;
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm, void *va,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float32 *d = vd, *n = vn, *m = vm, *a = va;
- float_status *fpst = vfpst;
 intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
 intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm, void *va,
- void *vfpst, uint32_t desc)
+ float_status *fpst, uint32_t desc)
 {
 uintptr_t opr_sz = simd_oprsz(desc);
 float64 *d = vd, *n = vn, *m = vm, *a = va;
- float_status *fpst = vfpst;
 intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
 uint64_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
 uint64_t neg_real = flip ^ neg_imag;
@@ -XXX,XX +XXX,XX @@ static uint64_t float64_acgt(float64 op1, float64 op2, float_status *stat)
 return -float64_lt(float64_abs(op2), float64_abs(op1), stat);
 }

-static int16_t vfp_tosszh(float16 x, void *fpstp)
+static int16_t vfp_tosszh(float16 x, float_status *fpst)
 {
- float_status *fpst = fpstp;
 if (float16_is_any_nan(x)) {
 float_raise(float_flag_invalid, fpst);
 return 0;
@@ -XXX,XX +XXX,XX @@ static int16_t vfp_tosszh(float16 x, void *fpstp)
 return float16_to_int16_round_to_zero(x, fpst);
 }

-static uint16_t vfp_touszh(float16 x, void *fpstp)
+static uint16_t vfp_touszh(float16 x, float_status *fpst)
 {
- float_status *fpst = fpstp;
 if (float16_is_any_nan(x)) {
 float_raise(float_flag_invalid, fpst);
 return 0;
@@ -XXX,XX +XXX,XX @@ static uint16_t vfp_touszh(float16 x, void *fpstp)
 }

 #define DO_2OP(NAME, FUNC, TYPE) \
-void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
+void HELPER(NAME)(void *vd, void *vn, float_status *stat, uint32_t desc) \
 { \
 intptr_t i, oprsz = simd_oprsz(desc); \
 TYPE *d = vd, *n = vn; \
@@ -XXX,XX +XXX,XX @@ static float32 float32_rsqrts_nf(float32 op1, float32 op2, float_status *stat)
 }

 #define DO_3OP(NAME, FUNC, TYPE) \
-void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, \
+ float_status *stat, uint32_t desc) \
 { \
 intptr_t i, oprsz = simd_oprsz(desc); \
 TYPE *d = vd, *n = vn, *m = vm; \
@@ -XXX,XX +XXX,XX @@ static float64 float64_mulsub_f(float64 dest, float64 op1, float64 op2,
 return float64_muladd(float64_chs(op1), op2, dest, 0, stat);
 }

-#define DO_MULADD(NAME, FUNC, TYPE) \
-void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+#define DO_MULADD(NAME, FUNC, TYPE) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, \
+ float_status *stat, uint32_t desc) \
 { \
 intptr_t i, oprsz = simd_oprsz(desc); \
 TYPE *d = vd, *n = vn, *m = vm; \
@@ -XXX,XX +XXX,XX @@ DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, H8)
 #undef DO_MLA_IDX

 #define DO_FMUL_IDX(NAME, ADD, MUL, TYPE, H) \
-void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, \
+ float_status *stat, uint32_t desc) \
 { \
 intptr_t i, j, oprsz = simd_oprsz(desc); \
 intptr_t segment = MIN(16, oprsz) / sizeof(TYPE); \
@@ -XXX,XX +XXX,XX @@ DO_FMUL_IDX(gvec_fmls_nf_idx_s, float32_sub, float32_mul, float32, H4)

 #define DO_FMLA_IDX(NAME, TYPE, H) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
- void *stat, uint32_t desc) \
+ float_status *stat, uint32_t desc) \
 { \
 intptr_t i, j, oprsz = simd_oprsz(desc); \
 intptr_t segment = MIN(16, oprsz) / sizeof(TYPE); \
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
 #undef DO_ABA

 #define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
-void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, \
+ float_status *stat, uint32_t desc) \
 { \
 ARMVectorReg scratch; \
 intptr_t oprsz = simd_oprsz(desc); \
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
 #undef DO_3OP_PAIR

 #define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
- void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
+ void HELPER(NAME)(void *vd, void *vn, float_status *stat, uint32_t desc) \
 { \
 intptr_t i, oprsz = simd_oprsz(desc); \
 int shift = simd_data(desc); \
@@ -XXX,XX +XXX,XX @@ DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
 #undef DO_VCVT_FIXED

 #define DO_VCVT_RMODE(NAME, FUNC, TYPE) \
- void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
+ void HELPER(NAME)(void *vd, void *vn, float_status *fpst, uint32_t desc) \
 { \
- float_status *fpst = stat; \
 intptr_t i, oprsz = simd_oprsz(desc); \
 uint32_t rmode = simd_data(desc); \
 uint32_t prev_rmode = get_float_rounding_mode(fpst); \
@@ -XXX,XX +XXX,XX @@ DO_VCVT_RMODE(gvec_vcvt_rm_uh, helper_vfp_touhh, uint16_t)
 #undef DO_VCVT_RMODE

 #define DO_VRINT_RMODE(NAME, FUNC, TYPE) \
- void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
+ void HELPER(NAME)(void *vd, void *vn, float_status *fpst, uint32_t desc) \
 { \
- float_status *fpst = stat; \
 intptr_t i, oprsz = simd_oprsz(desc); \
 uint32_t rmode = simd_data(desc); \
 uint32_t prev_rmode = get_float_rounding_mode(fpst); \
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
- void *stat, uint32_t desc)
+ float_status *stat, uint32_t desc)
 {
 intptr_t i, opr_sz = simd_oprsz(desc);
 intptr_t sel = simd_data(desc);
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
 }

 void HELPER(gvec_bfmlal_idx)(void *vd, void *vn, void *vm,
- void *va, void *stat, uint32_t desc)
+ void *va, float_status *stat, uint32_t desc)
 {
 intptr_t i, j, opr_sz = simd_oprsz(desc);
 intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1);
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241206031224.78525-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 14 +++++++-------
 target/arm/tcg/neon_helper.c | 21 +++++++--------------
 2 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qneg_s16, TCG_CALL_NO_RWG, i32, env, i32)
 DEF_HELPER_FLAGS_2(neon_qneg_s32, TCG_CALL_NO_RWG, i32, env, i32)
 DEF_HELPER_FLAGS_2(neon_qneg_s64, TCG_CALL_NO_RWG, i64, env, i64)

-DEF_HELPER_3(neon_ceq_f32, i32, i32, i32, ptr)
-DEF_HELPER_3(neon_cge_f32, i32, i32, i32, ptr)
-DEF_HELPER_3(neon_cgt_f32, i32, i32, i32, ptr)
-DEF_HELPER_3(neon_acge_f32, i32, i32, i32, ptr)
-DEF_HELPER_3(neon_acgt_f32, i32, i32, i32, ptr)
-DEF_HELPER_3(neon_acge_f64, i64, i64, i64, ptr)
-DEF_HELPER_3(neon_acgt_f64, i64, i64, i64, ptr)
+DEF_HELPER_3(neon_ceq_f32, i32, i32, i32, fpst)
+DEF_HELPER_3(neon_cge_f32, i32, i32, i32, fpst)
+DEF_HELPER_3(neon_cgt_f32, i32, i32, i32, fpst)
+DEF_HELPER_3(neon_acge_f32, i32, i32, i32, fpst)
+DEF_HELPER_3(neon_acgt_f32, i32, i32, i32, fpst)
+DEF_HELPER_3(neon_acge_f64, i64, i64, i64, fpst)
+DEF_HELPER_3(neon_acgt_f64, i64, i64, i64, fpst)

 /* iwmmxt_helper.c */
 DEF_HELPER_2(iwmmxt_maddsq, i64, i64, i64)
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_qneg_s64)(CPUARMState *env, uint64_t x)
 * Note that EQ doesn't signal InvalidOp for QNaNs but GE and GT do.
 * Softfloat routines return 0/1, which we convert to the 0/-1 Neon requires.
 */
-uint32_t HELPER(neon_ceq_f32)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(neon_ceq_f32)(uint32_t a, uint32_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 return -float32_eq_quiet(make_float32(a), make_float32(b), fpst);
 }

-uint32_t HELPER(neon_cge_f32)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(neon_cge_f32)(uint32_t a, uint32_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 return -float32_le(make_float32(b), make_float32(a), fpst);
 }

-uint32_t HELPER(neon_cgt_f32)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(neon_cgt_f32)(uint32_t a, uint32_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 return -float32_lt(make_float32(b), make_float32(a), fpst);
 }

-uint32_t HELPER(neon_acge_f32)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(neon_acge_f32)(uint32_t a, uint32_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 float32 f0 = float32_abs(make_float32(a));
 float32 f1 = float32_abs(make_float32(b));
 return -float32_le(f1, f0, fpst);
 }

-uint32_t HELPER(neon_acgt_f32)(uint32_t a, uint32_t b, void *fpstp)
+uint32_t HELPER(neon_acgt_f32)(uint32_t a, uint32_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 float32 f0 = float32_abs(make_float32(a));
 float32 f1 = float32_abs(make_float32(b));
 return -float32_lt(f1, f0, fpst);
 }

-uint64_t HELPER(neon_acge_f64)(uint64_t a, uint64_t b, void *fpstp)
+uint64_t HELPER(neon_acge_f64)(uint64_t a, uint64_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 float64 f0 = float64_abs(make_float64(a));
 float64 f1 = float64_abs(make_float64(b));
 return -float64_le(f1, f0, fpst);
 }

-uint64_t HELPER(neon_acgt_f64)(uint64_t a, uint64_t b, void *fpstp)
+uint64_t HELPER(neon_acgt_f64)(uint64_t a, uint64_t b, float_status *fpst)
 {
- float_status *fpst = fpstp;
 float64 f0 = float64_abs(make_float64(a));
 float64 f1 = float64_abs(make_float64(b));
 return -float64_lt(f1, f0, fpst);
 }
return ret;
316
}
317
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr ptr)
318
uint32_t ret;
319
320
set_helper_retaddr(1);
321
- ret = lduw_p(g2h(ptr));
322
+ ret = lduw_p(g2h_untagged(ptr));
323
clear_helper_retaddr();
324
return ret;
325
}
326
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr ptr)
327
uint32_t ret;
328
329
set_helper_retaddr(1);
330
- ret = ldl_p(g2h(ptr));
331
+ ret = ldl_p(g2h_untagged(ptr));
332
clear_helper_retaddr();
333
return ret;
334
}
335
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr ptr)
336
uint64_t ret;
337
338
set_helper_retaddr(1);
339
- ret = ldq_p(g2h(ptr));
340
+ ret = ldq_p(g2h_untagged(ptr));
341
clear_helper_retaddr();
342
return ret;
343
}
344
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
345
if (unlikely(addr & (size - 1))) {
346
cpu_loop_exit_atomic(env_cpu(env), retaddr);
347
}
348
- void *ret = g2h(addr);
349
+ void *ret = g2h(env_cpu(env), addr);
350
set_helper_retaddr(retaddr);
351
return ret;
352
}
353
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
354
index XXXXXXX..XXXXXXX 100644
355
--- a/linux-user/elfload.c
356
+++ b/linux-user/elfload.c
357
@@ -XXX,XX +XXX,XX @@ enum {
358
359
static bool init_guest_commpage(void)
360
{
361
- void *want = g2h(ARM_COMMPAGE & -qemu_host_page_size);
362
+ void *want = g2h_untagged(ARM_COMMPAGE & -qemu_host_page_size);
363
void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
364
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
365
366
@@ -XXX,XX +XXX,XX @@ static bool init_guest_commpage(void)
367
}
368
369
/* Set kernel helper versions; rest of page is 0. */
370
- __put_user(5, (uint32_t *)g2h(0xffff0ffcu));
371
+ __put_user(5, (uint32_t *)g2h_untagged(0xffff0ffcu));
372
373
if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
374
perror("Protecting guest commpage");
375
@@ -XXX,XX +XXX,XX @@ static void zero_bss(abi_ulong elf_bss, abi_ulong last_bss, int prot)
376
here is still actually needed. For now, continue with it,
377
but merge it with the "normal" mmap that would allocate the bss. */
378
379
- host_start = (uintptr_t) g2h(elf_bss);
380
- host_end = (uintptr_t) g2h(last_bss);
381
+ host_start = (uintptr_t) g2h_untagged(elf_bss);
382
+ host_end = (uintptr_t) g2h_untagged(last_bss);
383
host_map_start = REAL_HOST_PAGE_ALIGN(host_start);
384
385
if (host_map_start < host_end) {
386
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
387
}
388
389
/* Reserve the address space for the binary, or reserved_va. */
390
- test = g2h(guest_loaddr);
391
+ test = g2h_untagged(guest_loaddr);
392
addr = mmap(test, guest_hiaddr - guest_loaddr, PROT_NONE, flags, -1, 0);
393
if (test != addr) {
394
pgb_fail_in_use(image_name);
395
@@ -XXX,XX +XXX,XX @@ static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr,
396
397
/* Reserve the memory on the host. */
398
assert(guest_base != 0);
399
- test = g2h(0);
400
+ test = g2h_untagged(0);
401
addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0);
402
if (addr == MAP_FAILED || addr != test) {
403
error_report("Unable to reserve 0x%lx bytes of virtual address "
404
diff --git a/linux-user/flatload.c b/linux-user/flatload.c
405
index XXXXXXX..XXXXXXX 100644
406
--- a/linux-user/flatload.c
407
+++ b/linux-user/flatload.c
408
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
409
}
410
411
/* zero the BSS. */
412
- memset(g2h(datapos + data_len), 0, bss_len);
413
+ memset(g2h_untagged(datapos + data_len), 0, bss_len);
414
415
return 0;
416
}
417
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
418
index XXXXXXX..XXXXXXX 100644
419
--- a/linux-user/hppa/cpu_loop.c
420
+++ b/linux-user/hppa/cpu_loop.c
421
@@ -XXX,XX +XXX,XX @@
422
423
static abi_ulong hppa_lws(CPUHPPAState *env)
424
{
425
+ CPUState *cs = env_cpu(env);
426
uint32_t which = env->gr[20];
427
abi_ulong addr = env->gr[26];
428
abi_ulong old = env->gr[25];
429
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
430
}
431
old = tswap32(old);
432
new = tswap32(new);
433
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
434
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
435
ret = tswap32(ret);
436
break;
437
438
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
439
can be host-endian as well. */
440
switch (size) {
441
case 0:
442
- old = *(uint8_t *)g2h(old);
443
- new = *(uint8_t *)g2h(new);
444
- ret = qatomic_cmpxchg((uint8_t *)g2h(addr), old, new);
445
+ old = *(uint8_t *)g2h(cs, old);
446
+ new = *(uint8_t *)g2h(cs, new);
447
+ ret = qatomic_cmpxchg((uint8_t *)g2h(cs, addr), old, new);
448
ret = ret != old;
449
break;
450
case 1:
451
- old = *(uint16_t *)g2h(old);
452
- new = *(uint16_t *)g2h(new);
453
- ret = qatomic_cmpxchg((uint16_t *)g2h(addr), old, new);
454
+ old = *(uint16_t *)g2h(cs, old);
455
+ new = *(uint16_t *)g2h(cs, new);
456
+ ret = qatomic_cmpxchg((uint16_t *)g2h(cs, addr), old, new);
457
ret = ret != old;
458
break;
459
case 2:
460
- old = *(uint32_t *)g2h(old);
461
- new = *(uint32_t *)g2h(new);
462
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
463
+ old = *(uint32_t *)g2h(cs, old);
464
+ new = *(uint32_t *)g2h(cs, new);
465
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
466
ret = ret != old;
467
break;
468
case 3:
469
{
470
uint64_t o64, n64, r64;
471
- o64 = *(uint64_t *)g2h(old);
472
- n64 = *(uint64_t *)g2h(new);
473
+ o64 = *(uint64_t *)g2h(cs, old);
474
+ n64 = *(uint64_t *)g2h(cs, new);
475
#ifdef CONFIG_ATOMIC64
476
- r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(addr),
477
+ r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(cs, addr),
478
o64, n64);
479
ret = r64 != o64;
480
#else
481
start_exclusive();
482
- r64 = *(uint64_t *)g2h(addr);
483
+ r64 = *(uint64_t *)g2h(cs, addr);
484
ret = 1;
485
if (r64 == o64) {
486
- *(uint64_t *)g2h(addr) = n64;
487
+ *(uint64_t *)g2h(cs, addr) = n64;
488
ret = 0;
489
}
490
end_exclusive();
491
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
492
index XXXXXXX..XXXXXXX 100644
493
--- a/linux-user/i386/cpu_loop.c
494
+++ b/linux-user/i386/cpu_loop.c
495
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
496
env->idt.base = target_mmap(0, sizeof(uint64_t) * (env->idt.limit + 1),
497
PROT_READ|PROT_WRITE,
498
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
499
- idt_table = g2h(env->idt.base);
500
+ idt_table = g2h_untagged(env->idt.base);
501
set_idt(0, 0);
502
set_idt(1, 0);
503
set_idt(2, 0);
504
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
505
PROT_READ|PROT_WRITE,
506
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
507
env->gdt.limit = sizeof(uint64_t) * TARGET_GDT_ENTRIES - 1;
508
- gdt_table = g2h(env->gdt.base);
509
+ gdt_table = g2h_untagged(env->gdt.base);
510
#ifdef TARGET_ABI32
511
write_dt(&gdt_table[__USER_CS >> 3], 0, 0xfffff,
512
DESC_G_MASK | DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
513
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
514
index XXXXXXX..XXXXXXX 100644
515
--- a/linux-user/mmap.c
516
+++ b/linux-user/mmap.c
517
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
518
}
519
end = host_end;
520
}
521
- ret = mprotect(g2h(host_start), qemu_host_page_size,
522
+ ret = mprotect(g2h_untagged(host_start), qemu_host_page_size,
523
prot1 & PAGE_BITS);
524
if (ret != 0) {
525
goto error;
526
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
527
for (addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
528
prot1 |= page_get_flags(addr);
529
}
530
- ret = mprotect(g2h(host_end - qemu_host_page_size),
531
+ ret = mprotect(g2h_untagged(host_end - qemu_host_page_size),
532
qemu_host_page_size, prot1 & PAGE_BITS);
533
if (ret != 0) {
534
goto error;
535
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
536
537
/* handle the pages in the middle */
538
if (host_start < host_end) {
539
- ret = mprotect(g2h(host_start), host_end - host_start, host_prot);
540
+ ret = mprotect(g2h_untagged(host_start),
541
+ host_end - host_start, host_prot);
542
if (ret != 0) {
543
goto error;
544
}
545
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
546
int prot1, prot_new;
547
548
real_end = real_start + qemu_host_page_size;
549
- host_start = g2h(real_start);
550
+ host_start = g2h_untagged(real_start);
551
552
/* get the protection of the target pages outside the mapping */
553
prot1 = 0;
554
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
555
mprotect(host_start, qemu_host_page_size, prot1 | PROT_WRITE);
556
557
/* read the corresponding file data */
558
- if (pread(fd, g2h(start), end - start, offset) == -1)
559
+ if (pread(fd, g2h_untagged(start), end - start, offset) == -1)
560
return -1;
561
562
/* put final protection */
563
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
564
mprotect(host_start, qemu_host_page_size, prot_new);
565
}
566
if (prot_new & PROT_WRITE) {
567
- memset(g2h(start), 0, end - start);
568
+ memset(g2h_untagged(start), 0, end - start);
569
}
570
}
571
return 0;
572
@@ -XXX,XX +XXX,XX @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
573
* - mremap() with MREMAP_FIXED flag
574
* - shmat() with SHM_REMAP flag
575
*/
576
- ptr = mmap(g2h(addr), size, PROT_NONE,
577
+ ptr = mmap(g2h_untagged(addr), size, PROT_NONE,
578
MAP_ANONYMOUS|MAP_PRIVATE|MAP_NORESERVE, -1, 0);
579
580
/* ENOMEM, if host address space has no memory */
581
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
582
/* Note: we prefer to control the mapping address. It is
583
especially important if qemu_host_page_size >
584
qemu_real_host_page_size */
585
- p = mmap(g2h(start), host_len, host_prot,
586
+ p = mmap(g2h_untagged(start), host_len, host_prot,
587
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
588
if (p == MAP_FAILED) {
589
goto fail;
590
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
591
/* update start so that it points to the file position at 'offset' */
592
host_start = (unsigned long)p;
593
if (!(flags & MAP_ANONYMOUS)) {
594
- p = mmap(g2h(start), len, host_prot,
595
+ p = mmap(g2h_untagged(start), len, host_prot,
596
flags | MAP_FIXED, fd, host_offset);
597
if (p == MAP_FAILED) {
598
- munmap(g2h(start), host_len);
599
+ munmap(g2h_untagged(start), host_len);
600
goto fail;
601
}
602
host_start += offset - host_offset;
603
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
604
-1, 0);
605
if (retaddr == -1)
606
goto fail;
607
- if (pread(fd, g2h(start), len, offset) == -1)
608
+ if (pread(fd, g2h_untagged(start), len, offset) == -1)
609
goto fail;
610
if (!(host_prot & PROT_WRITE)) {
611
ret = target_mprotect(start, len, target_prot);
612
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
613
offset1 = 0;
614
else
615
offset1 = offset + real_start - start;
616
- p = mmap(g2h(real_start), real_end - real_start,
617
+ p = mmap(g2h_untagged(real_start), real_end - real_start,
618
host_prot, flags, fd, offset1);
619
if (p == MAP_FAILED)
620
goto fail;
621
@@ -XXX,XX +XXX,XX @@ static void mmap_reserve(abi_ulong start, abi_ulong size)
622
real_end -= qemu_host_page_size;
623
}
624
if (real_start != real_end) {
625
- mmap(g2h(real_start), real_end - real_start, PROT_NONE,
626
+ mmap(g2h_untagged(real_start), real_end - real_start, PROT_NONE,
627
MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE,
628
-1, 0);
629
}
630
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
631
if (reserved_va) {
632
mmap_reserve(real_start, real_end - real_start);
633
} else {
634
- ret = munmap(g2h(real_start), real_end - real_start);
635
+ ret = munmap(g2h_untagged(real_start), real_end - real_start);
636
}
637
}
638
639
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
640
mmap_lock();
641
642
if (flags & MREMAP_FIXED) {
643
- host_addr = mremap(g2h(old_addr), old_size, new_size,
644
- flags, g2h(new_addr));
645
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
646
+ flags, g2h_untagged(new_addr));
647
648
if (reserved_va && host_addr != MAP_FAILED) {
649
/* If new and old addresses overlap then the above mremap will
650
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
651
errno = ENOMEM;
652
host_addr = MAP_FAILED;
653
} else {
654
- host_addr = mremap(g2h(old_addr), old_size, new_size,
655
- flags | MREMAP_FIXED, g2h(mmap_start));
656
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
657
+ flags | MREMAP_FIXED,
658
+ g2h_untagged(mmap_start));
659
if (reserved_va) {
660
mmap_reserve(old_addr, old_size);
661
}
662
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
663
}
664
}
665
if (prot == 0) {
666
- host_addr = mremap(g2h(old_addr), old_size, new_size, flags);
667
+ host_addr = mremap(g2h_untagged(old_addr),
668
+ old_size, new_size, flags);
669
670
if (host_addr != MAP_FAILED) {
671
/* Check if address fits target address space */
672
if (!guest_range_valid(h2g(host_addr), new_size)) {
673
/* Revert mremap() changes */
674
- host_addr = mremap(g2h(old_addr), new_size, old_size,
675
- flags);
676
+ host_addr = mremap(g2h_untagged(old_addr),
677
+ new_size, old_size, flags);
678
errno = ENOMEM;
679
host_addr = MAP_FAILED;
680
} else if (reserved_va && old_size > new_size) {
681
diff --git a/linux-user/ppc/signal.c b/linux-user/ppc/signal.c
682
index XXXXXXX..XXXXXXX 100644
683
--- a/linux-user/ppc/signal.c
684
+++ b/linux-user/ppc/signal.c
685
@@ -XXX,XX +XXX,XX @@ static void restore_user_regs(CPUPPCState *env,
686
uint64_t v_addr;
687
/* 64-bit needs to recover the pointer to the vectors from the frame */
688
__get_user(v_addr, &frame->v_regs);
689
- v_regs = g2h(v_addr);
690
+ v_regs = g2h(env_cpu(env), v_addr);
691
#else
692
v_regs = (ppc_avr_t *)frame->mc_vregs.altivec;
693
#endif
694
@@ -XXX,XX +XXX,XX @@ void setup_rt_frame(int sig, struct target_sigaction *ka,
695
if (get_ppc64_abi(image) < 2) {
696
/* ELFv1 PPC64 function pointers are pointers to OPD entries. */
697
struct target_func_ptr *handler =
698
- (struct target_func_ptr *)g2h(ka->_sa_handler);
699
+ (struct target_func_ptr *)g2h(env_cpu(env), ka->_sa_handler);
700
env->nip = tswapl(handler->entry);
701
env->gpr[2] = tswapl(handler->toc);
702
} else {
703
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
704
index XXXXXXX..XXXXXXX 100644
705
--- a/linux-user/syscall.c
706
+++ b/linux-user/syscall.c
707
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
708
/* Heap contents are initialized to zero, as for anonymous
709
* mapped pages. */
710
if (new_brk > target_brk) {
711
- memset(g2h(target_brk), 0, new_brk - target_brk);
712
+ memset(g2h_untagged(target_brk), 0, new_brk - target_brk);
713
}
714
    target_brk = new_brk;
715
DEBUGF_BRK(TARGET_ABI_FMT_lx " (new_brk <= brk_page)\n", target_brk);
716
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
717
* come from the remaining part of the previous page: it may
718
* contains garbage data due to a previous heap usage (grown
719
* then shrunken). */
720
- memset(g2h(target_brk), 0, brk_page - target_brk);
721
+ memset(g2h_untagged(target_brk), 0, brk_page - target_brk);
722
723
target_brk = new_brk;
724
brk_page = HOST_PAGE_ALIGN(target_brk);
725
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
726
mmap_lock();
727
728
if (shmaddr)
729
- host_raddr = shmat(shmid, (void *)g2h(shmaddr), shmflg);
730
+ host_raddr = shmat(shmid, (void *)g2h_untagged(shmaddr), shmflg);
731
else {
732
abi_ulong mmap_start;
733
734
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
735
errno = ENOMEM;
736
host_raddr = (void *)-1;
737
} else
738
- host_raddr = shmat(shmid, g2h(mmap_start), shmflg | SHM_REMAP);
739
+ host_raddr = shmat(shmid, g2h_untagged(mmap_start),
740
+ shmflg | SHM_REMAP);
741
}
742
743
if (host_raddr == (void *)-1) {
744
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
745
break;
746
}
747
}
748
- rv = get_errno(shmdt(g2h(shmaddr)));
749
+ rv = get_errno(shmdt(g2h_untagged(shmaddr)));
750
751
mmap_unlock();
752
753
@@ -XXX,XX +XXX,XX @@ static abi_long write_ldt(CPUX86State *env,
754
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
755
if (env->ldt.base == -1)
756
return -TARGET_ENOMEM;
757
- memset(g2h(env->ldt.base), 0,
758
+ memset(g2h_untagged(env->ldt.base), 0,
759
TARGET_LDT_ENTRIES * TARGET_LDT_ENTRY_SIZE);
760
env->ldt.limit = 0xffff;
761
- ldt_table = g2h(env->ldt.base);
762
+ ldt_table = g2h_untagged(env->ldt.base);
763
}
764
765
/* NOTE: same code as Linux kernel */
766
@@ -XXX,XX +XXX,XX @@ static abi_long do_modify_ldt(CPUX86State *env, int func, abi_ulong ptr,
767
#if defined(TARGET_ABI32)
768
abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
769
{
770
- uint64_t *gdt_table = g2h(env->gdt.base);
771
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
772
struct target_modify_ldt_ldt_s ldt_info;
773
struct target_modify_ldt_ldt_s *target_ldt_info;
774
int seg_32bit, contents, read_exec_only, limit_in_pages;
775
@@ -XXX,XX +XXX,XX @@ install:
776
static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
777
{
778
struct target_modify_ldt_ldt_s *target_ldt_info;
779
- uint64_t *gdt_table = g2h(env->gdt.base);
780
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
781
uint32_t base_addr, limit, flags;
782
int seg_32bit, contents, read_exec_only, limit_in_pages, idx;
783
int seg_not_present, useable, lm;
784
@@ -XXX,XX +XXX,XX @@ static int do_safe_futex(int *uaddr, int op, int val,
785
tricky. However they're probably useless because guest atomic
786
operations won't work either. */
787
#if defined(TARGET_NR_futex)
788
-static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
789
- target_ulong uaddr2, int val3)
790
+static int do_futex(CPUState *cpu, target_ulong uaddr, int op, int val,
791
+ target_ulong timeout, target_ulong uaddr2, int val3)
792
{
793
struct timespec ts, *pts;
794
int base_op;
795
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
796
} else {
797
pts = NULL;
798
}
799
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
800
+ return do_safe_futex(g2h(cpu, uaddr),
801
+ op, tswap32(val), pts, NULL, val3);
802
case FUTEX_WAKE:
803
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
804
+ return do_safe_futex(g2h(cpu, uaddr),
805
+ op, val, NULL, NULL, 0);
806
case FUTEX_FD:
807
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
808
+ return do_safe_futex(g2h(cpu, uaddr),
809
+ op, val, NULL, NULL, 0);
810
case FUTEX_REQUEUE:
811
case FUTEX_CMP_REQUEUE:
812
case FUTEX_WAKE_OP:
813
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
814
to satisfy the compiler. We do not need to tswap TIMEOUT
815
since it's not compared to guest memory. */
816
pts = (struct timespec *)(uintptr_t) timeout;
817
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
818
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
819
(base_op == FUTEX_CMP_REQUEUE
820
- ? tswap32(val3)
821
- : val3));
822
+ ? tswap32(val3) : val3));
823
default:
824
return -TARGET_ENOSYS;
825
}
826
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
827
#endif
828
829
#if defined(TARGET_NR_futex_time64)
830
-static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong timeout,
831
+static int do_futex_time64(CPUState *cpu, target_ulong uaddr, int op,
832
+ int val, target_ulong timeout,
833
target_ulong uaddr2, int val3)
834
{
835
struct timespec ts, *pts;
836
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
837
} else {
838
pts = NULL;
839
}
840
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
841
+ return do_safe_futex(g2h(cpu, uaddr), op,
842
+ tswap32(val), pts, NULL, val3);
843
case FUTEX_WAKE:
844
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
845
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
846
case FUTEX_FD:
847
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
848
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
849
case FUTEX_REQUEUE:
850
case FUTEX_CMP_REQUEUE:
851
case FUTEX_WAKE_OP:
852
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
853
to satisfy the compiler. We do not need to tswap TIMEOUT
854
since it's not compared to guest memory. */
855
pts = (struct timespec *)(uintptr_t) timeout;
856
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
857
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
858
(base_op == FUTEX_CMP_REQUEUE
859
- ? tswap32(val3)
860
- : val3));
861
+ ? tswap32(val3) : val3));
862
default:
863
return -TARGET_ENOSYS;
864
}
865
@@ -XXX,XX +XXX,XX @@ static int open_self_maps(void *cpu_env, int fd)
866
const char *path;
867
868
max = h2g_valid(max - 1) ?
869
- max : (uintptr_t) g2h(GUEST_ADDR_MAX) + 1;
870
+ max : (uintptr_t) g2h_untagged(GUEST_ADDR_MAX) + 1;
871
872
if (page_check_range(h2g(min), max - min, flags) == -1) {
873
continue;
874
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
875
876
if (ts->child_tidptr) {
877
put_user_u32(0, ts->child_tidptr);
878
- do_sys_futex(g2h(ts->child_tidptr), FUTEX_WAKE, INT_MAX,
879
- NULL, NULL, 0);
880
+ do_sys_futex(g2h(cpu, ts->child_tidptr),
881
+ FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
882
}
883
thread_cpu = NULL;
884
g_free(ts);
885
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
886
if (!arg5) {
887
ret = mount(p, p2, p3, (unsigned long)arg4, NULL);
888
} else {
889
- ret = mount(p, p2, p3, (unsigned long)arg4, g2h(arg5));
890
+ ret = mount(p, p2, p3, (unsigned long)arg4, g2h(cpu, arg5));
891
}
892
ret = get_errno(ret);
893
894
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
895
/* ??? msync/mlock/munlock are broken for softmmu. */
896
#ifdef TARGET_NR_msync
897
case TARGET_NR_msync:
898
- return get_errno(msync(g2h(arg1), arg2, arg3));
899
+ return get_errno(msync(g2h(cpu, arg1), arg2, arg3));
900
#endif
901
#ifdef TARGET_NR_mlock
902
case TARGET_NR_mlock:
903
- return get_errno(mlock(g2h(arg1), arg2));
904
+ return get_errno(mlock(g2h(cpu, arg1), arg2));
905
#endif
906
#ifdef TARGET_NR_munlock
907
case TARGET_NR_munlock:
908
- return get_errno(munlock(g2h(arg1), arg2));
909
+ return get_errno(munlock(g2h(cpu, arg1), arg2));
910
#endif
911
#ifdef TARGET_NR_mlockall
912
case TARGET_NR_mlockall:
913
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
914
915
#if defined(TARGET_NR_set_tid_address) && defined(__NR_set_tid_address)
916
case TARGET_NR_set_tid_address:
917
- return get_errno(set_tid_address((int *)g2h(arg1)));
918
+ return get_errno(set_tid_address((int *)g2h(cpu, arg1)));
919
#endif
920
921
case TARGET_NR_tkill:
922
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
923
#endif
924
#ifdef TARGET_NR_futex
925
case TARGET_NR_futex:
926
- return do_futex(arg1, arg2, arg3, arg4, arg5, arg6);
927
+ return do_futex(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
928
#endif
929
#ifdef TARGET_NR_futex_time64
930
case TARGET_NR_futex_time64:
931
- return do_futex_time64(arg1, arg2, arg3, arg4, arg5, arg6);
932
+ return do_futex_time64(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
933
#endif
934
#if defined(TARGET_NR_inotify_init) && defined(__NR_inotify_init)
935
case TARGET_NR_inotify_init:
936
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
937
index XXXXXXX..XXXXXXX 100644
938
--- a/target/arm/helper-a64.c
939
+++ b/target/arm/helper-a64.c
940
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_le)(CPUARMState *env, uint64_t addr,
941
942
#ifdef CONFIG_USER_ONLY
943
/* ??? Enforce alignment. */
944
- uint64_t *haddr = g2h(addr);
945
+ uint64_t *haddr = g2h(env_cpu(env), addr);
946
947
set_helper_retaddr(ra);
948
o0 = ldq_le_p(haddr + 0);
949
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_be)(CPUARMState *env, uint64_t addr,
950
951
#ifdef CONFIG_USER_ONLY
952
/* ??? Enforce alignment. */
953
- uint64_t *haddr = g2h(addr);
954
+ uint64_t *haddr = g2h(env_cpu(env), addr);
955
956
set_helper_retaddr(ra);
957
o1 = ldq_be_p(haddr + 0);
958
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
959
index XXXXXXX..XXXXXXX 100644
960
--- a/target/hppa/op_helper.c
961
+++ b/target/hppa/op_helper.c
962
@@ -XXX,XX +XXX,XX @@ static void atomic_store_3(CPUHPPAState *env, target_ulong addr, uint32_t val,
963
#ifdef CONFIG_USER_ONLY
964
uint32_t old, new, cmp;
965
966
- uint32_t *haddr = g2h(addr - 1);
967
+ uint32_t *haddr = g2h(env_cpu(env), addr - 1);
968
old = *haddr;
969
while (1) {
970
new = (old & ~mask) | (val & mask);
971
diff --git a/target/i386/tcg/mem_helper.c b/target/i386/tcg/mem_helper.c
972
index XXXXXXX..XXXXXXX 100644
973
--- a/target/i386/tcg/mem_helper.c
974
+++ b/target/i386/tcg/mem_helper.c
975
@@ -XXX,XX +XXX,XX @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0)
976
977
#ifdef CONFIG_USER_ONLY
978
{
979
- uint64_t *haddr = g2h(a0);
980
+ uint64_t *haddr = g2h(env_cpu(env), a0);
981
cmpv = cpu_to_le64(cmpv);
982
newv = cpu_to_le64(newv);
983
oldv = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
984
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
985
index XXXXXXX..XXXXXXX 100644
986
--- a/target/s390x/mem_helper.c
987
+++ b/target/s390x/mem_helper.c
988
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
989
990
if (parallel) {
991
#ifdef CONFIG_USER_ONLY
992
- uint32_t *haddr = g2h(a1);
993
+ uint32_t *haddr = g2h(env_cpu(env), a1);
994
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
995
#else
996
TCGMemOpIdx oi = make_memop_idx(MO_TEUL | MO_ALIGN, mem_idx);
997
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
998
if (parallel) {
999
#ifdef CONFIG_ATOMIC64
1000
# ifdef CONFIG_USER_ONLY
1001
- uint64_t *haddr = g2h(a1);
1002
+ uint64_t *haddr = g2h(env_cpu(env), a1);
1003
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
1004
# else
1005
TCGMemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN, mem_idx);
1006
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241206031224.78525-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-sve.h | 414 ++++++++++++++++++------------------
 target/arm/tcg/sve_helper.c |  96 +++++----
 2 files changed, 258 insertions(+), 252 deletions(-)

diff --git a/target/arm/tcg/helper-sve.h b/target/arm/tcg/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-sve.h
+++ b/target/arm/tcg/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve_faddv_h, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_faddv_s, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_faddv_d, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve_fmaxnmv_h, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fmaxnmv_s, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fmaxnmv_d, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve_fminnmv_h, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fminnmv_s, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fminnmv_d, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve_fmaxv_h, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fmaxv_s, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fmaxv_d, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve_fminv_h, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fminv_s, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_4(sve_fminv_d, TCG_CALL_NO_RWG,
- i64, ptr, ptr, ptr, i32)
+ i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fadda_h, TCG_CALL_NO_RWG,
- i64, i64, ptr, ptr, ptr, i32)
+ i64, i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
- i64, i64, ptr, ptr, ptr, i32)
+ i64, i64, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fadda_d, TCG_CALL_NO_RWG,
- i64, i64, ptr, ptr, ptr, i32)
+ i64, i64, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmge0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmge0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmge0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmgt0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmgt0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmgt0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmlt0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmlt0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmlt0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmle0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmle0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmle0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmeq0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmeq0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmeq0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcmne0_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmne0_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcmne0_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fadd_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fsub_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsub_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsub_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmul_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmul_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmul_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fdiv_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fdiv_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fdiv_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmin_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmin_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmin_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmax_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmax_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmax_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fminnum_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fminnum_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fminnum_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmaxnum_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxnum_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxnum_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fabd_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fabd_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fabd_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fscalbn_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fscalbn_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fscalbn_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmulx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmulx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmulx_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fadds_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fadds_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fadds_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fsubs_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsubs_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsubs_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmuls_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmuls_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmuls_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fsubrs_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsubrs_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fsubrs_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmaxnms_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxnms_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxnms_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fminnms_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fminnms_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fminnms_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmaxs_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxs_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmaxs_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fmins_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmins_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fmins_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, i64, ptr, i32)
+ void, ptr, ptr, ptr, i64, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcvt_sh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_dh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_hs, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_bfcvt, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_hs, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_ss, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_hd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzs_dd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fcvtzu_hh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_hs, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_ss, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_hd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fcvtzu_dd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_frint_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frint_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frint_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_frintx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frintx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frintx_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_frecpx_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frecpx_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_frecpx_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_fsqrt_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fsqrt_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_fsqrt_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_dh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_ss, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_scvt_dd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve_ucvt_hh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_sh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_dh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_ss, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcmge_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmge_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmge_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcmgt_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmgt_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmgt_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcmeq_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmeq_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmeq_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcmne_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmne_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmne_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcmuo_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmuo_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcmuo_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_facge_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_facge_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_facge_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_facgt_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_facgt_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_facgt_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve_fcadd_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcadd_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve_fcadd_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(sve_ftmad_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(sve_ftmad_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve2_saddl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_saddl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_xar_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_xar_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_h, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_s, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_d, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve2_eor3, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve2_bcax, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_cdot_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve_bfcvtnt, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)
DEF_HELPER_FLAGS_5(sve2_fcvtlt_sd, TCG_CALL_NO_RWG,
- void, ptr, ptr, ptr, ptr, i32)
+ void, ptr, ptr, ptr, fpst, i32)

-DEF_HELPER_FLAGS_5(flogb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(flogb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(flogb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(flogb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(flogb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)
+DEF_HELPER_FLAGS_5(flogb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, fpst, i32)

DEF_HELPER_FLAGS_4(sve2_sqshl_zpzi_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_PAIR_D(sve2_sminp_zpzz_d, int64_t, DO_MIN)

#define DO_ZPZZ_PAIR_FP(NAME, TYPE, H, OP) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
- void *status, uint32_t desc) \
+ float_status *status, uint32_t desc) \
{ \
intptr_t i, opr_sz = simd_oprsz(desc); \
for (i = 0; i < opr_sz; ) { \
@@ -XXX,XX +XXX,XX @@ static TYPE NAME##_reduce(TYPE *data, float_status *status, uintptr_t n) \
return TYPE##_##FUNC(lo, hi, status); \
} \
} \
-uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc) \
+uint64_t HELPER(NAME)(void *vn, void *vg, float_status *s, uint32_t desc) \
{ \
uintptr_t i, oprsz = simd_oprsz(desc), maxsz = simd_data(desc); \
TYPE data[sizeof(ARMVectorReg) / sizeof(TYPE)]; \
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc) \
for (; i < maxsz; i += sizeof(TYPE)) { \
*(TYPE *)((void *)data + i) = IDENT; \
} \
- return NAME##_reduce(data, vs, maxsz / sizeof(TYPE)); \
+ return NAME##_reduce(data, s, maxsz / sizeof(TYPE)); \
}

DO_REDUCE(sve_faddv_h, float16, H1_2, add, float16_zero)
@@ -XXX,XX +XXX,XX @@ DO_REDUCE(sve_fmaxv_d, float64, H1_8, max, float64_chs(float64_infinity))
#undef DO_REDUCE

uint64_t HELPER(sve_fadda_h)(uint64_t nn, void *vm, void *vg,
- void *status, uint32_t desc)
+ float_status *status, uint32_t desc)
{
intptr_t i = 0, opr_sz = simd_oprsz(desc);
float16 result = nn;
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_fadda_h)(uint64_t nn, void *vm, void *vg,
}

uint64_t HELPER(sve_fadda_s)(uint64_t nn, void *vm, void *vg,
- void *status, uint32_t desc)
+ float_status *status, uint32_t desc)
{
intptr_t i = 0, opr_sz = simd_oprsz(desc);
float32 result = nn;
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_fadda_s)(uint64_t nn, void *vm, void *vg,
}

uint64_t HELPER(sve_fadda_d)(uint64_t nn, void *vm, void *vg,
- void *status, uint32_t desc)
+ float_status *status, uint32_t desc)
{
intptr_t i = 0, opr_sz = simd_oprsz(desc) / 8;
uint64_t *m = vm;
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_fadda_d)(uint64_t nn, void *vm, void *vg,
*/
#define DO_ZPZZ_FP(NAME, TYPE, H, OP) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
- void *status, uint32_t desc) \
+ float_status *status, uint32_t desc) \
{ \
intptr_t i = simd_oprsz(desc); \
uint64_t *g = vg; \
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_FP(sve_fmulx_d, uint64_t, H1_8, helper_vfp_mulxd)
*/
#define DO_ZPZS_FP(NAME, TYPE, H, OP) \
void HELPER(NAME)(void *vd, void *vn, void *vg, uint64_t scalar, \
- void *status, uint32_t desc) \
+ float_status *status, uint32_t desc) \
{ \
intptr_t i = simd_oprsz(desc); \
uint64_t *g = vg; \
@@ -XXX,XX +XXX,XX @@ DO_ZPZS_FP(sve_fmins_d, float64, H1_8, float64_min)
* With the extra float_status parameter.
*/
#define DO_ZPZ_FP(NAME, TYPE, H, OP) \
-void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, \
+ float_status *status, uint32_t desc) \
{ \
intptr_t i = simd_oprsz(desc); \
uint64_t *g = vg; \
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_h(void *vd, void *vn, void *vm, void *va, void *vg,
}

void HELPER(sve_fmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0, 0);
}

void HELPER(sve_fmls_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0x8000, 0);
}

void HELPER(sve_fnmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0x8000, 0x8000);
}

void HELPER(sve_fnmls_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0, 0x8000);
}
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_s(void *vd, void *vn, void *vm, void *va, void *vg,
}

void HELPER(sve_fmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0, 0);
}

void HELPER(sve_fmls_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0x80000000, 0);
}

void HELPER(sve_fnmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0x80000000, 0x80000000);
}

void HELPER(sve_fnmls_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
- void *vg, void *status, uint32_t desc)
+ void *vg, float_status *status, uint32_t desc)
{
do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0, 0x80000000);
}
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_d(void *vd, void *vn, void *vm, void *va, void *vg,
}

void HELPER(sve_fmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
863
- void *vg, void *status, uint32_t desc)
864
+ void *vg, float_status *status, uint32_t desc)
865
{
866
do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, 0, 0);
867
}
868
869
void HELPER(sve_fmls_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
870
- void *vg, void *status, uint32_t desc)
871
+ void *vg, float_status *status, uint32_t desc)
872
{
873
do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, INT64_MIN, 0);
874
}
875
876
void HELPER(sve_fnmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
877
- void *vg, void *status, uint32_t desc)
878
+ void *vg, float_status *status, uint32_t desc)
879
{
880
do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, INT64_MIN, INT64_MIN);
881
}
882
883
void HELPER(sve_fnmls_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
884
- void *vg, void *status, uint32_t desc)
885
+ void *vg, float_status *status, uint32_t desc)
886
{
887
do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, 0, INT64_MIN);
888
}
889
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fnmls_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
890
*/
891
#define DO_FPCMP_PPZZ(NAME, TYPE, H, OP) \
892
void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
893
- void *status, uint32_t desc) \
894
+ float_status *status, uint32_t desc) \
895
{ \
896
intptr_t i = simd_oprsz(desc), j = (i - 1) >> 6; \
897
uint64_t *d = vd, *g = vg; \
898
@@ -XXX,XX +XXX,XX @@ DO_FPCMP_PPZZ_ALL(sve_facgt, DO_FACGT)
899
*/
900
#define DO_FPCMP_PPZ0(NAME, TYPE, H, OP) \
901
void HELPER(NAME)(void *vd, void *vn, void *vg, \
902
- void *status, uint32_t desc) \
903
+ float_status *status, uint32_t desc) \
904
{ \
905
intptr_t i = simd_oprsz(desc), j = (i - 1) >> 6; \
906
uint64_t *d = vd, *g = vg; \
907
@@ -XXX,XX +XXX,XX @@ DO_FPCMP_PPZ0_ALL(sve_fcmne0, DO_FCMNE)
908
909
/* FP Trig Multiply-Add. */
910
911
-void HELPER(sve_ftmad_h)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
912
+void HELPER(sve_ftmad_h)(void *vd, void *vn, void *vm,
913
+ float_status *s, uint32_t desc)
914
{
915
static const float16 coeff[16] = {
916
0x3c00, 0xb155, 0x2030, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
917
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ftmad_h)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
918
mm = float16_abs(mm);
919
xx += 8;
920
}
921
- d[i] = float16_muladd(n[i], mm, coeff[xx], 0, vs);
922
+ d[i] = float16_muladd(n[i], mm, coeff[xx], 0, s);
24
}
923
}
25
}
924
}
26
925
27
+static void cpsr_write_from_spsr_elx(CPUARMState *env,
926
-void HELPER(sve_ftmad_s)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
28
+ uint32_t val)
927
+void HELPER(sve_ftmad_s)(void *vd, void *vn, void *vm,
29
+{
928
+ float_status *s, uint32_t desc)
30
+ uint32_t mask;
929
{
31
+
930
static const float32 coeff[16] = {
32
+ /* Save SPSR_ELx.SS into PSTATE. */
931
0x3f800000, 0xbe2aaaab, 0x3c088886, 0xb95008b9,
33
+ env->pstate = (env->pstate & ~PSTATE_SS) | (val & PSTATE_SS);
932
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ftmad_s)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
34
+ val &= ~PSTATE_SS;
933
mm = float32_abs(mm);
35
+
934
xx += 8;
36
+ /* Move DIT to the correct location for CPSR */
37
+ if (val & PSTATE_DIT) {
38
+ val &= ~PSTATE_DIT;
39
+ val |= CPSR_DIT;
40
+ }
41
+
42
+ mask = aarch32_cpsr_valid_mask(env->features, \
43
+ &env_archcpu(env)->isar);
44
+ cpsr_write(env, val, mask, CPSRWriteRaw);
45
+}
46
+
47
void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
48
{
49
int cur_el = arm_current_el(env);
50
unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
51
- uint32_t mask, spsr = env->banked_spsr[spsr_idx];
52
+ uint32_t spsr = env->banked_spsr[spsr_idx];
53
int new_el;
54
bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
55
56
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
57
* will sort the register banks out for us, and we've already
58
* caught all the bad-mode cases in el_from_spsr().
59
*/
60
- mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
61
- cpsr_write(env, spsr, mask, CPSRWriteRaw);
62
+ cpsr_write_from_spsr_elx(env, spsr);
63
if (!arm_singlestep_active(env)) {
64
- env->uncached_cpsr &= ~PSTATE_SS;
65
+ env->pstate &= ~PSTATE_SS;
66
}
935
}
67
aarch64_sync_64_to_32(env);
936
- d[i] = float32_muladd(n[i], mm, coeff[xx], 0, vs);
68
937
+ d[i] = float32_muladd(n[i], mm, coeff[xx], 0, s);
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
73
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
74
* For exceptions taken to AArch32 we must clear the SS bit in both
75
* PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
76
*/
77
- env->uncached_cpsr &= ~PSTATE_SS;
78
+ env->pstate &= ~PSTATE_SS;
79
env->spsr = cpsr_read(env);
80
/* Clear IT bits. */
81
env->condexec_bits = 0;
82
@@ -XXX,XX +XXX,XX @@ static int aarch64_regnum(CPUARMState *env, int aarch32_reg)
83
}
938
}
84
}
939
}
85
940
86
+static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env)
941
-void HELPER(sve_ftmad_d)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
87
+{
942
+void HELPER(sve_ftmad_d)(void *vd, void *vn, void *vm,
88
+ uint32_t ret = cpsr_read(env);
943
+ float_status *s, uint32_t desc)
89
+
944
{
90
+ /* Move DIT to the correct location for SPSR_ELx */
945
static const float64 coeff[16] = {
91
+ if (ret & CPSR_DIT) {
946
0x3ff0000000000000ull, 0xbfc5555555555543ull,
92
+ ret &= ~CPSR_DIT;
947
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ftmad_d)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
93
+ ret |= PSTATE_DIT;
948
mm = float64_abs(mm);
94
+ }
949
xx += 8;
95
+ /* Merge PSTATE.SS into SPSR_ELx */
96
+ ret |= env->pstate & PSTATE_SS;
97
+
98
+ return ret;
99
+}
100
+
101
/* Handle exception entry to a target EL which is using AArch64 */
102
static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
103
{
104
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
105
aarch64_save_sp(env, arm_current_el(env));
106
env->elr_el[new_el] = env->pc;
107
} else {
108
- old_mode = cpsr_read(env);
109
+ old_mode = cpsr_read_for_spsr_elx(env);
110
env->elr_el[new_el] = env->regs[15];
111
112
aarch64_sync_32_to_64(env);
113
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
114
target_ulong *cs_base, uint32_t *pflags)
115
{
116
uint32_t flags = env->hflags;
117
- uint32_t pstate_for_ss;
118
119
*cs_base = 0;
120
assert_hflags_rebuild_correctly(env);
121
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
122
if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
123
flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
124
}
950
}
125
- pstate_for_ss = env->pstate;
951
- d[i] = float64_muladd(n[i], mm, coeff[xx], 0, vs);
126
} else {
952
+ d[i] = float64_muladd(n[i], mm, coeff[xx], 0, s);
127
*pc = env->regs[15];
128
129
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
130
131
flags = FIELD_DP32(flags, TBFLAG_AM32, THUMB, env->thumb);
132
flags = FIELD_DP32(flags, TBFLAG_AM32, CONDEXEC, env->condexec_bits);
133
- pstate_for_ss = env->uncached_cpsr;
134
}
953
}
135
954
}
136
/*
955
137
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
956
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ftmad_d)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
138
* SS_ACTIVE is set in hflags; PSTATE_SS is computed every TB.
957
*/
139
*/
958
140
if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE) &&
959
void HELPER(sve_fcadd_h)(void *vd, void *vn, void *vm, void *vg,
141
- (pstate_for_ss & PSTATE_SS)) {
960
- void *vs, uint32_t desc)
142
+ (env->pstate & PSTATE_SS)) {
961
+ float_status *s, uint32_t desc)
143
flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
962
{
144
}
963
intptr_t j, i = simd_oprsz(desc);
145
964
uint64_t *g = vg;
146
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
965
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcadd_h)(void *vd, void *vn, void *vm, void *vg,
147
index XXXXXXX..XXXXXXX 100644
966
e3 = *(float16 *)(vm + H1_2(i)) ^ neg_imag;
148
--- a/target/arm/op_helper.c
967
149
+++ b/target/arm/op_helper.c
968
if (likely((pg >> (i & 63)) & 1)) {
150
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)
969
- *(float16 *)(vd + H1_2(i)) = float16_add(e0, e1, vs);
151
970
+ *(float16 *)(vd + H1_2(i)) = float16_add(e0, e1, s);
152
uint32_t HELPER(cpsr_read)(CPUARMState *env)
971
}
153
{
972
if (likely((pg >> (j & 63)) & 1)) {
154
- /*
973
- *(float16 *)(vd + H1_2(j)) = float16_add(e2, e3, vs);
155
- * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
974
+ *(float16 *)(vd + H1_2(j)) = float16_add(e2, e3, s);
156
- * This is convenient for populating SPSR_ELx, but must be
975
}
157
- * hidden from aarch32 mode, where it is not visible.
976
} while (i & 63);
158
- *
977
} while (i != 0);
159
- * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
978
}
160
- */
979
161
- return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
980
void HELPER(sve_fcadd_s)(void *vd, void *vn, void *vm, void *vg,
162
+ return cpsr_read(env) & ~CPSR_EXEC;
981
- void *vs, uint32_t desc)
163
}
982
+ float_status *s, uint32_t desc)
164
983
{
165
void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
984
intptr_t j, i = simd_oprsz(desc);
985
uint64_t *g = vg;
986
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcadd_s)(void *vd, void *vn, void *vm, void *vg,
987
e3 = *(float32 *)(vm + H1_2(i)) ^ neg_imag;
988
989
if (likely((pg >> (i & 63)) & 1)) {
990
- *(float32 *)(vd + H1_2(i)) = float32_add(e0, e1, vs);
991
+ *(float32 *)(vd + H1_2(i)) = float32_add(e0, e1, s);
992
}
993
if (likely((pg >> (j & 63)) & 1)) {
994
- *(float32 *)(vd + H1_2(j)) = float32_add(e2, e3, vs);
995
+ *(float32 *)(vd + H1_2(j)) = float32_add(e2, e3, s);
996
}
997
} while (i & 63);
998
} while (i != 0);
999
}
1000
1001
void HELPER(sve_fcadd_d)(void *vd, void *vn, void *vm, void *vg,
1002
- void *vs, uint32_t desc)
1003
+ float_status *s, uint32_t desc)
1004
{
1005
intptr_t j, i = simd_oprsz(desc);
1006
uint64_t *g = vg;
1007
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcadd_d)(void *vd, void *vn, void *vm, void *vg,
1008
e3 = *(float64 *)(vm + H1_2(i)) ^ neg_imag;
1009
1010
if (likely((pg >> (i & 63)) & 1)) {
1011
- *(float64 *)(vd + H1_2(i)) = float64_add(e0, e1, vs);
1012
+ *(float64 *)(vd + H1_2(i)) = float64_add(e0, e1, s);
1013
}
1014
if (likely((pg >> (j & 63)) & 1)) {
1015
- *(float64 *)(vd + H1_2(j)) = float64_add(e2, e3, vs);
1016
+ *(float64 *)(vd + H1_2(j)) = float64_add(e2, e3, s);
1017
}
1018
} while (i & 63);
1019
} while (i != 0);
1020
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcadd_d)(void *vd, void *vn, void *vm, void *vg,
1021
*/
1022
1023
void HELPER(sve_fcmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
1024
- void *vg, void *status, uint32_t desc)
1025
+ void *vg, float_status *status, uint32_t desc)
1026
{
1027
intptr_t j, i = simd_oprsz(desc);
1028
unsigned rot = simd_data(desc);
1029
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
1030
}
1031
1032
void HELPER(sve_fcmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
1033
- void *vg, void *status, uint32_t desc)
1034
+ void *vg, float_status *status, uint32_t desc)
1035
{
1036
intptr_t j, i = simd_oprsz(desc);
1037
unsigned rot = simd_data(desc);
1038
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
1039
}
1040
1041
void HELPER(sve_fcmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
1042
- void *vg, void *status, uint32_t desc)
1043
+ void *vg, float_status *status, uint32_t desc)
1044
{
1045
intptr_t j, i = simd_oprsz(desc);
1046
unsigned rot = simd_data(desc);
1047
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_xar_s)(void *vd, void *vn, void *vm, uint32_t desc)
1048
}
1049
1050
void HELPER(fmmla_s)(void *vd, void *vn, void *vm, void *va,
1051
- void *status, uint32_t desc)
1052
+ float_status *status, uint32_t desc)
1053
{
1054
intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float32) * 4);
1055
1056
@@ -XXX,XX +XXX,XX @@ void HELPER(fmmla_s)(void *vd, void *vn, void *vm, void *va,
1057
}
1058
1059
void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
1060
- void *status, uint32_t desc)
1061
+ float_status *status, uint32_t desc)
1062
{
1063
intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float64) * 4);
1064
1065
@@ -XXX,XX +XXX,XX @@ void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
1066
}
1067
1068
#define DO_FCVTNT(NAME, TYPEW, TYPEN, HW, HN, OP) \
1069
-void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
1070
+void HELPER(NAME)(void *vd, void *vn, void *vg, \
1071
+ float_status *status, uint32_t desc) \
1072
{ \
1073
intptr_t i = simd_oprsz(desc); \
1074
uint64_t *g = vg; \
1075
@@ -XXX,XX +XXX,XX @@ DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
1076
DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, H1_8, H1_4, float64_to_float32)
1077
1078
#define DO_FCVTLT(NAME, TYPEW, TYPEN, HW, HN, OP) \
1079
-void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
1080
+void HELPER(NAME)(void *vd, void *vn, void *vg, \
1081
+ float_status *status, uint32_t desc) \
1082
{ \
1083
intptr_t i = simd_oprsz(desc); \
1084
uint64_t *g = vg; \
166
--
1085
--
167
2.20.1
1086
2.34.1
168
1087
169
1088
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

The places that use these are better off using untagged
addresses, so do not provide tagged versions. Rename
to make the address type clear.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu_ldst.h |  4 ++--
 linux-user/qemu.h       |  4 ++--
 accel/tcg/user-exec.c   |  3 ++-
 linux-user/mmap.c       | 12 ++++++------
 linux-user/syscall.c    |  2 +-
 5 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ static inline void *g2h(CPUState *cs, abi_ptr x)
     return g2h_untagged(cpu_untagged_addr(cs, x));
 }
 
-static inline bool guest_addr_valid(abi_ulong x)
+static inline bool guest_addr_valid_untagged(abi_ulong x)
 {
     return x <= GUEST_ADDR_MAX;
 }
 
-static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
+static inline bool guest_range_valid_untagged(abi_ulong start, abi_ulong len)
 {
     return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
 }
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
 static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
 {
     if (size == 0
-        ? !guest_addr_valid(addr)
-        : !guest_range_valid(addr, size)) {
+        ? !guest_addr_valid_untagged(addr)
+        : !guest_range_valid_untagged(addr, size)) {
         return false;
     }
     return page_check_range((target_ulong)addr, size, type) == 0;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
         g_assert_not_reached();
     }
 
-    if (!guest_addr_valid(addr) || page_check_range(addr, 1, flags) < 0) {
+    if (!guest_addr_valid_untagged(addr) ||
+        page_check_range(addr, 1, flags) < 0) {
         if (nonfault) {
             return TLB_INVALID_MASK;
         } else {
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
     }
     len = TARGET_PAGE_ALIGN(len);
     end = start + len;
-    if (!guest_range_valid(start, len)) {
+    if (!guest_range_valid_untagged(start, len)) {
         return -TARGET_ENOMEM;
     }
     if (len == 0) {
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
      * It can fail only on 64-bit host with 32-bit target.
      * On any other target/host host mmap() handles this error correctly.
      */
-    if (end < start || !guest_range_valid(start, len)) {
+    if (end < start || !guest_range_valid_untagged(start, len)) {
         errno = ENOMEM;
         goto fail;
     }
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
     if (start & ~TARGET_PAGE_MASK)
         return -TARGET_EINVAL;
     len = TARGET_PAGE_ALIGN(len);
-    if (len == 0 || !guest_range_valid(start, len)) {
+    if (len == 0 || !guest_range_valid_untagged(start, len)) {
         return -TARGET_EINVAL;
     }
 
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
     int prot;
     void *host_addr;
 
-    if (!guest_range_valid(old_addr, old_size) ||
+    if (!guest_range_valid_untagged(old_addr, old_size) ||
         ((flags & MREMAP_FIXED) &&
-         !guest_range_valid(new_addr, new_size))) {
+         !guest_range_valid_untagged(new_addr, new_size))) {
         errno = ENOMEM;
         return -1;
     }
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
 
     if (host_addr != MAP_FAILED) {
         /* Check if address fits target address space */
-        if (!guest_range_valid(h2g(host_addr), new_size)) {
+        if (!guest_range_valid_untagged(h2g(host_addr), new_size)) {
             /* Revert mremap() changes */
             host_addr = mremap(g2h_untagged(old_addr),
                                new_size, old_size, flags);
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
             return -TARGET_EINVAL;
         }
     }
-    if (!guest_range_valid(shmaddr, shm_info.shm_segsz)) {
+    if (!guest_range_valid_untagged(shmaddr, shm_info.shm_segsz)) {
         return -TARGET_EINVAL;
     }
 
-- 
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241206031224.78525-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-sme.h | 4 ++--
 target/arm/tcg/sme_helper.c | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/helper-sme.h b/target/arm/tcg/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-sme.h
+++ b/target/arm/tcg/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_h, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, ptr, fpst, i32)
 DEF_HELPER_FLAGS_7(sme_bfmopa, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_6(sme_smopa_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/tcg/sme_helper.c b/target/arm/tcg/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sme_helper.c
+++ b/target/arm/tcg/sme_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_addva_d)(void *vzda, void *vzn, void *vpn,
 }
 
 void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
-                         void *vpm, void *vst, uint32_t desc)
+                         void *vpm, float_status *fpst_in, uint32_t desc)
 {
     intptr_t row, col, oprsz = simd_maxsz(desc);
     uint32_t neg = simd_data(desc) << 31;
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
      * update the cumulative fp exception status. It also produces
      * default nans.
      */
-    fpst = *(float_status *)vst;
+    fpst = *fpst_in;
     set_default_nan_mode(true, &fpst);
 
     for (row = 0; row < oprsz; ) {
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
 }
 
 void HELPER(sme_fmopa_d)(void *vza, void *vzn, void *vzm, void *vpn,
-                         void *vpm, void *vst, uint32_t desc)
+                         void *vpm, float_status *fpst_in, uint32_t desc)
 {
     intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
     uint64_t neg = (uint64_t)simd_data(desc) << 63;
     uint64_t *za = vza, *zn = vzn, *zm = vzm;
     uint8_t *pn = vpn, *pm = vpm;
-    float_status fpst = *(float_status *)vst;
+    float_status fpst = *fpst_in;
 
     set_default_nan_mode(true, &fpst);
-- 
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

For copy_*_user, only 0 and -TARGET_EFAULT are returned; no need
to involve abi_long. Use size_t for lengths. Use bool for the
lock_user copy argument. Use ssize_t for target_strlen, because
we can't overflow the host memory space.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/qemu.h    | 14 ++++++--------
 linux-user/uaccess.c | 45 ++++++++++++++++++++++----------------------
 2 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 
 #undef DEBUG_REMAP
-#ifdef DEBUG_REMAP
-#endif /* DEBUG_REMAP */
 
 #include "exec/user/abitypes.h"
 
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(CPUState *cpu, int type,
  * buffers between the target and host. These internally perform
  * locking/unlocking of the memory.
  */
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
 
 /* Functions for accessing guest memory. The tget and tput functions
    read/write single values, byteswapping as necessary. The lock_user function
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
 
 /* Lock an area of guest memory into the host. If copy is true then the
    host area will have the same contents as the guest. */
-void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy);
 
 /* Unlock an area of guest memory. The first LEN bytes must be
    flushed back to guest memory. host_ptr = NULL is explicitly
    allowed and does nothing. */
-#ifdef DEBUG_REMAP
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
+#ifndef DEBUG_REMAP
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len)
 { }
 #else
 void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
 
 /* Return the length of a string in target memory or -TARGET_EFAULT if
    access error. */
-abi_long target_strlen(abi_ulong gaddr);
+ssize_t target_strlen(abi_ulong gaddr);
 
 /* Like lock_user but for null terminated strings. */
 void *lock_user_string(abi_ulong guest_addr);
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/uaccess.c
+++ b/linux-user/uaccess.c
@@ -XXX,XX +XXX,XX @@
 
 #include "qemu.h"
 
-void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
 {
     if (!access_ok_untagged(type, guest_addr, len)) {
         return NULL;
@@ -XXX,XX +XXX,XX @@ void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
 }
 
 #ifdef DEBUG_REMAP
-void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
+void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len);
 {
     if (!host_ptr) {
         return;
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
     if (host_ptr == g2h_untagged(guest_addr)) {
         return;
     }
-    if (len > 0) {
+    if (len != 0) {
         memcpy(g2h_untagged(guest_addr), host_ptr, len);
     }
     g_free(host_ptr);
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
 
 void *lock_user_string(abi_ulong guest_addr)
 {
-    abi_long len = target_strlen(guest_addr);
+    ssize_t len = target_strlen(guest_addr);
     if (len < 0) {
         return NULL;
     }
-    return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
+    return lock_user(VERIFY_READ, guest_addr, (size_t)len + 1, 1);
 }
 
 /* copy_from_user() and copy_to_user() are usually used to copy data
  * buffers between the target and host. These internally perform
  * locking/unlocking of the memory.
  */
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
 {
-    abi_long ret = 0;
-    void *ghptr;
+    int ret = 0;
+    void *ghptr = lock_user(VERIFY_READ, gaddr, len, 1);
 
-    if ((ghptr = lock_user(VERIFY_READ, gaddr, len, 1))) {
+    if (ghptr) {
         memcpy(hptr, ghptr, len);
         unlock_user(ghptr, gaddr, 0);
-    } else
+    } else {
         ret = -TARGET_EFAULT;
-
+    }
     return ret;
 }
 
-
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
 {
-    abi_long ret = 0;
-    void *ghptr;
+    int ret = 0;
+    void *ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0);
 
-    if ((ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0))) {
+    if (ghptr) {
         memcpy(ghptr, hptr, len);
         unlock_user(ghptr, gaddr, len);
-    } else
+    } else {
         ret = -TARGET_EFAULT;
+    }
 
     return ret;
 }
 
 /* Return the length of a string in target memory or -TARGET_EFAULT if
    access error */
-abi_long target_strlen(abi_ulong guest_addr1)
+ssize_t target_strlen(abi_ulong guest_addr1)
 {
     uint8_t *ptr;
     abi_ulong guest_addr;
-    int max_len, len;


From: Richard Henderson <richard.henderson@linaro.org>

Allow the helpers to receive CPUARMState* directly
instead of via void*.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241206031224.78525-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h         | 12 ++++++------
 target/arm/tcg/helper-a64.h |  2 +-
 target/arm/tcg/vec_helper.c | 21 +++++++--------------
 3 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_suqadd_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(gvec_fmlal_a32, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_5(gvec_fmlal_a64, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a32, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_5(gvec_fmlal_idx_a64, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, env, i32)
 
 DEF_HELPER_FLAGS_2(frint32_s, TCG_CALL_NO_RWG, f32, f32, fpst)
 DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, fpst)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_6(sve2_fmlal_zzzw_s, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_6(sve2_fmlal_zzxw_s, TCG_CALL_NO_RWG,
-                   void, ptr, ptr, ptr, ptr, ptr, i32)
+                   void, ptr, ptr, ptr, ptr, env, i32)
 
 DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, fpst)
 DEF_HELPER_3(vfp_cmpes_a64, i64, f32, f32, fpst)
 DEF_HELPER_3(vfp_cmpd_a64, i64, f64, f64, fpst)
 DEF_HELPER_3(vfp_cmped_a64, i64, f64, f64, fpst)
-DEF_HELPER_FLAGS_4(simd_tblx, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(simd_tblx, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
 DEF_HELPER_FLAGS_3(vfp_mulxs, TCG_CALL_NO_RWG, f32, f32, f32, fpst)
 DEF_HELPER_FLAGS_3(vfp_mulxd, TCG_CALL_NO_RWG, f64, f64, f64, fpst)
 DEF_HELPER_FLAGS_3(neon_ceq_f64, TCG_CALL_NO_RWG, i64, i64, i64, fpst)
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_fmlal(float32 *d, void *vn, void *vm, float_status *fpst,
 }
 
 void HELPER(gvec_fmlal_a32)(void *vd, void *vn, void *vm,
-                            void *venv, uint32_t desc)
+                            CPUARMState *env, uint32_t desc)
 {
-    CPUARMState *env = venv;
     do_fmlal(vd, vn, vm, &env->vfp.standard_fp_status, desc,
              get_flush_inputs_to_zero(&env->vfp.fp_status_f16));
 }
 
 void HELPER(gvec_fmlal_a64)(void *vd, void *vn, void *vm,
-                            void *venv, uint32_t desc)
+                            CPUARMState *env, uint32_t desc)
 {
-    CPUARMState *env = venv;
     do_fmlal(vd, vn, vm, &env->vfp.fp_status, desc,
              get_flush_inputs_to_zero(&env->vfp.fp_status_f16));
 }
 
 void HELPER(sve2_fmlal_zzzw_s)(void *vd, void *vn, void *vm, void *va,
-                               void *venv, uint32_t desc)
+                               CPUARMState *env, uint32_t desc)
 {
     intptr_t i, oprsz = simd_oprsz(desc);
     uint16_t negn = extract32(desc, SIMD_DATA_SHIFT, 1) << 15;
     intptr_t sel = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(float16);
-    CPUARMState *env = venv;
     float_status *status = &env->vfp.fp_status;
     bool fz16 = get_flush_inputs_to_zero(&env->vfp.fp_status_f16);
 
@@ -XXX,XX +XXX,XX @@ static void do_fmlal_idx(float32 *d, void *vn, void *vm, float_status *fpst,
 }
 
 void HELPER(gvec_fmlal_idx_a32)(void *vd, void *vn, void *vm,
-                                void *venv, uint32_t desc)
+                                CPUARMState *env, uint32_t desc)
 {
-    CPUARMState *env = venv;
     do_fmlal_idx(vd, vn, vm, &env->vfp.standard_fp_status, desc,
                  get_flush_inputs_to_zero(&env->vfp.fp_status_f16));
 }
 
 void HELPER(gvec_fmlal_idx_a64)(void *vd, void *vn, void *vm,
-                                void *venv, uint32_t desc)
+                                CPUARMState *env, uint32_t desc)
 {
-    CPUARMState *env = venv;
     do_fmlal_idx(vd, vn, vm, &env->vfp.fp_status, desc,
                  get_flush_inputs_to_zero(&env->vfp.fp_status_f16));
 }
 
 void HELPER(sve2_fmlal_zzxw_s)(void *vd, void *vn, void *vm, void *va,
-                               void *venv, uint32_t desc)
+                               CPUARMState *env, uint32_t desc)
 {
     intptr_t i, j, oprsz = simd_oprsz(desc);
     uint16_t negn = extract32(desc, SIMD_DATA_SHIFT, 1) << 15;
     intptr_t sel = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(float16);
164
+ size_t max_len, len;
127
intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 2, 3) * sizeof(float16);
165
128
- CPUARMState *env = venv;
166
guest_addr = guest_addr1;
129
float_status *status = &env->vfp.fp_status;
167
for(;;) {
130
bool fz16 = get_flush_inputs_to_zero(&env->vfp.fp_status_f16);
168
@@ -XXX,XX +XXX,XX @@ abi_long target_strlen(abi_ulong guest_addr1)
131
169
unlock_user(ptr, guest_addr, 0);
132
@@ -XXX,XX +XXX,XX @@ DO_VRINT_RMODE(gvec_vrint_rm_s, helper_rints, uint32_t)
170
guest_addr += len;
133
#undef DO_VRINT_RMODE
171
/* we don't allow wrapping or integer overflow */
134
172
- if (guest_addr == 0 ||
135
#ifdef TARGET_AARCH64
173
- (guest_addr - guest_addr1) > 0x7fffffff)
136
-void HELPER(simd_tblx)(void *vd, void *vm, void *venv, uint32_t desc)
174
+ if (guest_addr == 0 || (guest_addr - guest_addr1) > 0x7fffffff) {
137
+void HELPER(simd_tblx)(void *vd, void *vm, CPUARMState *env, uint32_t desc)
175
return -TARGET_EFAULT;
138
{
176
- if (len != max_len)
139
const uint8_t *indices = vm;
177
+ }
140
- CPUARMState *env = venv;
178
+ if (len != max_len) {
141
size_t oprsz = simd_oprsz(desc);
179
break;
142
uint32_t rn = extract32(desc, SIMD_DATA_SHIFT, 5);
180
+ }
143
bool is_tbx = extract32(desc, SIMD_DATA_SHIFT + 5, 1);
181
}
182
return guest_addr - guest_addr1;
183
}
184
--
144
--
185
2.20.1
145
2.34.1
186
146
187
147
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This is more descriptive than 'unsigned long'.
4
No functional change, since these match on all linux+bsd hosts.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-5-richard.henderson@linaro.org
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Message-id: 20241206031224.78525-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
7
---
11
include/exec/cpu_ldst.h | 6 +++---
8
target/arm/helper.h | 56 ++++++++++++++++++------------------
12
1 file changed, 3 insertions(+), 3 deletions(-)
9
target/arm/tcg/neon_helper.c | 6 ++--
10
2 files changed, 30 insertions(+), 32 deletions(-)
13
11
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
12
diff --git a/target/arm/helper.h b/target/arm/helper.h
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/cpu_ldst.h
14
--- a/target/arm/helper.h
17
+++ b/include/exec/cpu_ldst.h
15
+++ b/target/arm/helper.h
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(neon_qrshl_u32, i32, env, i32, i32)
19
#endif
17
DEF_HELPER_3(neon_qrshl_s32, i32, env, i32, i32)
20
18
DEF_HELPER_3(neon_qrshl_u64, i64, env, i64, i64)
21
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
19
DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
22
-#define g2h(x) ((void *)((unsigned long)(abi_ptr)(x) + guest_base))
20
-DEF_HELPER_FLAGS_5(neon_sqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
23
+#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
21
-DEF_HELPER_FLAGS_5(neon_sqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
24
22
-DEF_HELPER_FLAGS_5(neon_sqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
23
-DEF_HELPER_FLAGS_5(neon_sqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
#define guest_addr_valid(x) (1)
24
-DEF_HELPER_FLAGS_5(neon_uqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
#else
25
-DEF_HELPER_FLAGS_5(neon_uqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
26
-DEF_HELPER_FLAGS_5(neon_uqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
#endif
27
-DEF_HELPER_FLAGS_5(neon_uqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
-#define h2g_valid(x) guest_addr_valid((unsigned long)(x) - guest_base)
28
-DEF_HELPER_FLAGS_5(neon_sqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
+#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
29
-DEF_HELPER_FLAGS_5(neon_sqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
30
-DEF_HELPER_FLAGS_5(neon_sqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
static inline int guest_range_valid(unsigned long start, unsigned long len)
31
-DEF_HELPER_FLAGS_5(neon_sqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
34
{
32
-DEF_HELPER_FLAGS_5(neon_uqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
35
@@ -XXX,XX +XXX,XX @@ static inline int guest_range_valid(unsigned long start, unsigned long len)
33
-DEF_HELPER_FLAGS_5(neon_uqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
34
-DEF_HELPER_FLAGS_5(neon_uqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
35
-DEF_HELPER_FLAGS_5(neon_uqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
36
-DEF_HELPER_FLAGS_4(neon_sqshli_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
-DEF_HELPER_FLAGS_4(neon_sqshli_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
-DEF_HELPER_FLAGS_4(neon_sqshli_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
-DEF_HELPER_FLAGS_4(neon_sqshli_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
-DEF_HELPER_FLAGS_4(neon_uqshli_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
-DEF_HELPER_FLAGS_4(neon_uqshli_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
-DEF_HELPER_FLAGS_4(neon_uqshli_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
-DEF_HELPER_FLAGS_4(neon_uqshli_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
-DEF_HELPER_FLAGS_4(neon_sqshlui_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
-DEF_HELPER_FLAGS_4(neon_sqshlui_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
-DEF_HELPER_FLAGS_4(neon_sqshlui_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
47
-DEF_HELPER_FLAGS_4(neon_sqshlui_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_5(neon_sqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
49
+DEF_HELPER_FLAGS_5(neon_sqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
50
+DEF_HELPER_FLAGS_5(neon_sqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
51
+DEF_HELPER_FLAGS_5(neon_sqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
52
+DEF_HELPER_FLAGS_5(neon_uqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
53
+DEF_HELPER_FLAGS_5(neon_uqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
54
+DEF_HELPER_FLAGS_5(neon_uqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
55
+DEF_HELPER_FLAGS_5(neon_uqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
56
+DEF_HELPER_FLAGS_5(neon_sqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
57
+DEF_HELPER_FLAGS_5(neon_sqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
58
+DEF_HELPER_FLAGS_5(neon_sqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
59
+DEF_HELPER_FLAGS_5(neon_sqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
60
+DEF_HELPER_FLAGS_5(neon_uqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
61
+DEF_HELPER_FLAGS_5(neon_uqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
62
+DEF_HELPER_FLAGS_5(neon_uqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
63
+DEF_HELPER_FLAGS_5(neon_uqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, env, i32)
64
+DEF_HELPER_FLAGS_4(neon_sqshli_b, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
65
+DEF_HELPER_FLAGS_4(neon_sqshli_h, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
66
+DEF_HELPER_FLAGS_4(neon_sqshli_s, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
67
+DEF_HELPER_FLAGS_4(neon_sqshli_d, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
68
+DEF_HELPER_FLAGS_4(neon_uqshli_b, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
69
+DEF_HELPER_FLAGS_4(neon_uqshli_h, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
70
+DEF_HELPER_FLAGS_4(neon_uqshli_s, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
71
+DEF_HELPER_FLAGS_4(neon_uqshli_d, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
72
+DEF_HELPER_FLAGS_4(neon_sqshlui_b, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
73
+DEF_HELPER_FLAGS_4(neon_sqshlui_h, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
74
+DEF_HELPER_FLAGS_4(neon_sqshlui_s, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
75
+DEF_HELPER_FLAGS_4(neon_sqshlui_d, TCG_CALL_NO_RWG, void, ptr, ptr, env, i32)
76
77
DEF_HELPER_FLAGS_4(gvec_srshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
78
DEF_HELPER_FLAGS_4(gvec_srshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
79
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/tcg/neon_helper.c
82
+++ b/target/arm/tcg/neon_helper.c
83
@@ -XXX,XX +XXX,XX @@ void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
36
}
84
}
37
85
38
#define h2g_nocheck(x) ({ \
86
#define NEON_GVEC_VOP2_ENV(name, vtype) \
39
- unsigned long __ret = (unsigned long)(x) - guest_base; \
87
-void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
40
+ uintptr_t __ret = (uintptr_t)(x) - guest_base; \
88
+void HELPER(name)(void *vd, void *vn, void *vm, CPUARMState *env, uint32_t desc) \
41
(abi_ptr)__ret; \
89
{ \
42
})
90
intptr_t i, opr_sz = simd_oprsz(desc); \
43
91
vtype *d = vd, *n = vn, *m = vm; \
92
- CPUARMState *env = venv; \
93
for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
94
NEON_FN(d[i], n[i], m[i]); \
95
} \
96
@@ -XXX,XX +XXX,XX @@ void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
97
}
98
99
#define NEON_GVEC_VOP2i_ENV(name, vtype) \
100
-void HELPER(name)(void *vd, void *vn, void *venv, uint32_t desc) \
101
+void HELPER(name)(void *vd, void *vn, CPUARMState *env, uint32_t desc) \
102
{ \
103
intptr_t i, opr_sz = simd_oprsz(desc); \
104
int imm = simd_data(desc); \
105
vtype *d = vd, *n = vn; \
106
- CPUARMState *env = venv; \
107
for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
108
NEON_FN(d[i], n[i], imm); \
109
} \
44
--
110
--
45
2.20.1
111
2.34.1
46
112
47
113
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Resolve the untagged address once, using thread_cpu.
3
Pass float_status not env to match other functions.
4
Tidy the DEBUG_REMAP code using glib routines.
5
4
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20210210000223.884088-20-richard.henderson@linaro.org
7
Message-id: 20241206031952.78776-2-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
linux-user/uaccess.c | 27 ++++++++++++++-------------
10
target/arm/tcg/helper-a64.h | 2 +-
12
1 file changed, 14 insertions(+), 13 deletions(-)
11
target/arm/tcg/helper-a64.c | 3 +--
12
target/arm/tcg/translate-a64.c | 2 +-
13
3 files changed, 3 insertions(+), 4 deletions(-)
13
14
14
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
15
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/uaccess.c
17
--- a/target/arm/tcg/helper-a64.h
17
+++ b/linux-user/uaccess.c
18
+++ b/target/arm/tcg/helper-a64.h
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, fpst)
19
20
DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, fpst)
20
void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
21
DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, fpst)
22
DEF_HELPER_FLAGS_2(frecpx_f16, TCG_CALL_NO_RWG, f16, f16, fpst)
23
-DEF_HELPER_FLAGS_2(fcvtx_f64_to_f32, TCG_CALL_NO_RWG, f32, f64, env)
24
+DEF_HELPER_FLAGS_2(fcvtx_f64_to_f32, TCG_CALL_NO_RWG, f32, f64, fpst)
25
DEF_HELPER_FLAGS_3(crc32_64, TCG_CALL_NO_RWG_SE, i64, i64, i64, i32)
26
DEF_HELPER_FLAGS_3(crc32c_64, TCG_CALL_NO_RWG_SE, i64, i64, i64, i32)
27
DEF_HELPER_FLAGS_3(advsimd_maxh, TCG_CALL_NO_RWG, f16, f16, f16, fpst)
28
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/tcg/helper-a64.c
31
+++ b/target/arm/tcg/helper-a64.c
32
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, float_status *fpst)
33
}
34
}
35
36
-float32 HELPER(fcvtx_f64_to_f32)(float64 a, CPUARMState *env)
37
+float32 HELPER(fcvtx_f64_to_f32)(float64 a, float_status *fpst)
21
{
38
{
22
+ void *host_addr;
39
float32 r;
23
+
40
- float_status *fpst = &env->vfp.fp_status;
24
+ guest_addr = cpu_untagged_addr(thread_cpu, guest_addr);
41
int old = get_float_rounding_mode(fpst);
25
if (!access_ok_untagged(type, guest_addr, len)) {
42
26
return NULL;
43
set_float_rounding_mode(float_round_to_odd, fpst);
27
}
44
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
+ host_addr = g2h_untagged(guest_addr);
45
index XXXXXXX..XXXXXXX 100644
29
#ifdef DEBUG_REMAP
46
--- a/target/arm/tcg/translate-a64.c
30
- {
47
+++ b/target/arm/tcg/translate-a64.c
31
- void *addr;
48
@@ -XXX,XX +XXX,XX @@ static void gen_fcvtxn_sd(TCGv_i64 d, TCGv_i64 n)
32
- addr = g_malloc(len);
49
* with von Neumann rounding (round to odd)
33
- if (copy) {
50
*/
34
- memcpy(addr, g2h(guest_addr), len);
51
TCGv_i32 tmp = tcg_temp_new_i32();
35
- } else {
52
- gen_helper_fcvtx_f64_to_f32(tmp, n, tcg_env);
36
- memset(addr, 0, len);
53
+ gen_helper_fcvtx_f64_to_f32(tmp, n, fpstatus_ptr(FPST_FPCR));
37
- }
54
tcg_gen_extu_i32_i64(d, tmp);
38
- return addr;
39
+ if (copy) {
40
+ host_addr = g_memdup(host_addr, len);
41
+ } else {
42
+ host_addr = g_malloc0(len);
43
}
44
-#else
45
- return g2h_untagged(guest_addr);
46
#endif
47
+ return host_addr;
48
}
55
}
49
56
50
#ifdef DEBUG_REMAP
51
void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len)
52
{
53
+ void *host_ptr_conv;
54
+
55
if (!host_ptr) {
56
return;
57
}
58
- if (host_ptr == g2h_untagged(guest_addr)) {
59
+ host_ptr_conv = g2h(thread_cpu, guest_addr);
60
+ if (host_ptr == host_ptr_conv) {
61
return;
62
}
63
if (len != 0) {
64
- memcpy(g2h_untagged(guest_addr), host_ptr, len);
65
+ memcpy(host_ptr_conv, host_ptr, len);
66
}
67
g_free(host_ptr);
68
}
69
--
57
--
70
2.20.1
58
2.34.1
71
59
72
60
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide both tagged and untagged versions of access_ok.
3
Pass float_status not env to match other functions.
4
In a few places use thread_cpu, as the user is several
5
callees removed from do_syscall1.
6
4
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210210000223.884088-17-richard.henderson@linaro.org
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Message-id: 20241206031952.78776-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
linux-user/qemu.h | 11 +++++++++--
10
target/arm/helper.h | 4 ++--
13
linux-user/elfload.c | 2 +-
11
target/arm/tcg/translate-a64.c | 15 ++++++++++-----
14
linux-user/hppa/cpu_loop.c | 8 ++++----
12
target/arm/tcg/translate-vfp.c | 4 ++--
15
linux-user/i386/cpu_loop.c | 2 +-
13
target/arm/vfp_helper.c | 8 ++++----
16
linux-user/i386/signal.c | 5 +++--
14
4 files changed, 18 insertions(+), 13 deletions(-)
17
linux-user/syscall.c | 9 ++++++---
18
6 files changed, 24 insertions(+), 13 deletions(-)
19
15
20
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/linux-user/qemu.h
18
--- a/target/arm/helper.h
23
+++ b/linux-user/qemu.h
19
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmpeh, void, f16, f16, env)
25
#define VERIFY_READ PAGE_READ
21
DEF_HELPER_3(vfp_cmpes, void, f32, f32, env)
26
#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
22
DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
27
23
28
-static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
24
-DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
29
+static inline bool access_ok_untagged(int type, abi_ulong addr, abi_ulong size)
25
-DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
26
+DEF_HELPER_2(vfp_fcvtds, f64, f32, fpst)
27
+DEF_HELPER_2(vfp_fcvtsd, f32, f64, fpst)
28
DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, fpst)
29
DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, fpst)
30
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_s_ds(DisasContext *s, arg_rr *a)
36
if (fp_access_check(s)) {
37
TCGv_i32 tcg_rn = read_fp_sreg(s, a->rn);
38
TCGv_i64 tcg_rd = tcg_temp_new_i64();
39
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
40
41
- gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, tcg_env);
42
+ gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, fpst);
43
write_fp_dreg(s, a->rd, tcg_rd);
44
}
45
return true;
46
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_s_sd(DisasContext *s, arg_rr *a)
47
if (fp_access_check(s)) {
48
TCGv_i64 tcg_rn = read_fp_dreg(s, a->rn);
49
TCGv_i32 tcg_rd = tcg_temp_new_i32();
50
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
51
52
- gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, tcg_env);
53
+ gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, fpst);
54
write_fp_sreg(s, a->rd, tcg_rd);
55
}
56
return true;
57
@@ -XXX,XX +XXX,XX @@ static void gen_fcvtn_hs(TCGv_i64 d, TCGv_i64 n)
58
static void gen_fcvtn_sd(TCGv_i64 d, TCGv_i64 n)
30
{
59
{
31
if (size == 0
60
TCGv_i32 tmp = tcg_temp_new_i32();
32
? !guest_addr_valid_untagged(addr)
61
- gen_helper_vfp_fcvtsd(tmp, n, tcg_env);
33
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
62
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
34
return page_check_range((target_ulong)addr, size, type) == 0;
63
+
64
+ gen_helper_vfp_fcvtsd(tmp, n, fpst);
65
tcg_gen_extu_i32_i64(d, tmp);
35
}
66
}
36
67
37
+static inline bool access_ok(CPUState *cpu, int type,
68
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTL_v(DisasContext *s, arg_qrr_e *a)
38
+ abi_ulong addr, abi_ulong size)
69
* The only instruction like this is FCVTL.
39
+{
40
+ return access_ok_untagged(type, cpu_untagged_addr(cpu, addr), size);
41
+}
42
+
43
/* NOTE __get_user and __put_user use host pointers and don't check access.
44
These are usually used to access struct data members once the struct has
45
been locked - usually with lock_user_struct. */
46
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
47
host area will have the same contents as the guest. */
48
static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
49
{
50
- if (!access_ok(type, guest_addr, len))
51
+ if (!access_ok_untagged(type, guest_addr, len)) {
52
return NULL;
53
+ }
54
#ifdef DEBUG_REMAP
55
{
56
void *addr;
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/linux-user/elfload.c
60
+++ b/linux-user/elfload.c
61
@@ -XXX,XX +XXX,XX @@ static int vma_get_mapping_count(const struct mm_struct *mm)
62
static abi_ulong vma_dump_size(const struct vm_area_struct *vma)
63
{
64
/* if we cannot even read the first page, skip it */
65
- if (!access_ok(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
66
+ if (!access_ok_untagged(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
67
return (0);
68
69
/*
70
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/linux-user/hppa/cpu_loop.c
73
+++ b/linux-user/hppa/cpu_loop.c
74
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
75
return -TARGET_ENOSYS;
76
77
case 0: /* elf32 atomic 32bit cmpxchg */
78
- if ((addr & 3) || !access_ok(VERIFY_WRITE, addr, 4)) {
79
+ if ((addr & 3) || !access_ok(cs, VERIFY_WRITE, addr, 4)) {
80
return -TARGET_EFAULT;
81
}
82
old = tswap32(old);
83
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
84
return -TARGET_ENOSYS;
85
}
86
if (((addr | old | new) & ((1 << size) - 1))
87
- || !access_ok(VERIFY_WRITE, addr, 1 << size)
88
- || !access_ok(VERIFY_READ, old, 1 << size)
89
- || !access_ok(VERIFY_READ, new, 1 << size)) {
90
+ || !access_ok(cs, VERIFY_WRITE, addr, 1 << size)
91
+ || !access_ok(cs, VERIFY_READ, old, 1 << size)
92
+ || !access_ok(cs, VERIFY_READ, new, 1 << size)) {
93
return -TARGET_EFAULT;
94
}
95
/* Note that below we use host-endian loads so that the cmpxchg
96
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/linux-user/i386/cpu_loop.c
99
+++ b/linux-user/i386/cpu_loop.c
100
@@ -XXX,XX +XXX,XX @@ static bool write_ok_or_segv(CPUX86State *env, abi_ptr addr, size_t len)
101
* For all the vsyscalls, NULL means "don't write anything" not
102
* "write it at address 0".
103
*/
70
*/
104
- if (addr == 0 || access_ok(VERIFY_WRITE, addr, len)) {
71
int pass;
105
+ if (addr == 0 || access_ok(env_cpu(env), VERIFY_WRITE, addr, len)) {
72
+ TCGv_ptr fpst;
73
74
if (!fp_access_check(s)) {
106
return true;
75
return true;
107
}
76
}
108
77
109
diff --git a/linux-user/i386/signal.c b/linux-user/i386/signal.c
78
+ fpst = fpstatus_ptr(FPST_FPCR);
79
if (a->esz == MO_64) {
80
/* 32 -> 64 bit fp conversion */
81
TCGv_i64 tcg_res[2];
82
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTL_v(DisasContext *s, arg_qrr_e *a)
83
for (pass = 0; pass < 2; pass++) {
84
tcg_res[pass] = tcg_temp_new_i64();
85
read_vec_element_i32(s, tcg_op, a->rn, srcelt + pass, MO_32);
86
- gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, tcg_env);
87
+ gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, fpst);
88
}
89
for (pass = 0; pass < 2; pass++) {
90
write_vec_element(s, tcg_res[pass], a->rd, pass, MO_64);
91
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTL_v(DisasContext *s, arg_qrr_e *a)
92
/* 16 -> 32 bit fp conversion */
93
int srcelt = a->q ? 4 : 0;
94
TCGv_i32 tcg_res[4];
95
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
96
TCGv_i32 ahp = get_ahp_flag();
97
98
for (pass = 0; pass < 4; pass++) {
99
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
110
index XXXXXXX..XXXXXXX 100644
100
index XXXXXXX..XXXXXXX 100644
111
--- a/linux-user/i386/signal.c
101
--- a/target/arm/tcg/translate-vfp.c
112
+++ b/linux-user/i386/signal.c
102
+++ b/target/arm/tcg/translate-vfp.c
113
@@ -XXX,XX +XXX,XX @@ restore_sigcontext(CPUX86State *env, struct target_sigcontext *sc)
103
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
114
104
vm = tcg_temp_new_i32();
115
fpstate_addr = tswapl(sc->fpstate);
105
vd = tcg_temp_new_i64();
116
if (fpstate_addr != 0) {
106
vfp_load_reg32(vm, a->vm);
117
- if (!access_ok(VERIFY_READ, fpstate_addr,
107
- gen_helper_vfp_fcvtds(vd, vm, tcg_env);
118
- sizeof(struct target_fpstate)))
108
+ gen_helper_vfp_fcvtds(vd, vm, fpstatus_ptr(FPST_FPCR));
119
+ if (!access_ok(env_cpu(env), VERIFY_READ, fpstate_addr,
109
vfp_store_reg64(vd, a->vd);
120
+ sizeof(struct target_fpstate))) {
110
return true;
121
goto badframe;
111
}
122
+ }
112
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
123
#ifndef TARGET_X86_64
113
vd = tcg_temp_new_i32();
124
cpu_x86_frstor(env, fpstate_addr, 1);
114
vm = tcg_temp_new_i64();
125
#else
115
vfp_load_reg64(vm, a->vm);
126
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
116
- gen_helper_vfp_fcvtsd(vd, vm, tcg_env);
117
+ gen_helper_vfp_fcvtsd(vd, vm, fpstatus_ptr(FPST_FPCR));
118
vfp_store_reg32(vd, a->vd);
119
return true;
120
}
121
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
127
index XXXXXXX..XXXXXXX 100644
122
index XXXXXXX..XXXXXXX 100644
128
--- a/linux-user/syscall.c
123
--- a/target/arm/vfp_helper.c
129
+++ b/linux-user/syscall.c
124
+++ b/target/arm/vfp_helper.c
130
@@ -XXX,XX +XXX,XX @@ static abi_long do_accept4(int fd, abi_ulong target_addr,
125
@@ -XXX,XX +XXX,XX @@ FLOAT_CONVS(ui, d, float64, 64, u)
131
return -TARGET_EINVAL;
126
#undef FLOAT_CONVS
132
}
127
133
128
/* floating point conversion */
134
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
129
-float64 VFP_HELPER(fcvtd, s)(float32 x, CPUARMState *env)
135
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
130
+float64 VFP_HELPER(fcvtd, s)(float32 x, float_status *status)
136
return -TARGET_EFAULT;
131
{
137
+ }
132
- return float32_to_float64(x, &env->vfp.fp_status);
138
133
+ return float32_to_float64(x, status);
139
addr = alloca(addrlen);
134
}
140
135
141
@@ -XXX,XX +XXX,XX @@ static abi_long do_getpeername(int fd, abi_ulong target_addr,
136
-float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
142
return -TARGET_EINVAL;
137
+float32 VFP_HELPER(fcvts, d)(float64 x, float_status *status)
143
}
138
{
144
139
- return float64_to_float32(x, &env->vfp.fp_status);
145
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
140
+ return float64_to_float32(x, status);
146
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
141
}
147
return -TARGET_EFAULT;
142
148
+ }
143
uint32_t HELPER(bfcvt)(float32 x, float_status *status)
149
150
addr = alloca(addrlen);
151
152
@@ -XXX,XX +XXX,XX @@ static abi_long do_getsockname(int fd, abi_ulong target_addr,
153
return -TARGET_EINVAL;
154
}
155
156
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
157
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
158
return -TARGET_EFAULT;
159
+ }
160
161
addr = alloca(addrlen);
162
163
--
144
--
164
2.20.1
145
2.34.1
165
146
166
147
1
From: Rebecca Cran <rebecca@nuviainc.com>
1
FEAT_XS introduces a set of new TLBI maintenance instructions with an
2
"nXS" qualifier. These behave like the standard ones except that
3
they do not wait for memory accesses with the XS attribute to
4
complete. They have an interaction with the fine-grained-trap
5
handling: the FGT bits that a hypervisor can use to trap TLBI
6
maintenance instructions normally trap also the nXS variants, but the
7
hypervisor can elect to not trap the nXS variants by setting
8
HCRX_EL2.FGTnXS to 1.
2
9
3
Add support for FEAT_DIT. DIT (Data Independent Timing) is a required
10
Add support to our FGT mechanism for these TLBI bits. For each
4
feature for ARMv8.4. Since virtual machine execution is largely
11
TLBI-trapping FGT bit we define, for example:
5
nondeterministic and TCG is outside of the security domain, it's
12
* FGT_TLBIVAE1 -- the same value we do at present for the
6
implemented as a NOP.
13
normal variant of the insn
14
* FGT_TLBIVAE1NXS -- for the nXS qualified insn; the value of
15
this enum has an NXS bit ORed into it
7
16
8
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
17
In access_check_cp_reg() we can then ignore the trap bit for an
18
access where ri->fgt has the NXS bit set and HCRX_EL2.FGTnXS is 1.
19
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210208065700.19454-2-rebecca@nuviainc.com
22
Message-id: 20241211144440.2700268-2-peter.maydell@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
23
---
13
target/arm/cpu.h | 12 ++++++++++++
24
target/arm/cpregs.h | 72 ++++++++++++++++++++++----------------
14
target/arm/internals.h | 6 ++++++
25
target/arm/cpu-features.h | 5 +++
15
target/arm/helper.c | 22 ++++++++++++++++++++++
26
target/arm/helper.c | 5 ++-
16
target/arm/translate-a64.c | 12 ++++++++++++
27
target/arm/tcg/op_helper.c | 11 +++++-
17
4 files changed, 52 insertions(+)
28
4 files changed, 61 insertions(+), 32 deletions(-)
18
29
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
30
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
20
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
32
--- a/target/arm/cpregs.h
22
+++ b/target/arm/cpu.h
33
+++ b/target/arm/cpregs.h
23
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
34
@@ -XXX,XX +XXX,XX @@ FIELD(HDFGWTR_EL2, NBRBCTL, 60, 1)
24
#define CPSR_IT_2_7 (0xfc00U)
35
FIELD(HDFGWTR_EL2, NBRBDATA, 61, 1)
25
#define CPSR_GE (0xfU << 16)
36
FIELD(HDFGWTR_EL2, NPMSNEVFR_EL1, 62, 1)
26
#define CPSR_IL (1U << 20)
37
27
+#define CPSR_DIT (1U << 21)
38
+FIELD(FGT, NXS, 13, 1) /* Honour HCRX_EL2.FGTnXS to suppress FGT */
28
#define CPSR_PAN (1U << 22)
39
/* Which fine-grained trap bit register to check, if any */
29
#define CPSR_J (1U << 24)
40
FIELD(FGT, TYPE, 10, 3)
30
#define CPSR_IT_0_1 (3U << 25)
41
FIELD(FGT, REV, 9, 1) /* Is bit sense reversed? */
31
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
42
@@ -XXX,XX +XXX,XX @@ FIELD(FGT, BITPOS, 0, 6) /* Bit position within the uint64_t */
32
#define PSTATE_SS (1U << 21)
43
#define DO_REV_BIT(REG, BITNAME) \
33
#define PSTATE_PAN (1U << 22)
44
FGT_##BITNAME = FGT_##REG | FGT_REV | R_##REG##_EL2_##BITNAME##_SHIFT
34
#define PSTATE_UAO (1U << 23)
45
35
+#define PSTATE_DIT (1U << 24)
46
+/*
36
#define PSTATE_TCO (1U << 25)
47
+ * The FGT bits for TLBI maintenance instructions accessible at EL1 always
37
#define PSTATE_V (1U << 28)
48
+ * affect the "normal" TLBI insns; they affect the corresponding TLBI insns
38
#define PSTATE_C (1U << 29)
49
+ * with the nXS qualifier only if HCRX_EL2.FGTnXS is 0. We define e.g.
39
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
50
+ * FGT_TLBIVAE1 to use for the normal insn, and FGT_TLBIVAE1NXS to use
40
return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
51
+ * for the nXS qualified insn.
52
+ */
53
+#define DO_TLBINXS_BIT(REG, BITNAME) \
54
+ FGT_##BITNAME = FGT_##REG | R_##REG##_EL2_##BITNAME##_SHIFT, \
55
+ FGT_##BITNAME##NXS = FGT_##BITNAME | R_FGT_NXS_MASK
56
+
57
typedef enum FGTBit {
58
/*
59
* These bits tell us which register arrays to use:
60
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
61
DO_BIT(HFGITR, ATS1E0W),
62
DO_BIT(HFGITR, ATS1E1RP),
63
DO_BIT(HFGITR, ATS1E1WP),
64
- DO_BIT(HFGITR, TLBIVMALLE1OS),
65
- DO_BIT(HFGITR, TLBIVAE1OS),
66
- DO_BIT(HFGITR, TLBIASIDE1OS),
67
- DO_BIT(HFGITR, TLBIVAAE1OS),
68
- DO_BIT(HFGITR, TLBIVALE1OS),
69
- DO_BIT(HFGITR, TLBIVAALE1OS),
70
- DO_BIT(HFGITR, TLBIRVAE1OS),
71
- DO_BIT(HFGITR, TLBIRVAAE1OS),
72
- DO_BIT(HFGITR, TLBIRVALE1OS),
73
- DO_BIT(HFGITR, TLBIRVAALE1OS),
74
- DO_BIT(HFGITR, TLBIVMALLE1IS),
75
- DO_BIT(HFGITR, TLBIVAE1IS),
76
- DO_BIT(HFGITR, TLBIASIDE1IS),
77
- DO_BIT(HFGITR, TLBIVAAE1IS),
78
- DO_BIT(HFGITR, TLBIVALE1IS),
79
- DO_BIT(HFGITR, TLBIVAALE1IS),
80
- DO_BIT(HFGITR, TLBIRVAE1IS),
81
- DO_BIT(HFGITR, TLBIRVAAE1IS),
82
- DO_BIT(HFGITR, TLBIRVALE1IS),
83
- DO_BIT(HFGITR, TLBIRVAALE1IS),
84
- DO_BIT(HFGITR, TLBIRVAE1),
85
- DO_BIT(HFGITR, TLBIRVAAE1),
86
- DO_BIT(HFGITR, TLBIRVALE1),
87
- DO_BIT(HFGITR, TLBIRVAALE1),
88
- DO_BIT(HFGITR, TLBIVMALLE1),
89
- DO_BIT(HFGITR, TLBIVAE1),
90
- DO_BIT(HFGITR, TLBIASIDE1),
91
- DO_BIT(HFGITR, TLBIVAAE1),
92
- DO_BIT(HFGITR, TLBIVALE1),
93
- DO_BIT(HFGITR, TLBIVAALE1),
94
+ DO_TLBINXS_BIT(HFGITR, TLBIVMALLE1OS),
95
+ DO_TLBINXS_BIT(HFGITR, TLBIVAE1OS),
96
+ DO_TLBINXS_BIT(HFGITR, TLBIASIDE1OS),
97
+ DO_TLBINXS_BIT(HFGITR, TLBIVAAE1OS),
98
+ DO_TLBINXS_BIT(HFGITR, TLBIVALE1OS),
99
+ DO_TLBINXS_BIT(HFGITR, TLBIVAALE1OS),
100
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAE1OS),
101
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAAE1OS),
102
+ DO_TLBINXS_BIT(HFGITR, TLBIRVALE1OS),
103
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAALE1OS),
104
+ DO_TLBINXS_BIT(HFGITR, TLBIVMALLE1IS),
105
+ DO_TLBINXS_BIT(HFGITR, TLBIVAE1IS),
106
+ DO_TLBINXS_BIT(HFGITR, TLBIASIDE1IS),
107
+ DO_TLBINXS_BIT(HFGITR, TLBIVAAE1IS),
108
+ DO_TLBINXS_BIT(HFGITR, TLBIVALE1IS),
109
+ DO_TLBINXS_BIT(HFGITR, TLBIVAALE1IS),
110
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAE1IS),
111
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAAE1IS),
112
+ DO_TLBINXS_BIT(HFGITR, TLBIRVALE1IS),
113
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAALE1IS),
114
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAE1),
115
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAAE1),
116
+ DO_TLBINXS_BIT(HFGITR, TLBIRVALE1),
117
+ DO_TLBINXS_BIT(HFGITR, TLBIRVAALE1),
118
+ DO_TLBINXS_BIT(HFGITR, TLBIVMALLE1),
119
+ DO_TLBINXS_BIT(HFGITR, TLBIVAE1),
120
+ DO_TLBINXS_BIT(HFGITR, TLBIASIDE1),
121
+ DO_TLBINXS_BIT(HFGITR, TLBIVAAE1),
122
+ DO_TLBINXS_BIT(HFGITR, TLBIVALE1),
123
+ DO_TLBINXS_BIT(HFGITR, TLBIVAALE1),
124
DO_BIT(HFGITR, CFPRCTX),
125
DO_BIT(HFGITR, DVPRCTX),
126
DO_BIT(HFGITR, CPPRCTX),
127
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/cpu-features.h
130
+++ b/target/arm/cpu-features.h
131
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
132
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
41
}
133
}
42
134
43
+static inline bool isar_feature_aa32_dit(const ARMISARegisters *id)
135
+static inline bool isar_feature_aa64_xs(const ARMISARegisters *id)
44
+{
136
+{
45
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, DIT) != 0;
137
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, XS) != 0;
46
+}
138
+}
47
+
139
+
48
/*
140
/*
49
* 64-bit feature tests via id registers.
141
* These are the values from APA/API/APA3.
50
*/
142
* In general these must be compared '>=', per the normal Arm ARM
51
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
52
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
53
}
54
55
+static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
56
+{
57
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
58
+}
59
+
60
/*
61
* Feature tests for "does this exist in either 32-bit or 64-bit?"
62
*/
63
diff --git a/target/arm/internals.h b/target/arm/internals.h
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/internals.h
66
+++ b/target/arm/internals.h
67
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
68
if (isar_feature_aa32_pan(id)) {
69
valid |= CPSR_PAN;
70
}
71
+ if (isar_feature_aa32_dit(id)) {
72
+ valid |= CPSR_DIT;
73
+ }
74
75
return valid;
76
}
77
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
78
if (isar_feature_aa64_uao(id)) {
79
valid |= PSTATE_UAO;
80
}
81
+ if (isar_feature_aa64_dit(id)) {
82
+ valid |= PSTATE_DIT;
83
+ }
84
if (isar_feature_aa64_mte(id)) {
85
valid |= PSTATE_TCO;
86
}
87
diff --git a/target/arm/helper.c b/target/arm/helper.c
143
diff --git a/target/arm/helper.c b/target/arm/helper.c
88
index XXXXXXX..XXXXXXX 100644
144
index XXXXXXX..XXXXXXX 100644
89
--- a/target/arm/helper.c
145
--- a/target/arm/helper.c
90
+++ b/target/arm/helper.c
146
+++ b/target/arm/helper.c
91
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo uao_reginfo = {
147
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
.readfn = aa64_uao_read, .writefn = aa64_uao_write
148
valid_mask |= HCRX_TALLINT | HCRX_VINMI | HCRX_VFNMI;
93
};
94
95
+static uint64_t aa64_dit_read(CPUARMState *env, const ARMCPRegInfo *ri)
96
+{
97
+ return env->pstate & PSTATE_DIT;
98
+}
99
+
100
+static void aa64_dit_write(CPUARMState *env, const ARMCPRegInfo *ri,
101
+ uint64_t value)
102
+{
103
+ env->pstate = (env->pstate & ~PSTATE_DIT) | (value & PSTATE_DIT);
104
+}
105
+
106
+static const ARMCPRegInfo dit_reginfo = {
107
+ .name = "DIT", .state = ARM_CP_STATE_AA64,
108
+ .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 5,
109
+ .type = ARM_CP_NO_RAW, .access = PL0_RW,
110
+ .readfn = aa64_dit_read, .writefn = aa64_dit_write
111
+};
112
+
113
static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
114
const ARMCPRegInfo *ri,
115
bool isread)
116
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
117
define_one_arm_cp_reg(cpu, &uao_reginfo);
118
}
149
}
119
150
/* FEAT_CMOW adds CMOW */
120
+ if (cpu_isar_feature(aa64_dit, cpu)) {
151
-
121
+ define_one_arm_cp_reg(cpu, &dit_reginfo);
152
if (cpu_isar_feature(aa64_cmow, cpu)) {
153
valid_mask |= HCRX_CMOW;
154
}
155
+ /* FEAT_XS adds FGTnXS, FnXS */
156
+ if (cpu_isar_feature(aa64_xs, cpu)) {
157
+ valid_mask |= HCRX_FGTNXS | HCRX_FNXS;
122
+ }
158
+ }
123
+
159
124
if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
160
/* Clear RES0 bits. */
125
define_arm_cp_regs(cpu, vhe_reginfo);
161
env->cp15.hcrx_el2 = value & valid_mask;
126
}
162
diff --git a/target/arm/tcg/op_helper.c b/target/arm/tcg/op_helper.c
127
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
128
index XXXXXXX..XXXXXXX 100644
163
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/translate-a64.c
164
--- a/target/arm/tcg/op_helper.c
130
+++ b/target/arm/translate-a64.c
165
+++ b/target/arm/tcg/op_helper.c
131
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
166
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
132
tcg_temp_free_i32(t1);
167
unsigned int idx = FIELD_EX32(ri->fgt, FGT, IDX);
133
break;
168
unsigned int bitpos = FIELD_EX32(ri->fgt, FGT, BITPOS);
134
169
bool rev = FIELD_EX32(ri->fgt, FGT, REV);
135
+ case 0x1a: /* DIT */
170
+ bool nxs = FIELD_EX32(ri->fgt, FGT, NXS);
136
+ if (!dc_isar_feature(aa64_dit, s)) {
171
bool trapbit;
137
+ goto do_unallocated;
172
173
if (ri->fgt & FGT_EXEC) {
174
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
175
trapword = env->cp15.fgt_write[idx];
176
}
177
178
- trapbit = extract64(trapword, bitpos, 1);
179
+ if (nxs && (arm_hcrx_el2_eff(env) & HCRX_FGTNXS)) {
180
+ /*
181
+ * If HCRX_EL2.FGTnXS is 1 then the fine-grained trap for
182
+ * TLBI maintenance insns does *not* apply to the nXS variant.
183
+ */
184
+ trapbit = 0;
185
+ } else {
186
+ trapbit = extract64(trapword, bitpos, 1);
138
+ }
187
+ }
139
+ if (crm & 1) {
188
if (trapbit != rev) {
140
+ set_pstate_bits(PSTATE_DIT);
189
res = CP_ACCESS_TRAP_EL2;
141
+ } else {
190
goto fail;
142
+ clear_pstate_bits(PSTATE_DIT);
143
+ }
144
+ /* There's no need to rebuild hflags because DIT is a nop */
145
+ break;
146
+
147
case 0x1e: /* DAIFSet */
148
t1 = tcg_const_i32(crm);
149
gen_helper_msr_i_daifset(cpu_env, t1);
150
--
191
--
151
2.20.1
192
2.34.1
152
153
From: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>

The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them
to 1 only when there is no support for AArch32 at EL1 or above.

The reset value will be 0x30 only if the CPU is AArch64-only; if there
is support for AArch32 at EL1 or above, it will be reset to 0.

Also adds helper function isar_feature_aa64_aa32_el1 to check if AArch32
is supported at EL1 or above.

Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 +++++
target/arm/helper.c | 16 ++++++++++++++--
2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL0) >= 2;
}

+static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
+}
+
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
ARMCPU *cpu = env_archcpu(env);

if (ri->state == ARM_CP_STATE_AA64) {
- value |= SCR_FW | SCR_AW; /* these two bits are RES1. */
+ if (arm_feature(env, ARM_FEATURE_AARCH64) &&
+ !cpu_isar_feature(aa64_aa32_el1, cpu)) {
+ value |= SCR_FW | SCR_AW; /* these two bits are RES1. */
+ }
valid_mask &= ~SCR_NET;

if (cpu_isar_feature(aa64_lor, cpu)) {
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
raw_write(env, ri, value);
}

+static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ /*
+ * scr_write will set the RES1 bits on an AArch64-only CPU.
+ * The reset value will be 0x30 on an AArch64-only CPU and 0 otherwise.
+ */
+ scr_write(env, ri, 0);
+}
+
static CPAccessResult access_aa64_tid2(CPUARMState *env,
 const ARMCPRegInfo *ri,
 bool isread)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
{ .name = "SCR_EL3", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.scr_el3),
- .resetvalue = 0, .writefn = scr_write },
+ .resetfn = scr_reset, .writefn = scr_write },
{ .name = "SCR", .type = ARM_CP_ALIAS | ARM_CP_NEWEL,
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL1_RW, .accessfn = access_trap_aa32s_el1,
--
2.20.1

All of the TLBI insns with an NXS variant put that variant at the
same encoding but with a CRn field that is one greater than for the
original TLBI insn. To avoid having to define every TLBI insn
effectively twice, once in the normal way and once in a set of cpreg
arrays that are only registered when FEAT_XS is present, we define a
new ARM_CP_ADD_TLBI_NXS type flag for cpregs. When this flag is set
in a cpreg struct and FEAT_XS is present,
define_one_arm_cp_reg_with_opaque() will automatically add a second
cpreg to the hash table for the TLBI NXS insn with:
* the crn+1 encoding
* an FGT field that indicates that it should honour HCR_EL2.FGTnXS
* a name with the "NXS" suffix

(If there are future TLBI NXS insns that don't use this same
encoding convention, it is also possible to define them manually.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211144440.2700268-3-peter.maydell@linaro.org
---
target/arm/cpregs.h | 8 ++++++++
target/arm/helper.c | 25 +++++++++++++++++++++++++
2 files changed, 33 insertions(+)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
 * equivalent EL1 register when FEAT_NV2 is enabled.
 */
ARM_CP_NV2_REDIRECT = 1 << 20,
+ /*
+ * Flag: this is a TLBI insn which (when FEAT_XS is present) also has
+ * an NXS variant at the same encoding except that crn is 1 greater,
+ * so when registering this cpreg automatically also register one
+ * for the TLBI NXS variant. (For QEMU the NXS variant behaves
+ * identically to the normal one, other than FGT trapping handling.)
+ */
+ ARM_CP_ADD_TLBI_NXS = 1 << 21,
};

/*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
if (r->state != state && r->state != ARM_CP_STATE_BOTH) {
continue;
}
+ if ((r->type & ARM_CP_ADD_TLBI_NXS) &&
+ cpu_isar_feature(aa64_xs, cpu)) {
+ /*
+ * This is a TLBI insn which has an NXS variant. The
+ * NXS variant is at the same encoding except that
+ * crn is +1, and has the same behaviour except for
+ * fine-grained trapping. Add the NXS insn here and
+ * then fall through to add the normal register.
+ * add_cpreg_to_hashtable() copies the cpreg struct
+ * and name that it is passed, so it's OK to use
+ * a local struct here.
+ */
+ ARMCPRegInfo nxs_ri = *r;
+ g_autofree char *name = g_strdup_printf("%sNXS", r->name);
+
+ assert(state == ARM_CP_STATE_AA64);
+ assert(nxs_ri.crn < 0xf);
+ nxs_ri.crn++;
+ if (nxs_ri.fgt) {
+ nxs_ri.fgt |= R_FGT_NXS_MASK;
+ }
+ add_cpreg_to_hashtable(cpu, &nxs_ri, opaque, state,
+ ARM_CP_SECSTATE_NS,
+ crm, opc1, opc2, name);
+ }
if (state == ARM_CP_STATE_AA32) {
/*
* Under AArch32 CP registers can be common
--
2.34.1
Deleted patch
From: Hao Wu <wuhaotsh@google.com>

NPCM7XX GPIO devices have been implemented in hw/gpio/npcm7xx-gpio.c. So
we removed them from the unimplemented devices list.

Reviewed-by: Doug Evans <dje@google.com>
Reviewed-by: Tyrong Ting <kfting@nuvoton.com>
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Message-id: 20210129005845.416272-2-wuhaotsh@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/npcm7xx.c | 8 --------
1 file changed, 8 deletions(-)

diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
create_unimplemented_device("npcm7xx.kcs", 0xf0007000, 4 * KiB);
create_unimplemented_device("npcm7xx.gfxi", 0xf000e000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[0]", 0xf0010000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[1]", 0xf0011000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[2]", 0xf0012000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[3]", 0xf0013000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[4]", 0xf0014000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[5]", 0xf0015000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[6]", 0xf0016000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[7]", 0xf0017000, 4 * KiB);
create_unimplemented_device("npcm7xx.smbus[0]", 0xf0080000, 4 * KiB);
create_unimplemented_device("npcm7xx.smbus[1]", 0xf0081000, 4 * KiB);
create_unimplemented_device("npcm7xx.smbus[2]", 0xf0082000, 4 * KiB);
--
2.20.1
Deleted patch
From: Rebecca Cran <rebecca@nuviainc.com>

Enable FEAT_DIT for the "max" AARCH64 CPU.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-4-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu64.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
+ t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
cpu->isar.id_aa64pfr0 = t;

t = cpu->isar.id_aa64pfr1;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
cpu->isar.id_isar6 = u;

+ u = cpu->isar.id_pfr0;
+ u = FIELD_DP32(u, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = u;
+
u = cpu->isar.id_mmfr3;
u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
cpu->isar.id_mmfr3 = u;
--
2.20.1
Deleted patch
From: Rebecca Cran <rebecca@nuviainc.com>

Enable FEAT_DIT for the "max" 32-bit CPU.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-5-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
cpu->isar.id_mmfr4 = t;
+
+ t = cpu->isar.id_pfr0;
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = t;
}
#endif
}
--
2.20.1
Deleted patch
Update infocenter.arm.com URLs for various pieces of Arm
documentation to the new developer.arm.com equivalents. (There is a
redirection in place from the old URLs, but we might as well update
our comments in case the redirect ever disappears in future.)

This patch covers all the URLs which are not MPS2/SSE-200/IoTKit
related (those are dealt with in a different patch).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210205171456.19939-1-peter.maydell@linaro.org
---
include/hw/dma/pl080.h | 7 ++++---
include/hw/misc/arm_integrator_debug.h | 2 +-
include/hw/ssi/pl022.h | 5 +++--
hw/arm/aspeed_ast2600.c | 2 +-
hw/arm/musca.c | 4 ++--
hw/misc/arm_integrator_debug.c | 2 +-
hw/timer/arm_timer.c | 7 ++++---
7 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/hw/dma/pl080.h b/include/hw/dma/pl080.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/dma/pl080.h
+++ b/include/hw/dma/pl080.h
@@ -XXX,XX +XXX,XX @@
 * (at your option) any later version.
 */

-/* This is a model of the Arm PrimeCell PL080/PL081 DMA controller:
+/*
+ * This is a model of the Arm PrimeCell PL080/PL081 DMA controller:
 * The PL080 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0196g/DDI0196.pdf
+ * https://developer.arm.com/documentation/ddi0196/latest
 * and the PL081 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0218e/DDI0218.pdf
+ * https://developer.arm.com/documentation/ddi0218/latest
 *
 * QEMU interface:
 * + sysbus IRQ 0: DMACINTR combined interrupt line
diff --git a/include/hw/misc/arm_integrator_debug.h b/include/hw/misc/arm_integrator_debug.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/misc/arm_integrator_debug.h
+++ b/include/hw/misc/arm_integrator_debug.h
@@ -XXX,XX +XXX,XX @@
 *
 * Browse the data sheet:
 *
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0159b/Babbfijf.html
+ * https://developer.arm.com/documentation/dui0159/b/peripherals-and-interfaces/debug-leds-and-dip-switch-interface
 *
 * Copyright (c) 2013 Alex Bennée <alex@bennee.com>
 *
diff --git a/include/hw/ssi/pl022.h b/include/hw/ssi/pl022.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ssi/pl022.h
+++ b/include/hw/ssi/pl022.h
@@ -XXX,XX +XXX,XX @@
 * (at your option) any later version.
 */

-/* This is a model of the Arm PrimeCell PL022 synchronous serial port.
+/*
+ * This is a model of the Arm PrimeCell PL022 synchronous serial port.
 * The PL022 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0194h/DDI0194H_ssp_pl022_trm.pdf
+ * https://developer.arm.com/documentation/ddi0194/latest
 *
 * QEMU interface:
 * + sysbus IRQ: SSPINTR combined interrupt line
diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_ast2600.c
+++ b/hw/arm/aspeed_ast2600.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_init(Object *obj)
/*
 * ASPEED ast2600 has 0xf as cluster ID
 *
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0388e/CIHEBGFG.html
+ * https://developer.arm.com/documentation/ddi0388/e/the-system-control-coprocessors/summary-of-system-control-coprocessor-registers/multiprocessor-affinity-register
 */
static uint64_t aspeed_calc_affinity(int cpu)
{
diff --git a/hw/arm/musca.c b/hw/arm/musca.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/musca.c
+++ b/hw/arm/musca.c
@@ -XXX,XX +XXX,XX @@
 * https://developer.arm.com/products/system-design/development-boards/iot-test-chips-and-boards/musca-a-test-chip-board
 * https://developer.arm.com/products/system-design/development-boards/iot-test-chips-and-boards/musca-b-test-chip-board
 * We model the A and B1 variants of this board, as described in the TRMs:
- * http://infocenter.arm.com/help/topic/com.arm.doc.101107_0000_00_en/index.html
- * http://infocenter.arm.com/help/topic/com.arm.doc.101312_0000_00_en/index.html
+ * https://developer.arm.com/documentation/101107/latest/
+ * https://developer.arm.com/documentation/101312/latest/
 */

#include "qemu/osdep.h"
diff --git a/hw/misc/arm_integrator_debug.c b/hw/misc/arm_integrator_debug.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/arm_integrator_debug.c
+++ b/hw/misc/arm_integrator_debug.c
@@ -XXX,XX +XXX,XX @@
 * to this area.
 *
 * The real h/w is described at:
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0159b/Babbfijf.html
+ * https://developer.arm.com/documentation/dui0159/b/peripherals-and-interfaces/debug-leds-and-dip-switch-interface
 *
 * Copyright (c) 2013 Alex Bennée <alex@bennee.com>
 *
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/arm_timer.c
+++ b/hw/timer/arm_timer.c
@@ -XXX,XX +XXX,XX @@ static arm_timer_state *arm_timer_init(uint32_t freq)
return s;
}

-/* ARM PrimeCell SP804 dual timer module.
+/*
+ * ARM PrimeCell SP804 dual timer module.
 * Docs at
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0271d/index.html
-*/
+ * https://developer.arm.com/documentation/ddi0271/latest/
+ */

#define TYPE_SP804 "sp804"
OBJECT_DECLARE_SIMPLE_TYPE(SP804State, SP804)
--
2.20.1
Deleted patch
In cpu_exec() we have a longstanding workaround for compilers which
do not correctly implement the part of the sigsetjmp()/siglongjmp()
spec which requires that local variables which are not changed
between the setjmp and the longjmp retain their value.

I recently ran across the upstream clang bug report for this; add a
link to it to the comment describing the workaround, and generally
expand the comment, so that we have a reasonable chance in future of
understanding why it's there and determining when we can remove it,
assuming clang eventually fixes the bug.

Remove the /* buggy compiler */ comments on the #else and #endif:
they don't add anything to understanding and are somewhat misleading
since they're sandwiching the code path for *non*-buggy compilers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210129130330.30820-1-peter.maydell@linaro.org
---
accel/tcg/cpu-exec.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
/* prepare setjmp context for exception handling */
if (sigsetjmp(cpu->jmp_env, 0) != 0) {
#if defined(__clang__)
- /* Some compilers wrongly smash all local variables after
- * siglongjmp. There were bug reports for gcc 4.5.0 and clang.
+ /*
+ * Some compilers wrongly smash all local variables after
+ * siglongjmp (the spec requires that only non-volatile locals
+ * which are changed between the sigsetjmp and siglongjmp are
+ * permitted to be trashed). There were bug reports for gcc
+ * 4.5.0 and clang. The bug is fixed in all versions of gcc
+ * that we support, but is still unfixed in clang:
+ * https://bugs.llvm.org/show_bug.cgi?id=21183
+ *
 * Reload essential local variables here for those compilers.
- * Newer versions of gcc would complain about this code (-Wclobbered). */
+ * Newer versions of gcc would complain about this code (-Wclobbered),
+ * so we only perform the workaround for clang.
+ */
cpu = current_cpu;
cc = CPU_GET_CLASS(cpu);
-#else /* buggy compiler */
- /* Assert that the compiler does not smash local variables. */
+#else
+ /*
+ * Non-buggy compilers preserve these locals; assert that
+ * they have the correct value.
+ */
g_assert(cpu == current_cpu);
g_assert(cc == CPU_GET_CLASS(cpu));
-#endif /* buggy compiler */
+#endif
+
#ifndef CONFIG_SOFTMMU
tcg_debug_assert(!have_mmap_lock());
#endif
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Record whether the backing page is anonymous, or if it has file
backing. This will allow us to get close to the Linux AArch64
ABI for MTE, which allows tag memory only on ram-backed VMAs.

The real ABI allows tag memory on files, when those files are
on ram-backed filesystems, such as tmpfs. We will not be able
to implement that in QEMU linux-user.

Thankfully, anonymous memory for malloc arenas is the primary
consumer of this feature, so this restricted version should
still be of use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/exec/cpu-all.h | 2 ++
linux-user/mmap.c | 3 +++
2 files changed, 5 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
#define PAGE_WRITE_INV 0x0020
/* For use with page_set_flags: page is being replaced; target_data cleared. */
#define PAGE_RESET 0x0040
+/* For linux-user, indicates that the page is MAP_ANON. */
+#define PAGE_ANON 0x0080

#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
/* FIXME: Code that sets/uses this is broken and needs to go away. */
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
}
}
the_end1:
+ if (flags & MAP_ANONYMOUS) {
+ page_flags |= PAGE_ANON;
+ }
page_flags |= PAGE_RESET;
page_set_flags(start, start + len, page_flags);
the_end:
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This is more descriptive than 'unsigned long'.
No functional change, since these match on all linux+bsd hosts.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/exec/cpu-all.h | 2 +-
bsd-user/main.c | 4 ++--
linux-user/elfload.c | 4 ++--
linux-user/main.c | 4 ++--
4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
/* On some host systems the guest address space is reserved on the host.
 * This allows the guest address space to be offset to a convenient location.
 */
-extern unsigned long guest_base;
+extern uintptr_t guest_base;
extern bool have_guest_base;
extern unsigned long reserved_va;

diff --git a/bsd-user/main.c b/bsd-user/main.c
index XXXXXXX..XXXXXXX 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -XXX,XX +XXX,XX @@

int singlestep;
unsigned long mmap_min_addr;
-unsigned long guest_base;
+uintptr_t guest_base;
bool have_guest_base;
unsigned long reserved_va;

@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
g_free(target_environ);

if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
- qemu_log("guest_base 0x%lx\n", guest_base);
+ qemu_log("guest_base %p\n", (void *)guest_base);
log_page_dump("binary load");

qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
void *addr, *test;

if (!QEMU_IS_ALIGNED(guest_base, align)) {
- fprintf(stderr, "Requested guest base 0x%lx does not satisfy "
+ fprintf(stderr, "Requested guest base %p does not satisfy "
"host minimum alignment (0x%lx)\n",
- guest_base, align);
+ (void *)guest_base, align);
exit(EXIT_FAILURE);
}

diff --git a/linux-user/main.c b/linux-user/main.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model;
static const char *cpu_type;
static const char *seed_optarg;
unsigned long mmap_min_addr;
-unsigned long guest_base;
+uintptr_t guest_base;
bool have_guest_base;

/*
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
g_free(target_environ);

if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
- qemu_log("guest_base 0x%lx\n", guest_base);
+ qemu_log("guest_base %p\n", (void *)guest_base);
log_page_dump("binary load");

qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
--
2.20.1
diff view generated by jsdifflib
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Return bool not int; pass abi_ulong not 'unsigned long'.
All callers use abi_ulong already, so the change in type
has no effect.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu_ldst.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
 #endif
 #define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
 
-static inline int guest_range_valid(unsigned long start, unsigned long len)
+static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
 {
     return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
 }
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Verify that addr + size - 1 does not wrap around.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/qemu.h | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
 #define VERIFY_READ 0
 #define VERIFY_WRITE 1 /* implies read access */
 
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
 {
-    return guest_addr_valid(addr) &&
-           (size == 0 || guest_addr_valid(addr + size - 1)) &&
-           page_check_range((target_ulong)addr, size,
-                            (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
+    if (!guest_addr_valid(addr)) {
+        return false;
+    }
+    if (size != 0 &&
+        (addr + size - 1 < addr ||
+         !guest_addr_valid(addr + size - 1))) {
+        return false;
+    }
+    return page_check_range((target_ulong)addr, size,
+                            (type == VERIFY_READ) ? PAGE_READ :
+                            (PAGE_READ | PAGE_WRITE)) == 0;
 }
 
 /* NOTE __get_user and __put_user use host pointers and don't check access.
--
2.20.1
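The wrap-around test the patch above adds can be illustrated in isolation. This is a minimal standalone sketch, not QEMU's actual code: `abi_ulong` is stood in by `uint32_t` and `range_ok()` is a hypothetical helper, so only the `addr + size - 1 < addr` overflow idiom carries over.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for QEMU's abi_ulong on a 32-bit guest. */
typedef uint32_t abi_ulong;

/*
 * A range [addr, addr + size) is only plausible if computing the address
 * of its last byte does not wrap past the top of the address space.
 * Unsigned wrap shows up as the sum being smaller than the start.
 */
static bool range_ok(abi_ulong addr, abi_ulong size)
{
    if (size != 0 && addr + size - 1 < addr) {
        return false;   /* addr + size - 1 wrapped around */
    }
    return true;
}
```

Without the check, a guest could pass a huge `size` so that `addr + size - 1` wraps to a small value and a naive upper-bound test passes.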
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

These constants are only ever used with access_ok, and friends.
Rather than translating them to PAGE_* bits, let them equal
the PAGE_* bits to begin.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/qemu.h | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
 
 /* user access */
 
-#define VERIFY_READ 0
-#define VERIFY_WRITE 1 /* implies read access */
+#define VERIFY_READ  PAGE_READ
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
 
 static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
          !guest_addr_valid(addr + size - 1))) {
         return false;
     }
-    return page_check_range((target_ulong)addr, size,
-                            (type == VERIFY_READ) ? PAGE_READ :
-                            (PAGE_READ | PAGE_WRITE)) == 0;
+    return page_check_range((target_ulong)addr, size, type) == 0;
 }
 
 /* NOTE __get_user and __put_user use host pointers and don't check access.
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

These constants are only ever used with access_ok, and friends.
Rather than translating them to PAGE_* bits, let them equal
the PAGE_* bits to begin.

Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 bsd-user/qemu.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/bsd-user/qemu.h
+++ b/bsd-user/qemu.h
@@ -XXX,XX +XXX,XX @@ extern unsigned long x86_stack_size;
 
 /* user access */
 
-#define VERIFY_READ 0
-#define VERIFY_WRITE 1 /* implies read access */
+#define VERIFY_READ  PAGE_READ
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
 
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
 {
-    return page_check_range((target_ulong)addr, size,
-                            (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
+    return page_check_range((target_ulong)addr, size, type) == 0;
 }
 
 /* NOTE __get_user and __put_user use host pointers and don't check access. */
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This is the only use of guest_addr_valid that does not begin
with a guest address, but a host address being transformed to
a guest address.

We will shortly adjust guest_addr_valid to handle guest memory
tags, and the host address should not be subjected to that.

Move h2g_valid adjacent to the other h2g macros.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu_ldst.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
 #else
 #define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
 #endif
-#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
 
 static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
 {
     return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
 }
 
+#define h2g_valid(x) \
+    (HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS || \
+     (uintptr_t)(x) - guest_base <= GUEST_ADDR_MAX)
+
 #define h2g_nocheck(x) ({ \
     uintptr_t __ret = (uintptr_t)(x) - guest_base; \
     (abi_ptr)__ret; \
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We must always use GUEST_ADDR_MAX, because even 32-bit hosts can
use -R <reserved_va> to restrict the memory address of the guest.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu_ldst.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
 /* All direct uses of g2h and h2g need to go away for usermode softmmu. */
 #define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
 
-#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
-#define guest_addr_valid(x) (1)
-#else
-#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
-#endif
+static inline bool guest_addr_valid(abi_ulong x)
+{
+    return x <= GUEST_ADDR_MAX;
+}
 
 static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
 {
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Provide an identity fallback for targets that do not
use tagged addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu_ldst.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
 #define TARGET_ABI_FMT_ptr "%"PRIx64
 #endif
 
+#ifndef TARGET_TAGGED_ADDRESSES
+static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
+{
+    return x;
+}
+#endif
+
 /* All direct uses of g2h and h2g need to go away for usermode softmmu. */
 #define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
 
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We define target_mmap et al as untagged, so that they can be
used from the binary loaders.  Explicitly call cpu_untagged_addr
for munmap, mprotect, mremap syscall entry points.

Add a few comments for the syscalls that are exempted by the
kernel's tagged-address-abi.rst.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/syscall.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
     abi_long mapped_addr;
     abi_ulong new_alloc_size;
 
+    /* brk pointers are always untagged */
+
     DEBUGF_BRK("do_brk(" TARGET_ABI_FMT_lx ") -> ", new_brk);
 
     if (!new_brk) {
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
     int i,ret;
     abi_ulong shmlba;
 
+    /* shmat pointers are always untagged */
+
     /* find out the length of the shared memory segment */
     ret = get_errno(shmctl(shmid, IPC_STAT, &shm_info));
     if (is_error(ret)) {
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
     int i;
     abi_long rv;
 
+    /* shmdt pointers are always untagged */
+
     mmap_lock();
 
     for (i = 0; i < N_SHM_REGIONS; ++i) {
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
                                   v5, v6));
     }
 #else
+        /* mmap pointers are always untagged */
         ret = get_errno(target_mmap(arg1, arg2, arg3,
                                     target_to_host_bitmask(arg4, mmap_flags_tbl),
                                     arg5,
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
         return get_errno(ret);
 #endif
     case TARGET_NR_munmap:
+        arg1 = cpu_untagged_addr(cpu, arg1);
         return get_errno(target_munmap(arg1, arg2));
     case TARGET_NR_mprotect:
+        arg1 = cpu_untagged_addr(cpu, arg1);
         {
             TaskState *ts = cpu->opaque;
             /* Special hack to detect libc making the stack executable.  */
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
         return get_errno(target_mprotect(arg1, arg2, arg3));
 #ifdef TARGET_NR_mremap
     case TARGET_NR_mremap:
+        arg1 = cpu_untagged_addr(cpu, arg1);
+        /* mremap new_addr (arg5) is always untagged */
         return get_errno(target_mremap(arg1, arg2, arg3, arg4, arg5));
 #endif
         /* ??? msync/mlock/munlock are broken for softmmu.  */
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We're currently open-coding the range check in access_ok;
use guest_range_valid when size != 0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/qemu.h | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
 
 static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
 {
-    if (!guest_addr_valid(addr)) {
-        return false;
-    }
-    if (size != 0 &&
-        (addr + size - 1 < addr ||
-         !guest_addr_valid(addr + size - 1))) {
+    if (size == 0
+        ? !guest_addr_valid(addr)
+        : !guest_range_valid(addr, size)) {
         return false;
     }
     return page_check_range((target_ulong)addr, size, type) == 0;
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

These functions are not small, except for unlock_user
without debugging enabled.  Move them out of line, and
add missing braces on the way.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/qemu.h    | 45 ++++++-------------------------------------
 linux-user/uaccess.c | 46 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+), 39 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
 
 /* Lock an area of guest memory into the host.  If copy is true then the
    host area will have the same contents as the guest. */
-static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
-{
-    if (!access_ok_untagged(type, guest_addr, len)) {
-        return NULL;
-    }
-#ifdef DEBUG_REMAP
-    {
-        void *addr;
-        addr = g_malloc(len);
-        if (copy)
-            memcpy(addr, g2h(guest_addr), len);
-        else
-            memset(addr, 0, len);
-        return addr;
-    }
-#else
-    return g2h_untagged(guest_addr);
-#endif
-}
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
 
 /* Unlock an area of guest memory.  The first LEN bytes must be
    flushed back to guest memory. host_ptr = NULL is explicitly
    allowed and does nothing. */
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
-                               long len)
-{
-
 #ifdef DEBUG_REMAP
-    if (!host_ptr)
-        return;
-    if (host_ptr == g2h_untagged(guest_addr))
-        return;
-    if (len > 0)
-        memcpy(g2h_untagged(guest_addr), host_ptr, len);
-    g_free(host_ptr);
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
+{ }
+#else
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
 #endif
-}
 
 /* Return the length of a string in target memory or -TARGET_EFAULT if
    access error. */
 abi_long target_strlen(abi_ulong gaddr);
 
 /* Like lock_user but for null terminated strings.  */
-static inline void *lock_user_string(abi_ulong guest_addr)
-{
-    abi_long len;
-    len = target_strlen(guest_addr);
-    if (len < 0)
-        return NULL;
-    return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
-}
+void *lock_user_string(abi_ulong guest_addr);
 
 /* Helper macros for locking/unlocking a target struct.  */
 #define lock_user_struct(type, host_ptr, guest_addr, copy)    \
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/uaccess.c
+++ b/linux-user/uaccess.c
@@ -XXX,XX +XXX,XX @@
 
 #include "qemu.h"
 
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
+{
+    if (!access_ok_untagged(type, guest_addr, len)) {
+        return NULL;
+    }
+#ifdef DEBUG_REMAP
+    {
+        void *addr;
+        addr = g_malloc(len);
+        if (copy) {
+            memcpy(addr, g2h(guest_addr), len);
+        } else {
+            memset(addr, 0, len);
+        }
+        return addr;
+    }
+#else
+    return g2h_untagged(guest_addr);
+#endif
+}
+
+#ifdef DEBUG_REMAP
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
+{
+    if (!host_ptr) {
+        return;
+    }
+    if (host_ptr == g2h_untagged(guest_addr)) {
+        return;
+    }
+    if (len > 0) {
+        memcpy(g2h_untagged(guest_addr), host_ptr, len);
+    }
+    g_free(host_ptr);
+}
+#endif
+
+void *lock_user_string(abi_ulong guest_addr)
+{
+    abi_long len = target_strlen(guest_addr);
+    if (len < 0) {
+        return NULL;
+    }
+    return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
+}
+
 /* copy_from_user() and copy_to_user() are usually used to copy data
  * buffers between the target and host.  These internally perform
  * locking/unlocking of the memory.
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This is the prctl bit that controls whether syscalls accept tagged
addresses.  See Documentation/arm64/tagged-address-abi.rst in the
linux kernel.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/aarch64/target_syscall.h |  4 ++++
 target/arm/cpu-param.h              |  3 +++
 target/arm/cpu.h                    | 31 +++++++++++++++++++++++++++++
 linux-user/syscall.c                | 24 ++++++++++++++++++++++
 4 files changed, 62 insertions(+)

diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/target_syscall.h
+++ b/linux-user/aarch64/target_syscall.h
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
 # define TARGET_PR_PAC_APDBKEY   (1 << 3)
 # define TARGET_PR_PAC_APGAKEY   (1 << 4)
 
+#define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
+#define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
+# define TARGET_PR_TAGGED_ADDR_ENABLE  (1UL << 0)
+
 #endif /* AARCH64_TARGET_SYSCALL_H */
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 
 #ifdef CONFIG_USER_ONLY
 #define TARGET_PAGE_BITS 12
+# ifdef TARGET_AARCH64
+#  define TARGET_TAGGED_ADDRESSES
+# endif
 #else
 /*
  * ARMv7 and later CPUs have 4K pages minimum, but ARMv5 and v6
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
     const struct arm_boot_info *boot_info;
     /* Store GICv3CPUState to access from this struct */
     void *gicv3state;
+
+#ifdef TARGET_TAGGED_ADDRESSES
+    /* Linux syscall tagged address support */
+    bool tagged_addr_enable;
+#endif
 } CPUARMState;
 
 static inline void set_feature(CPUARMState *env, int feature)
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
  */
 #define PAGE_BTI  PAGE_TARGET_1
 
+#ifdef TARGET_TAGGED_ADDRESSES
+/**
+ * cpu_untagged_addr:
+ * @cs: CPU context
+ * @x: tagged address
+ *
+ * Remove any address tag from @x.  This is explicitly related to the
+ * linux syscall TIF_TAGGED_ADDR setting, not TBI in general.
+ *
+ * There should be a better place to put this, but we need this in
+ * include/exec/cpu_ldst.h, and not some place linux-user specific.
+ */
+static inline target_ulong cpu_untagged_addr(CPUState *cs, target_ulong x)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    if (cpu->env.tagged_addr_enable) {
+        /*
+         * TBI is enabled for userspace but not kernelspace addresses.
+         * Only clear the tag if bit 55 is clear.
+         */
+        x &= sextract64(x, 0, 56);
+    }
+    return x;
+}
+#endif
+
 /*
  * Naming convention for isar_feature functions:
  * Functions which test 32-bit ID registers should have _aa32_ in
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
             }
         }
         return -TARGET_EINVAL;
+    case TARGET_PR_SET_TAGGED_ADDR_CTRL:
+        {
+            abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
+            CPUARMState *env = cpu_env;
+
+            if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
+                return -TARGET_EINVAL;
+            }
+            env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
+            return 0;
+        }
+    case TARGET_PR_GET_TAGGED_ADDR_CTRL:
+        {
+            abi_long ret = 0;
+            CPUARMState *env = cpu_env;
+
+            if (arg2 || arg3 || arg4 || arg5) {
+                return -TARGET_EINVAL;
+            }
+            if (env->tagged_addr_enable) {
+                ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
+            }
+            return ret;
+        }
 #endif /* AARCH64 */
     case PR_GET_SECCOMP:
     case PR_SET_SECCOMP:
--
2.20.1
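The `x &= sextract64(x, 0, 56)` trick in cpu_untagged_addr() above can be demonstrated standalone. In this sketch `sext56()` is a hypothetical stand-in for QEMU's `sextract64(x, 0, 56)`; the addresses are made-up examples:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low 56 bits of x (stand-in for sextract64(x, 0, 56)). */
static uint64_t sext56(uint64_t x)
{
    return (uint64_t)(((int64_t)(x << 8)) >> 8);
}

/*
 * Untag as in the patch: ANDing with the sign-extension clears the top
 * byte when bit 55 is clear (a userspace-style address), but leaves the
 * value untouched when bit 55 is set (a kernel-style address), because
 * the sign-extension then fills the top byte with ones.
 */
static uint64_t untag(uint64_t x)
{
    return x & sext56(x);
}
```

This matches the Linux tagged-address ABI, where only bit-55-clear addresses have their tag byte stripped.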
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Use simple arithmetic instead of a conditional
move when tbi0 != tbi1.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 dst,
         /* Sign-extend from bit 55. */
         tcg_gen_sextract_i64(dst, src, 0, 56);
 
-        if (tbi != 3) {
-            TCGv_i64 tcg_zero = tcg_const_i64(0);
-
-            /*
-             * The two TBI bits differ.
-             * If tbi0, then !tbi1: only use the extension if positive.
-             * if !tbi0, then tbi1: only use the extension if negative.
-             */
-            tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
-                                dst, dst, tcg_zero, dst, src);
-            tcg_temp_free_i64(tcg_zero);
+        switch (tbi) {
+        case 1:
+            /* tbi0 but !tbi1: only use the extension if positive */
+            tcg_gen_and_i64(dst, dst, src);
+            break;
+        case 2:
+            /* !tbi0 but tbi1: only use the extension if negative */
+            tcg_gen_or_i64(dst, dst, src);
+            break;
+        case 3:
+            /* tbi0 and tbi1: always use the extension */
+            break;
+        default:
+            g_assert_not_reached();
         }
     }
 }
--
2.20.1
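The AND/OR replacement for the conditional move can be checked with plain integer arithmetic. This is a hypothetical host-side model, not the TCG code itself: `sext56()` stands in for the `tcg_gen_sextract_i64(dst, src, 0, 56)` result, and the two functions model cases 1 and 2 of the switch:

```c
#include <assert.h>
#include <stdint.h>

/* Model of tcg_gen_sextract_i64(dst, src, 0, 56): sign-extend from bit 55. */
static uint64_t sext56(uint64_t x)
{
    return (uint64_t)(((int64_t)(x << 8)) >> 8);
}

/*
 * tbi == 1 (tbi0 only): AND with src.  For "positive" addresses
 * (bit 55 clear) the extension has a zero top byte, so the AND keeps
 * the extension; for "negative" ones the extension's top byte is all
 * ones, so the AND reproduces src unchanged.
 */
static uint64_t tbi0_only(uint64_t src)
{
    return sext56(src) & src;
}

/* tbi == 2 (tbi1 only): OR is the mirror image for negative addresses. */
static uint64_t tbi1_only(uint64_t src)
{
    return sext56(src) | src;
}
```

So two one-operand ALU ops replace a movcond with a zero constant, which is the whole point of the patch.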
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

These prctl fields are required for the function of MTE.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/aarch64/target_syscall.h |  9 ++++++
 linux-user/syscall.c                | 43 +++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/target_syscall.h
+++ b/linux-user/aarch64/target_syscall.h
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
 #define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
 #define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
 # define TARGET_PR_TAGGED_ADDR_ENABLE  (1UL << 0)
+/* MTE tag check fault modes */
+# define TARGET_PR_MTE_TCF_SHIFT       1
+# define TARGET_PR_MTE_TCF_NONE        (0UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_SYNC        (1UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_ASYNC       (2UL << TARGET_PR_MTE_TCF_SHIFT)
+# define TARGET_PR_MTE_TCF_MASK        (3UL << TARGET_PR_MTE_TCF_SHIFT)
+/* MTE tag inclusion mask */
+# define TARGET_PR_MTE_TAG_SHIFT       3
+# define TARGET_PR_MTE_TAG_MASK        (0xffffUL << TARGET_PR_MTE_TAG_SHIFT)
 
 #endif /* AARCH64_TARGET_SYSCALL_H */
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
         {
             abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
             CPUARMState *env = cpu_env;
+            ARMCPU *cpu = env_archcpu(env);
+
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                valid_mask |= TARGET_PR_MTE_TCF_MASK;
+                valid_mask |= TARGET_PR_MTE_TAG_MASK;
+            }
 
             if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
                 return -TARGET_EINVAL;
             }
             env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
+
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                switch (arg2 & TARGET_PR_MTE_TCF_MASK) {
+                case TARGET_PR_MTE_TCF_NONE:
+                case TARGET_PR_MTE_TCF_SYNC:
+                case TARGET_PR_MTE_TCF_ASYNC:
+                    break;
+                default:
+                    return -EINVAL;
+                }
+
+                /*
+                 * Write PR_MTE_TCF to SCTLR_EL1[TCF0].
+                 * Note that the syscall values are consistent with hw.
+                 */
+                env->cp15.sctlr_el[1] =
+                    deposit64(env->cp15.sctlr_el[1], 38, 2,
+                              arg2 >> TARGET_PR_MTE_TCF_SHIFT);
+
+                /*
+                 * Write PR_MTE_TAG to GCR_EL1[Exclude].
+                 * Note that the syscall uses an include mask,
+                 * and hardware uses an exclude mask -- invert.
+                 */
+                env->cp15.gcr_el1 =
+                    deposit64(env->cp15.gcr_el1, 0, 16,
+                              ~arg2 >> TARGET_PR_MTE_TAG_SHIFT);
+                arm_rebuild_hflags(env);
+            }
             return 0;
         }
     case TARGET_PR_GET_TAGGED_ADDR_CTRL:
         {
             abi_long ret = 0;
             CPUARMState *env = cpu_env;
+            ARMCPU *cpu = env_archcpu(env);
 
             if (arg2 || arg3 || arg4 || arg5) {
                 return -TARGET_EINVAL;
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
             if (env->tagged_addr_enable) {
                 ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
             }
+            if (cpu_isar_feature(aa64_mte, cpu)) {
+                /* See above. */
+                ret |= (extract64(env->cp15.sctlr_el[1], 38, 2)
+                        << TARGET_PR_MTE_TCF_SHIFT);
+                ret = deposit64(ret, TARGET_PR_MTE_TAG_SHIFT, 16,
+                                ~env->cp15.gcr_el1);
+            }
             return ret;
         }
 #endif /* AARCH64 */
--
2.20.1
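The include-mask-to-exclude-mask inversion in the patch above is easy to get backwards, so here is a standalone model. `deposit64()` below is a hypothetical local reimplementation mirroring QEMU's bitfield helper of the same name, and `gcr_from_prctl()` models only the GCR_EL1[Exclude] line of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Local model of QEMU's deposit64(): overwrite len bits of dst at pos. */
static uint64_t deposit64(uint64_t dst, int pos, int len, uint64_t val)
{
    uint64_t mask = (~0ULL >> (64 - len)) << pos;
    return (dst & ~mask) | ((val << pos) & mask);
}

#define PR_MTE_TAG_SHIFT 3

/*
 * The prctl argument carries an *include* mask of allowed tags;
 * GCR_EL1.Exclude is an *exclude* mask, so the 16 bits are inverted
 * on the way into the register (and inverted back on PR_GET).
 */
static uint64_t gcr_from_prctl(uint64_t gcr, uint64_t arg2)
{
    return deposit64(gcr, 0, 16, ~arg2 >> PR_MTE_TAG_SHIFT);
}
```

Including all 16 tags yields Exclude == 0, and including only tag 0 yields Exclude == 0xfffe; inverting again recovers the original include mask.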
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Remember the PROT_MTE bit as PAGE_MTE/PAGE_TARGET_2.
Otherwise this does not yet have effect.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu-all.h    |  1 +
 linux-user/syscall_defs.h |  1 +
 target/arm/cpu.h          |  1 +
 linux-user/mmap.c         | 22 ++++++++++++++--------
 4 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
 #endif
 /* Target-specific bits that will be used via page_get_flags(). */
 #define PAGE_TARGET_1  0x0080
+#define PAGE_TARGET_2  0x0200
 
 #if defined(CONFIG_USER_ONLY)
 void page_dump(FILE *f);
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -XXX,XX +XXX,XX @@ struct target_winsize {
 
 #ifdef TARGET_AARCH64
 #define TARGET_PROT_BTI 0x10
+#define TARGET_PROT_MTE 0x20
 #endif
 
 /* Common */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  */
 #define PAGE_BTI  PAGE_TARGET_1
+#define PAGE_MTE  PAGE_TARGET_2
 
 #ifdef TARGET_TAGGED_ADDRESSES
 /**
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@ static int validate_prot_to_pageflags(int *host_prot, int prot)
                  | (prot & PROT_EXEC ? PROT_READ : 0);
 
 #ifdef TARGET_AARCH64
-    /*
-     * The PROT_BTI bit is only accepted if the cpu supports the feature.
-     * Since this is the unusual case, don't bother checking unless
-     * the bit has been requested.  If set and valid, record the bit
-     * within QEMU's page_flags.
-     */
-    if (prot & TARGET_PROT_BTI) {
+    {
         ARMCPU *cpu = ARM_CPU(thread_cpu);
-        if (cpu_isar_feature(aa64_bti, cpu)) {
+
+        /*
+         * The PROT_BTI bit is only accepted if the cpu supports the feature.
+         * Since this is the unusual case, don't bother checking unless
+         * the bit has been requested.  If set and valid, record the bit
+         * within QEMU's page_flags.
+         */
+        if ((prot & TARGET_PROT_BTI) && cpu_isar_feature(aa64_bti, cpu)) {
             valid |= TARGET_PROT_BTI;
             page_flags |= PAGE_BTI;
         }
+        /* Similarly for the PROT_MTE bit. */
+        if ((prot & TARGET_PROT_MTE) && cpu_isar_feature(aa64_mte, cpu)) {
+            valid |= TARGET_PROT_MTE;
+            page_flags |= PAGE_MTE;
+        }
     }
 #endif
 
--
2.20.1
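The prot-bit filtering pattern in validate_prot_to_pageflags() above can be modelled standalone. The constants mirror the patch, but `prot_to_page_flags()` and its bool feature flags are hypothetical simplifications of QEMU's `cpu_isar_feature()` checks:

```c
#include <assert.h>
#include <stdbool.h>

/* Values mirroring the patch's TARGET_PROT_* / PAGE_* definitions. */
#define TARGET_PROT_BTI 0x10
#define TARGET_PROT_MTE 0x20
#define PAGE_BTI        0x0080   /* PAGE_TARGET_1 */
#define PAGE_MTE        0x0200   /* PAGE_TARGET_2 */

/*
 * Accept a target-only mmap prot bit only when the CPU actually has the
 * feature, and record it as a QEMU page flag; otherwise the bit is
 * silently not "valid" and the caller rejects it.
 */
static int prot_to_page_flags(int prot, bool have_bti, bool have_mte)
{
    int page_flags = 0;
    if ((prot & TARGET_PROT_BTI) && have_bti) {
        page_flags |= PAGE_BTI;
    }
    if ((prot & TARGET_PROT_MTE) && have_mte) {
        page_flags |= PAGE_MTE;
    }
    return page_flags;
}
```

The restructuring in the patch hoists the `ARMCPU *cpu` lookup so both feature checks can share it.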
From: Doug Evans <dje@google.com>

This is a 10/100 ethernet device that has several features.
Only the ones needed by the Linux driver have been implemented.
See npcm7xx_emc.c for a list of unimplemented features.

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210209015541.778833-3-dje@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/nuvoton.rst | 3 ++-
include/hw/arm/npcm7xx.h | 2 ++
hw/arm/npcm7xx.c | 50 +++++++++++++++++++++++++++++++++++--
3 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Supported devices
* GPIO controller
* Analog to Digital Converter (ADC)
* Pulse Width Modulation (PWM)
+ * Ethernet controller (EMC)

Missing devices
---------------
@@ -XXX,XX +XXX,XX @@ Missing devices
* Shared memory (SHM)
* eSPI slave interface

- * Ethernet controllers (GMAC and EMC)
+ * Ethernet controller (GMAC)
* USB device (USBD)
* SMBus controller (SMBF)
* Peripheral SPI controller (PSPI)
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
#include "hw/misc/npcm7xx_gcr.h"
#include "hw/misc/npcm7xx_pwm.h"
#include "hw/misc/npcm7xx_rng.h"
+#include "hw/net/npcm7xx_emc.h"
#include "hw/nvram/npcm7xx_otp.h"
#include "hw/timer/npcm7xx_timer.h"
#include "hw/ssi/npcm7xx_fiu.h"
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
EHCISysBusState ehci;
OHCISysBusState ohci;
NPCM7xxFIUState fiu[2];
+ NPCM7xxEMCState emc[2];
} NPCM7xxState;

#define TYPE_NPCM7XX "npcm7xx"
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
NPCM7XX_UART1_IRQ,
NPCM7XX_UART2_IRQ,
NPCM7XX_UART3_IRQ,
+ NPCM7XX_EMC1RX_IRQ = 15,
+ NPCM7XX_EMC1TX_IRQ,
NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
NPCM7XX_TIMER1_IRQ,
NPCM7XX_TIMER2_IRQ,
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
NPCM7XX_OHCI_IRQ = 62,
NPCM7XX_PWM0_IRQ = 93, /* PWM module 0 */
NPCM7XX_PWM1_IRQ, /* PWM module 1 */
+ NPCM7XX_EMC2RX_IRQ = 114,
+ NPCM7XX_EMC2TX_IRQ,
NPCM7XX_GPIO0_IRQ = 116,
NPCM7XX_GPIO1_IRQ,
NPCM7XX_GPIO2_IRQ,
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_pwm_addr[] = {
0xf0104000,
};

+/* Register base address for each EMC Module */
+static const hwaddr npcm7xx_emc_addr[] = {
+ 0xf0825000,
+ 0xf0826000,
+};
+
static const struct {
hwaddr regs_addr;
uint32_t unconnected_pins;
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
for (i = 0; i < ARRAY_SIZE(s->pwm); i++) {
object_initialize_child(obj, "pwm[*]", &s->pwm[i], TYPE_NPCM7XX_PWM);
}
+
+ for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
+ object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
+ }
}

static void npcm7xx_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
sysbus_connect_irq(sbd, i, npcm7xx_irq(s, NPCM7XX_PWM0_IRQ + i));
}

+ /*
+ * EMC Modules. Cannot fail.
+ * The mapping of the device to its netdev backend works as follows:
+ * emc[i] = nd_table[i]
+ * This works around the inability to specify the netdev property for the
+ * emc device: it's not pluggable and thus the -device option can't be
+ * used.
+ */
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_emc_addr) != ARRAY_SIZE(s->emc));
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(s->emc) != 2);
+ for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
+ s->emc[i].emc_num = i;
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->emc[i]);
+ if (nd_table[i].used) {
+ qemu_check_nic_model(&nd_table[i], TYPE_NPCM7XX_EMC);
+ qdev_set_nic_properties(DEVICE(sbd), &nd_table[i]);
+ }
+ /*
+ * The device exists regardless of whether it's connected to a QEMU
+ * netdev backend. So always instantiate it even if there is no
+ * backend.
+ */
+ sysbus_realize(sbd, &error_abort);
+ sysbus_mmio_map(sbd, 0, npcm7xx_emc_addr[i]);
+ int tx_irq = i == 0 ? NPCM7XX_EMC1TX_IRQ : NPCM7XX_EMC2TX_IRQ;
+ int rx_irq = i == 0 ? NPCM7XX_EMC1RX_IRQ : NPCM7XX_EMC2RX_IRQ;
+ /*
+ * N.B. The values for the second argument sysbus_connect_irq are
+ * chosen to match the registration order in npcm7xx_emc_realize.
+ */
+ sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, tx_irq));
+ sysbus_connect_irq(sbd, 1, npcm7xx_irq(s, rx_irq));
+ }
+
/*
* Flash Interface Unit (FIU). Can fail if incorrect number of chip selects
* specified, but this is a programming error.
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
create_unimplemented_device("npcm7xx.vcd", 0xf0810000, 64 * KiB);
create_unimplemented_device("npcm7xx.ece", 0xf0820000, 8 * KiB);
create_unimplemented_device("npcm7xx.vdma", 0xf0822000, 8 * KiB);
- create_unimplemented_device("npcm7xx.emc1", 0xf0825000, 4 * KiB);
- create_unimplemented_device("npcm7xx.emc2", 0xf0826000, 4 * KiB);
create_unimplemented_device("npcm7xx.usbd[0]", 0xf0830000, 4 * KiB);
create_unimplemented_device("npcm7xx.usbd[1]", 0xf0831000, 4 * KiB);
create_unimplemented_device("npcm7xx.usbd[2]", 0xf0832000, 4 * KiB);
--
2.20.1

Add the ARM_CP_ADD_TLBI_NXS to the TLBI insns with an NXS variant.
This is every AArch64 TLBI encoding except for the four FEAT_RME TLBI
insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211144440.2700268-4-peter.maydell@linaro.org
---
target/arm/tcg/tlb-insns.c | 202 +++++++++++++++++++++++--------------
1 file changed, 124 insertions(+), 78 deletions(-)

diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/tlb-insns.c
+++ b/target/arm/tcg/tlb-insns.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
/* AArch64 TLBI operations */
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVMALLE1IS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAE1IS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIASIDE1IS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAAE1IS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVALE1IS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAALE1IS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVMALLE1,
.writefn = tlbi_aa64_vmalle1_write },
{ .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAE1,
.writefn = tlbi_aa64_vae1_write },
{ .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIASIDE1,
.writefn = tlbi_aa64_vmalle1_write },
{ .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAAE1,
.writefn = tlbi_aa64_vae1_write },
{ .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVALE1,
.writefn = tlbi_aa64_vae1_write },
{ .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAALE1,
.writefn = tlbi_aa64_vae1_write },
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ipas2e1is_write },
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ipas2e1is_write },
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1is_write },
{ .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1is_write },
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ipas2e1_write },
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ipas2e1_write },
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1_write },
{ .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1is_write },
};

@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
.writefn = tlbimva_hyp_is_write },
{ .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2_write },
{ .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2_write },
{ .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2_write },
{ .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2is_write },
{ .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
};

static const ARMCPRegInfo tlbi_el3_cp_reginfo[] = {
{ .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle3is_write },
{ .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3is_write },
{ .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3is_write },
{ .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle3_write },
{ .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3_write },
{ .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3_write },
};

@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
static const ARMCPRegInfo tlbirange_reginfo[] = {
{ .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAAE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVALE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbis,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAALE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAAE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVALE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAALE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAAE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVALE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlb,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIRVAALE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ripas2e1is_write },
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ripas2e1is_write },
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ripas2e1_write },
{ .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_ripas2e1_write },
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2is_write },
{ .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2_write },
{ .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_rvae2_write },
{ .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3is_write },
{ .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3is_write },
{ .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3is_write },
{ .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3is_write },
{ .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3_write },
{ .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_rvae3_write },
};

static const ARMCPRegInfo tlbios_reginfo[] = {
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVMALLE1OS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
.fgt = FGT_TLBIVAE1OS,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIASIDE1OS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAAE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVALE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .access = PL1_W, .accessfn = access_ttlbos,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.fgt = FGT_TLBIVAALE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_alle2is_write },
{ .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1is_write },
{ .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
+ .access = PL2_W,
+ .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS | ARM_CP_EL3_NO_EL2_UNDEF,
.writefn = tlbi_aa64_vae2is_write },
{ .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
- .access = PL2_W, .type = ARM_CP_NO_RAW,
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle1is_write },
{ .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
- .access = PL2_W, .type = ARM_CP_NOP },
+ .access = PL2_W, .type = ARM_CP_NOP | ARM_CP_ADD_TLBI_NXS },
{ .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
- .access = PL2_W, .type = ARM_CP_NOP },
+ .access = PL2_W, .type = ARM_CP_NOP | ARM_CP_ADD_TLBI_NXS },
{ .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
- .access = PL2_W, .type = ARM_CP_NOP },
+ .access = PL2_W, .type = ARM_CP_NOP | ARM_CP_ADD_TLBI_NXS },
{ .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
- .access = PL2_W, .type = ARM_CP_NOP },
+ .access = PL2_W, .type = ARM_CP_NOP | ARM_CP_ADD_TLBI_NXS },
{ .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_alle3is_write },
{ .name = "TLBI_VAE3OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 1,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3is_write },
{ .name = "TLBI_VALE3OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
- .access = PL3_W, .type = ARM_CP_NO_RAW,
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_ADD_TLBI_NXS,
.writefn = tlbi_aa64_vae3is_write },
};
--
2.34.1
From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>

The DSB nXS variant always behaves as both a read and a write request type.
Ignore the domain field like we do in plain DSB and perform a full
system barrier operation.

The DSB nXS variant is part of FEAT_XS, which is mandatory from Armv8.7.

Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211144440.2700268-5-peter.maydell@linaro.org
[PMM: added missing "UNDEF unless feature present" check]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 3 +++
target/arm/tcg/translate-a64.c | 9 +++++++++
2 files changed, 12 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

Move everything related to syndromes to a new file,
which can be shared with linux-user.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 245 +-----------------------------------
target/arm/syndrome.h | 273 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 274 insertions(+), 244 deletions(-)
create mode 100644 target/arm/syndrome.h
diff --git a/target/arm/internals.h b/target/arm/internals.h
20
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
22
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/internals.h
23
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@ WFIT 1101 0101 0000 0011 0001 0000 001 rd:5
21
#define TARGET_ARM_INTERNALS_H
25
22
26
CLREX 1101 0101 0000 0011 0011 ---- 010 11111
23
#include "hw/registerfields.h"
27
DSB_DMB 1101 0101 0000 0011 0011 domain:2 types:2 10- 11111
24
+#include "syndrome.h"
28
+# For the DSB nXS variant, types always equals MBReqTypes_All and we ignore the
25
29
+# domain bits.
26
/* register banks for CPU modes */
30
+DSB_nXS 1101 0101 0000 0011 0011 -- 10 001 11111
27
#define BANK_USRSYS 0
31
ISB 1101 0101 0000 0011 0011 ---- 110 11111
28
@@ -XXX,XX +XXX,XX @@ static inline bool extended_addresses_enabled(CPUARMState *env)
32
SB 1101 0101 0000 0011 0011 0000 111 11111
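As a quick illustration (plain Python, not QEMU code) of how the DSB_nXS decode line above constrains the encoding: every bit is fixed except the two domain bits [11:10], which the pattern marks as don't-care. The concrete instruction words below are derived from that pattern; the claim that the 0b11 domain value is the SY variant follows the usual DSB naming and is an assumption here.

```python
# a64.decode pattern: "1101 0101 0000 0011 0011 -- 10 001 11111"
# Bits [11:10] (the domain field) are don't-care, everything else is fixed.
MASK = 0xFFFFF3FF   # all bits significant except [11:10]
VALUE = 0xD503323F  # the fixed bits of the pattern, with domain = 0b00

def is_dsb_nxs(insn: int) -> bool:
    """Return True if a 32-bit instruction word matches the DSB nXS pattern."""
    return (insn & MASK) == VALUE

# Domain bits 0b11 (SY, by the usual DSB naming) still match:
print(is_dsb_nxs(0xD5033E3F))  # True
# A plain DSB SY has different bits [9:5] and does not match:
print(is_dsb_nxs(0xD5033F9F))  # False
```

This mirrors how decodetree turns the pattern into a mask/value pair: fixed bits go into both, don't-care bits are cleared from the mask.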
29
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr->raw_tcr & TTBCR_EAE));
33
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static bool trans_DSB_DMB(DisasContext *s, arg_DSB_DMB *a)
39
return true;
30
}
40
}
31
41
32
-/* Valid Syndrome Register EC field values */
42
+static bool trans_DSB_nXS(DisasContext *s, arg_DSB_nXS *a)
33
-enum arm_exception_class {
34
- EC_UNCATEGORIZED = 0x00,
35
- EC_WFX_TRAP = 0x01,
36
- EC_CP15RTTRAP = 0x03,
37
- EC_CP15RRTTRAP = 0x04,
38
- EC_CP14RTTRAP = 0x05,
39
- EC_CP14DTTRAP = 0x06,
40
- EC_ADVSIMDFPACCESSTRAP = 0x07,
41
- EC_FPIDTRAP = 0x08,
42
- EC_PACTRAP = 0x09,
43
- EC_CP14RRTTRAP = 0x0c,
44
- EC_BTITRAP = 0x0d,
45
- EC_ILLEGALSTATE = 0x0e,
46
- EC_AA32_SVC = 0x11,
47
- EC_AA32_HVC = 0x12,
48
- EC_AA32_SMC = 0x13,
49
- EC_AA64_SVC = 0x15,
50
- EC_AA64_HVC = 0x16,
51
- EC_AA64_SMC = 0x17,
52
- EC_SYSTEMREGISTERTRAP = 0x18,
53
- EC_SVEACCESSTRAP = 0x19,
54
- EC_INSNABORT = 0x20,
55
- EC_INSNABORT_SAME_EL = 0x21,
56
- EC_PCALIGNMENT = 0x22,
57
- EC_DATAABORT = 0x24,
58
- EC_DATAABORT_SAME_EL = 0x25,
59
- EC_SPALIGNMENT = 0x26,
60
- EC_AA32_FPTRAP = 0x28,
61
- EC_AA64_FPTRAP = 0x2c,
62
- EC_SERROR = 0x2f,
63
- EC_BREAKPOINT = 0x30,
64
- EC_BREAKPOINT_SAME_EL = 0x31,
65
- EC_SOFTWARESTEP = 0x32,
66
- EC_SOFTWARESTEP_SAME_EL = 0x33,
67
- EC_WATCHPOINT = 0x34,
68
- EC_WATCHPOINT_SAME_EL = 0x35,
69
- EC_AA32_BKPT = 0x38,
70
- EC_VECTORCATCH = 0x3a,
71
- EC_AA64_BKPT = 0x3c,
72
-};
73
-
74
-#define ARM_EL_EC_SHIFT 26
75
-#define ARM_EL_IL_SHIFT 25
76
-#define ARM_EL_ISV_SHIFT 24
77
-#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
78
-#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
79
-
80
-static inline uint32_t syn_get_ec(uint32_t syn)
81
-{
82
- return syn >> ARM_EL_EC_SHIFT;
83
-}
84
-
85
-/* Utility functions for constructing various kinds of syndrome value.
86
- * Note that in general we follow the AArch64 syndrome values; in a
87
- * few cases the value in HSR for exceptions taken to AArch32 Hyp
88
- * mode differs slightly, and we fix this up when populating HSR in
89
- * arm_cpu_do_interrupt_aarch32_hyp().
90
- * The exception is FP/SIMD access traps -- these report extra information
91
- * when taking an exception to AArch32. For those we include the extra coproc
92
- * and TA fields, and mask them out when taking the exception to AArch64.
93
- */
94
-static inline uint32_t syn_uncategorized(void)
95
-{
96
- return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
97
-}
98
-
99
-static inline uint32_t syn_aa64_svc(uint32_t imm16)
100
-{
101
- return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
102
-}
103
-
104
-static inline uint32_t syn_aa64_hvc(uint32_t imm16)
105
-{
106
- return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
107
-}
108
-
109
-static inline uint32_t syn_aa64_smc(uint32_t imm16)
110
-{
111
- return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
112
-}
113
-
114
-static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
115
-{
116
- return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
117
- | (is_16bit ? 0 : ARM_EL_IL);
118
-}
119
-
120
-static inline uint32_t syn_aa32_hvc(uint32_t imm16)
121
-{
122
- return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
123
-}
124
-
125
-static inline uint32_t syn_aa32_smc(void)
126
-{
127
- return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
128
-}
129
-
130
-static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
131
-{
132
- return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
133
-}
134
-
135
-static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
136
-{
137
- return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
138
- | (is_16bit ? 0 : ARM_EL_IL);
139
-}
140
-
141
-static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
142
- int crn, int crm, int rt,
143
- int isread)
144
-{
145
- return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
146
- | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
147
- | (crm << 1) | isread;
148
-}
149
-
150
-static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
151
- int crn, int crm, int rt, int isread,
152
- bool is_16bit)
153
-{
154
- return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
155
- | (is_16bit ? 0 : ARM_EL_IL)
156
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
157
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
158
-}
159
-
160
-static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
161
- int crn, int crm, int rt, int isread,
162
- bool is_16bit)
163
-{
164
- return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
165
- | (is_16bit ? 0 : ARM_EL_IL)
166
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
167
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
168
-}
169
-
170
-static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
171
- int rt, int rt2, int isread,
172
- bool is_16bit)
173
-{
174
- return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
175
- | (is_16bit ? 0 : ARM_EL_IL)
176
- | (cv << 24) | (cond << 20) | (opc1 << 16)
177
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
178
-}
179
-
180
-static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
181
- int rt, int rt2, int isread,
182
- bool is_16bit)
183
-{
184
- return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
185
- | (is_16bit ? 0 : ARM_EL_IL)
186
- | (cv << 24) | (cond << 20) | (opc1 << 16)
187
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
188
-}
189
-
190
-static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
191
-{
192
- /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
193
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
194
- | (is_16bit ? 0 : ARM_EL_IL)
195
- | (cv << 24) | (cond << 20) | 0xa;
196
-}
197
-
198
-static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
199
-{
200
- /* AArch32 SIMD trap: TA == 1 coproc == 0 */
201
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
202
- | (is_16bit ? 0 : ARM_EL_IL)
203
- | (cv << 24) | (cond << 20) | (1 << 5);
204
-}
205
-
206
-static inline uint32_t syn_sve_access_trap(void)
207
-{
208
- return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
209
-}
210
-
211
-static inline uint32_t syn_pactrap(void)
212
-{
213
- return EC_PACTRAP << ARM_EL_EC_SHIFT;
214
-}
215
-
216
-static inline uint32_t syn_btitrap(int btype)
217
-{
218
- return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
219
-}
220
-
221
-static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
222
-{
223
- return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
224
- | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
225
-}
226
-
227
-static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
228
- int ea, int cm, int s1ptw,
229
- int wnr, int fsc)
230
-{
231
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
232
- | ARM_EL_IL
233
- | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
234
- | (wnr << 6) | fsc;
235
-}
236
-
237
-static inline uint32_t syn_data_abort_with_iss(int same_el,
238
- int sas, int sse, int srt,
239
- int sf, int ar,
240
- int ea, int cm, int s1ptw,
241
- int wnr, int fsc,
242
- bool is_16bit)
243
-{
244
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
245
- | (is_16bit ? 0 : ARM_EL_IL)
246
- | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
247
- | (sf << 15) | (ar << 14)
248
- | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
249
-}
250
-
251
-static inline uint32_t syn_swstep(int same_el, int isv, int ex)
252
-{
253
- return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
254
- | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
255
-}
256
-
257
-static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
258
-{
259
- return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
260
- | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
261
-}
262
-
263
-static inline uint32_t syn_breakpoint(int same_el)
264
-{
265
- return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
266
- | ARM_EL_IL | 0x22;
267
-}
268
-
269
-static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
270
-{
271
- return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
272
- (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
273
- (cv << 24) | (cond << 20) | ti;
274
-}
275
-
276
/* Update a QEMU watchpoint based on the information the guest has set in the
277
* DBGWCR<n>_EL1 and DBGWVR<n>_EL1 registers.
278
*/
279
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
280
new file mode 100644
281
index XXXXXXX..XXXXXXX
282
--- /dev/null
283
+++ b/target/arm/syndrome.h
284
@@ -XXX,XX +XXX,XX @@
285
+/*
286
+ * QEMU ARM CPU -- syndrome functions and types
287
+ *
288
+ * Copyright (c) 2014 Linaro Ltd
289
+ *
290
+ * This program is free software; you can redistribute it and/or
291
+ * modify it under the terms of the GNU General Public License
292
+ * as published by the Free Software Foundation; either version 2
293
+ * of the License, or (at your option) any later version.
294
+ *
295
+ * This program is distributed in the hope that it will be useful,
296
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
297
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
298
+ * GNU General Public License for more details.
299
+ *
300
+ * You should have received a copy of the GNU General Public License
301
+ * along with this program; if not, see
302
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
303
+ *
304
+ * This header defines functions, types, etc which need to be shared
305
+ * between different source files within target/arm/ but which are
306
+ * private to it and not required by the rest of QEMU.
307
+ */
308
+
309
+#ifndef TARGET_ARM_SYNDROME_H
310
+#define TARGET_ARM_SYNDROME_H
311
+
312
+/* Valid Syndrome Register EC field values */
313
+enum arm_exception_class {
314
+ EC_UNCATEGORIZED = 0x00,
315
+ EC_WFX_TRAP = 0x01,
316
+ EC_CP15RTTRAP = 0x03,
317
+ EC_CP15RRTTRAP = 0x04,
318
+ EC_CP14RTTRAP = 0x05,
319
+ EC_CP14DTTRAP = 0x06,
320
+ EC_ADVSIMDFPACCESSTRAP = 0x07,
321
+ EC_FPIDTRAP = 0x08,
322
+ EC_PACTRAP = 0x09,
323
+ EC_CP14RRTTRAP = 0x0c,
324
+ EC_BTITRAP = 0x0d,
325
+ EC_ILLEGALSTATE = 0x0e,
326
+ EC_AA32_SVC = 0x11,
327
+ EC_AA32_HVC = 0x12,
328
+ EC_AA32_SMC = 0x13,
329
+ EC_AA64_SVC = 0x15,
330
+ EC_AA64_HVC = 0x16,
331
+ EC_AA64_SMC = 0x17,
332
+ EC_SYSTEMREGISTERTRAP = 0x18,
333
+ EC_SVEACCESSTRAP = 0x19,
334
+ EC_INSNABORT = 0x20,
335
+ EC_INSNABORT_SAME_EL = 0x21,
336
+ EC_PCALIGNMENT = 0x22,
337
+ EC_DATAABORT = 0x24,
338
+ EC_DATAABORT_SAME_EL = 0x25,
339
+ EC_SPALIGNMENT = 0x26,
340
+ EC_AA32_FPTRAP = 0x28,
341
+ EC_AA64_FPTRAP = 0x2c,
342
+ EC_SERROR = 0x2f,
343
+ EC_BREAKPOINT = 0x30,
344
+ EC_BREAKPOINT_SAME_EL = 0x31,
345
+ EC_SOFTWARESTEP = 0x32,
346
+ EC_SOFTWARESTEP_SAME_EL = 0x33,
347
+ EC_WATCHPOINT = 0x34,
348
+ EC_WATCHPOINT_SAME_EL = 0x35,
349
+ EC_AA32_BKPT = 0x38,
350
+ EC_VECTORCATCH = 0x3a,
351
+ EC_AA64_BKPT = 0x3c,
352
+};
353
+
354
+#define ARM_EL_EC_SHIFT 26
355
+#define ARM_EL_IL_SHIFT 25
356
+#define ARM_EL_ISV_SHIFT 24
357
+#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
358
+#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
359
+
360
+static inline uint32_t syn_get_ec(uint32_t syn)
361
+{
43
+{
362
+ return syn >> ARM_EL_EC_SHIFT;
44
+ if (!dc_isar_feature(aa64_xs, s)) {
45
+ return false;
46
+ }
47
+ tcg_gen_mb(TCG_BAR_SC | TCG_MO_ALL);
48
+ return true;
363
+}
49
+}
364
+
50
+
365
+/*
51
static bool trans_ISB(DisasContext *s, arg_ISB *a)
366
+ * Utility functions for constructing various kinds of syndrome value.
52
{
367
+ * Note that in general we follow the AArch64 syndrome values; in a
53
/*
368
+ * few cases the value in HSR for exceptions taken to AArch32 Hyp
369
+ * mode differs slightly, and we fix this up when populating HSR in
370
+ * arm_cpu_do_interrupt_aarch32_hyp().
371
+ * The exception is FP/SIMD access traps -- these report extra information
372
+ * when taking an exception to AArch32. For those we include the extra coproc
373
+ * and TA fields, and mask them out when taking the exception to AArch64.
374
+ */
375
+static inline uint32_t syn_uncategorized(void)
376
+{
377
+ return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
378
+}
379
+
380
+static inline uint32_t syn_aa64_svc(uint32_t imm16)
381
+{
382
+ return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
383
+}
384
+
385
+static inline uint32_t syn_aa64_hvc(uint32_t imm16)
386
+{
387
+ return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
388
+}
389
+
390
+static inline uint32_t syn_aa64_smc(uint32_t imm16)
391
+{
392
+ return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
393
+}
394
+
395
+static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
396
+{
397
+ return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
398
+ | (is_16bit ? 0 : ARM_EL_IL);
399
+}
400
+
401
+static inline uint32_t syn_aa32_hvc(uint32_t imm16)
402
+{
403
+ return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
404
+}
405
+
406
+static inline uint32_t syn_aa32_smc(void)
407
+{
408
+ return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
409
+}
410
+
411
+static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
412
+{
413
+ return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
414
+}
415
+
416
+static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
417
+{
418
+ return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
419
+ | (is_16bit ? 0 : ARM_EL_IL);
420
+}
421
+
422
+static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
423
+ int crn, int crm, int rt,
424
+ int isread)
425
+{
426
+ return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
427
+ | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
428
+ | (crm << 1) | isread;
429
+}
430
+
431
+static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
432
+ int crn, int crm, int rt, int isread,
433
+ bool is_16bit)
434
+{
435
+ return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
436
+ | (is_16bit ? 0 : ARM_EL_IL)
437
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
438
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
439
+}
440
+
441
+static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
442
+ int crn, int crm, int rt, int isread,
443
+ bool is_16bit)
444
+{
445
+ return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
446
+ | (is_16bit ? 0 : ARM_EL_IL)
447
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
448
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
449
+}
450
+
451
+static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
452
+ int rt, int rt2, int isread,
453
+ bool is_16bit)
454
+{
455
+ return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
456
+ | (is_16bit ? 0 : ARM_EL_IL)
457
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
458
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
459
+}
460
+
461
+static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
462
+ int rt, int rt2, int isread,
463
+ bool is_16bit)
464
+{
465
+ return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
466
+ | (is_16bit ? 0 : ARM_EL_IL)
467
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
468
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
469
+}
470
+
471
+static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
472
+{
473
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
474
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
475
+ | (is_16bit ? 0 : ARM_EL_IL)
476
+ | (cv << 24) | (cond << 20) | 0xa;
477
+}
478
+
479
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
480
+{
481
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
482
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
483
+ | (is_16bit ? 0 : ARM_EL_IL)
484
+ | (cv << 24) | (cond << 20) | (1 << 5);
485
+}
486
+
487
+static inline uint32_t syn_sve_access_trap(void)
488
+{
489
+ return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
490
+}
491
+
492
+static inline uint32_t syn_pactrap(void)
493
+{
494
+ return EC_PACTRAP << ARM_EL_EC_SHIFT;
495
+}
496
+
497
+static inline uint32_t syn_btitrap(int btype)
498
+{
499
+ return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
500
+}
501
+
502
+static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
503
+{
504
+ return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
505
+ | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
506
+}
507
+
508
+static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
509
+ int ea, int cm, int s1ptw,
510
+ int wnr, int fsc)
511
+{
512
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
513
+ | ARM_EL_IL
514
+ | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
515
+ | (wnr << 6) | fsc;
516
+}
517
+
518
+static inline uint32_t syn_data_abort_with_iss(int same_el,
519
+ int sas, int sse, int srt,
520
+ int sf, int ar,
521
+ int ea, int cm, int s1ptw,
522
+ int wnr, int fsc,
523
+ bool is_16bit)
524
+{
525
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
526
+ | (is_16bit ? 0 : ARM_EL_IL)
527
+ | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
528
+ | (sf << 15) | (ar << 14)
529
+ | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
530
+}
531
+
532
+static inline uint32_t syn_swstep(int same_el, int isv, int ex)
533
+{
534
+ return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
535
+ | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
536
+}
537
+
538
+static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
539
+{
540
+ return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
541
+ | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
542
+}
543
+
544
+static inline uint32_t syn_breakpoint(int same_el)
545
+{
546
+ return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
547
+ | ARM_EL_IL | 0x22;
548
+}
549
+
550
+static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
551
+{
552
+ return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
553
+ (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
554
+ (cv << 24) | (cond << 20) | ti;
555
+}
556
+
557
+#endif /* TARGET_ARM_SYNDROME_H */
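A standalone sketch (plain Python, not QEMU source) of the syndrome packing used by the helpers above: the EC value sits in bits [31:26], the IL bit at bit 25, and an instruction-specific immediate in the low bits.

```python
# Constants copied from the syndrome.h definitions above.
ARM_EL_EC_SHIFT = 26
ARM_EL_IL = 1 << 25
EC_AA64_SVC = 0x15

def syn_aa64_svc(imm16: int) -> int:
    # Mirrors the C helper: EC | IL | 16-bit immediate.
    return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff)

def syn_get_ec(syn: int) -> int:
    # The EC field is simply the top six bits of the syndrome.
    return syn >> ARM_EL_EC_SHIFT

print(hex(syn_aa64_svc(0)))                             # 0x56000000
print(syn_get_ec(syn_aa64_svc(0x123)) == EC_AA64_SVC)   # True
```

Packing and extraction round-trip through the same shift, which is why `syn_get_ec()` can classify any syndrome regardless of which helper built it.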
558
--
54
--
559
2.20.1
55
2.34.1
560
561
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
A proper syndrome is required to fill in the correct si_code.
4
Use page_get_flags to determine permission vs translation for user-only.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-27-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
linux-user/aarch64/cpu_loop.c | 24 +++++++++++++++++++++---
12
target/arm/tlb_helper.c | 15 +++++++++------
13
2 files changed, 30 insertions(+), 9 deletions(-)
14
15
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/aarch64/cpu_loop.c
18
+++ b/linux-user/aarch64/cpu_loop.c
19
@@ -XXX,XX +XXX,XX @@
20
#include "cpu_loop-common.h"
21
#include "qemu/guest-random.h"
22
#include "hw/semihosting/common-semi.h"
23
+#include "target/arm/syndrome.h"
24
25
#define get_user_code_u32(x, gaddr, env) \
26
({ abi_long __r = get_user_u32((x), (gaddr)); \
27
@@ -XXX,XX +XXX,XX @@
28
void cpu_loop(CPUARMState *env)
29
{
30
CPUState *cs = env_cpu(env);
31
- int trapnr;
32
+ int trapnr, ec, fsc;
33
abi_long ret;
34
target_siginfo_t info;
35
36
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
37
case EXCP_DATA_ABORT:
38
info.si_signo = TARGET_SIGSEGV;
39
info.si_errno = 0;
40
- /* XXX: check env->error_code */
41
- info.si_code = TARGET_SEGV_MAPERR;
42
info._sifields._sigfault._addr = env->exception.vaddress;
43
+
44
+ /* We should only arrive here with EC in {DATAABORT, INSNABORT}. */
45
+ ec = syn_get_ec(env->exception.syndrome);
46
+ assert(ec == EC_DATAABORT || ec == EC_INSNABORT);
47
+
48
+ /* Both EC have the same format for FSC, or close enough. */
49
+ fsc = extract32(env->exception.syndrome, 0, 6);
50
+ switch (fsc) {
51
+ case 0x04 ... 0x07: /* Translation fault, level {0-3} */
52
+ info.si_code = TARGET_SEGV_MAPERR;
53
+ break;
54
+ case 0x09 ... 0x0b: /* Access flag fault, level {1-3} */
55
+ case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
56
+ info.si_code = TARGET_SEGV_ACCERR;
57
+ break;
58
+ default:
59
+ g_assert_not_reached();
60
+ }
61
+
62
queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
63
break;
64
case EXCP_DEBUG:
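The FSC-to-si_code mapping the hunk above adds can be sketched as follows (plain Python, illustrative only): the low six bits of the syndrome distinguish translation faults on unmapped pages (SEGV_MAPERR) from access-flag and permission faults on mapped pages (SEGV_ACCERR).

```python
def si_code_for_fsc(fsc: int) -> str:
    """Mirror the switch in cpu_loop(): map an FSC value to a si_code name."""
    if 0x04 <= fsc <= 0x07:      # Translation fault, level 0-3
        return "TARGET_SEGV_MAPERR"
    if 0x09 <= fsc <= 0x0b or 0x0d <= fsc <= 0x0f:
        # Access flag fault / permission fault, level 1-3
        return "TARGET_SEGV_ACCERR"
    raise AssertionError(f"unexpected FSC {fsc:#x}")

# Example: a data abort syndrome with FSC 0x05 (level 1 translation fault).
syndrome = (0x24 << 26) | 0x05
fsc = syndrome & 0x3f            # equivalent to extract32(syndrome, 0, 6)
print(si_code_for_fsc(fsc))      # TARGET_SEGV_MAPERR
```

The `g_assert_not_reached()` default corresponds to the `raise` here: with the user-only page-flags logic in tlb_helper.c, no other FSC value should ever arrive at this switch.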
65
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/tlb_helper.c
68
+++ b/target/arm/tlb_helper.c
69
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
70
bool probe, uintptr_t retaddr)
71
{
72
ARMCPU *cpu = ARM_CPU(cs);
73
+ ARMMMUFaultInfo fi = {};
74
75
#ifdef CONFIG_USER_ONLY
76
- cpu->env.exception.vaddress = address;
77
- if (access_type == MMU_INST_FETCH) {
78
- cs->exception_index = EXCP_PREFETCH_ABORT;
79
+ int flags = page_get_flags(useronly_clean_ptr(address));
80
+ if (flags & PAGE_VALID) {
81
+ fi.type = ARMFault_Permission;
82
} else {
83
- cs->exception_index = EXCP_DATA_ABORT;
84
+ fi.type = ARMFault_Translation;
85
}
86
- cpu_loop_exit_restore(cs, retaddr);
87
+
88
+ /* now we have a real cpu fault */
89
+ cpu_restore_state(cs, retaddr, true);
90
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
91
#else
92
hwaddr phys_addr;
93
target_ulong page_size;
94
int prot, ret;
95
MemTxAttrs attrs = {};
96
- ARMMMUFaultInfo fi = {};
97
ARMCacheAttrs cacheattrs = {};
98
99
/*
100
--
101
2.20.1
102
103
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210210000223.884088-28-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
linux-user/aarch64/target_signal.h | 2 ++
9
linux-user/aarch64/cpu_loop.c | 3 +++
10
2 files changed, 5 insertions(+)
11
12
diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/linux-user/aarch64/target_signal.h
15
+++ b/linux-user/aarch64/target_signal.h
16
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {
17
18
#include "../generic/signal.h"
19
20
+#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
21
+
22
#define TARGET_ARCH_HAS_SETUP_FRAME
23
#endif /* AARCH64_TARGET_SIGNAL_H */
24
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/linux-user/aarch64/cpu_loop.c
27
+++ b/linux-user/aarch64/cpu_loop.c
28
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
29
case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
30
info.si_code = TARGET_SEGV_ACCERR;
31
break;
32
+ case 0x11: /* Synchronous Tag Check Fault */
33
+ info.si_code = TARGET_SEGV_MTESERR;
34
+ break;
35
default:
36
g_assert_not_reached();
37
}
38
--
39
2.20.1
40
41
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The real kernel collects _TIF_MTE_ASYNC_FAULT into the current thread's
4
state on any kernel entry (interrupt, exception etc), and then delivers
5
the signal in advance of resuming the thread.
6
7
This means that while the signal won't be delivered immediately, it will
8
not be delayed forever -- at minimum it will be delivered after the next
9
clock interrupt.
10
11
We don't have a clock interrupt in linux-user, so we issue a cpu_kick
12
to signal a return to the main loop at the end of the current TB.
13
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210210000223.884088-29-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
linux-user/aarch64/target_signal.h | 1 +
20
linux-user/aarch64/cpu_loop.c | 11 +++++++++++
21
target/arm/mte_helper.c | 10 ++++++++++
22
3 files changed, 22 insertions(+)
23
24
diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/linux-user/aarch64/target_signal.h
27
+++ b/linux-user/aarch64/target_signal.h
28
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {
29
30
#include "../generic/signal.h"
31
32
+#define TARGET_SEGV_MTEAERR 8 /* Asynchronous ARM MTE error */
33
#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
34
35
#define TARGET_ARCH_HAS_SETUP_FRAME
36
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/linux-user/aarch64/cpu_loop.c
39
+++ b/linux-user/aarch64/cpu_loop.c
40
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
41
EXCP_DUMP(env, "qemu: unhandled CPU exception 0x%x - aborting\n", trapnr);
42
abort();
43
}
44
+
45
+ /* Check for MTE asynchronous faults */
46
+ if (unlikely(env->cp15.tfsr_el[0])) {
47
+ env->cp15.tfsr_el[0] = 0;
48
+ info.si_signo = TARGET_SIGSEGV;
49
+ info.si_errno = 0;
50
+ info._sifields._sigfault._addr = 0;
51
+ info.si_code = TARGET_SEGV_MTEAERR;
52
+ queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
53
+ }
54
+
55
process_pending_signals(env);
56
/* Exception return on AArch64 always clears the exclusive monitor,
57
* so any return to running guest code implies this.
58
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mte_helper.c
61
+++ b/target/arm/mte_helper.c
62
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
63
select = 0;
64
}
65
env->cp15.tfsr_el[el] |= 1 << select;
66
+#ifdef CONFIG_USER_ONLY
67
+ /*
68
+ * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
69
+ * which then sends a SIGSEGV when the thread is next scheduled.
70
+ * This cpu will return to the main loop at the end of the TB,
71
+ * which is rather sooner than "normal". But the alternative
72
+ * is waiting until the next syscall.
73
+ */
74
+ qemu_cpu_kick(env_cpu(env));
75
+#endif
76
break;
77
78
default:
79
--
80
2.20.1
81
82
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Use the now-saved PAGE_ANON and PAGE_MTE bits,
4
and the per-page saved data.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-30-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/mte_helper.c | 29 +++++++++++++++++++++++++++--
12
1 file changed, 27 insertions(+), 2 deletions(-)
13
14
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/mte_helper.c
17
+++ b/target/arm/mte_helper.c
18
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
19
int tag_size, uintptr_t ra)
20
{
21
#ifdef CONFIG_USER_ONLY
22
- /* Tag storage not implemented. */
23
- return NULL;
24
+ uint64_t clean_ptr = useronly_clean_ptr(ptr);
25
+ int flags = page_get_flags(clean_ptr);
26
+ uint8_t *tags;
27
+ uintptr_t index;
28
+
29
+ if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE : PAGE_READ))) {
30
+ /* SIGSEGV */
31
+ arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access,
32
+ ptr_mmu_idx, false, ra);
33
+ g_assert_not_reached();
34
+ }
35
+
36
+ /* Require both MAP_ANON and PROT_MTE for the page. */
37
+ if (!(flags & PAGE_ANON) || !(flags & PAGE_MTE)) {
38
+ return NULL;
39
+ }
40
+
41
+ tags = page_get_target_data(clean_ptr);
42
+ if (tags == NULL) {
43
+ size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1);
44
+ tags = page_alloc_target_data(clean_ptr, alloc_size);
45
+ assert(tags != NULL);
46
+ }
47
+
48
+ index = extract32(ptr, LOG2_TAG_GRANULE + 1,
49
+ TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
50
+ return tags + index;
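The index arithmetic in the hunk above can be checked with a small sketch (plain Python; the concrete constants are illustrative, chosen for a 16-byte tag granule and a 4K page): one byte of tag storage covers two granules, so a page needs TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1) = 128 bytes, and the extract32() pulls out exactly the byte offset within that allocation.

```python
LOG2_TAG_GRANULE = 4   # 16-byte granule (illustrative value)
TARGET_PAGE_BITS = 12  # 4K pages (illustrative value)

def extract_bits(value: int, start: int, length: int) -> int:
    # Python equivalent of QEMU's extract32().
    return (value >> start) & ((1 << length) - 1)

def tag_byte_index(ptr: int) -> int:
    # Each byte of tag storage covers two granules, hence the +1.
    return extract_bits(ptr, LOG2_TAG_GRANULE + 1,
                        TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1)

print(tag_byte_index(0x00))   # 0
print(tag_byte_index(0x20))   # 1 (second pair of granules)
print(tag_byte_index(0xfff))  # 127 (last byte of the page's tag data)
```

The maximum index (127) lines up with the 128-byte allocation size used in `page_alloc_target_data()` above.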
51
#else
52
uintptr_t index;
53
CPUIOTLBEntry *iotlbentry;
54
--
55
2.20.1
56
57
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
2
2
3
Use nr_apu_cpus instead of hard-coding 2.
3
Report the FEAT_XS feature in the 'max' CPU's ID_AA64ISAR1 system register.
4
4
5
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
5
Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Luc Michel <luc@lmichel.fr>
8
Message-id: 20210210142048.3125878-2-edgar.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211144440.2700268-6-peter.maydell@linaro.org
9
[PMM: Add entry for FEAT_XS to documentation]
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
---
11
---
11
hw/arm/xlnx-versal.c | 4 ++--
12
docs/system/arm/emulation.rst | 1 +
12
1 file changed, 2 insertions(+), 2 deletions(-)
13
target/arm/tcg/cpu64.c | 1 +
14
2 files changed, 2 insertions(+)
13
15
14
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
16
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/xlnx-versal.c
18
--- a/docs/system/arm/emulation.rst
17
+++ b/hw/arm/xlnx-versal.c
19
+++ b/docs/system/arm/emulation.rst
18
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_gic(Versal *s, qemu_irq *pic)
20
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
19
gicbusdev = SYS_BUS_DEVICE(&s->fpd.apu.gic);
21
- FEAT_VMID16 (16-bit VMID)
20
gicdev = DEVICE(&s->fpd.apu.gic);
22
- FEAT_WFxT (WFE and WFI instructions with timeout)
21
qdev_prop_set_uint32(gicdev, "revision", 3);
23
- FEAT_XNX (Translation table stage 2 Unprivileged Execute-never)
22
- qdev_prop_set_uint32(gicdev, "num-cpu", 2);
24
+- FEAT_XS (XS attribute)
23
+ qdev_prop_set_uint32(gicdev, "num-cpu", nr_apu_cpus);
25
24
qdev_prop_set_uint32(gicdev, "num-irq", XLNX_VERSAL_NR_IRQS + 32);
26
For information on the specifics of these extensions, please refer
25
qdev_prop_set_uint32(gicdev, "len-redist-region-count", 1);
27
to the `Arm Architecture Reference Manual for A-profile architecture
26
- qdev_prop_set_uint32(gicdev, "redist-region-count[0]", 2);
28
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
27
+ qdev_prop_set_uint32(gicdev, "redist-region-count[0]", nr_apu_cpus);
29
index XXXXXXX..XXXXXXX 100644
28
qdev_prop_set_bit(gicdev, "has-security-extensions", true);
30
--- a/target/arm/tcg/cpu64.c
29
31
+++ b/target/arm/tcg/cpu64.c
30
sysbus_realize(SYS_BUS_DEVICE(&s->fpd.apu.gic), &error_fatal);
32
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
33
t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 2); /* FEAT_BF16, FEAT_EBF16 */
34
t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1); /* FEAT_DGH */
35
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
36
+ t = FIELD_DP64(t, ID_AA64ISAR1, XS, 1); /* FEAT_XS */
37
cpu->isar.id_aa64isar1 = t;
38
39
t = cpu->isar.id_aa64isar2;
31
--
40
--
32
2.20.1
41
2.34.1
33
34
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/mte.h | 60 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/mte-1.c | 28 +++++++++++++++
 tests/tcg/aarch64/mte-2.c | 45 +++++++++++++++++++++++
 tests/tcg/aarch64/mte-3.c | 51 ++++++++++++++++++++++++++
 tests/tcg/aarch64/mte-4.c | 45 +++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target | 6 ++++
 tests/tcg/configure.sh | 4 +++
 7 files changed, 239 insertions(+)
 create mode 100644 tests/tcg/aarch64/mte.h
 create mode 100644 tests/tcg/aarch64/mte-1.c
 create mode 100644 tests/tcg/aarch64/mte-2.c
 create mode 100644 tests/tcg/aarch64/mte-3.c
 create mode 100644 tests/tcg/aarch64/mte-4.c

diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Linux kernel fallback API definitions for MTE and test helpers.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <assert.h>
+#include <string.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <signal.h>
+#include <sys/mman.h>
+#include <sys/prctl.h>
+
+#ifndef PR_SET_TAGGED_ADDR_CTRL
+# define PR_SET_TAGGED_ADDR_CTRL  55
+#endif
+#ifndef PR_TAGGED_ADDR_ENABLE
+# define PR_TAGGED_ADDR_ENABLE    (1UL << 0)
+#endif
+#ifndef PR_MTE_TCF_SHIFT
+# define PR_MTE_TCF_SHIFT         1
+# define PR_MTE_TCF_NONE          (0UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TCF_SYNC          (1UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TCF_ASYNC         (2UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TAG_SHIFT         3
+#endif
+
+#ifndef PROT_MTE
+# define PROT_MTE                 0x20
+#endif
+
+#ifndef SEGV_MTEAERR
+# define SEGV_MTEAERR             8
+# define SEGV_MTESERR             9
+#endif
+
+static void enable_mte(int tcf)
+{
+    int r = prctl(PR_SET_TAGGED_ADDR_CTRL,
+                  PR_TAGGED_ADDR_ENABLE | tcf | (0xfffe << PR_MTE_TAG_SHIFT),
+                  0, 0, 0);
+    if (r < 0) {
+        perror("PR_SET_TAGGED_ADDR_CTRL");
+        exit(2);
+    }
+}
+
+static void *alloc_mte_mem(size_t size)
+{
+    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_MTE,
+                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    if (p == MAP_FAILED) {
+        perror("mmap PROT_MTE");
+        exit(2);
+    }
+    return p;
+}
diff --git a/tests/tcg/aarch64/mte-1.c b/tests/tcg/aarch64/mte-1.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-1.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, basic pass cases.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+int main(int ac, char **av)
+{
+    int *p0, *p1, *p2;
+    long c;
+
+    enable_mte(PR_MTE_TCF_NONE);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(1));
+    assert(p1 != p0);
+    asm("subp %0,%1,%2" : "=r"(c) : "r"(p0), "r"(p1));
+    assert(c == 0);
+
+    asm("stg %0, [%0]" : : "r"(p1));
+    asm("ldg %0, [%1]" : "=r"(p2) : "r"(p0), "0"(p0));
+    assert(p1 == p2);
+
+    return 0;
+}
diff --git a/tests/tcg/aarch64/mte-2.c b/tests/tcg/aarch64/mte-2.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-2.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, basic fail cases, synchronous signals.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    int *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer. */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    *p2 = 0;
+
+    abort();
+}
diff --git a/tests/tcg/aarch64/mte-3.c b/tests/tcg/aarch64/mte-3.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-3.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, basic fail cases, asynchronous signals.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTEAERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    long *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_ASYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer. */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /*
+     * Signal for async error will happen eventually.
+     * For a real kernel this should be after the next IRQ (e.g. timer).
+     * For qemu linux-user, we kick the cpu and exit at the next TB.
+     * In either case, loop until this happens (or killed by timeout).
+     * For extra sauce, yield, producing EXCP_YIELD to cpu_loop().
+     */
+    asm("str %0, [%0]; yield" : : "r"(p2));
+    while (1);
+}
diff --git a/tests/tcg/aarch64/mte-4.c b/tests/tcg/aarch64/mte-4.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-4.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, re-reading tag checks.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void __attribute__((noinline)) tagset(void *p, size_t size)
+{
+    size_t i;
+    for (i = 0; i < size; i += 16) {
+        asm("stg %0, [%0]" : : "r"(p + i));
+    }
+}
+
+void __attribute__((noinline)) tagcheck(void *p, size_t size)
+{
+    size_t i;
+    void *c;
+
+    for (i = 0; i < size; i += 16) {
+        asm("ldg %0, [%1]" : "=r"(c) : "r"(p + i), "0"(p));
+        assert(c == p);
+    }
+}
+
+int main(int ac, char **av)
+{
+    size_t size = getpagesize() * 4;
+    long excl = 1;
+    int *p0, *p1;
+
+    enable_mte(PR_MTE_TCF_ASYNC);
+    p0 = alloc_mte_mem(size);
+
+    /* Tag the pointer. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+
+    tagset(p1, size);
+    tagcheck(p1, size);
+
+    return 0;
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 # bti-2 tests PROT_BTI, so no special compiler support required.
 AARCH64_TESTS += bti-2
 
+# MTE Tests
+ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4
+mte-%: CFLAGS += -march=armv8.5-a+memtag
+endif
+
 # Semihosting smoke test for linux-user
 AARCH64_TESTS += semihosting
 run-semihosting: semihosting
diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
index XXXXXXX..XXXXXXX 100755
--- a/tests/tcg/configure.sh
+++ b/tests/tcg/configure.sh
@@ -XXX,XX +XXX,XX @@ for target in $target_list; do
               -mbranch-protection=standard -o $TMPE $TMPC; then
               echo "CROSS_CC_HAS_ARMV8_BTI=y" >> $config_target_mak
           fi
+          if do_compiler "$target_compiler" $target_compiler_cflags \
+              -march=armv8.5-a+memtag -o $TMPE $TMPC; then
+              echo "CROSS_CC_HAS_ARMV8_MTE=y" >> $config_target_mak
+          fi
           ;;
     esac
--
2.20.1

From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>

Add system test to make sure FEAT_XS is enabled for max cpu emulation
and that QEMU doesn't crash when encountering an NXS instruction
variant.

Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241211144440.2700268-7-peter.maydell@linaro.org
[PMM: In ISAR field test, mask with 0xf, not 0xff; use < rather
than an equality test to follow the standard ID register field
check guidelines]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/system/feat-xs.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 tests/tcg/aarch64/system/feat-xs.c

diff --git a/tests/tcg/aarch64/system/feat-xs.c b/tests/tcg/aarch64/system/feat-xs.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/system/feat-xs.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * FEAT_XS Test
+ *
+ * Copyright (c) 2024 Linaro Ltd
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <minilib.h>
+#include <stdint.h>
+
+int main(void)
+{
+    uint64_t isar1;
+
+    asm volatile ("mrs %0, id_aa64isar1_el1" : "=r"(isar1));
+    if (((isar1 >> 56) & 0xf) < 1) {
+        ml_printf("FEAT_XS not supported by CPU");
+        return 1;
+    }
+    /* VMALLE1NXS */
+    asm volatile (".inst 0xd508971f");
+    /* VMALLE1OSNXS */
+    asm volatile (".inst 0xd508911f");
+
+    return 0;
+}
--
2.34.1
In the GICv3 ITS model, we have a common coding pattern which has a
local C struct like "DTEntry dte", which is a C representation of an
in-guest-memory data structure, and we call a function such as
get_dte() to read guest memory and fill in the C struct. These
functions to read in the struct sometimes have cases where they will
leave early and not fill in the whole struct (for instance get_dte()
will set "dte->valid = false" and nothing else for the case where it
is passed an entry_addr implying that there is no L2 table entry for
the DTE). This then causes potential use of uninitialized memory
later, for instance when we call a trace event which prints all the
fields of the struct. Sufficiently advanced compilers may produce
-Wmaybe-uninitialized warnings about this, especially if LTO is
enabled.

Rather than trying to carefully separate out these trace events into
"only the 'valid' field is initialized" and "all fields can be
printed", zero-init all the structs when we define them. None of
these structs are large (the biggest is 24 bytes) and having
consistent behaviour is less likely to be buggy.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2718
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241213182337.3343068-1-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_its.c | 44 ++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c

From: Doug Evans <dje@google.com>

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210209015541.778833-4-dje@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/npcm7xx_emc-test.c | 812 +++++++++++++++++++++++++++++++++
 tests/qtest/meson.build | 1 +
 2 files changed, 813 insertions(+)
 create mode 100644 tests/qtest/npcm7xx_emc-test.c

diff --git a/tests/qtest/npcm7xx_emc-test.c b/tests/qtest/npcm7xx_emc-test.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qtest/npcm7xx_emc-test.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QTests for Nuvoton NPCM7xx EMC Modules.
+ *
+ * Copyright 2020 Google LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "libqos/libqos.h"
+#include "qapi/qmp/qdict.h"
+#include "qapi/qmp/qnum.h"
+#include "qemu/bitops.h"
+#include "qemu/iov.h"
+
+/* Name of the emc device. */
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
+
+/* Timeout for various operations, in seconds. */
+#define TIMEOUT_SECONDS 10
+
+/* Address in memory of the descriptor. */
+#define DESC_ADDR (1 << 20) /* 1 MiB */
+
+/* Address in memory of the data packet. */
+#define DATA_ADDR (DESC_ADDR + 4096)
+
+#define CRC_LENGTH 4
+
+#define NUM_TX_DESCRIPTORS 3
+#define NUM_RX_DESCRIPTORS 2
+
+/* Size of tx,rx test buffers. */
+#define TX_DATA_LEN 64
+#define RX_DATA_LEN 64
+
+#define TX_STEP_COUNT 10000
+#define RX_STEP_COUNT 10000
+
+/* 32-bit register indices. */
+typedef enum NPCM7xxPWMRegister {
+    /* Control registers. */
+    REG_CAMCMR,
+    REG_CAMEN,
+
+    /* There are 16 CAMn[ML] registers. */
+    REG_CAMM_BASE,
+    REG_CAML_BASE,
+
+    REG_TXDLSA = 0x22,
+    REG_RXDLSA,
+    REG_MCMDR,
+    REG_MIID,
+    REG_MIIDA,
+    REG_FFTCR,
+    REG_TSDR,
+    REG_RSDR,
+    REG_DMARFC,
+    REG_MIEN,
+
+    /* Status registers. */
+    REG_MISTA,
+    REG_MGSTA,
+    REG_MPCNT,
+    REG_MRPC,
+    REG_MRPCC,
+    REG_MREPC,
+    REG_DMARFS,
+    REG_CTXDSA,
+    REG_CTXBSA,
+    REG_CRXDSA,
+    REG_CRXBSA,
+
+    NPCM7XX_NUM_EMC_REGS,
+} NPCM7xxPWMRegister;
+
+enum { NUM_CAMML_REGS = 16 };
+
+/* REG_CAMCMR fields */
+/* Enable CAM Compare */
+#define REG_CAMCMR_ECMP (1 << 4)
+/* Accept Unicast Packet */
+#define REG_CAMCMR_AUP (1 << 0)
+
+/* REG_MCMDR fields */
+/* Software Reset */
+#define REG_MCMDR_SWR (1 << 24)
+/* Frame Transmission On */
+#define REG_MCMDR_TXON (1 << 8)
+/* Accept Long Packet */
+#define REG_MCMDR_ALP (1 << 1)
+/* Frame Reception On */
+#define REG_MCMDR_RXON (1 << 0)
+
+/* REG_MIEN fields */
+/* Enable Transmit Completion Interrupt */
+#define REG_MIEN_ENTXCP (1 << 18)
+/* Enable Transmit Interrupt */
+#define REG_MIEN_ENTXINTR (1 << 16)
+/* Enable Receive Good Interrupt */
+#define REG_MIEN_ENRXGD (1 << 4)
+/* Enable Receive Interrupt */
+#define REG_MIEN_ENRXINTR (1 << 0)
+
+/* REG_MISTA fields */
+/* Transmit Bus Error Interrupt */
+#define REG_MISTA_TXBERR (1 << 24)
+/* Transmit Descriptor Unavailable Interrupt */
+#define REG_MISTA_TDU (1 << 23)
+/* Transmit Completion Interrupt */
+#define REG_MISTA_TXCP (1 << 18)
+/* Transmit Interrupt */
+#define REG_MISTA_TXINTR (1 << 16)
+/* Receive Bus Error Interrupt */
+#define REG_MISTA_RXBERR (1 << 11)
+/* Receive Descriptor Unavailable Interrupt */
+#define REG_MISTA_RDU (1 << 10)
+/* DMA Early Notification Interrupt */
+#define REG_MISTA_DENI (1 << 9)
+/* Maximum Frame Length Interrupt */
+#define REG_MISTA_DFOI (1 << 8)
+/* Receive Good Interrupt */
+#define REG_MISTA_RXGD (1 << 4)
+/* Packet Too Long Interrupt */
+#define REG_MISTA_PTLE (1 << 3)
+/* Receive Interrupt */
+#define REG_MISTA_RXINTR (1 << 0)
+
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
+
+struct NPCM7xxEMCTxDesc {
+    uint32_t flags;
+    uint32_t txbsa;
+    uint32_t status_and_length;
+    uint32_t ntxdsa;
+};
+
+struct NPCM7xxEMCRxDesc {
+    uint32_t status_and_length;
+    uint32_t rxbsa;
+    uint32_t reserved;
+    uint32_t nrxdsa;
+};
+
+/* NPCM7xxEMCTxDesc.flags values */
+/* Owner: 0 = cpu, 1 = emc */
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
+/* Transmit interrupt enable */
+#define TX_DESC_FLAG_INTEN (1 << 2)
+
+/* NPCM7xxEMCTxDesc.status_and_length values */
+/* Transmission complete */
+#define TX_DESC_STATUS_TXCP (1 << 19)
+/* Transmit interrupt */
+#define TX_DESC_STATUS_TXINTR (1 << 16)
+
+/* NPCM7xxEMCRxDesc.status_and_length values */
+/* Owner: 0b00 = cpu, 0b10 = emc */
+#define RX_DESC_STATUS_OWNER_SHIFT 30
+#define RX_DESC_STATUS_OWNER_MASK 0xc0000000
+/* Frame Reception Complete */
+#define RX_DESC_STATUS_RXGD (1 << 20)
+/* Packet too long */
+#define RX_DESC_STATUS_PTLE (1 << 19)
+/* Receive Interrupt */
+#define RX_DESC_STATUS_RXINTR (1 << 16)
+
+#define RX_DESC_PKT_LEN(word) ((uint32_t) (word) & 0xffff)
+
+typedef struct EMCModule {
+    int rx_irq;
+    int tx_irq;
+    uint64_t base_addr;
+} EMCModule;
+
+typedef struct TestData {
+    const EMCModule *module;
+} TestData;
+
+static const EMCModule emc_module_list[] = {
+    {
+        .rx_irq     = 15,
+        .tx_irq     = 16,
+        .base_addr  = 0xf0825000
+    },
+    {
+        .rx_irq     = 114,
+        .tx_irq     = 115,
+        .base_addr  = 0xf0826000
+    }
+};
+
+/* Returns the index of the EMC module. */
+static int emc_module_index(const EMCModule *mod)
+{
+    ptrdiff_t diff = mod - emc_module_list;
+
+    g_assert_true(diff >= 0 && diff < ARRAY_SIZE(emc_module_list));
+
+    return diff;
+}
+
+static void packet_test_clear(void *sockets)
+{
+    int *test_sockets = sockets;
+
+    close(test_sockets[0]);
+    g_free(test_sockets);
+}
+
+static int *packet_test_init(int module_num, GString *cmd_line)
+{
+    int *test_sockets = g_new(int, 2);
+    int ret = socketpair(PF_UNIX, SOCK_STREAM, 0, test_sockets);
+    g_assert_cmpint(ret, != , -1);
+
+    /*
+     * KISS and use -nic. We specify two nics (both emc{0,1}) because there's
+     * currently no way to specify only emc1: The driver implicitly relies on
+     * emc[i] == nd_table[i].
+     */
+    if (module_num == 0) {
+        g_string_append_printf(cmd_line,
+                               " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " "
+                               " -nic user,model=" TYPE_NPCM7XX_EMC " ",
+                               test_sockets[1]);
+    } else {
+        g_string_append_printf(cmd_line,
+                               " -nic user,model=" TYPE_NPCM7XX_EMC " "
+                               " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " ",
+                               test_sockets[1]);
+    }
+
+    g_test_queue_destroy(packet_test_clear, test_sockets);
+    return test_sockets;
+}
+
+static uint32_t emc_read(QTestState *qts, const EMCModule *mod,
+                         NPCM7xxPWMRegister regno)
+{
+    return qtest_readl(qts, mod->base_addr + regno * sizeof(uint32_t));
+}
+
+static void emc_write(QTestState *qts, const EMCModule *mod,
+                      NPCM7xxPWMRegister regno, uint32_t value)
+{
+    qtest_writel(qts, mod->base_addr + regno * sizeof(uint32_t), value);
+}
+
+/*
+ * Reset the EMC module.
+ * The module must be reset before, e.g., TXDLSA,RXDLSA are changed.
+ */
+static bool emc_soft_reset(QTestState *qts, const EMCModule *mod)
+{
+    uint32_t val;
+    uint64_t end_time;
+
+    emc_write(qts, mod, REG_MCMDR, REG_MCMDR_SWR);
+
+    /*
+     * Wait for device to reset as the linux driver does.
+     * During reset the AHB reads 0 for all registers. So first wait for
+     * something that resets to non-zero, and then wait for SWR becoming 0.
+     */
+    end_time = g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
+
+    do {
+        qtest_clock_step(qts, 100);
+        val = emc_read(qts, mod, REG_FFTCR);
+    } while (val == 0 && g_get_monotonic_time() < end_time);
+    if (val != 0) {
+        do {
+            qtest_clock_step(qts, 100);
+            val = emc_read(qts, mod, REG_MCMDR);
+            if ((val & REG_MCMDR_SWR) == 0) {
+                /*
+                 * N.B. The CAMs have been reset here, so macaddr matching of
+                 * incoming packets will not work.
+                 */
+                return true;
+            }
+        } while (g_get_monotonic_time() < end_time);
+    }
+
+    g_message("%s: Timeout expired", __func__);
+    return false;
+}
+
+/* Check emc registers are reset to default value. */
+static void test_init(gconstpointer test_data)
+{
+    const TestData *td = test_data;
+    const EMCModule *mod = td->module;
+    QTestState *qts = qtest_init("-machine quanta-gsj");
+    int i;
+
+#define CHECK_REG(regno, value) \
+  do { \
+      g_assert_cmphex(emc_read(qts, mod, (regno)), ==, (value)); \
+  } while (0)
+
+    CHECK_REG(REG_CAMCMR, 0);
+    CHECK_REG(REG_CAMEN, 0);
+    CHECK_REG(REG_TXDLSA, 0xfffffffc);
+    CHECK_REG(REG_RXDLSA, 0xfffffffc);
+    CHECK_REG(REG_MCMDR, 0);
+    CHECK_REG(REG_MIID, 0);
+    CHECK_REG(REG_MIIDA, 0x00900000);
+    CHECK_REG(REG_FFTCR, 0x0101);
+    CHECK_REG(REG_DMARFC, 0x0800);
+    CHECK_REG(REG_MIEN, 0);
+    CHECK_REG(REG_MISTA, 0);
+    CHECK_REG(REG_MGSTA, 0);
+    CHECK_REG(REG_MPCNT, 0x7fff);
+    CHECK_REG(REG_MRPC, 0);
+    CHECK_REG(REG_MRPCC, 0);
+    CHECK_REG(REG_MREPC, 0);
+    CHECK_REG(REG_DMARFS, 0);
+    CHECK_REG(REG_CTXDSA, 0);
+    CHECK_REG(REG_CTXBSA, 0);
+    CHECK_REG(REG_CRXDSA, 0);
+    CHECK_REG(REG_CRXBSA, 0);
+
+#undef CHECK_REG
+
+    for (i = 0; i < NUM_CAMML_REGS; ++i) {
+        g_assert_cmpuint(emc_read(qts, mod, REG_CAMM_BASE + i * 2), ==,
+                         0);
+        g_assert_cmpuint(emc_read(qts, mod, REG_CAML_BASE + i * 2), ==,
+                         0);
+    }
+
+    qtest_quit(qts);
+}
+
+static bool emc_wait_irq(QTestState *qts, const EMCModule *mod, int step,
+                         bool is_tx)
+{
+    uint64_t end_time =
+        g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
+
+    do {
+        if (qtest_get_irq(qts, is_tx ? mod->tx_irq : mod->rx_irq)) {
+            return true;
+        }
+        qtest_clock_step(qts, step);
+    } while (g_get_monotonic_time() < end_time);
+
+    g_message("%s: Timeout expired", __func__);
+    return false;
+}
+
+static bool emc_wait_mista(QTestState *qts, const EMCModule *mod, int step,
+                           uint32_t flag)
+{
+    uint64_t end_time =
+        g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
+
+    do {
+        uint32_t mista = emc_read(qts, mod, REG_MISTA);
+        if (mista & flag) {
+            return true;
+        }
+        qtest_clock_step(qts, step);
+    } while (g_get_monotonic_time() < end_time);
+
+    g_message("%s: Timeout expired", __func__);
+    return false;
+}
+
+static bool wait_socket_readable(int fd)
+{
+    fd_set read_fds;
+    struct timeval tv;
+    int rv;
+
+    FD_ZERO(&read_fds);
+    FD_SET(fd, &read_fds);
+    tv.tv_sec = TIMEOUT_SECONDS;
+    tv.tv_usec = 0;
+    rv = select(fd + 1, &read_fds, NULL, NULL, &tv);
+    if (rv == -1) {
+        perror("select");
+    } else if (rv == 0) {
+        g_message("%s: Timeout expired", __func__);
+    }
+    return rv == 1;
+}
+
+static void init_tx_desc(NPCM7xxEMCTxDesc *desc, size_t count,
+                         uint32_t desc_addr)
+{
+    g_assert(count >= 2);
+    memset(&desc[0], 0, sizeof(*desc) * count);
+    /* Leave the last one alone, owned by the cpu -> stops transmission. */
+    for (size_t i = 0; i < count - 1; ++i) {
+        desc[i].flags =
+            cpu_to_le32(TX_DESC_FLAG_OWNER_MASK | /* owner = 1: emc */
+                        TX_DESC_FLAG_INTEN |
+                        0 | /* crc append = 0 */
+                        0 /* padding enable = 0 */);
+        desc[i].status_and_length =
+            cpu_to_le32(0 | /* collision count = 0 */
+                        0 | /* SQE = 0 */
+                        0 | /* PAU = 0 */
+                        0 | /* TXHA = 0 */
+                        0 | /* LC = 0 */
+                        0 | /* TXABT = 0 */
+                        0 | /* NCS = 0 */
+                        0 | /* EXDEF = 0 */
+                        0 | /* TXCP = 0 */
+                        0 | /* DEF = 0 */
+                        0 | /* TXINTR = 0 */
+                        0 /* length filled in later */);
+        desc[i].ntxdsa = cpu_to_le32(desc_addr + (i + 1) * sizeof(*desc));
+    }
+}
+
+static void enable_tx(QTestState *qts, const EMCModule *mod,
+                      const NPCM7xxEMCTxDesc *desc, size_t count,
+                      uint32_t desc_addr, uint32_t mien_flags)
+{
+    /* Write the descriptors to guest memory. */
+    qtest_memwrite(qts, desc_addr, desc, sizeof(*desc) * count);
+
+    /* Trigger sending the packet. */
+    /* The module must be reset before changing TXDLSA. */
+    g_assert(emc_soft_reset(qts, mod));
+    emc_write(qts, mod, REG_TXDLSA, desc_addr);
+    emc_write(qts, mod, REG_CTXDSA, ~0);
+    emc_write(qts, mod, REG_MIEN, REG_MIEN_ENTXCP | mien_flags);
+    {
+        uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
+        mcmdr |= REG_MCMDR_TXON;
+        emc_write(qts, mod, REG_MCMDR, mcmdr);
+    }
+
+    /* Prod the device to send the packet. */
+    emc_write(qts, mod, REG_TSDR, 1);
+}
+
+static void emc_send_verify1(QTestState *qts, const EMCModule *mod, int fd,
+                             bool with_irq, uint32_t desc_addr,
+                             uint32_t next_desc_addr,
+                             const char *test_data, int test_size)
+{
+    NPCM7xxEMCTxDesc result_desc;
+    uint32_t expected_mask, expected_value, recv_len;
+    int ret;
+    char buffer[TX_DATA_LEN];
+
+    g_assert(wait_socket_readable(fd));
+
+    /* Read the descriptor back. */
+    qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
+    /* Descriptor should be owned by cpu now. */
+    g_assert((result_desc.flags & TX_DESC_FLAG_OWNER_MASK) == 0);
+    /* Test the status bits, ignoring the length field. */
+    expected_mask = 0xffff << 16;
+    expected_value = TX_DESC_STATUS_TXCP;
+    if (with_irq) {
+        expected_value |= TX_DESC_STATUS_TXINTR;
+    }
+    g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
+                    expected_value);
+
+    /* Check data sent to the backend. */
+    recv_len = ~0;
+    ret = qemu_recv(fd, &recv_len, sizeof(recv_len), MSG_DONTWAIT);
+    g_assert_cmpint(ret, == , sizeof(recv_len));
+
+    g_assert(wait_socket_readable(fd));
+    memset(buffer, 0xff, sizeof(buffer));
+    ret = qemu_recv(fd, buffer, test_size, MSG_DONTWAIT);
+    g_assert_cmpmem(buffer, ret, test_data, test_size);
+}
+
+static void emc_send_verify(QTestState *qts, const EMCModule *mod, int fd,
+                            bool with_irq)
+{
+    NPCM7xxEMCTxDesc desc[NUM_TX_DESCRIPTORS];
+    uint32_t desc_addr = DESC_ADDR;
+    static const char test1_data[] = "TEST1";
+    static const char test2_data[] = "Testing 1 2 3 ...";
+    uint32_t data1_addr = DATA_ADDR;
+    uint32_t data2_addr = data1_addr + sizeof(test1_data);
+    bool got_tdu;
+    uint32_t end_desc_addr;
+
+    /* Prepare test data buffer. */
+    qtest_memwrite(qts, data1_addr, test1_data, sizeof(test1_data));
+    qtest_memwrite(qts, data2_addr, test2_data, sizeof(test2_data));
+
+    init_tx_desc(&desc[0], NUM_TX_DESCRIPTORS, desc_addr);
+    desc[0].txbsa = cpu_to_le32(data1_addr);
+    desc[0].status_and_length |= sizeof(test1_data);
+    desc[1].txbsa = cpu_to_le32(data2_addr);
+    desc[1].status_and_length |= sizeof(test2_data);
+
+    enable_tx(qts, mod, &desc[0], NUM_TX_DESCRIPTORS, desc_addr,
+              with_irq ? REG_MIEN_ENTXINTR : 0);
+
+    /*
+     * It's problematic to observe the interrupt for each packet.
+     * Instead just wait until all the packets go out.
+     */
+    got_tdu = false;
+    while (!got_tdu) {
+        if (with_irq) {
+            g_assert_true(emc_wait_irq(qts, mod, TX_STEP_COUNT,
+                                       /*is_tx=*/true));
+        } else {
+            g_assert_true(emc_wait_mista(qts, mod, TX_STEP_COUNT,
+                                         REG_MISTA_TXINTR));
+        }
+        got_tdu = !!(emc_read(qts, mod, REG_MISTA) & REG_MISTA_TDU);
+        /* If we don't have TDU yet, reset the interrupt. */
+        if (!got_tdu) {
+            emc_write(qts, mod, REG_MISTA,
+                      emc_read(qts, mod, REG_MISTA) & 0xffff0000);
+        }
+    }
+
+    end_desc_addr = desc_addr + 2 * sizeof(desc[0]);
+    g_assert_cmphex(emc_read(qts, mod, REG_CTXDSA), ==, end_desc_addr);
+    g_assert_cmphex(emc_read(qts, mod, REG_MISTA), ==,
+                    REG_MISTA_TXCP | REG_MISTA_TXINTR | REG_MISTA_TDU);
+
+    emc_send_verify1(qts, mod, fd, with_irq,
+                     desc_addr, end_desc_addr,
+                     test1_data, sizeof(test1_data));
+    emc_send_verify1(qts, mod, fd, with_irq,
+                     desc_addr + sizeof(desc[0]), end_desc_addr,
+                     test2_data, sizeof(test2_data));
+}
+
+static void init_rx_desc(NPCM7xxEMCRxDesc *desc, size_t count,
+                         uint32_t desc_addr, uint32_t data_addr)
+{
+    g_assert_true(count >= 2);
+    memset(desc, 0, sizeof(*desc) * count);
+    desc[0].rxbsa = cpu_to_le32(data_addr);
+    desc[0].status_and_length =
+        cpu_to_le32(0b10 << RX_DESC_STATUS_OWNER_SHIFT | /* owner = 10: emc */
+                    0 | /* RP = 0 */
+                    0 | /* ALIE = 0 */
+                    0 | /* RXGD = 0 */
+                    0 | /* PTLE = 0 */
+                    0 | /* CRCE = 0 */
+                    0 | /* RXINTR = 0 */
+                    0 /* length (filled in later) */);
+    /* Leave the last one alone, owned by the cpu -> stops transmission. */
+    desc[0].nrxdsa = cpu_to_le32(desc_addr + sizeof(*desc));
+}
+
+static void enable_rx(QTestState *qts, const EMCModule *mod,
+                      const NPCM7xxEMCRxDesc *desc, size_t count,
+                      uint32_t desc_addr, uint32_t mien_flags,
+                      uint32_t mcmdr_flags)
+{
+    /*
+     * Write the descriptor to guest memory.
+     * FWIW, IWBN if the docs said the buffer needs to be at least DMARFC
+     * bytes.
+     */
+    qtest_memwrite(qts, desc_addr, desc, sizeof(*desc) * count);
+
+    /* Trigger receiving the packet. */
+    /* The module must be reset before changing RXDLSA. */
+    g_assert(emc_soft_reset(qts, mod));
+    emc_write(qts, mod, REG_RXDLSA, desc_addr);
+    emc_write(qts, mod, REG_MIEN, REG_MIEN_ENRXGD | mien_flags);
+
+    /*
+     * We don't know what the device's macaddr is, so just accept all
+     * unicast packets (AUP).
+     */
+    emc_write(qts, mod, REG_CAMCMR, REG_CAMCMR_AUP);
+    emc_write(qts, mod, REG_CAMEN, 1 << 0);
+    {
+        uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
+        mcmdr |= REG_MCMDR_RXON | mcmdr_flags;
+        emc_write(qts, mod, REG_MCMDR, mcmdr);
+    }
+
+    /* Prod the device to accept a packet. */
+    emc_write(qts, mod, REG_RSDR, 1);
+}
+
+static void emc_recv_verify(QTestState *qts, const EMCModule *mod, int fd,
+                            bool with_irq)
+{
+    NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
+    uint32_t desc_addr = DESC_ADDR;
+    uint32_t data_addr = DATA_ADDR;
+    int ret;
+    uint32_t expected_mask, expected_value;
+    NPCM7xxEMCRxDesc result_desc;
+
+    /* Prepare test data buffer. */
+    const char test[RX_DATA_LEN] = "TEST";
+    int len = htonl(sizeof(test));
+    const struct iovec iov[] = {
+        {
+            .iov_base = &len,
+            .iov_len = sizeof(len),
+        },{
+            .iov_base = (char *) test,
+            .iov_len = sizeof(test),
+        },
+    };
+
+    /*
+     * Reset the device BEFORE sending a test packet, otherwise the packet
+     * may get swallowed by an active device of an earlier test.
+     */
+    init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
+    enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
650
+ with_irq ? REG_MIEN_ENRXINTR : 0, 0);
651
+
652
+ /* Send test packet to device's socket. */
653
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test));
654
+ g_assert_cmpint(ret, == , sizeof(test) + sizeof(len));
655
+
656
+ /* Wait for RX interrupt. */
657
+ if (with_irq) {
658
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
659
+ } else {
660
+ g_assert_true(emc_wait_mista(qts, mod, RX_STEP_COUNT, REG_MISTA_RXGD));
661
+ }
662
+
663
+ g_assert_cmphex(emc_read(qts, mod, REG_CRXDSA), ==,
664
+ desc_addr + sizeof(desc[0]));
665
+
666
+ expected_mask = 0xffff;
667
+ expected_value = (REG_MISTA_DENI |
668
+ REG_MISTA_RXGD |
669
+ REG_MISTA_RXINTR);
670
+ g_assert_cmphex((emc_read(qts, mod, REG_MISTA) & expected_mask),
671
+ ==, expected_value);
672
+
673
+ /* Read the descriptor back. */
674
+ qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
675
+ /* Descriptor should be owned by cpu now. */
676
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
677
+ /* Test the status bits, ignoring the length field. */
678
+ expected_mask = 0xffff << 16;
679
+ expected_value = RX_DESC_STATUS_RXGD;
680
+ if (with_irq) {
681
+ expected_value |= RX_DESC_STATUS_RXINTR;
682
+ }
683
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
684
+ expected_value);
685
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
686
+ RX_DATA_LEN + CRC_LENGTH);
687
+
688
+ {
689
+ char buffer[RX_DATA_LEN];
690
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
691
+ g_assert_cmpstr(buffer, == , "TEST");
692
+ }
693
+}
694
+
695
+static void emc_test_ptle(QTestState *qts, const EMCModule *mod, int fd)
696
+{
697
+ NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
698
+ uint32_t desc_addr = DESC_ADDR;
699
+ uint32_t data_addr = DATA_ADDR;
700
+ int ret;
701
+ NPCM7xxEMCRxDesc result_desc;
702
+ uint32_t expected_mask, expected_value;
703
+
704
+ /* Prepare test data buffer. */
705
+#define PTLE_DATA_LEN 1600
706
+ char test_data[PTLE_DATA_LEN];
707
+ int len = htonl(sizeof(test_data));
708
+ const struct iovec iov[] = {
709
+ {
710
+ .iov_base = &len,
711
+ .iov_len = sizeof(len),
712
+ },{
713
+ .iov_base = (char *) test_data,
714
+ .iov_len = sizeof(test_data),
715
+ },
716
+ };
717
+ memset(test_data, 42, sizeof(test_data));
718
+
719
+ /*
720
+ * Reset the device BEFORE sending a test packet, otherwise the packet
721
+ * may get swallowed by an active device of an earlier test.
722
+ */
723
+ init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
724
+ enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
725
+ REG_MIEN_ENRXINTR, REG_MCMDR_ALP);
726
+
727
+ /* Send test packet to device's socket. */
728
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test_data));
729
+ g_assert_cmpint(ret, == , sizeof(test_data) + sizeof(len));
730
+
731
+ /* Wait for RX interrupt. */
732
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
733
+
734
+ /* Read the descriptor back. */
735
+ qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
736
+ /* Descriptor should be owned by cpu now. */
737
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
738
+ /* Test the status bits, ignoring the length field. */
739
+ expected_mask = 0xffff << 16;
740
+ expected_value = (RX_DESC_STATUS_RXGD |
741
+ RX_DESC_STATUS_PTLE |
742
+ RX_DESC_STATUS_RXINTR);
743
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
744
+ expected_value);
745
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
746
+ PTLE_DATA_LEN + CRC_LENGTH);
747
+
748
+ {
749
+ char buffer[PTLE_DATA_LEN];
750
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
751
+ g_assert(memcmp(buffer, test_data, PTLE_DATA_LEN) == 0);
752
+ }
753
+}
754
+
755
+static void test_tx(gconstpointer test_data)
756
+{
757
+ const TestData *td = test_data;
758
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
759
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
760
+ cmd_line);
761
+ QTestState *qts = qtest_init(cmd_line->str);
762
+
763
+ /*
764
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
765
+ * the fork and before the exec, but that will require some harness
766
+ * improvements.
767
+ */
768
+ close(test_sockets[1]);
769
+ /* Defensive programming */
770
+ test_sockets[1] = -1;
771
+
772
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
773
+
774
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
775
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
776
+
777
+ qtest_quit(qts);
778
+}
779
+
780
+static void test_rx(gconstpointer test_data)
781
+{
782
+ const TestData *td = test_data;
783
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
784
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
785
+ cmd_line);
786
+ QTestState *qts = qtest_init(cmd_line->str);
787
+
788
+ /*
789
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
790
+ * the fork and before the exec, but that will require some harness
791
+ * improvements.
792
+ */
793
+ close(test_sockets[1]);
794
+ /* Defensive programming */
795
+ test_sockets[1] = -1;
796
+
797
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
798
+
799
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
800
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
801
+ emc_test_ptle(qts, td->module, test_sockets[0]);
802
+
803
+ qtest_quit(qts);
804
+}
805
+
806
+static void emc_add_test(const char *name, const TestData* td,
807
+ GTestDataFunc fn)
808
+{
809
+ g_autofree char *full_name = g_strdup_printf(
810
+ "npcm7xx_emc/emc[%d]/%s", emc_module_index(td->module), name);
811
+ qtest_add_data_func(full_name, td, fn);
812
+}
813
+#define add_test(name, td) emc_add_test(#name, td, test_##name)
814
+
815
+int main(int argc, char **argv)
816
+{
817
+ TestData test_data_list[ARRAY_SIZE(emc_module_list)];
818
+
819
+ g_test_init(&argc, &argv, NULL);
820
+
821
+ for (int i = 0; i < ARRAY_SIZE(emc_module_list); ++i) {
822
+ TestData *td = &test_data_list[i];
823
+
824
+ td->module = &emc_module_list[i];
825
+
826
+ add_test(init, td);
827
+ add_test(tx, td);
828
+ add_test(rx, td);
829
+ }
830
+
831
+ return g_test_run();
832
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_sparc64 = \

qtests_npcm7xx = \
['npcm7xx_adc-test',
+ 'npcm7xx_emc-test',
'npcm7xx_gpio-test',
'npcm7xx_pwm-test',
'npcm7xx_rng-test',
--
2.20.1

index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult lookup_vte(GICv3ITSState *s, const char *who,
static ItsCmdResult process_its_cmd_phys(GICv3ITSState *s, const ITEntry *ite,
int irqlevel)
{
- CTEntry cte;
+ CTEntry cte = {};
ItsCmdResult cmdres;

cmdres = lookup_cte(s, __func__, ite->icid, &cte);
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_its_cmd_phys(GICv3ITSState *s, const ITEntry *ite,
static ItsCmdResult process_its_cmd_virt(GICv3ITSState *s, const ITEntry *ite,
int irqlevel)
{
- VTEntry vte;
+ VTEntry vte = {};
ItsCmdResult cmdres;

cmdres = lookup_vte(s, __func__, ite->vpeid, &vte);
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_its_cmd_virt(GICv3ITSState *s, const ITEntry *ite,
static ItsCmdResult do_process_its_cmd(GICv3ITSState *s, uint32_t devid,
uint32_t eventid, ItsCmdType cmd)
{
- DTEntry dte;
- ITEntry ite;
+ DTEntry dte = {};
+ ITEntry ite = {};
ItsCmdResult cmdres;
int irqlevel;

@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_mapti(GICv3ITSState *s, const uint64_t *cmdpkt,
uint32_t pIntid = 0;
uint64_t num_eventids;
uint16_t icid = 0;
- DTEntry dte;
- ITEntry ite;
+ DTEntry dte = {};
+ ITEntry ite = {};

devid = (cmdpkt[0] & DEVID_MASK) >> DEVID_SHIFT;
eventid = cmdpkt[1] & EVENTID_MASK;
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmapti(GICv3ITSState *s, const uint64_t *cmdpkt,
{
uint32_t devid, eventid, vintid, doorbell, vpeid;
uint32_t num_eventids;
- DTEntry dte;
- ITEntry ite;
+ DTEntry dte = {};
+ ITEntry ite = {};

if (!its_feature_virtual(s)) {
return CMD_CONTINUE;
@@ -XXX,XX +XXX,XX @@ static bool update_cte(GICv3ITSState *s, uint16_t icid, const CTEntry *cte)
static ItsCmdResult process_mapc(GICv3ITSState *s, const uint64_t *cmdpkt)
{
uint16_t icid;
- CTEntry cte;
+ CTEntry cte = {};

icid = cmdpkt[2] & ICID_MASK;
cte.valid = cmdpkt[2] & CMD_FIELD_VALID_MASK;
@@ -XXX,XX +XXX,XX @@ static bool update_dte(GICv3ITSState *s, uint32_t devid, const DTEntry *dte)
static ItsCmdResult process_mapd(GICv3ITSState *s, const uint64_t *cmdpkt)
{
uint32_t devid;
- DTEntry dte;
+ DTEntry dte = {};

devid = (cmdpkt[0] & DEVID_MASK) >> DEVID_SHIFT;
dte.size = cmdpkt[1] & SIZE_MASK;
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_movi(GICv3ITSState *s, const uint64_t *cmdpkt)
{
uint32_t devid, eventid;
uint16_t new_icid;
- DTEntry dte;
- CTEntry old_cte, new_cte;
- ITEntry old_ite;
+ DTEntry dte = {};
+ CTEntry old_cte = {}, new_cte = {};
+ ITEntry old_ite = {};
ItsCmdResult cmdres;

devid = FIELD_EX64(cmdpkt[0], MOVI_0, DEVICEID);
@@ -XXX,XX +XXX,XX @@ static bool update_vte(GICv3ITSState *s, uint32_t vpeid, const VTEntry *vte)

static ItsCmdResult process_vmapp(GICv3ITSState *s, const uint64_t *cmdpkt)
{
- VTEntry vte;
+ VTEntry vte = {};
uint32_t vpeid;

if (!its_feature_virtual(s)) {
@@ -XXX,XX +XXX,XX @@ static void vmovp_callback(gpointer data, gpointer opaque)
*/
GICv3ITSState *s = data;
VmovpCallbackData *cbdata = opaque;
- VTEntry vte;
+ VTEntry vte = {};
ItsCmdResult cmdres;

cmdres = lookup_vte(s, __func__, cbdata->vpeid, &vte);
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vmovi(GICv3ITSState *s, const uint64_t *cmdpkt)
{
uint32_t devid, eventid, vpeid, doorbell;
bool doorbell_valid;
- DTEntry dte;
- ITEntry ite;
- VTEntry old_vte, new_vte;
+ DTEntry dte = {};
+ ITEntry ite = {};
+ VTEntry old_vte = {}, new_vte = {};
ItsCmdResult cmdres;

if (!its_feature_virtual(s)) {
@@ -XXX,XX +XXX,XX @@ static ItsCmdResult process_vinvall(GICv3ITSState *s, const uint64_t *cmdpkt)
static ItsCmdResult process_inv(GICv3ITSState *s, const uint64_t *cmdpkt)
{
uint32_t devid, eventid;
- ITEntry ite;
- DTEntry dte;
- CTEntry cte;
- VTEntry vte;
+ ITEntry ite = {};
+ DTEntry dte = {};
+ CTEntry cte = {};
+ VTEntry vte = {};
ItsCmdResult cmdres;

devid = FIELD_EX64(cmdpkt[0], INV_0, DEVICEID);
--
2.34.1
diff view generated by jsdifflib
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Update the URLs for the binaries we use for the firmware in the
sbsa-ref functional tests.

The firmware is built using Debian 'bookworm' cross toolchain (gcc
12.2.0).

Used versions:

- Trusted Firmware v2.12.0
- Tianocore EDK2 stable202411
- Tianocore EDK2 Platforms code commit 4b3530d

This allows us to move away from "some git commit on trunk"
to a stable release for both TF-A and EDK2.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20241125125448.185504-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/functional/test_aarch64_sbsaref.py | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/tests/functional/test_aarch64_sbsaref.py b/tests/functional/test_aarch64_sbsaref.py
index XXXXXXX..XXXXXXX 100755
--- a/tests/functional/test_aarch64_sbsaref.py
+++ b/tests/functional/test_aarch64_sbsaref.py
@@ -XXX,XX +XXX,XX @@ def fetch_firmware(test):

Used components:

- - Trusted Firmware v2.11.0
- - Tianocore EDK2 4d4f569924
- - Tianocore EDK2-platforms 3f08401
+ - Trusted Firmware v2.12.0
+ - Tianocore EDK2 edk2-stable202411
+ - Tianocore EDK2-platforms 4b3530d

"""

@@ -XXX,XX +XXX,XX @@ class Aarch64SbsarefMachine(QemuSystemTest):

ASSET_FLASH0 = Asset(
('https://artifacts.codelinaro.org/artifactory/linaro-419-sbsa-ref/'
- '20240619-148232/edk2/SBSA_FLASH0.fd.xz'),
- '0c954842a590988f526984de22e21ae0ab9cb351a0c99a8a58e928f0c7359cf7')
+ '20241122-189881/edk2/SBSA_FLASH0.fd.xz'),
+ '76eb89d42eebe324e4395329f47447cda9ac920aabcf99aca85424609c3384a5')

ASSET_FLASH1 = Asset(
('https://artifacts.codelinaro.org/artifactory/linaro-419-sbsa-ref/'
- '20240619-148232/edk2/SBSA_FLASH1.fd.xz'),
- 'c6ec39374c4d79bb9e9cdeeb6db44732d90bb4a334cec92002b3f4b9cac4b5ee')
+ '20241122-189881/edk2/SBSA_FLASH1.fd.xz'),
+ 'f850f243bd8dbd49c51e061e0f79f1697546938f454aeb59ab7d93e5f0d412fc')

def test_sbsaref_edk2_firmware(self):

@@ -XXX,XX +XXX,XX @@ def test_sbsaref_edk2_firmware(self):

# AP Trusted ROM
wait_for_console_pattern(self, "Booting Trusted Firmware")
- wait_for_console_pattern(self, "BL1: v2.11.0(release):")
+ wait_for_console_pattern(self, "BL1: v2.12.0(release):")
wait_for_console_pattern(self, "BL1: Booting BL2")

# Trusted Boot Firmware
- wait_for_console_pattern(self, "BL2: v2.11.0(release)")
+ wait_for_console_pattern(self, "BL2: v2.12.0(release)")
wait_for_console_pattern(self, "Booting BL31")

# EL3 Runtime Software
- wait_for_console_pattern(self, "BL31: v2.11.0(release)")
+ wait_for_console_pattern(self, "BL31: v2.12.0(release)")

# Non-trusted Firmware
wait_for_console_pattern(self, "UEFI firmware (version 1.0")

From: Doug Evans <dje@google.com>

This is a 10/100 ethernet device that has several features.
Only the ones needed by the Linux driver have been implemented.
See npcm7xx_emc.c for a list of unimplemented features.

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210209015541.778833-2-dje@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/net/npcm7xx_emc.h | 286 ++++++++++++
hw/net/npcm7xx_emc.c | 857 +++++++++++++++++++++++++++++++++++
hw/net/meson.build | 1 +
hw/net/trace-events | 17 +
4 files changed, 1161 insertions(+)
create mode 100644 include/hw/net/npcm7xx_emc.h
create mode 100644 hw/net/npcm7xx_emc.c

diff --git a/include/hw/net/npcm7xx_emc.h b/include/hw/net/npcm7xx_emc.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/npcm7xx_emc.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Nuvoton NPCM7xx EMC Module
+ *
+ * Copyright 2020 Google LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+
+#ifndef NPCM7XX_EMC_H
+#define NPCM7XX_EMC_H
+
+#include "hw/irq.h"
+#include "hw/sysbus.h"
+#include "net/net.h"
+
+/* 32-bit register indices. */
+enum NPCM7xxPWMRegister {
+ /* Control registers. */
+ REG_CAMCMR,
+ REG_CAMEN,
+
+ /* There are 16 CAMn[ML] registers. */
+ REG_CAMM_BASE,
+ REG_CAML_BASE,
+ REG_CAMML_LAST = 0x21,
+
+ REG_TXDLSA = 0x22,
+ REG_RXDLSA,
+ REG_MCMDR,
+ REG_MIID,
+ REG_MIIDA,
+ REG_FFTCR,
+ REG_TSDR,
+ REG_RSDR,
+ REG_DMARFC,
+ REG_MIEN,
+
+ /* Status registers. */
+ REG_MISTA,
+ REG_MGSTA,
79
wait_for_console_pattern(self, "UEFI firmware (version 1.0")
76
+ REG_MPCNT,
77
+ REG_MRPC,
78
+ REG_MRPCC,
79
+ REG_MREPC,
80
+ REG_DMARFS,
81
+ REG_CTXDSA,
82
+ REG_CTXBSA,
83
+ REG_CRXDSA,
84
+ REG_CRXBSA,
85
+
86
+ NPCM7XX_NUM_EMC_REGS,
87
+};
88
+
89
+/* REG_CAMCMR fields */
90
+/* Enable CAM Compare */
91
+#define REG_CAMCMR_ECMP (1 << 4)
92
+/* Complement CAM Compare */
93
+#define REG_CAMCMR_CCAM (1 << 3)
94
+/* Accept Broadcast Packet */
95
+#define REG_CAMCMR_ABP (1 << 2)
96
+/* Accept Multicast Packet */
97
+#define REG_CAMCMR_AMP (1 << 1)
98
+/* Accept Unicast Packet */
99
+#define REG_CAMCMR_AUP (1 << 0)
100
+
101
+/* REG_MCMDR fields */
102
+/* Software Reset */
103
+#define REG_MCMDR_SWR (1 << 24)
104
+/* Internal Loopback Select */
105
+#define REG_MCMDR_LBK (1 << 21)
106
+/* Operation Mode Select */
107
+#define REG_MCMDR_OPMOD (1 << 20)
108
+/* Enable MDC Clock Generation */
109
+#define REG_MCMDR_ENMDC (1 << 19)
110
+/* Full-Duplex Mode Select */
111
+#define REG_MCMDR_FDUP (1 << 18)
112
+/* Enable SQE Checking */
113
+#define REG_MCMDR_ENSEQ (1 << 17)
114
+/* Send PAUSE Frame */
115
+#define REG_MCMDR_SDPZ (1 << 16)
116
+/* No Defer */
117
+#define REG_MCMDR_NDEF (1 << 9)
118
+/* Frame Transmission On */
119
+#define REG_MCMDR_TXON (1 << 8)
120
+/* Strip CRC Checksum */
121
+#define REG_MCMDR_SPCRC (1 << 5)
122
+/* Accept CRC Error Packet */
123
+#define REG_MCMDR_AEP (1 << 4)
124
+/* Accept Control Packet */
125
+#define REG_MCMDR_ACP (1 << 3)
126
+/* Accept Runt Packet */
127
+#define REG_MCMDR_ARP (1 << 2)
128
+/* Accept Long Packet */
129
+#define REG_MCMDR_ALP (1 << 1)
130
+/* Frame Reception On */
131
+#define REG_MCMDR_RXON (1 << 0)
132
+
133
+/* REG_MIEN fields */
134
+/* Enable Transmit Descriptor Unavailable Interrupt */
135
+#define REG_MIEN_ENTDU (1 << 23)
136
+/* Enable Transmit Completion Interrupt */
137
+#define REG_MIEN_ENTXCP (1 << 18)
138
+/* Enable Transmit Interrupt */
139
+#define REG_MIEN_ENTXINTR (1 << 16)
140
+/* Enable Receive Descriptor Unavailable Interrupt */
141
+#define REG_MIEN_ENRDU (1 << 10)
142
+/* Enable Receive Good Interrupt */
143
+#define REG_MIEN_ENRXGD (1 << 4)
144
+/* Enable Receive Interrupt */
145
+#define REG_MIEN_ENRXINTR (1 << 0)
146
+
147
+/* REG_MISTA fields */
148
+/* TODO: Add error fields and support simulated errors? */
149
+/* Transmit Bus Error Interrupt */
150
+#define REG_MISTA_TXBERR (1 << 24)
151
+/* Transmit Descriptor Unavailable Interrupt */
152
+#define REG_MISTA_TDU (1 << 23)
153
+/* Transmit Completion Interrupt */
154
+#define REG_MISTA_TXCP (1 << 18)
155
+/* Transmit Interrupt */
156
+#define REG_MISTA_TXINTR (1 << 16)
157
+/* Receive Bus Error Interrupt */
158
+#define REG_MISTA_RXBERR (1 << 11)
159
+/* Receive Descriptor Unavailable Interrupt */
160
+#define REG_MISTA_RDU (1 << 10)
161
+/* DMA Early Notification Interrupt */
162
+#define REG_MISTA_DENI (1 << 9)
163
+/* Maximum Frame Length Interrupt */
164
+#define REG_MISTA_DFOI (1 << 8)
165
+/* Receive Good Interrupt */
166
+#define REG_MISTA_RXGD (1 << 4)
167
+/* Packet Too Long Interrupt */
168
+#define REG_MISTA_PTLE (1 << 3)
169
+/* Receive Interrupt */
170
+#define REG_MISTA_RXINTR (1 << 0)
171
+
172
+/* REG_MGSTA fields */
173
+/* Transmission Halted */
174
+#define REG_MGSTA_TXHA (1 << 11)
175
+/* Receive Halted */
176
+#define REG_MGSTA_RXHA (1 << 11)
177
+
178
+/* REG_DMARFC fields */
179
+/* Maximum Receive Frame Length */
180
+#define REG_DMARFC_RXMS(word) extract32((word), 0, 16)
181
+
182
+/* REG MIIDA fields */
183
+/* Busy Bit */
184
+#define REG_MIIDA_BUSY (1 << 17)
185
+
186
+/* Transmit and receive descriptors */
187
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
188
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
189
+
190
+struct NPCM7xxEMCTxDesc {
191
+ uint32_t flags;
192
+ uint32_t txbsa;
193
+ uint32_t status_and_length;
194
+ uint32_t ntxdsa;
195
+};
196
+
197
+struct NPCM7xxEMCRxDesc {
198
+ uint32_t status_and_length;
199
+ uint32_t rxbsa;
200
+ uint32_t reserved;
201
+ uint32_t nrxdsa;
202
+};
203
+
204
+/* NPCM7xxEMCTxDesc.flags values */
205
+/* Owner: 0 = cpu, 1 = emc */
206
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
207
+/* Transmit interrupt enable */
208
+#define TX_DESC_FLAG_INTEN (1 << 2)
209
+/* CRC append */
210
+#define TX_DESC_FLAG_CRCAPP (1 << 1)
211
+/* Padding enable */
212
+#define TX_DESC_FLAG_PADEN (1 << 0)
213
+
214
+/* NPCM7xxEMCTxDesc.status_and_length values */
215
+/* Collision count */
216
+#define TX_DESC_STATUS_CCNT_SHIFT 28
217
+#define TX_DESC_STATUS_CCNT_BITSIZE 4
218
+/* SQE error */
219
+#define TX_DESC_STATUS_SQE (1 << 26)
220
+/* Transmission paused */
221
+#define TX_DESC_STATUS_PAU (1 << 25)
222
+/* P transmission halted */
223
+#define TX_DESC_STATUS_TXHA (1 << 24)
224
+/* Late collision */
225
+#define TX_DESC_STATUS_LC (1 << 23)
226
+/* Transmission abort */
227
+#define TX_DESC_STATUS_TXABT (1 << 22)
228
+/* No carrier sense */
229
+#define TX_DESC_STATUS_NCS (1 << 21)
230
+/* Defer exceed */
231
+#define TX_DESC_STATUS_EXDEF (1 << 20)
232
+/* Transmission complete */
233
+#define TX_DESC_STATUS_TXCP (1 << 19)
234
+/* Transmission deferred */
235
+#define TX_DESC_STATUS_DEF (1 << 17)
236
+/* Transmit interrupt */
237
+#define TX_DESC_STATUS_TXINTR (1 << 16)
238
+
239
+#define TX_DESC_PKT_LEN(word) extract32((word), 0, 16)
240
+
241
+/* Transmit buffer start address */
242
+#define TX_DESC_TXBSA(word) ((uint32_t) (word) & ~3u)
243
+
244
+/* Next transmit descriptor start address */
245
+#define TX_DESC_NTXDSA(word) ((uint32_t) (word) & ~3u)
246
+
247
+/* NPCM7xxEMCRxDesc.status_and_length values */
248
+/* Owner: 0b00 = cpu, 0b01 = undefined, 0b10 = emc, 0b11 = undefined */
249
+#define RX_DESC_STATUS_OWNER_SHIFT 30
250
+#define RX_DESC_STATUS_OWNER_BITSIZE 2
251
+#define RX_DESC_STATUS_OWNER_MASK (3 << RX_DESC_STATUS_OWNER_SHIFT)
252
+/* Runt packet */
253
+#define RX_DESC_STATUS_RP (1 << 22)
254
+/* Alignment error */
255
+#define RX_DESC_STATUS_ALIE (1 << 21)
256
+/* Frame reception complete */
257
+#define RX_DESC_STATUS_RXGD (1 << 20)
258
+/* Packet too long */
259
+#define RX_DESC_STATUS_PTLE (1 << 19)
260
+/* CRC error */
261
+#define RX_DESC_STATUS_CRCE (1 << 17)
262
+/* Receive interrupt */
263
+#define RX_DESC_STATUS_RXINTR (1 << 16)
264
+
265
+#define RX_DESC_PKT_LEN(word) extract32((word), 0, 16)
266
+
267
+/* Receive buffer start address */
268
+#define RX_DESC_RXBSA(word) ((uint32_t) (word) & ~3u)
269
+
270
+/* Next receive descriptor start address */
271
+#define RX_DESC_NRXDSA(word) ((uint32_t) (word) & ~3u)
272
+
273
+/* Minimum packet length, when TX_DESC_FLAG_PADEN is set. */
274
+#define MIN_PACKET_LENGTH 64
275
+
276
+struct NPCM7xxEMCState {
277
+ /*< private >*/
278
+ SysBusDevice parent;
279
+ /*< public >*/
280
+
281
+ MemoryRegion iomem;
282
+
283
+ qemu_irq tx_irq;
284
+ qemu_irq rx_irq;
285
+
286
+ NICState *nic;
287
+ NICConf conf;
288
+
289
+ /* 0 or 1, for log messages */
290
+ uint8_t emc_num;
291
+
292
+ uint32_t regs[NPCM7XX_NUM_EMC_REGS];
293
+
294
+ /*
295
+ * tx is active. Set to true by TSDR and then switches off when out of
296
+ * descriptors. If the TXON bit in REG_MCMDR is off then this is off.
297
+ */
298
+ bool tx_active;
299
+
300
+ /*
301
+ * rx is active. Set to true by RSDR and then switches off when out of
302
+ * descriptors. If the RXON bit in REG_MCMDR is off then this is off.
303
+ */
304
+ bool rx_active;
305
+};
306
+
307
+typedef struct NPCM7xxEMCState NPCM7xxEMCState;
308
+
309
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
310
+#define NPCM7XX_EMC(obj) \
311
+ OBJECT_CHECK(NPCM7xxEMCState, (obj), TYPE_NPCM7XX_EMC)
312
+
313
+#endif /* NPCM7XX_EMC_H */
314
diff --git a/hw/net/npcm7xx_emc.c b/hw/net/npcm7xx_emc.c
315
new file mode 100644
316
index XXXXXXX..XXXXXXX
317
--- /dev/null
318
+++ b/hw/net/npcm7xx_emc.c
319
@@ -XXX,XX +XXX,XX @@
320
+/*
321
+ * Nuvoton NPCM7xx EMC Module
322
+ *
323
+ * Copyright 2020 Google LLC
324
+ *
325
+ * This program is free software; you can redistribute it and/or modify it
326
+ * under the terms of the GNU General Public License as published by the
327
+ * Free Software Foundation; either version 2 of the License, or
328
+ * (at your option) any later version.
329
+ *
330
+ * This program is distributed in the hope that it will be useful, but WITHOUT
331
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
332
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
333
+ * for more details.
334
+ *
335
+ * Unsupported/unimplemented features:
336
+ * - MCMDR.FDUP (full duplex) is ignored, half duplex is not supported
337
+ * - Only CAM0 is supported, CAM[1-15] are not
338
+ * - writes to CAMEN.[1-15] are ignored, these bits always read as zeroes
339
+ * - MII is not implemented, MIIDA.BUSY and MIID always return zero
340
+ * - MCMDR.LBK is not implemented
341
+ * - MCMDR.{OPMOD,ENSQE,AEP,ARP} are not supported
342
+ * - H/W FIFOs are not supported, MCMDR.FFTCR is ignored
343
+ * - MGSTA.SQE is not supported
344
+ * - pause and control frames are not implemented
345
+ * - MGSTA.CCNT is not supported
346
+ * - MPCNT, DMARFS are not implemented
347
+ */
348
+
349
+#include "qemu/osdep.h"
350
+
351
+/* For crc32 */
352
+#include <zlib.h>
353
+
354
+#include "qemu-common.h"
355
+#include "hw/irq.h"
356
+#include "hw/qdev-clock.h"
357
+#include "hw/qdev-properties.h"
358
+#include "hw/net/npcm7xx_emc.h"
359
+#include "net/eth.h"
360
+#include "migration/vmstate.h"
361
+#include "qemu/bitops.h"
362
+#include "qemu/error-report.h"
363
+#include "qemu/log.h"
364
+#include "qemu/module.h"
365
+#include "qemu/units.h"
366
+#include "sysemu/dma.h"
367
+#include "trace.h"
368
+
369
+#define CRC_LENGTH 4
370
+
371
+/*
372
+ * The maximum size of a (layer 2) ethernet frame as defined by 802.3.
373
+ * 1518 = 6(dest macaddr) + 6(src macaddr) + 2(proto) + 4(crc) + 1500(payload)
374
+ * This does not include an additional 4 for the vlan field (802.1q).
375
+ */
376
+#define MAX_ETH_FRAME_SIZE 1518
377
+
378
+static const char *emc_reg_name(int regno)
379
+{
380
+#define REG(name) case REG_ ## name: return #name;
381
+ switch (regno) {
382
+ REG(CAMCMR)
383
+ REG(CAMEN)
384
+ REG(TXDLSA)
385
+ REG(RXDLSA)
386
+ REG(MCMDR)
387
+ REG(MIID)
388
+ REG(MIIDA)
389
+ REG(FFTCR)
390
+ REG(TSDR)
391
+ REG(RSDR)
392
+ REG(DMARFC)
393
+ REG(MIEN)
394
+ REG(MISTA)
395
+ REG(MGSTA)
396
+ REG(MPCNT)
397
+ REG(MRPC)
398
+ REG(MRPCC)
399
+ REG(MREPC)
400
+ REG(DMARFS)
401
+ REG(CTXDSA)
402
+ REG(CTXBSA)
403
+ REG(CRXDSA)
404
+ REG(CRXBSA)
405
+ case REG_CAMM_BASE + 0: return "CAM0M";
406
+ case REG_CAML_BASE + 0: return "CAM0L";
407
+ case REG_CAMM_BASE + 2 ... REG_CAMML_LAST:
408
+ /* Only CAM0 is supported, fold the others into something simple. */
409
+ if (regno & 1) {
410
+ return "CAM<n>L";
411
+ } else {
412
+ return "CAM<n>M";
413
+ }
414
+ default: return "UNKNOWN";
415
+ }
416
+#undef REG
417
+}
418
+
419
+static void emc_reset(NPCM7xxEMCState *emc)
420
+{
421
+ trace_npcm7xx_emc_reset(emc->emc_num);
422
+
423
+ memset(&emc->regs[0], 0, sizeof(emc->regs));
424
+
425
+ /* These regs have non-zero reset values. */
426
+ emc->regs[REG_TXDLSA] = 0xfffffffc;
427
+ emc->regs[REG_RXDLSA] = 0xfffffffc;
428
+ emc->regs[REG_MIIDA] = 0x00900000;
429
+ emc->regs[REG_FFTCR] = 0x0101;
430
+ emc->regs[REG_DMARFC] = 0x0800;
431
+ emc->regs[REG_MPCNT] = 0x7fff;
432
+
433
+ emc->tx_active = false;
434
+ emc->rx_active = false;
435
+}
436
+
437
+static void npcm7xx_emc_reset(DeviceState *dev)
438
+{
439
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
440
+ emc_reset(emc);
441
+}
442
+
443
+static void emc_soft_reset(NPCM7xxEMCState *emc)
444
+{
445
+ /*
446
+ * The docs say at least MCMDR.{LBK,OPMOD} bits are not changed during a
447
+ * soft reset, but does not go into further detail. For now, KISS.
448
+ */
449
+ uint32_t mcmdr = emc->regs[REG_MCMDR];
450
+ emc_reset(emc);
451
+ emc->regs[REG_MCMDR] = mcmdr & (REG_MCMDR_LBK | REG_MCMDR_OPMOD);
452
+
453
+ qemu_set_irq(emc->tx_irq, 0);
454
+ qemu_set_irq(emc->rx_irq, 0);
455
+}
456
+
457
+static void emc_set_link(NetClientState *nc)
458
+{
459
+ /* Nothing to do yet. */
460
+}
461
+
462
+/* MISTA.TXINTR is the union of the individual bits with their enables. */
463
+static void emc_update_mista_txintr(NPCM7xxEMCState *emc)
464
+{
465
+ /* Only look at the bits we support. */
466
+ uint32_t mask = (REG_MISTA_TXBERR |
467
+ REG_MISTA_TDU |
468
+ REG_MISTA_TXCP);
469
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
470
+ emc->regs[REG_MISTA] |= REG_MISTA_TXINTR;
471
+ } else {
472
+ emc->regs[REG_MISTA] &= ~REG_MISTA_TXINTR;
473
+ }
474
+}
475
+
476
+/* MISTA.RXINTR is the union of the individual bits with their enables. */
477
+static void emc_update_mista_rxintr(NPCM7xxEMCState *emc)
478
+{
479
+ /* Only look at the bits we support. */
480
+ uint32_t mask = (REG_MISTA_RXBERR |
481
+ REG_MISTA_RDU |
482
+ REG_MISTA_RXGD);
483
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
484
+ emc->regs[REG_MISTA] |= REG_MISTA_RXINTR;
485
+ } else {
486
+ emc->regs[REG_MISTA] &= ~REG_MISTA_RXINTR;
487
+ }
488
+}
489
+
490
+/* N.B. emc_update_mista_txintr must have already been called. */
491
+static void emc_update_tx_irq(NPCM7xxEMCState *emc)
492
+{
493
+ int level = !!(emc->regs[REG_MISTA] &
494
+ emc->regs[REG_MIEN] &
495
+ REG_MISTA_TXINTR);
496
+ trace_npcm7xx_emc_update_tx_irq(level);
497
+ qemu_set_irq(emc->tx_irq, level);
498
+}
499
+
500
+/* N.B. emc_update_mista_rxintr must have already been called. */
501
+static void emc_update_rx_irq(NPCM7xxEMCState *emc)
502
+{
503
+ int level = !!(emc->regs[REG_MISTA] &
504
+ emc->regs[REG_MIEN] &
505
+ REG_MISTA_RXINTR);
506
+ trace_npcm7xx_emc_update_rx_irq(level);
507
+ qemu_set_irq(emc->rx_irq, level);
508
+}
509
+
510
+/* Update IRQ states due to changes in MIEN,MISTA. */
511
+static void emc_update_irq_from_reg_change(NPCM7xxEMCState *emc)
512
+{
513
+ emc_update_mista_txintr(emc);
514
+ emc_update_tx_irq(emc);
515
+
516
+ emc_update_mista_rxintr(emc);
517
+ emc_update_rx_irq(emc);
518
+}
519
+
520
+static int emc_read_tx_desc(dma_addr_t addr, NPCM7xxEMCTxDesc *desc)
521
+{
522
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
523
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
524
+ HWADDR_PRIx "\n", __func__, addr);
525
+ return -1;
526
+ }
527
+ desc->flags = le32_to_cpu(desc->flags);
528
+ desc->txbsa = le32_to_cpu(desc->txbsa);
529
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
530
+ desc->ntxdsa = le32_to_cpu(desc->ntxdsa);
531
+ return 0;
532
+}
533
+
534
+static int emc_write_tx_desc(const NPCM7xxEMCTxDesc *desc, dma_addr_t addr)
535
+{
536
+ NPCM7xxEMCTxDesc le_desc;
537
+
538
+ le_desc.flags = cpu_to_le32(desc->flags);
539
+ le_desc.txbsa = cpu_to_le32(desc->txbsa);
540
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
541
+ le_desc.ntxdsa = cpu_to_le32(desc->ntxdsa);
542
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
543
+ sizeof(le_desc))) {
544
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
545
+ HWADDR_PRIx "\n", __func__, addr);
546
+ return -1;
547
+ }
548
+ return 0;
549
+}
550
+
551
+static int emc_read_rx_desc(dma_addr_t addr, NPCM7xxEMCRxDesc *desc)
552
+{
553
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
554
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
555
+ HWADDR_PRIx "\n", __func__, addr);
556
+ return -1;
557
+ }
558
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
559
+ desc->rxbsa = le32_to_cpu(desc->rxbsa);
560
+ desc->reserved = le32_to_cpu(desc->reserved);
561
+ desc->nrxdsa = le32_to_cpu(desc->nrxdsa);
562
+ return 0;
563
+}
564
+
565
+static int emc_write_rx_desc(const NPCM7xxEMCRxDesc *desc, dma_addr_t addr)
566
+{
567
+ NPCM7xxEMCRxDesc le_desc;
568
+
569
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
570
+ le_desc.rxbsa = cpu_to_le32(desc->rxbsa);
571
+ le_desc.reserved = cpu_to_le32(desc->reserved);
572
+ le_desc.nrxdsa = cpu_to_le32(desc->nrxdsa);
573
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
574
+ sizeof(le_desc))) {
575
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
576
+ HWADDR_PRIx "\n", __func__, addr);
577
+ return -1;
578
+ }
579
+ return 0;
580
+}
581
+
582
+static void emc_set_mista(NPCM7xxEMCState *emc, uint32_t flags)
583
+{
584
+ trace_npcm7xx_emc_set_mista(flags);
585
+ emc->regs[REG_MISTA] |= flags;
586
+ if (extract32(flags, 16, 16)) {
587
+ emc_update_mista_txintr(emc);
588
+ }
589
+ if (extract32(flags, 0, 16)) {
590
+ emc_update_mista_rxintr(emc);
591
+ }
592
+}
593
+
594
+static void emc_halt_tx(NPCM7xxEMCState *emc, uint32_t mista_flag)
595
+{
596
+ emc->tx_active = false;
597
+ emc_set_mista(emc, mista_flag);
598
+}
599
+
600
+static void emc_halt_rx(NPCM7xxEMCState *emc, uint32_t mista_flag)
601
+{
602
+ emc->rx_active = false;
603
+ emc_set_mista(emc, mista_flag);
604
+}
605
+
606
+static void emc_set_next_tx_descriptor(NPCM7xxEMCState *emc,
607
+ const NPCM7xxEMCTxDesc *tx_desc,
608
+ uint32_t desc_addr)
609
+{
610
+ /* Update the current descriptor, if only to reset the owner flag. */
611
+ if (emc_write_tx_desc(tx_desc, desc_addr)) {
612
+ /*
613
+ * We just read it so this shouldn't generally happen.
614
+ * Error already reported.
615
+ */
616
+ emc_set_mista(emc, REG_MISTA_TXBERR);
617
+ }
618
+ emc->regs[REG_CTXDSA] = TX_DESC_NTXDSA(tx_desc->ntxdsa);
619
+}
620
+
621
+static void emc_set_next_rx_descriptor(NPCM7xxEMCState *emc,
622
+ const NPCM7xxEMCRxDesc *rx_desc,
623
+ uint32_t desc_addr)
624
+{
625
+ /* Update the current descriptor, if only to reset the owner flag. */
626
+ if (emc_write_rx_desc(rx_desc, desc_addr)) {
627
+ /*
628
+ * We just read it so this shouldn't generally happen.
629
+ * Error already reported.
630
+ */
631
+ emc_set_mista(emc, REG_MISTA_RXBERR);
632
+ }
633
+ emc->regs[REG_CRXDSA] = RX_DESC_NRXDSA(rx_desc->nrxdsa);
634
+}
635
+
636
+static void emc_try_send_next_packet(NPCM7xxEMCState *emc)
637
+{
638
+ /* Working buffer for sending out packets. Most packets fit in this. */
639
+#define TX_BUFFER_SIZE 2048
640
+ uint8_t tx_send_buffer[TX_BUFFER_SIZE];
641
+ uint32_t desc_addr = TX_DESC_NTXDSA(emc->regs[REG_CTXDSA]);
642
+ NPCM7xxEMCTxDesc tx_desc;
643
+ uint32_t next_buf_addr, length;
644
+ uint8_t *buf;
645
+ g_autofree uint8_t *malloced_buf = NULL;
646
+
647
+ if (emc_read_tx_desc(desc_addr, &tx_desc)) {
648
+ /* Error reading descriptor, already reported. */
649
+ emc_halt_tx(emc, REG_MISTA_TXBERR);
650
+ emc_update_tx_irq(emc);
651
+ return;
652
+ }
653
+
654
+ /* Nothing we can do if we don't own the descriptor. */
655
+ if (!(tx_desc.flags & TX_DESC_FLAG_OWNER_MASK)) {
656
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
657
+ emc_halt_tx(emc, REG_MISTA_TDU);
658
+ emc_update_tx_irq(emc);
659
+ return;
660
+ }
661
+
662
+ /* Give the descriptor back regardless of what happens. */
663
+ tx_desc.flags &= ~TX_DESC_FLAG_OWNER_MASK;
664
+ tx_desc.status_and_length &= 0xffff;
665
+
666
+ /*
667
+ * Despite the h/w documentation saying the tx buffer is word aligned,
668
+ * the linux driver does not word align the buffer. There is value in not
669
+ * aligning the buffer: See the description of NET_IP_ALIGN in linux
670
+ * kernel sources.
671
+ */
672
+ next_buf_addr = tx_desc.txbsa;
673
+ emc->regs[REG_CTXBSA] = next_buf_addr;
674
+ length = TX_DESC_PKT_LEN(tx_desc.status_and_length);
675
+ buf = &tx_send_buffer[0];
676
+
677
+ if (length > sizeof(tx_send_buffer)) {
678
+ malloced_buf = g_malloc(length);
679
+ buf = malloced_buf;
680
+ }
681
+
682
+ if (dma_memory_read(&address_space_memory, next_buf_addr, buf, length)) {
683
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read packet @ 0x%x\n",
684
+ __func__, next_buf_addr);
685
+ emc_set_mista(emc, REG_MISTA_TXBERR);
686
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
687
+ emc_update_tx_irq(emc);
688
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
689
+ return;
690
+ }
691
+
692
+ if ((tx_desc.flags & TX_DESC_FLAG_PADEN) && (length < MIN_PACKET_LENGTH)) {
693
+ memset(buf + length, 0, MIN_PACKET_LENGTH - length);
694
+ length = MIN_PACKET_LENGTH;
695
+ }
696
+
697
+ /* N.B. emc_receive can get called here. */
698
+ qemu_send_packet(qemu_get_queue(emc->nic), buf, length);
699
+ trace_npcm7xx_emc_sent_packet(length);
700
+
701
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXCP;
702
+ if (tx_desc.flags & TX_DESC_FLAG_INTEN) {
703
+ emc_set_mista(emc, REG_MISTA_TXCP);
704
+ }
705
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_TXINTR) {
706
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXINTR;
707
+ }
708
+
709
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
710
+ emc_update_tx_irq(emc);
711
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
712
+}
713
+
714
+static bool emc_can_receive(NetClientState *nc)
715
+{
716
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
717
+
718
+ bool can_receive = emc->rx_active;
719
+ trace_npcm7xx_emc_can_receive(can_receive);
720
+ return can_receive;
721
+}
722
+
723
+/* If result is false then *fail_reason contains the reason. */
724
+static bool emc_receive_filter1(NPCM7xxEMCState *emc, const uint8_t *buf,
725
+ size_t len, const char **fail_reason)
726
+{
727
+ eth_pkt_types_e pkt_type = get_eth_packet_type(PKT_GET_ETH_HDR(buf));
728
+
729
+ switch (pkt_type) {
730
+ case ETH_PKT_BCAST:
731
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
732
+ return true;
733
+ } else {
734
+ *fail_reason = "Broadcast packet disabled";
735
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_ABP);
736
+ }
737
+ case ETH_PKT_MCAST:
738
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
739
+ return true;
740
+ } else {
741
+ *fail_reason = "Multicast packet disabled";
742
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_AMP);
743
+ }
744
+ case ETH_PKT_UCAST: {
745
+ bool matches;
746
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_AUP) {
747
+ return true;
748
+ }
749
+ matches = ((emc->regs[REG_CAMCMR] & REG_CAMCMR_ECMP) &&
750
+ /* We only support one CAM register, CAM0. */
751
+ (emc->regs[REG_CAMEN] & (1 << 0)) &&
752
+ memcmp(buf, emc->conf.macaddr.a, ETH_ALEN) == 0);
753
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
754
+ *fail_reason = "MACADDR matched, comparison complemented";
755
+ return !matches;
756
+ } else {
757
+ *fail_reason = "MACADDR didn't match";
758
+ return matches;
759
+ }
760
+ }
761
+ default:
762
+ g_assert_not_reached();
763
+ }
764
+}
765
+
766
+static bool emc_receive_filter(NPCM7xxEMCState *emc, const uint8_t *buf,
767
+ size_t len)
768
+{
769
+ const char *fail_reason = NULL;
770
+ bool ok = emc_receive_filter1(emc, buf, len, &fail_reason);
771
+ if (!ok) {
772
+ trace_npcm7xx_emc_packet_filtered_out(fail_reason);
773
+ }
774
+ return ok;
775
+}
776
+
777
+static ssize_t emc_receive(NetClientState *nc, const uint8_t *buf, size_t len1)
778
+{
779
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
780
+ const uint32_t len = len1;
781
+ size_t max_frame_len;
782
+ bool long_frame;
783
+ uint32_t desc_addr;
784
+ NPCM7xxEMCRxDesc rx_desc;
785
+ uint32_t crc;
786
+ uint8_t *crc_ptr;
787
+ uint32_t buf_addr;
788
+
789
+ trace_npcm7xx_emc_receiving_packet(len);
790
+
791
+ if (!emc_can_receive(nc)) {
792
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Unexpected packet\n", __func__);
793
+ return -1;
794
+ }
795
+
796
+ if (len < ETH_HLEN ||
797
+ /* Defensive programming: drop unsupportable large packets. */
798
+ len > 0xffff - CRC_LENGTH) {
799
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Dropped frame of %u bytes\n",
800
+ __func__, len);
801
+ return len;
802
+ }
803
+
804
+ /*
805
+ * DENI is set if EMC received the Length/Type field of the incoming
806
+ * packet, so it will be set regardless of what happens next.
807
+ */
808
+ emc_set_mista(emc, REG_MISTA_DENI);
809
+
810
+ if (!emc_receive_filter(emc, buf, len)) {
811
+ emc_update_rx_irq(emc);
812
+ return len;
813
+ }
814
+
815
+ /* Huge frames (> DMARFC) are dropped. */
816
+ max_frame_len = REG_DMARFC_RXMS(emc->regs[REG_DMARFC]);
817
+ if (len + CRC_LENGTH > max_frame_len) {
818
+ trace_npcm7xx_emc_packet_dropped(len);
819
+ emc_set_mista(emc, REG_MISTA_DFOI);
820
+ emc_update_rx_irq(emc);
821
+ return len;
822
+ }
823
+
824
+ /*
825
+ * Long Frames (> MAX_ETH_FRAME_SIZE) are also dropped, unless MCMDR.ALP
826
+ * is set.
827
+ */
828
+ long_frame = false;
829
+ if (len + CRC_LENGTH > MAX_ETH_FRAME_SIZE) {
830
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_ALP) {
831
+ long_frame = true;
832
+ } else {
833
+ trace_npcm7xx_emc_packet_dropped(len);
834
+ emc_set_mista(emc, REG_MISTA_PTLE);
835
+ emc_update_rx_irq(emc);
836
+ return len;
837
+ }
838
+ }
839
+
840
+ desc_addr = RX_DESC_NRXDSA(emc->regs[REG_CRXDSA]);
841
+ if (emc_read_rx_desc(desc_addr, &rx_desc)) {
842
+ /* Error reading descriptor, already reported. */
843
+ emc_halt_rx(emc, REG_MISTA_RXBERR);
844
+ emc_update_rx_irq(emc);
845
+ return len;
846
+ }
847
+
848
+ /* Nothing we can do if we don't own the descriptor. */
849
+ if (!(rx_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK)) {
850
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
851
+ emc_halt_rx(emc, REG_MISTA_RDU);
852
+ emc_update_rx_irq(emc);
853
+ return len;
854
+ }
855
+
856
+ crc = 0;
857
+ crc_ptr = (uint8_t *) &crc;
858
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
859
+ crc = cpu_to_be32(crc32(~0, buf, len));
860
+ }
861
+
862
+ /* Give the descriptor back regardless of what happens. */
863
+ rx_desc.status_and_length &= ~RX_DESC_STATUS_OWNER_MASK;
864
+
865
+ buf_addr = rx_desc.rxbsa;
866
+ emc->regs[REG_CRXBSA] = buf_addr;
867
+ if (dma_memory_write(&address_space_memory, buf_addr, buf, len) ||
868
+ (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC) &&
869
+ dma_memory_write(&address_space_memory, buf_addr + len, crc_ptr,
870
+ 4))) {
871
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bus error writing packet\n",
872
+ __func__);
873
+ emc_set_mista(emc, REG_MISTA_RXBERR);
874
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
875
+ emc_update_rx_irq(emc);
876
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
877
+ return len;
878
+ }
879
+
880
+ trace_npcm7xx_emc_received_packet(len);
881
+
882
+ /* Note: We've already verified len+4 <= 0xffff. */
883
+ rx_desc.status_and_length = len;
884
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
885
+ rx_desc.status_and_length += 4;
886
+ }
887
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXGD;
888
+ emc_set_mista(emc, REG_MISTA_RXGD);
889
+
890
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_RXINTR) {
891
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXINTR;
892
+ }
893
+ if (long_frame) {
894
+ rx_desc.status_and_length |= RX_DESC_STATUS_PTLE;
895
+ }
896
+
897
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
898
+ emc_update_rx_irq(emc);
899
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
900
+ return len;
901
+}
902
+
903
+static void emc_try_receive_next_packet(NPCM7xxEMCState *emc)
904
+{
905
+ if (emc_can_receive(qemu_get_queue(emc->nic))) {
906
+ qemu_flush_queued_packets(qemu_get_queue(emc->nic));
907
+ }
908
+}
909
+
910
+static uint64_t npcm7xx_emc_read(void *opaque, hwaddr offset, unsigned size)
911
+{
912
+ NPCM7xxEMCState *emc = opaque;
913
+ uint32_t reg = offset / sizeof(uint32_t);
914
+ uint32_t result;
915
+
916
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
917
+ qemu_log_mask(LOG_GUEST_ERROR,
918
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
919
+ __func__, offset);
920
+ return 0;
921
+ }
922
+
923
+ switch (reg) {
924
+ case REG_MIID:
925
+ /*
926
+ * We don't implement MII. For determinism, always return zero as
927
+ * writes record the last value written for debugging purposes.
928
+ */
929
+ qemu_log_mask(LOG_UNIMP, "%s: Read of MIID, returning 0\n", __func__);
930
+ result = 0;
931
+ break;
932
+ case REG_TSDR:
933
+ case REG_RSDR:
934
+ qemu_log_mask(LOG_GUEST_ERROR,
935
+ "%s: Read of write-only reg, %s/%d\n",
936
+ __func__, emc_reg_name(reg), reg);
937
+ return 0;
938
+ default:
939
+ result = emc->regs[reg];
940
+ break;
941
+ }
942
+
943
+ trace_npcm7xx_emc_reg_read(emc->emc_num, result, emc_reg_name(reg), reg);
944
+ return result;
945
+}
946
+
947
+static void npcm7xx_emc_write(void *opaque, hwaddr offset,
948
+ uint64_t v, unsigned size)
949
+{
950
+ NPCM7xxEMCState *emc = opaque;
951
+ uint32_t reg = offset / sizeof(uint32_t);
952
+ uint32_t value = v;
953
+
954
+ g_assert(size == sizeof(uint32_t));
955
+
956
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
957
+ qemu_log_mask(LOG_GUEST_ERROR,
958
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
959
+ __func__, offset);
960
+ return;
961
+ }
962
+
963
+ trace_npcm7xx_emc_reg_write(emc->emc_num, emc_reg_name(reg), reg, value);
964
+
965
+ switch (reg) {
966
+ case REG_CAMCMR:
967
+ emc->regs[reg] = value;
968
+ break;
969
+ case REG_CAMEN:
970
+ /* Only CAM0 is supported, don't pretend otherwise. */
971
+ if (value & ~1) {
972
+ qemu_log_mask(LOG_GUEST_ERROR,
973
+ "%s: Only CAM0 is supported, cannot enable others"
974
+ ": 0x%x\n",
975
+ __func__, value);
976
+ }
977
+ emc->regs[reg] = value & 1;
978
+ break;
979
+ case REG_CAMM_BASE + 0:
980
+ emc->regs[reg] = value;
981
+ emc->conf.macaddr.a[0] = value >> 24;
982
+ emc->conf.macaddr.a[1] = value >> 16;
983
+ emc->conf.macaddr.a[2] = value >> 8;
984
+ emc->conf.macaddr.a[3] = value >> 0;
985
+ break;
986
+ case REG_CAML_BASE + 0:
987
+ emc->regs[reg] = value;
988
+ emc->conf.macaddr.a[4] = value >> 24;
989
+ emc->conf.macaddr.a[5] = value >> 16;
990
+ break;
991
+ case REG_MCMDR: {
992
+ uint32_t prev;
993
+ if (value & REG_MCMDR_SWR) {
994
+ emc_soft_reset(emc);
995
+ /* On h/w the reset happens over multiple cycles. For now KISS. */
996
+ break;
997
+ }
998
+ prev = emc->regs[reg];
999
+ emc->regs[reg] = value;
1000
+ /* Update tx state. */
1001
+ if (!(prev & REG_MCMDR_TXON) &&
1002
+ (value & REG_MCMDR_TXON)) {
1003
+ emc->regs[REG_CTXDSA] = emc->regs[REG_TXDLSA];
1004
+ /*
1005
+ * Linux kernel turns TX on with CPU still holding descriptor,
1006
+ * which suggests we should wait for a write to TSDR before trying
1007
+ * to send a packet: so we don't send one here.
1008
+ */
1009
+ } else if ((prev & REG_MCMDR_TXON) &&
1010
+ !(value & REG_MCMDR_TXON)) {
1011
+ emc->regs[REG_MGSTA] |= REG_MGSTA_TXHA;
1012
+ }
1013
+ if (!(value & REG_MCMDR_TXON)) {
1014
+ emc_halt_tx(emc, 0);
1015
+ }
1016
+ /* Update rx state. */
1017
+ if (!(prev & REG_MCMDR_RXON) &&
1018
+ (value & REG_MCMDR_RXON)) {
1019
+ emc->regs[REG_CRXDSA] = emc->regs[REG_RXDLSA];
1020
+ } else if ((prev & REG_MCMDR_RXON) &&
1021
+ !(value & REG_MCMDR_RXON)) {
1022
+ emc->regs[REG_MGSTA] |= REG_MGSTA_RXHA;
1023
+ }
1024
+ if (!(value & REG_MCMDR_RXON)) {
1025
+ emc_halt_rx(emc, 0);
1026
+ }
1027
+ break;
1028
+ }
1029
+ case REG_TXDLSA:
1030
+ case REG_RXDLSA:
1031
+ case REG_DMARFC:
1032
+ case REG_MIID:
1033
+ emc->regs[reg] = value;
1034
+ break;
1035
+ case REG_MIEN:
1036
+ emc->regs[reg] = value;
1037
+ emc_update_irq_from_reg_change(emc);
1038
+ break;
1039
+ case REG_MISTA:
1040
+ /* Clear the bits that have 1 in "value". */
1041
+ emc->regs[reg] &= ~value;
1042
+ emc_update_irq_from_reg_change(emc);
1043
+ break;
1044
+ case REG_MGSTA:
1045
+ /* Clear the bits that have 1 in "value". */
1046
+ emc->regs[reg] &= ~value;
1047
+ break;
1048
+ case REG_TSDR:
1049
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_TXON) {
1050
+ emc->tx_active = true;
1051
+ /* Keep trying to send packets until we run out. */
1052
+ while (emc->tx_active) {
1053
+ emc_try_send_next_packet(emc);
1054
+ }
1055
+ }
1056
+ break;
1057
+ case REG_RSDR:
1058
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_RXON) {
1059
+ emc->rx_active = true;
1060
+ emc_try_receive_next_packet(emc);
1061
+ }
1062
+ break;
1063
+ case REG_MIIDA:
1064
+ emc->regs[reg] = value & ~REG_MIIDA_BUSY;
1065
+ break;
1066
+ case REG_MRPC:
1067
+ case REG_MRPCC:
1068
+ case REG_MREPC:
1069
+ case REG_CTXDSA:
1070
+ case REG_CTXBSA:
1071
+ case REG_CRXDSA:
1072
+ case REG_CRXBSA:
1073
+ qemu_log_mask(LOG_GUEST_ERROR,
1074
+ "%s: Write to read-only reg %s/%d\n",
1075
+ __func__, emc_reg_name(reg), reg);
1076
+ break;
1077
+ default:
1078
+ qemu_log_mask(LOG_UNIMP, "%s: Write to unimplemented reg %s/%d\n",
1079
+ __func__, emc_reg_name(reg), reg);
1080
+ break;
1081
+ }
1082
+}
1083
+
1084
+static const struct MemoryRegionOps npcm7xx_emc_ops = {
1085
+ .read = npcm7xx_emc_read,
1086
+ .write = npcm7xx_emc_write,
1087
+ .endianness = DEVICE_LITTLE_ENDIAN,
1088
+ .valid = {
1089
+ .min_access_size = 4,
1090
+ .max_access_size = 4,
1091
+ .unaligned = false,
1092
+ },
1093
+};
1094
+
1095
+static void emc_cleanup(NetClientState *nc)
1096
+{
1097
+ /* Nothing to do yet. */
1098
+}
1099
+
1100
+static NetClientInfo net_npcm7xx_emc_info = {
1101
+ .type = NET_CLIENT_DRIVER_NIC,
1102
+ .size = sizeof(NICState),
1103
+ .can_receive = emc_can_receive,
1104
+ .receive = emc_receive,
1105
+ .cleanup = emc_cleanup,
1106
+ .link_status_changed = emc_set_link,
1107
+};
1108
+
1109
+static void npcm7xx_emc_realize(DeviceState *dev, Error **errp)
1110
+{
1111
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1112
+ SysBusDevice *sbd = SYS_BUS_DEVICE(emc);
1113
+
1114
+ memory_region_init_io(&emc->iomem, OBJECT(emc), &npcm7xx_emc_ops, emc,
1115
+ TYPE_NPCM7XX_EMC, 4 * KiB);
1116
+ sysbus_init_mmio(sbd, &emc->iomem);
1117
+ sysbus_init_irq(sbd, &emc->tx_irq);
1118
+ sysbus_init_irq(sbd, &emc->rx_irq);
1119
+
1120
+ qemu_macaddr_default_if_unset(&emc->conf.macaddr);
1121
+ emc->nic = qemu_new_nic(&net_npcm7xx_emc_info, &emc->conf,
1122
+ object_get_typename(OBJECT(dev)), dev->id, emc);
1123
+ qemu_format_nic_info_str(qemu_get_queue(emc->nic), emc->conf.macaddr.a);
1124
+}
1125
+
1126
+static void npcm7xx_emc_unrealize(DeviceState *dev)
1127
+{
1128
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1129
+
1130
+ qemu_del_nic(emc->nic);
1131
+}
1132
+
1133
+static const VMStateDescription vmstate_npcm7xx_emc = {
1134
+ .name = TYPE_NPCM7XX_EMC,
1135
+ .version_id = 0,
1136
+ .minimum_version_id = 0,
1137
+ .fields = (VMStateField[]) {
1138
+ VMSTATE_UINT8(emc_num, NPCM7xxEMCState),
1139
+ VMSTATE_UINT32_ARRAY(regs, NPCM7xxEMCState, NPCM7XX_NUM_EMC_REGS),
1140
+ VMSTATE_BOOL(tx_active, NPCM7xxEMCState),
1141
+ VMSTATE_BOOL(rx_active, NPCM7xxEMCState),
1142
+ VMSTATE_END_OF_LIST(),
1143
+ },
1144
+};
1145
+
1146
+static Property npcm7xx_emc_properties[] = {
1147
+ DEFINE_NIC_PROPERTIES(NPCM7xxEMCState, conf),
1148
+ DEFINE_PROP_END_OF_LIST(),
1149
+};
1150
+
1151
+static void npcm7xx_emc_class_init(ObjectClass *klass, void *data)
1152
+{
1153
+ DeviceClass *dc = DEVICE_CLASS(klass);
1154
+
1155
+ set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
1156
+ dc->desc = "NPCM7xx EMC Controller";
1157
+ dc->realize = npcm7xx_emc_realize;
1158
+ dc->unrealize = npcm7xx_emc_unrealize;
1159
+ dc->reset = npcm7xx_emc_reset;
1160
+ dc->vmsd = &vmstate_npcm7xx_emc;
1161
+ device_class_set_props(dc, npcm7xx_emc_properties);
1162
+}
1163
+
1164
+static const TypeInfo npcm7xx_emc_info = {
1165
+ .name = TYPE_NPCM7XX_EMC,
1166
+ .parent = TYPE_SYS_BUS_DEVICE,
1167
+ .instance_size = sizeof(NPCM7xxEMCState),
1168
+ .class_init = npcm7xx_emc_class_init,
1169
+};
1170
+
1171
+static void npcm7xx_emc_register_type(void)
1172
+{
1173
+ type_register_static(&npcm7xx_emc_info);
1174
+}
1175
+
1176
+type_init(npcm7xx_emc_register_type)
1177
diff --git a/hw/net/meson.build b/hw/net/meson.build
1178
index XXXXXXX..XXXXXXX 100644
1179
--- a/hw/net/meson.build
1180
+++ b/hw/net/meson.build
1181
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_I82596_COMMON', if_true: files('i82596.c'))
1182
softmmu_ss.add(when: 'CONFIG_SUNHME', if_true: files('sunhme.c'))
1183
softmmu_ss.add(when: 'CONFIG_FTGMAC100', if_true: files('ftgmac100.c'))
1184
softmmu_ss.add(when: 'CONFIG_SUNGEM', if_true: files('sungem.c'))
1185
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_emc.c'))
1186
1187
softmmu_ss.add(when: 'CONFIG_ETRAXFS', if_true: files('etraxfs_eth.c'))
1188
softmmu_ss.add(when: 'CONFIG_COLDFIRE', if_true: files('mcf_fec.c'))
1189
diff --git a/hw/net/trace-events b/hw/net/trace-events
1190
index XXXXXXX..XXXXXXX 100644
1191
--- a/hw/net/trace-events
1192
+++ b/hw/net/trace-events
1193
@@ -XXX,XX +XXX,XX @@ imx_fec_receive_last(int last) "rx frame flags 0x%04x"
1194
imx_enet_receive(size_t size) "len %zu"
1195
imx_enet_receive_len(uint64_t addr, int len) "rx_bd 0x%"PRIx64" length %d"
1196
imx_enet_receive_last(int last) "rx frame flags 0x%04x"
1197
+
1198
+# npcm7xx_emc.c
1199
+npcm7xx_emc_reset(int emc_num) "Resetting emc%d"
1200
+npcm7xx_emc_update_tx_irq(int level) "Setting tx irq to %d"
1201
+npcm7xx_emc_update_rx_irq(int level) "Setting rx irq to %d"
1202
+npcm7xx_emc_set_mista(uint32_t flags) "ORing 0x%x into MISTA"
1203
+npcm7xx_emc_cpu_owned_desc(uint32_t addr) "Can't process cpu-owned descriptor @0x%x"
1204
+npcm7xx_emc_sent_packet(uint32_t len) "Sent %u byte packet"
1205
+npcm7xx_emc_tx_done(uint32_t ctxdsa) "TX done, CTXDSA=0x%x"
1206
+npcm7xx_emc_can_receive(int can_receive) "Can receive: %d"
1207
+npcm7xx_emc_packet_filtered_out(const char* fail_reason) "Packet filtered out: %s"
1208
+npcm7xx_emc_packet_dropped(uint32_t len) "%u byte packet dropped"
1209
+npcm7xx_emc_receiving_packet(uint32_t len) "Receiving %u byte packet"
1210
+npcm7xx_emc_received_packet(uint32_t len) "Received %u byte packet"
1211
+npcm7xx_emc_rx_done(uint32_t crxdsa) "RX done, CRXDSA=0x%x"
1212
+npcm7xx_emc_reg_read(int emc_num, uint32_t result, const char *name, int regno) "emc%d: 0x%x = reg[%s/%d]"
1213
+npcm7xx_emc_reg_write(int emc_num, const char *name, int regno, uint32_t value) "emc%d: reg[%s/%d] = 0x%x"
1214
--
2.20.1