Mostly straightforward bugfixes. The new Xilinx devices are
arguably 'new feature', but they're fixing a regression where
our changes to PSCI in commit 3f37979bf mean that EL3 guest
code now needs to talk to a proper emulated power-controller
device to turn on secondary CPUs; and it's not yet rc1 and
they only affect the Xilinx board, so it seems OK to me.

thanks
-- PMM

The following changes since commit 1d60bb4b14601e38ed17384277aa4c30c57925d3:

  Merge tag 'pull-request-2022-03-15v2' of https://gitlab.com/thuth/qemu into staging (2022-03-16 10:43:58 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220318

for you to fetch changes up to 79d54c9eac04c554e3c081589542f801ace71797:

  util/osdep: Remove some early cruft (2022-03-18 11:32:13 +0000)

----------------------------------------------------------------
target-arm queue:
 * Fix sve2 ldnt1 and stnt1
 * Fix pauth_check_trap vs SEL2
 * Fix handling of LPAE block descriptors
 * hw/dma/xlnx_csu_dma: Set TYPE_XLNX_CSU_DMA class_size
 * hw/misc/npcm7xx_clk: Don't leak string in npcm7xx_clk_sel_init()
 * nsis installer: List emulators in alphabetical order
 * nsis installer: Suppress "ANSI targets are deprecated" warning
 * nsis installer: Fix mouse-over descriptions for emulators
 * hw/arm/virt: Fix gic-version=max when CONFIG_ARM_GICV3_TCG is unset
 * Improve M-profile vector table access logging
 * Xilinx ZynqMP: model CRF and APU control
 * Fix compile issues on modern Solaris

----------------------------------------------------------------
Andrew Deason (3):
      util/osdep: Avoid madvise proto on modern Solaris
      hw/i386/acpi-build: Avoid 'sun' identifier
      util/osdep: Remove some early cruft

Edgar E. Iglesias (6):
      hw/arm/xlnx-zynqmp: Add an unimplemented SERDES area
      target/arm: Make rvbar settable after realize
      hw/misc: Add a model of the Xilinx ZynqMP CRF
      hw/arm/xlnx-zynqmp: Connect the ZynqMP CRF
      hw/misc: Add a model of the Xilinx ZynqMP APU Control
      hw/arm/xlnx-zynqmp: Connect the ZynqMP APU Control

Eric Auger (2):
      hw/intc: Rename CONFIG_ARM_GIC_TCG into CONFIG_ARM_GICV3_TCG
      hw/arm/virt: Fix gic-version=max when CONFIG_ARM_GICV3_TCG is unset

Peter Maydell (8):
      target/arm: Fix handling of LPAE block descriptors
      hw/dma/xlnx_csu_dma: Set TYPE_XLNX_CSU_DMA class_size
      hw/misc/npcm7xx_clk: Don't leak string in npcm7xx_clk_sel_init()
      nsis installer: List emulators in alphabetical order
      nsis installer: Suppress "ANSI targets are deprecated" warning
      nsis installer: Fix mouse-over descriptions for emulators
      target/arm: Log M-profile vector table accesses
      target/arm: Log fault address for M-profile faults

Richard Henderson (2):
      target/arm: Fix sve2 ldnt1 and stnt1
      target/arm: Fix pauth_check_trap vs SEL2

 meson.build | 23 ++-
 include/hw/arm/xlnx-zynqmp.h | 4 +
 include/hw/misc/xlnx-zynqmp-apu-ctrl.h | 93 ++++++++++
 include/hw/misc/xlnx-zynqmp-crf.h | 211 ++++++++++++++
 include/qemu/osdep.h | 8 +
 target/arm/cpu.h | 3 +-
 target/arm/sve.decode | 5 +-
 hw/arm/virt.c | 7 +-
 hw/arm/xlnx-zynqmp.c | 46 +++++-
 hw/dma/xlnx_csu_dma.c | 1 +
 hw/i386/acpi-build.c | 4 +-
 hw/misc/npcm7xx_clk.c | 4 +-
 hw/misc/xlnx-zynqmp-apu-ctrl.c | 253 +++++++++++++++++
 hw/misc/xlnx-zynqmp-crf.c | 266 +++++++++++++++++++
 target/arm/cpu.c | 17 ++-
 target/arm/helper.c | 20 ++-
 target/arm/m_helper.c | 11 ++
 target/arm/pauth_helper.c | 2 +-
 target/arm/translate-sve.c | 51 ++++++-
 tests/tcg/aarch64/test-826.c | 50 +++++++
 util/osdep.c | 10 --
 hw/intc/Kconfig | 2 +-
 hw/intc/meson.build | 4 +-
 hw/misc/meson.build | 2 +
 qemu.nsi | 8 +-
 scripts/nsis.py | 17 ++-
 tests/tcg/aarch64/Makefile.target | 4 +
 tests/tcg/configure.sh | 4 +
 28 files changed, 1084 insertions(+), 46 deletions(-)
 create mode 100644 include/hw/misc/xlnx-zynqmp-apu-ctrl.h
 create mode 100644 include/hw/misc/xlnx-zynqmp-crf.h
 create mode 100644 hw/misc/xlnx-zynqmp-apu-ctrl.c
 create mode 100644 hw/misc/xlnx-zynqmp-crf.c
 create mode 100644 tests/tcg/aarch64/test-826.c
From: Richard Henderson <richard.henderson@linaro.org>

For both ldnt1 and stnt1, the meanings of Rn and Rm are different
from ld1 and st1: the vector and integer registers are reversed, and
the integer register 31 refers to XZR instead of SP.

Secondly, the 64-bit version of ldnt1 was being interpreted as
32-bit unpacked unscaled offset instead of 64-bit unscaled offset,
which discarded the upper 32 bits of the address coming from
the vector argument.

Thirdly, validate that the memory element size is in range for the
vector element size for ldnt1. For ld1, we do this via independent
decode patterns, but for ldnt1 we need to do it manually.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/826
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220308031655.240710-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve.decode | 5 ++-
 target/arm/translate-sve.c | 51 +++++++++++++++++++++++++++++--
 tests/tcg/aarch64/test-826.c | 50 ++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target | 4 +++
 tests/tcg/configure.sh | 4 +++
 5 files changed, 109 insertions(+), 5 deletions(-)
 create mode 100644 tests/tcg/aarch64/test-826.c

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ USDOT_zzzz 01000100 .. 0 ..... 011 110 ..... ..... @rda_rn_rm
36
### SVE2 Memory Gather Load Group
37
38
-# SVE2 64-bit gather non-temporal load
39
-# (scalar plus unpacked 32-bit unscaled offsets)
40
+# SVE2 64-bit gather non-temporal load (scalar plus 64-bit unscaled offsets)
41
LDNT1_zprz 1100010 msz:2 00 rm:5 1 u:1 0 pg:3 rn:5 rd:5 \
42
- &rprr_gather_load xs=0 esz=3 scale=0 ff=0
43
+ &rprr_gather_load xs=2 esz=3 scale=0 ff=0
44
45
# SVE2 32-bit gather non-temporal load (scalar plus 32-bit unscaled offsets)
46
LDNT1_zprz 1000010 msz:2 00 rm:5 10 u:1 pg:3 rn:5 rd:5 \
47
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-sve.c
50
+++ b/target/arm/translate-sve.c
51
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
52
53
static bool trans_LDNT1_zprz(DisasContext *s, arg_LD1_zprz *a)
54
{
55
+ gen_helper_gvec_mem_scatter *fn = NULL;
56
+ bool be = s->be_data == MO_BE;
57
+ bool mte = s->mte_active[0];
58
+
59
+ if (a->esz < a->msz + !a->u) {
60
+ return false;
61
+ }
62
if (!dc_isar_feature(aa64_sve2, s)) {
63
return false;
64
}
65
- return trans_LD1_zprz(s, a);
66
+ if (!sve_access_check(s)) {
67
+ return true;
68
+ }
69
+
70
+ switch (a->esz) {
71
+ case MO_32:
72
+ fn = gather_load_fn32[mte][be][0][0][a->u][a->msz];
73
+ break;
74
+ case MO_64:
75
+ fn = gather_load_fn64[mte][be][0][2][a->u][a->msz];
76
+ break;
77
+ }
78
+ assert(fn != NULL);
79
+
80
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0,
81
+ cpu_reg(s, a->rm), a->msz, false, fn);
82
+ return true;
23
}
83
}
24
84
25
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
85
/* Indexed by [mte][be][xs][msz]. */
26
+{
86
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
27
+ long off = neon_element_offset(reg, ele, size);
87
28
+
88
static bool trans_STNT1_zprz(DisasContext *s, arg_ST1_zprz *a)
29
+ switch (size) {
89
{
90
+ gen_helper_gvec_mem_scatter *fn;
91
+ bool be = s->be_data == MO_BE;
92
+ bool mte = s->mte_active[0];
93
+
94
+ if (a->esz < a->msz) {
95
+ return false;
96
+ }
97
if (!dc_isar_feature(aa64_sve2, s)) {
98
return false;
99
}
100
- return trans_ST1_zprz(s, a);
101
+ if (!sve_access_check(s)) {
102
+ return true;
103
+ }
104
+
105
+ switch (a->esz) {
30
+ case MO_32:
106
+ case MO_32:
31
+ tcg_gen_ld_i32(dest, cpu_env, off);
107
+ fn = scatter_store_fn32[mte][be][0][a->msz];
108
+ break;
109
+ case MO_64:
110
+ fn = scatter_store_fn64[mte][be][2][a->msz];
32
+ break;
111
+ break;
33
+ default:
112
+ default:
34
+ g_assert_not_reached();
113
+ g_assert_not_reached();
35
+ }
114
+ }
115
+
116
+ do_mem_zpz(s, a->rd, a->pg, a->rn, 0,
117
+ cpu_reg(s, a->rm), a->msz, true, fn);
118
+ return true;
119
}
120
121
/*
122
diff --git a/tests/tcg/aarch64/test-826.c b/tests/tcg/aarch64/test-826.c
123
new file mode 100644
124
index XXXXXXX..XXXXXXX
125
--- /dev/null
126
+++ b/tests/tcg/aarch64/test-826.c
127
@@ -XXX,XX +XXX,XX @@
128
+#include <sys/mman.h>
129
+#include <unistd.h>
130
+#include <signal.h>
131
+#include <stdlib.h>
132
+#include <stdio.h>
133
+#include <assert.h>
134
+
135
+static void *expected;
136
+
137
+void sigsegv(int sig, siginfo_t *info, void *vuc)
138
+{
139
+ ucontext_t *uc = vuc;
140
+
141
+ assert(info->si_addr == expected);
142
+ uc->uc_mcontext.pc += 4;
36
+}
143
+}
37
+
144
+
38
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
145
+int main()
39
+{
146
+{
40
+ long off = neon_element_offset(reg, ele, size);
147
+ struct sigaction sa = {
41
+
148
+ .sa_sigaction = sigsegv,
42
+ switch (size) {
149
+ .sa_flags = SA_SIGINFO
43
+ case MO_32:
150
+ };
44
+ tcg_gen_st_i32(src, cpu_env, off);
151
+
45
+ break;
152
+ void *page;
46
+ default:
153
+ long ofs;
47
+ g_assert_not_reached();
154
+
48
+ }
155
+ if (sigaction(SIGSEGV, &sa, NULL) < 0) {
156
+ perror("sigaction");
157
+ return EXIT_FAILURE;
158
+ }
159
+
160
+ page = mmap(0, getpagesize(), PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
161
+ if (page == MAP_FAILED) {
162
+ perror("mmap");
163
+ return EXIT_FAILURE;
164
+ }
165
+
166
+ ofs = 0x124;
167
+ expected = page + ofs;
168
+
169
+ asm("ptrue p0.d, vl1\n\t"
170
+ "dup z0.d, %0\n\t"
171
+ "ldnt1h {z1.d}, p0/z, [z0.d, %1]\n\t"
172
+ "dup z1.d, %1\n\t"
173
+ "ldnt1h {z0.d}, p0/z, [z1.d, %0]"
174
+ : : "r"(page), "r"(ofs) : "v0", "v1");
175
+
176
+ return EXIT_SUCCESS;
49
+}
177
+}
50
+
178
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
51
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
52
{
53
TCGv_ptr ret = tcg_temp_new_ptr();
54
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
55
index XXXXXXX..XXXXXXX 100644
179
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/translate-neon.c.inc
180
--- a/tests/tcg/aarch64/Makefile.target
57
+++ b/target/arm/translate-neon.c.inc
181
+++ b/tests/tcg/aarch64/Makefile.target
58
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
182
@@ -XXX,XX +XXX,XX @@ run-gdbstub-sve-ioctls: sve-ioctls
59
* early. Since Q is 0 there are always just two passes, so instead
183
60
* of a complicated loop over each pass we just unroll.
184
EXTRA_RUNS += run-gdbstub-sysregs run-gdbstub-sve-ioctls
61
*/
185
endif
62
- tmp = neon_load_reg(a->vn, 0);
186
+endif
63
- tmp2 = neon_load_reg(a->vn, 1);
187
64
+ tmp = tcg_temp_new_i32();
188
+ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_SVE2),)
65
+ tmp2 = tcg_temp_new_i32();
189
+AARCH64_TESTS += test-826
66
+ tmp3 = tcg_temp_new_i32();
190
+test-826: CFLAGS+=-march=armv8.1-a+sve2
67
+
191
endif
68
+ read_neon_element32(tmp, a->vn, 0, MO_32);
192
69
+ read_neon_element32(tmp2, a->vn, 1, MO_32);
193
TESTS += $(AARCH64_TESTS)
70
fn(tmp, tmp, tmp2);
194
diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
71
- tcg_temp_free_i32(tmp2);
195
index XXXXXXX..XXXXXXX 100755
72
196
--- a/tests/tcg/configure.sh
73
- tmp3 = neon_load_reg(a->vm, 0);
197
+++ b/tests/tcg/configure.sh
74
- tmp2 = neon_load_reg(a->vm, 1);
198
@@ -XXX,XX +XXX,XX @@ for target in $target_list; do
75
+ read_neon_element32(tmp3, a->vm, 0, MO_32);
199
-march=armv8.1-a+sve -o $TMPE $TMPC; then
76
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
200
echo "CROSS_CC_HAS_SVE=y" >> $config_target_mak
77
fn(tmp3, tmp3, tmp2);
201
fi
78
- tcg_temp_free_i32(tmp2);
202
+ if do_compiler "$target_compiler" $target_compiler_cflags \
79
203
+ -march=armv8.1-a+sve2 -o $TMPE $TMPC; then
80
- neon_store_reg(a->vd, 0, tmp);
204
+ echo "CROSS_CC_HAS_SVE2=y" >> $config_target_mak
81
- neon_store_reg(a->vd, 1, tmp3);
205
+ fi
82
+ write_neon_element32(tmp, a->vd, 0, MO_32);
206
if do_compiler "$target_compiler" $target_compiler_cflags \
83
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
207
-march=armv8.3-a -o $TMPE $TMPC; then
84
+
208
echo "CROSS_CC_HAS_ARMV8_3=y" >> $config_target_mak
85
+ tcg_temp_free_i32(tmp);
86
+ tcg_temp_free_i32(tmp2);
87
+ tcg_temp_free_i32(tmp3);
88
return true;
89
}
90
91
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
92
* 2-reg-and-shift operations, size < 3 case, where the
93
* helper needs to be passed cpu_env.
94
*/
95
- TCGv_i32 constimm;
96
+ TCGv_i32 constimm, tmp;
97
int pass;
98
99
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
100
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
101
* by immediate using the variable shift operations.
102
*/
103
constimm = tcg_const_i32(dup_const(a->size, a->shift));
104
+ tmp = tcg_temp_new_i32();
105
106
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
107
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
108
+ read_neon_element32(tmp, a->vm, pass, MO_32);
109
fn(tmp, cpu_env, tmp, constimm);
110
- neon_store_reg(a->vd, pass, tmp);
111
+ write_neon_element32(tmp, a->vd, pass, MO_32);
112
}
113
+ tcg_temp_free_i32(tmp);
114
tcg_temp_free_i32(constimm);
115
return true;
116
}
117
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
118
constimm = tcg_const_i64(-a->shift);
119
rm1 = tcg_temp_new_i64();
120
rm2 = tcg_temp_new_i64();
121
+ rd = tcg_temp_new_i32();
122
123
/* Load both inputs first to avoid potential overwrite if rm == rd */
124
neon_load_reg64(rm1, a->vm);
125
neon_load_reg64(rm2, a->vm + 1);
126
127
shiftfn(rm1, rm1, constimm);
128
- rd = tcg_temp_new_i32();
129
narrowfn(rd, cpu_env, rm1);
130
- neon_store_reg(a->vd, 0, rd);
131
+ write_neon_element32(rd, a->vd, 0, MO_32);
132
133
shiftfn(rm2, rm2, constimm);
134
- rd = tcg_temp_new_i32();
135
narrowfn(rd, cpu_env, rm2);
136
- neon_store_reg(a->vd, 1, rd);
137
+ write_neon_element32(rd, a->vd, 1, MO_32);
138
139
+ tcg_temp_free_i32(rd);
140
tcg_temp_free_i64(rm1);
141
tcg_temp_free_i64(rm2);
142
tcg_temp_free_i64(constimm);
143
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
144
constimm = tcg_const_i32(imm);
145
146
/* Load all inputs first to avoid potential overwrite */
147
- rm1 = neon_load_reg(a->vm, 0);
148
- rm2 = neon_load_reg(a->vm, 1);
149
- rm3 = neon_load_reg(a->vm + 1, 0);
150
- rm4 = neon_load_reg(a->vm + 1, 1);
151
+ rm1 = tcg_temp_new_i32();
152
+ rm2 = tcg_temp_new_i32();
153
+ rm3 = tcg_temp_new_i32();
154
+ rm4 = tcg_temp_new_i32();
155
+ read_neon_element32(rm1, a->vm, 0, MO_32);
156
+ read_neon_element32(rm2, a->vm, 1, MO_32);
157
+ read_neon_element32(rm3, a->vm, 2, MO_32);
158
+ read_neon_element32(rm4, a->vm, 3, MO_32);
159
rtmp = tcg_temp_new_i64();
160
161
shiftfn(rm1, rm1, constimm);
162
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
163
tcg_temp_free_i32(rm2);
164
165
narrowfn(rm1, cpu_env, rtmp);
166
- neon_store_reg(a->vd, 0, rm1);
167
+ write_neon_element32(rm1, a->vd, 0, MO_32);
168
+ tcg_temp_free_i32(rm1);
169
170
shiftfn(rm3, rm3, constimm);
171
shiftfn(rm4, rm4, constimm);
172
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
173
174
narrowfn(rm3, cpu_env, rtmp);
175
tcg_temp_free_i64(rtmp);
176
- neon_store_reg(a->vd, 1, rm3);
177
+ write_neon_element32(rm3, a->vd, 1, MO_32);
178
+ tcg_temp_free_i32(rm3);
179
return true;
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
183
widen_mask = dup_const(a->size + 1, widen_mask);
184
}
185
186
- rm0 = neon_load_reg(a->vm, 0);
187
- rm1 = neon_load_reg(a->vm, 1);
188
+ rm0 = tcg_temp_new_i32();
189
+ rm1 = tcg_temp_new_i32();
190
+ read_neon_element32(rm0, a->vm, 0, MO_32);
191
+ read_neon_element32(rm1, a->vm, 1, MO_32);
192
tmp = tcg_temp_new_i64();
193
194
widenfn(tmp, rm0);
195
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
196
if (src1_wide) {
197
neon_load_reg64(rn0_64, a->vn);
198
} else {
199
- TCGv_i32 tmp = neon_load_reg(a->vn, 0);
200
+ TCGv_i32 tmp = tcg_temp_new_i32();
201
+ read_neon_element32(tmp, a->vn, 0, MO_32);
202
widenfn(rn0_64, tmp);
203
tcg_temp_free_i32(tmp);
204
}
205
- rm = neon_load_reg(a->vm, 0);
206
+ rm = tcg_temp_new_i32();
207
+ read_neon_element32(rm, a->vm, 0, MO_32);
208
209
widenfn(rm_64, rm);
210
tcg_temp_free_i32(rm);
211
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
212
if (src1_wide) {
213
neon_load_reg64(rn1_64, a->vn + 1);
214
} else {
215
- TCGv_i32 tmp = neon_load_reg(a->vn, 1);
216
+ TCGv_i32 tmp = tcg_temp_new_i32();
217
+ read_neon_element32(tmp, a->vn, 1, MO_32);
218
widenfn(rn1_64, tmp);
219
tcg_temp_free_i32(tmp);
220
}
221
- rm = neon_load_reg(a->vm, 1);
222
+ rm = tcg_temp_new_i32();
223
+ read_neon_element32(rm, a->vm, 1, MO_32);
224
225
neon_store_reg64(rn0_64, a->vd);
226
227
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
228
229
narrowfn(rd1, rn_64);
230
231
- neon_store_reg(a->vd, 0, rd0);
232
- neon_store_reg(a->vd, 1, rd1);
233
+ write_neon_element32(rd0, a->vd, 0, MO_32);
234
+ write_neon_element32(rd1, a->vd, 1, MO_32);
235
236
+ tcg_temp_free_i32(rd0);
237
+ tcg_temp_free_i32(rd1);
238
tcg_temp_free_i64(rn_64);
239
tcg_temp_free_i64(rm_64);
240
241
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
242
rd0 = tcg_temp_new_i64();
243
rd1 = tcg_temp_new_i64();
244
245
- rn = neon_load_reg(a->vn, 0);
246
- rm = neon_load_reg(a->vm, 0);
247
+ rn = tcg_temp_new_i32();
248
+ rm = tcg_temp_new_i32();
249
+ read_neon_element32(rn, a->vn, 0, MO_32);
250
+ read_neon_element32(rm, a->vm, 0, MO_32);
251
opfn(rd0, rn, rm);
252
- tcg_temp_free_i32(rn);
253
- tcg_temp_free_i32(rm);
254
255
- rn = neon_load_reg(a->vn, 1);
256
- rm = neon_load_reg(a->vm, 1);
257
+ read_neon_element32(rn, a->vn, 1, MO_32);
258
+ read_neon_element32(rm, a->vm, 1, MO_32);
259
opfn(rd1, rn, rm);
260
tcg_temp_free_i32(rn);
261
tcg_temp_free_i32(rm);
262
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
263
264
static inline TCGv_i32 neon_get_scalar(int size, int reg)
265
{
266
- TCGv_i32 tmp;
267
- if (size == 1) {
268
- tmp = neon_load_reg(reg & 7, reg >> 4);
269
+ TCGv_i32 tmp = tcg_temp_new_i32();
270
+ if (size == MO_16) {
271
+ read_neon_element32(tmp, reg & 7, reg >> 4, MO_32);
272
if (reg & 8) {
273
gen_neon_dup_high16(tmp);
274
} else {
275
gen_neon_dup_low16(tmp);
276
}
277
} else {
278
- tmp = neon_load_reg(reg & 15, reg >> 4);
279
+ read_neon_element32(tmp, reg & 15, reg >> 4, MO_32);
280
}
281
return tmp;
282
}
283
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
284
* perform an accumulation operation of that result into the
285
* destination.
286
*/
287
- TCGv_i32 scalar;
288
+ TCGv_i32 scalar, tmp;
289
int pass;
290
291
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
292
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
293
}
294
295
scalar = neon_get_scalar(a->size, a->vm);
296
+ tmp = tcg_temp_new_i32();
297
298
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
299
- TCGv_i32 tmp = neon_load_reg(a->vn, pass);
300
+ read_neon_element32(tmp, a->vn, pass, MO_32);
301
opfn(tmp, tmp, scalar);
302
if (accfn) {
303
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
304
+ TCGv_i32 rd = tcg_temp_new_i32();
305
+ read_neon_element32(rd, a->vd, pass, MO_32);
306
accfn(tmp, rd, tmp);
307
tcg_temp_free_i32(rd);
308
}
309
- neon_store_reg(a->vd, pass, tmp);
310
+ write_neon_element32(tmp, a->vd, pass, MO_32);
311
}
312
+ tcg_temp_free_i32(tmp);
313
tcg_temp_free_i32(scalar);
314
return true;
315
}
316
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
317
* performs a kind of fused op-then-accumulate using a helper
318
* function that takes all of rd, rn and the scalar at once.
319
*/
320
- TCGv_i32 scalar;
321
+ TCGv_i32 scalar, rn, rd;
322
int pass;
323
324
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
325
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
326
}
327
328
scalar = neon_get_scalar(a->size, a->vm);
329
+ rn = tcg_temp_new_i32();
330
+ rd = tcg_temp_new_i32();
331
332
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
333
- TCGv_i32 rn = neon_load_reg(a->vn, pass);
334
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
335
+ read_neon_element32(rn, a->vn, pass, MO_32);
336
+ read_neon_element32(rd, a->vd, pass, MO_32);
337
opfn(rd, cpu_env, rn, scalar, rd);
338
- tcg_temp_free_i32(rn);
339
- neon_store_reg(a->vd, pass, rd);
340
+ write_neon_element32(rd, a->vd, pass, MO_32);
341
}
342
+ tcg_temp_free_i32(rn);
343
+ tcg_temp_free_i32(rd);
344
tcg_temp_free_i32(scalar);
345
346
return true;
347
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
348
scalar = neon_get_scalar(a->size, a->vm);
349
350
/* Load all inputs before writing any outputs, in case of overlap */
351
- rn = neon_load_reg(a->vn, 0);
352
+ rn = tcg_temp_new_i32();
353
+ read_neon_element32(rn, a->vn, 0, MO_32);
354
rn0_64 = tcg_temp_new_i64();
355
opfn(rn0_64, rn, scalar);
356
- tcg_temp_free_i32(rn);
357
358
- rn = neon_load_reg(a->vn, 1);
359
+ read_neon_element32(rn, a->vn, 1, MO_32);
360
rn1_64 = tcg_temp_new_i64();
361
opfn(rn1_64, rn, scalar);
362
tcg_temp_free_i32(rn);
363
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
364
return false;
365
}
366
n <<= 3;
367
+ tmp = tcg_temp_new_i32();
368
if (a->op) {
369
- tmp = neon_load_reg(a->vd, 0);
370
+ read_neon_element32(tmp, a->vd, 0, MO_32);
371
} else {
372
- tmp = tcg_temp_new_i32();
373
tcg_gen_movi_i32(tmp, 0);
374
}
375
- tmp2 = neon_load_reg(a->vm, 0);
376
+ tmp2 = tcg_temp_new_i32();
377
+ read_neon_element32(tmp2, a->vm, 0, MO_32);
378
ptr1 = vfp_reg_ptr(true, a->vn);
379
tmp4 = tcg_const_i32(n);
380
gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
381
- tcg_temp_free_i32(tmp);
382
+
383
if (a->op) {
384
- tmp = neon_load_reg(a->vd, 1);
385
+ read_neon_element32(tmp, a->vd, 1, MO_32);
386
} else {
387
- tmp = tcg_temp_new_i32();
388
tcg_gen_movi_i32(tmp, 0);
389
}
390
- tmp3 = neon_load_reg(a->vm, 1);
391
+ tmp3 = tcg_temp_new_i32();
392
+ read_neon_element32(tmp3, a->vm, 1, MO_32);
393
gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
394
+ tcg_temp_free_i32(tmp);
395
tcg_temp_free_i32(tmp4);
396
tcg_temp_free_ptr(ptr1);
397
- neon_store_reg(a->vd, 0, tmp2);
398
- neon_store_reg(a->vd, 1, tmp3);
399
- tcg_temp_free_i32(tmp);
400
+
401
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
402
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
403
+ tcg_temp_free_i32(tmp2);
404
+ tcg_temp_free_i32(tmp3);
405
return true;
406
}
407
408
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
409
static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
410
{
411
int pass, half;
412
+ TCGv_i32 tmp[2];
413
414
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
415
return false;
416
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
417
return true;
418
}
419
420
- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
421
- TCGv_i32 tmp[2];
422
+ tmp[0] = tcg_temp_new_i32();
423
+ tmp[1] = tcg_temp_new_i32();
424
425
+ for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
426
for (half = 0; half < 2; half++) {
427
- tmp[half] = neon_load_reg(a->vm, pass * 2 + half);
428
+ read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
429
switch (a->size) {
430
case 0:
431
tcg_gen_bswap32_i32(tmp[half], tmp[half]);
432
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
433
g_assert_not_reached();
434
}
435
}
436
- neon_store_reg(a->vd, pass * 2, tmp[1]);
437
- neon_store_reg(a->vd, pass * 2 + 1, tmp[0]);
438
+ write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
439
+ write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
440
}
441
+
442
+ tcg_temp_free_i32(tmp[0]);
443
+ tcg_temp_free_i32(tmp[1]);
444
return true;
445
}
446
447
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
448
rm0_64 = tcg_temp_new_i64();
449
rm1_64 = tcg_temp_new_i64();
450
rd_64 = tcg_temp_new_i64();
451
- tmp = neon_load_reg(a->vm, pass * 2);
452
+
453
+ tmp = tcg_temp_new_i32();
454
+ read_neon_element32(tmp, a->vm, pass * 2, MO_32);
455
widenfn(rm0_64, tmp);
456
- tcg_temp_free_i32(tmp);
457
- tmp = neon_load_reg(a->vm, pass * 2 + 1);
458
+ read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
459
widenfn(rm1_64, tmp);
460
tcg_temp_free_i32(tmp);
461
+
462
opfn(rd_64, rm0_64, rm1_64);
463
tcg_temp_free_i64(rm0_64);
464
tcg_temp_free_i64(rm1_64);
465
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
466
narrowfn(rd0, cpu_env, rm);
467
neon_load_reg64(rm, a->vm + 1);
468
narrowfn(rd1, cpu_env, rm);
469
- neon_store_reg(a->vd, 0, rd0);
470
- neon_store_reg(a->vd, 1, rd1);
471
+ write_neon_element32(rd0, a->vd, 0, MO_32);
472
+ write_neon_element32(rd1, a->vd, 1, MO_32);
473
+ tcg_temp_free_i32(rd0);
474
+ tcg_temp_free_i32(rd1);
475
tcg_temp_free_i64(rm);
476
return true;
477
}
478
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
479
}
480
481
rd = tcg_temp_new_i64();
482
+ rm0 = tcg_temp_new_i32();
483
+ rm1 = tcg_temp_new_i32();
484
485
- rm0 = neon_load_reg(a->vm, 0);
486
- rm1 = neon_load_reg(a->vm, 1);
487
+ read_neon_element32(rm0, a->vm, 0, MO_32);
488
+ read_neon_element32(rm1, a->vm, 1, MO_32);
489
490
widenfn(rd, rm0);
491
tcg_gen_shli_i64(rd, rd, 8 << a->size);
492
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
493
494
fpst = fpstatus_ptr(FPST_STD);
495
ahp = get_ahp_flag();
496
- tmp = neon_load_reg(a->vm, 0);
497
+ tmp = tcg_temp_new_i32();
498
+ read_neon_element32(tmp, a->vm, 0, MO_32);
499
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
500
- tmp2 = neon_load_reg(a->vm, 1);
501
+ tmp2 = tcg_temp_new_i32();
502
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
503
gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
504
tcg_gen_shli_i32(tmp2, tmp2, 16);
505
tcg_gen_or_i32(tmp2, tmp2, tmp);
506
- tcg_temp_free_i32(tmp);
507
- tmp = neon_load_reg(a->vm, 2);
508
+ read_neon_element32(tmp, a->vm, 2, MO_32);
509
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
510
- tmp3 = neon_load_reg(a->vm, 3);
511
- neon_store_reg(a->vd, 0, tmp2);
512
+ tmp3 = tcg_temp_new_i32();
513
+ read_neon_element32(tmp3, a->vm, 3, MO_32);
514
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
515
+ tcg_temp_free_i32(tmp2);
516
gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
517
tcg_gen_shli_i32(tmp3, tmp3, 16);
518
tcg_gen_or_i32(tmp3, tmp3, tmp);
519
- neon_store_reg(a->vd, 1, tmp3);
520
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
521
+ tcg_temp_free_i32(tmp3);
522
tcg_temp_free_i32(tmp);
523
tcg_temp_free_i32(ahp);
524
tcg_temp_free_ptr(fpst);
525
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
526
fpst = fpstatus_ptr(FPST_STD);
527
ahp = get_ahp_flag();
528
tmp3 = tcg_temp_new_i32();
529
- tmp = neon_load_reg(a->vm, 0);
530
- tmp2 = neon_load_reg(a->vm, 1);
531
+ tmp2 = tcg_temp_new_i32();
532
+ tmp = tcg_temp_new_i32();
533
+ read_neon_element32(tmp, a->vm, 0, MO_32);
534
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
535
tcg_gen_ext16u_i32(tmp3, tmp);
536
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
537
- neon_store_reg(a->vd, 0, tmp3);
538
+ write_neon_element32(tmp3, a->vd, 0, MO_32);
539
tcg_gen_shri_i32(tmp, tmp, 16);
540
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
541
- neon_store_reg(a->vd, 1, tmp);
542
- tmp3 = tcg_temp_new_i32();
543
+ write_neon_element32(tmp, a->vd, 1, MO_32);
544
+ tcg_temp_free_i32(tmp);
545
tcg_gen_ext16u_i32(tmp3, tmp2);
546
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
547
- neon_store_reg(a->vd, 2, tmp3);
548
+ write_neon_element32(tmp3, a->vd, 2, MO_32);
549
+ tcg_temp_free_i32(tmp3);
550
tcg_gen_shri_i32(tmp2, tmp2, 16);
551
gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
552
- neon_store_reg(a->vd, 3, tmp2);
553
+ write_neon_element32(tmp2, a->vd, 3, MO_32);
554
+ tcg_temp_free_i32(tmp2);
555
tcg_temp_free_i32(ahp);
556
tcg_temp_free_ptr(fpst);
557
558
@@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2)
559
560
static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
561
{
562
+ TCGv_i32 tmp;
563
int pass;
564
565
/* Handle a 2-reg-misc operation by iterating 32 bits at a time */
566
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
567
return true;
568
}
569
570
+ tmp = tcg_temp_new_i32();
571
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
572
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
573
+ read_neon_element32(tmp, a->vm, pass, MO_32);
574
fn(tmp, tmp);
575
- neon_store_reg(a->vd, pass, tmp);
576
+ write_neon_element32(tmp, a->vd, pass, MO_32);
577
}
578
+ tcg_temp_free_i32(tmp);
579
580
return true;
581
}
582
@@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a)
583
return true;
584
}
585
586
- if (a->size == 2) {
587
+ tmp = tcg_temp_new_i32();
588
+ tmp2 = tcg_temp_new_i32();
589
+ if (a->size == MO_32) {
590
for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) {
591
- tmp = neon_load_reg(a->vm, pass);
592
- tmp2 = neon_load_reg(a->vd, pass + 1);
593
- neon_store_reg(a->vm, pass, tmp2);
594
- neon_store_reg(a->vd, pass + 1, tmp);
595
+ read_neon_element32(tmp, a->vm, pass, MO_32);
596
+ read_neon_element32(tmp2, a->vd, pass + 1, MO_32);
597
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
598
+ write_neon_element32(tmp, a->vd, pass + 1, MO_32);
599
}
600
} else {
601
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
602
- tmp = neon_load_reg(a->vm, pass);
603
- tmp2 = neon_load_reg(a->vd, pass);
604
- if (a->size == 0) {
605
+ read_neon_element32(tmp, a->vm, pass, MO_32);
606
+ read_neon_element32(tmp2, a->vd, pass, MO_32);
607
+ if (a->size == MO_8) {
608
gen_neon_trn_u8(tmp, tmp2);
609
} else {
610
gen_neon_trn_u16(tmp, tmp2);
611
}
612
- neon_store_reg(a->vm, pass, tmp2);
613
- neon_store_reg(a->vd, pass, tmp);
614
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
615
+ write_neon_element32(tmp, a->vd, pass, MO_32);
616
}
617
}
618
+ tcg_temp_free_i32(tmp);
619
+ tcg_temp_free_i32(tmp2);
620
return true;
621
}
622
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

When arm_is_el2_enabled was introduced, we missed
updating pauth_check_trap.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/788
Fixes: e6ef0169264b ("target/arm: use arm_is_el2_enabled() where applicable")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220315021205.342768-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/pauth_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
18
--- a/target/arm/pauth_helper.c
18
+++ b/target/arm/translate.c
19
+++ b/target/arm/pauth_helper.c
19
@@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
20
@@ -XXX,XX +XXX,XX @@ static void QEMU_NORETURN pauth_trap(CPUARMState *env, int target_el,
20
long off = neon_element_offset(reg, ele, memop);
21
21
22
static void pauth_check_trap(CPUARMState *env, int el, uintptr_t ra)
22
switch (memop) {
23
+ case MO_SL:
24
+ tcg_gen_ld32s_i64(dest, cpu_env, off);
25
+ break;
26
+ case MO_UL:
27
+ tcg_gen_ld32u_i64(dest, cpu_env, off);
28
+ break;
29
case MO_Q:
30
tcg_gen_ld_i64(dest, cpu_env, off);
31
break;
32
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-neon.c.inc
35
+++ b/target/arm/translate-neon.c.inc
36
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
37
static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
38
NeonGenWidenFn *widenfn,
39
NeonGenTwo64OpFn *opfn,
40
- bool src1_wide)
41
+ int src1_mop, int src2_mop)
42
{
23
{
43
/* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
24
- if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
44
TCGv_i64 rn0_64, rn1_64, rm_64;
25
+ if (el < 2 && arm_is_el2_enabled(env)) {
45
- TCGv_i32 rm;
26
uint64_t hcr = arm_hcr_el2_eff(env);
46
27
bool trap = !(hcr & HCR_API);
47
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
28
if (el == 0) {
48
return false;
49
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
50
return false;
51
}
52
53
- if (!widenfn || !opfn) {
54
+ if (!opfn) {
55
/* size == 3 case, which is an entirely different insn group */
56
return false;
57
}
58
59
- if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
60
+ if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
61
return false;
62
}
63
64
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
65
rn1_64 = tcg_temp_new_i64();
66
rm_64 = tcg_temp_new_i64();
67
68
- if (src1_wide) {
69
- read_neon_element64(rn0_64, a->vn, 0, MO_64);
70
+ if (src1_mop >= 0) {
71
+ read_neon_element64(rn0_64, a->vn, 0, src1_mop);
72
} else {
73
TCGv_i32 tmp = tcg_temp_new_i32();
74
read_neon_element32(tmp, a->vn, 0, MO_32);
75
widenfn(rn0_64, tmp);
76
tcg_temp_free_i32(tmp);
77
}
78
- rm = tcg_temp_new_i32();
79
- read_neon_element32(rm, a->vm, 0, MO_32);
80
+ if (src2_mop >= 0) {
81
+ read_neon_element64(rm_64, a->vm, 0, src2_mop);
82
+ } else {
83
+ TCGv_i32 tmp = tcg_temp_new_i32();
84
+ read_neon_element32(tmp, a->vm, 0, MO_32);
85
+ widenfn(rm_64, tmp);
86
+ tcg_temp_free_i32(tmp);
87
+ }
88
89
- widenfn(rm_64, rm);
90
- tcg_temp_free_i32(rm);
91
opfn(rn0_64, rn0_64, rm_64);
92
93
/*
94
* Load second pass inputs before storing the first pass result, to
95
* avoid incorrect results if a narrow input overlaps with the result.
96
*/
97
- if (src1_wide) {
98
- read_neon_element64(rn1_64, a->vn, 1, MO_64);
99
+ if (src1_mop >= 0) {
100
+ read_neon_element64(rn1_64, a->vn, 1, src1_mop);
101
} else {
102
TCGv_i32 tmp = tcg_temp_new_i32();
103
read_neon_element32(tmp, a->vn, 1, MO_32);
104
widenfn(rn1_64, tmp);
105
tcg_temp_free_i32(tmp);
106
}
107
- rm = tcg_temp_new_i32();
108
- read_neon_element32(rm, a->vm, 1, MO_32);
109
+ if (src2_mop >= 0) {
110
+ read_neon_element64(rm_64, a->vm, 1, src2_mop);
111
+ } else {
112
+ TCGv_i32 tmp = tcg_temp_new_i32();
113
+ read_neon_element32(tmp, a->vm, 1, MO_32);
114
+ widenfn(rm_64, tmp);
115
+ tcg_temp_free_i32(tmp);
116
+ }
117
118
write_neon_element64(rn0_64, a->vd, 0, MO_64);
119
120
- widenfn(rm_64, rm);
121
- tcg_temp_free_i32(rm);
122
opfn(rn1_64, rn1_64, rm_64);
123
write_neon_element64(rn1_64, a->vd, 1, MO_64);
124
125
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
126
return true;
127
}
128
129
-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
130
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
131
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
132
{ \
133
static NeonGenWidenFn * const widenfn[] = { \
134
gen_helper_neon_widen_##S##8, \
135
gen_helper_neon_widen_##S##16, \
136
- tcg_gen_##EXT##_i32_i64, \
137
- NULL, \
138
+ NULL, NULL, \
139
}; \
140
static NeonGenTwo64OpFn * const addfn[] = { \
141
gen_helper_neon_##OP##l_u16, \
142
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
143
tcg_gen_##OP##_i64, \
144
NULL, \
145
}; \
146
- return do_prewiden_3d(s, a, widenfn[a->size], \
147
- addfn[a->size], SRC1WIDE); \
148
+ int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
149
+ return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
150
+ SRC1WIDE ? MO_Q : narrow_mop, \
151
+ narrow_mop); \
152
}
153
154
-DO_PREWIDEN(VADDL_S, s, ext, add, false)
155
-DO_PREWIDEN(VADDL_U, u, extu, add, false)
156
-DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
157
-DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
158
-DO_PREWIDEN(VADDW_S, s, ext, add, true)
159
-DO_PREWIDEN(VADDW_U, u, extu, add, true)
160
-DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
161
-DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
162
+DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN)
163
+DO_PREWIDEN(VADDL_U, u, add, false, 0)
164
+DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN)
165
+DO_PREWIDEN(VSUBL_U, u, sub, false, 0)
166
+DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN)
167
+DO_PREWIDEN(VADDW_U, u, add, true, 0)
168
+DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN)
169
+DO_PREWIDEN(VSUBW_U, u, sub, true, 0)
170
171
static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
172
NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
173
--
29
--
174
2.20.1
30
2.25.1
175
31
176
32
LPAE descriptors come in three forms:

 * table descriptors, giving the address of the next level page table
 * page descriptors, which occur only at level 3 and describe the
   mapping of one page (which might be 4K, 16K or 64K)
 * block descriptors, which occur at higher page table levels, and
   describe the mapping of huge pages

QEMU's page-table-walk code treats block and page entries
identically, simply ORing in a number of bits from the input virtual
address that depends on the level of the page table that we stopped
at; we depend on the previous masking of descaddr with descaddrmask
to have already cleared out the low bits of the descriptor word.

This is not quite right: the address field in a block descriptor is
smaller, and so there are bits which are valid address bits in a page
descriptor or a table descriptor but which are not supposed to be
part of the address in a block descriptor, and descaddrmask does not
clear them. We previously mostly got away with this because those
descriptor bits are RES0; however with FEAT_BBM (part of Armv8.4)
block descriptor bit 16 is defined to be the nT bit. No emulated
QEMU CPU has FEAT_BBM yet, but if the host CPU has it then we might
see it when using KVM or hvf.

Explicitly zero out all the descaddr bits we're about to OR vaddr
bits into.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/790
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220304165628.2345765-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

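To make the effect concrete, here is a simplified standalone sketch of the
address combination described above (the helper name and its isolation from
the real page-table walk are purely illustrative; the actual change is in
get_phys_addr_lpae() in the patch below):

    #include <stdint.h>

    /*
     * Sketch: combine the output address from a block/page descriptor with
     * the low bits of the input virtual address. Clearing descaddr below
     * the page/block size boundary first ensures that descriptor bits that
     * are not address bits (e.g. the FEAT_BBM nT bit in a block descriptor)
     * cannot leak into the output address.
     */
    uint64_t combine_out_addr(uint64_t descaddr, uint64_t address,
                              uint64_t page_size)
    {
        descaddr &= ~(page_size - 1);            /* drop non-address bits */
        descaddr |= (address & (page_size - 1)); /* OR in the vaddr offset */
        return descaddr;
    }
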
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
38
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
39
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
18
#endif
41
indexmask = indexmask_grainsize;
19
42
continue;
20
/* Shared logic between LORID and the rest of the LOR* registers.
43
}
21
- * Secure state has already been delt with.
44
- /* Block entry at level 1 or 2, or page entry at level 3.
22
+ * Secure state exclusion has already been dealt with.
45
+ /*
23
*/
46
+ * Block entry at level 1 or 2, or page entry at level 3.
24
-static CPAccessResult access_lor_ns(CPUARMState *env)
47
* These are basically the same thing, although the number
25
+static CPAccessResult access_lor_ns(CPUARMState *env,
48
- * of bits we pull in from the vaddr varies.
26
+ const ARMCPRegInfo *ri, bool isread)
49
+ * of bits we pull in from the vaddr varies. Note that although
27
{
50
+ * descaddrmask masks enough of the low bits of the descriptor
28
int el = arm_current_el(env);
51
+ * to give a correct page or table address, the address field
29
52
+ * in a block descriptor is smaller; so we need to explicitly
30
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env)
53
+ * clear the lower bits here before ORing in the low vaddr bits.
31
return CP_ACCESS_OK;
54
*/
32
}
55
page_size = (1ULL << ((stride * (4 - level)) + 3));
33
56
+ descaddr &= ~(page_size - 1);
34
-static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri,
57
descaddr |= (address & (page_size - 1));
35
- bool isread)
58
/* Extract attributes from the descriptor */
36
-{
59
attrs = extract64(descriptor, 2, 10)
37
- if (arm_is_secure_below_el3(env)) {
38
- /* Access ok in secure mode. */
39
- return CP_ACCESS_OK;
40
- }
41
- return access_lor_ns(env);
42
-}
43
-
44
static CPAccessResult access_lor_other(CPUARMState *env,
45
const ARMCPRegInfo *ri, bool isread)
46
{
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
48
/* Access denied in secure mode. */
49
return CP_ACCESS_TRAP;
50
}
51
- return access_lor_ns(env);
52
+ return access_lor_ns(env, ri, isread);
53
}
54
55
/*
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
57
.type = ARM_CP_CONST, .resetvalue = 0 },
58
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
59
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
60
- .access = PL1_R, .accessfn = access_lorid,
61
+ .access = PL1_R, .accessfn = access_lor_ns,
62
.type = ARM_CP_CONST, .resetvalue = 0 },
63
REGINFO_SENTINEL
64
};
65
--
60
--
66
2.20.1
61
2.25.1
67
68
In commit 00f05c02f9e7342f we gave the TYPE_XLNX_CSU_DMA object its
own class struct, but forgot to update the TypeInfo::class_size
accordingly. This meant that not enough memory was allocated for the
class struct, and the initialization of xcdc->read in the class init
function wrote off the end of the memory. Add the missing line.

Found by running 'check-qtest-aarch64' with a clang
address-sanitizer build, which complains:

  ==2542634==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61000000ab00 at pc 0x559a20aebc29 bp 0x7fff97df74d0 sp 0x7fff97df74c8
  WRITE of size 8 at 0x61000000ab00 thread T0
    #0 0x559a20aebc28 in xlnx_csu_dma_class_init /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../hw/dma/xlnx_csu_dma.c:722:16
    #1 0x559a21bf297c in type_initialize /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../qom/object.c:365:9
    #2 0x559a21bf3442 in object_class_foreach_tramp /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../qom/object.c:1070:5
    #3 0x7f09bcb641b7 in g_hash_table_foreach (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x401b7)
    #4 0x559a21bf3c27 in object_class_foreach /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../qom/object.c:1092:5
    #5 0x559a21bf3c27 in object_class_get_list /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../qom/object.c:1149:5
    #6 0x559a2081a2fd in select_machine /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../softmmu/vl.c:1661:24
    #7 0x559a2081a2fd in qemu_create_machine /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../softmmu/vl.c:2146:35
    #8 0x559a2081a2fd in qemu_init /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../softmmu/vl.c:3706:5
    #9 0x559a20720ed5 in main /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../softmmu/main.c:49:5
    #10 0x7f09baec00b2 in __libc_start_main /build/glibc-sMfBJT/glibc-2.31/csu/../csu/libc-start.c:308:16
    #11 0x559a2067673d in _start (/mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/qemu-system-aarch64+0xf4b73d)

  0x61000000ab00 is located 0 bytes to the right of 192-byte region [0x61000000aa40,0x61000000ab00)
  allocated by thread T0 here:
    #0 0x559a206eeff2 in calloc (/mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/qemu-system-aarch64+0xfc3ff2)
    #1 0x7f09bcb7bef0 in g_malloc0 (/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57ef0)
    #2 0x559a21bf3442 in object_class_foreach_tramp /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/san/../../qom/object.c:1070:5

Fixes: 00f05c02f9e7342f ("hw/dma/xlnx_csu_dma: Support starting a read transfer through a class method")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220308150207.2546272-1-peter.maydell@linaro.org
---
 hw/dma/xlnx_csu_dma.c | 1 +
 1 file changed, 1 insertion(+)

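For readers less familiar with QOM, the pattern at issue looks roughly like
the following generic sketch (the device and struct names here are
hypothetical, invented purely for illustration; only the .class_size line
corresponds to the actual fix):

    #include "qom/object.h"

    /* A hypothetical device type, showing why class_size matters. */
    typedef struct MyDeviceClass {
        ObjectClass parent_class;
        void (*read)(void *opaque);   /* per-class hook set in class_init */
    } MyDeviceClass;

    static void my_device_class_init(ObjectClass *klass, void *data)
    {
        MyDeviceClass *mdc = (MyDeviceClass *)klass;
        /* Writes past the allocation if .class_size was left unset below. */
        mdc->read = NULL;
    }

    static const TypeInfo my_device_info = {
        .name       = "my-device",
        .parent     = TYPE_OBJECT,
        .class_init = my_device_class_init,
        .class_size = sizeof(MyDeviceClass), /* the easily-forgotten line */
    };
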
diff --git a/hw/dma/xlnx_csu_dma.c b/hw/dma/xlnx_csu_dma.c
26
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
27
--- a/tests/qtest/npcm7xx_rng-test.c
44
--- a/hw/dma/xlnx_csu_dma.c
28
+++ b/tests/qtest/npcm7xx_rng-test.c
45
+++ b/hw/dma/xlnx_csu_dma.c
29
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
46
@@ -XXX,XX +XXX,XX @@ static const TypeInfo xlnx_csu_dma_info = {
30
47
.parent = TYPE_SYS_BUS_DEVICE,
31
qtest_add_func("npcm7xx_rng/enable_disable", test_enable_disable);
48
.instance_size = sizeof(XlnxCSUDMA),
32
qtest_add_func("npcm7xx_rng/rosel", test_rosel);
49
.class_init = xlnx_csu_dma_class_init,
33
- qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
50
+ .class_size = sizeof(XlnxCSUDMAClass),
34
- qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
51
.instance_init = xlnx_csu_dma_init,
35
- qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
52
.interfaces = (InterfaceInfo[]) {
36
- qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
53
{ TYPE_STREAM_SINK },
37
+ /*
38
+ * These tests fail intermittently; only run them on explicit
39
+ * request until we figure out why.
40
+ */
41
+ if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
42
+ qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
43
+ qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
44
+ qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
45
+ qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
46
+ }
47
48
qtest_start("-machine npcm750-evb");
49
ret = g_test_run();
50
--
54
--
51
2.20.1
55
2.25.1
52
56
53
57
In npcm7xx_clk_sel_init() we allocate a string with g_strdup_printf().
Use g_autofree so we free it rather than leaking it.

(Detected with the clang leak sanitizer.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220308170302.2582820-1-peter.maydell@linaro.org
---
 hw/misc/npcm7xx_clk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

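As background, g_autofree uses the compiler's cleanup attribute so the
allocation is released automatically when the variable goes out of scope;
a minimal illustrative sketch (not the device model code itself):

    #include <glib.h>

    void make_clock_name(int i)
    {
        /* Freed automatically when 's' goes out of scope, even on early return. */
        g_autofree char *s = g_strdup_printf("clock-in[%d]", i);
        g_print("%s\n", s);
    }
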
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/vec_helper.c
16
--- a/hw/misc/npcm7xx_clk.c
22
+++ b/target/arm/vec_helper.c
17
+++ b/hw/misc/npcm7xx_clk.c
23
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
18
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_clk_sel_init(Object *obj)
24
intptr_t index = simd_data(desc);
19
NPCM7xxClockSELState *sel = NPCM7XX_CLOCK_SEL(obj);
25
uint32_t *d = vd;
20
26
int8_t *n = vn;
21
for (i = 0; i < NPCM7XX_CLK_SEL_MAX_INPUT; ++i) {
27
- int8_t *m_indexed = (int8_t *)vm + index * 4;
22
- sel->clock_in[i] = qdev_init_clock_in(DEVICE(sel),
28
+ int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
23
- g_strdup_printf("clock-in[%d]", i),
29
24
+ g_autofree char *s = g_strdup_printf("clock-in[%d]", i);
30
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
25
+ sel->clock_in[i] = qdev_init_clock_in(DEVICE(sel), s,
31
* Otherwise opr_sz is a multiple of 16.
26
npcm7xx_clk_update_sel_cb, sel, ClockUpdate);
32
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
27
}
33
intptr_t index = simd_data(desc);
28
sel->clock_out = qdev_init_clock_out(DEVICE(sel), "clock-out");
34
uint32_t *d = vd;
35
uint8_t *n = vn;
36
- uint8_t *m_indexed = (uint8_t *)vm + index * 4;
37
+ uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
38
39
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
40
* Otherwise opr_sz is a multiple of 16.
41
--
29
--
42
2.20.1
30
2.25.1
43
31
44
32
diff view generated by jsdifflib
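
The dot-product fix above hinges on the H4() host-order adjustment. As a
rough, self-contained sketch (paraphrasing the macros QEMU defines under
HOST_WORDS_BIGENDIAN in this era; dot_group() is a purely illustrative
helper, not a real QEMU function), the idea is that on a big-endian host
the sub-elements of each 64-bit vector word are laid out in reverse, so an
element index must be XOR-adjusted before it is turned into a byte offset:

#include <stdint.h>

/*
 * Sketch of the host-order macros: identity on little-endian hosts, an XOR
 * on big-endian ones so that element N maps to the right host byte offset.
 */
#ifdef HOST_WORDS_BIGENDIAN
#define H1(x)  ((x) ^ 7)   /* 8-bit elements  */
#define H2(x)  ((x) ^ 3)   /* 16-bit elements */
#define H4(x)  ((x) ^ 1)   /* 32-bit elements */
#else
#define H1(x)  (x)
#define H2(x)  (x)
#define H4(x)  (x)
#endif

/* Illustrative only: point at one 32-bit group of four int8_t lanes in vm. */
static inline int8_t *dot_group(void *vm, intptr_t index)
{
    /* Without H4(), a big-endian host would pick the wrong 32-bit group. */
    return (int8_t *)vm + H4(index) * 4;
}

On a little-endian host H4() is a no-op, which is why the bug was invisible
there.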
1
Sphinx 3.2 is pickier than earlier versions about the option:: markup,
1
We currently list the emulators in the Windows installer's dialog
2
and complains about our usage in qemu-option-trace.rst:
2
in an essentially random order (it's whatever order glob.glob() returns
3
3
them in, which is filesystem-implementation-dependent). Add a
4
../../docs/qemu-option-trace.rst.inc:4:Malformed option description
4
call to sorted() so they appear in alphabetical order.
5
'[enable=]PATTERN', should look like "opt", "-opt args", "--opt args",
6
"/opt args" or "+opt args"
7
8
In this file, we're really trying to document the different parts of
9
the top-level --trace option, which qemu-nbd.rst and qemu-img.rst
10
have already introduced with an option:: markup. So it's not right
11
to use option:: here anyway. Switch to a different markup
12
(definition lists) which gives about the same formatted output.
13
14
(Unlike option::, this markup doesn't produce index entries; but
15
at the moment we don't do anything much with indexes anyway, and
16
in any case I think it doesn't make much sense to have individual
17
index entries for the sub-parts of the --trace option.)
18
5
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Stefan Weil <sw@weilnetz.de>
22
Message-id: 20201030174700.7204-3-peter.maydell@linaro.org
9
Reviewed-by: John Snow <jsnow@redhat.com>
10
Message-id: 20220305105743.2384766-2-peter.maydell@linaro.org
23
---
11
---
24
docs/qemu-option-trace.rst.inc | 6 +++---
12
scripts/nsis.py | 4 ++--
25
1 file changed, 3 insertions(+), 3 deletions(-)
13
1 file changed, 2 insertions(+), 2 deletions(-)
26
14
27
diff --git a/docs/qemu-option-trace.rst.inc b/docs/qemu-option-trace.rst.inc
15
diff --git a/scripts/nsis.py b/scripts/nsis.py
28
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
29
--- a/docs/qemu-option-trace.rst.inc
17
--- a/scripts/nsis.py
30
+++ b/docs/qemu-option-trace.rst.inc
18
+++ b/scripts/nsis.py
31
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ def main():
32
20
with open(
33
Specify tracing options.
21
os.path.join(destdir + args.prefix, "system-emulations.nsh"), "w"
34
22
) as nsh:
35
-.. option:: [enable=]PATTERN
23
- for exe in glob.glob(
36
+``[enable=]PATTERN``
24
+ for exe in sorted(glob.glob(
37
25
os.path.join(destdir + args.prefix, "qemu-system-*.exe")
38
Immediately enable events matching *PATTERN*
26
- ):
39
(either event name or a globbing pattern). This option is only
27
+ )):
40
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
28
exe = os.path.basename(exe)
41
29
arch = exe[12:-4]
42
Use :option:`-trace help` to print a list of names of trace points.
30
nsh.write(
43
44
-.. option:: events=FILE
45
+``events=FILE``
46
47
Immediately enable events listed in *FILE*.
48
The file must contain one event name (as listed in the ``trace-events-all``
49
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
50
available if QEMU has been compiled with the ``simple``, ``log`` or
51
``ftrace`` tracing backend.
52
53
-.. option:: file=FILE
54
+``file=FILE``
55
56
Log output traces to *FILE*.
57
This option is only available if QEMU has been compiled with
58
--
31
--
59
2.20.1
32
2.25.1
60
33
61
34
diff view generated by jsdifflib
1
The kerneldoc script currently emits Sphinx markup for a macro with
1
When we build our Windows installer, it emits the warning:
2
arguments that uses the c:function directive. This is correct for
3
Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow
4
documentation of macros with arguments and c:function is not picky
5
about the syntax of what it is passed. However, in Sphinx 3 the
6
c:macro directive was enhanced to support macros with arguments,
7
and c:function was made more picky about what syntax it accepted.
8
2
9
When kerneldoc is told that it needs to produce output for Sphinx
3
warning 7998: ANSI targets are deprecated
10
3 or later, make it emit c:function only for functions and c:macro
11
for macros with arguments. We assume that anything with a return
12
type is a function and anything without is a macro.
13
4
14
This fixes the Sphinx error:
5
Fix this by making our installer a Unicode installer instead. These
6
won't work on Win95/98/ME, but we already do not support those.
15
7
16
/home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator
8
See
17
If declarator-id with parameters (e.g., 'void f(int arg)'):
9
https://nsis.sourceforge.io/Docs/Chapter4.html#aunicodetarget
18
Invalid C declaration: Expected identifier in nested name. [error at 25]
10
for the documentation of the Unicode directive.
19
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
20
-------------------------^
21
If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'):
22
Error in declarator or parameters
23
Invalid C declaration: Expecting "(" in parameters. [error at 39]
24
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
25
---------------------------------------^
26
11
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
29
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
14
Reviewed-by: Stefan Weil <sw@weilnetz.de>
30
Message-id: 20201030174700.7204-2-peter.maydell@linaro.org
15
Message-id: 20220305105743.2384766-3-peter.maydell@linaro.org
31
---
16
---
32
scripts/kernel-doc | 18 +++++++++++++++++-
17
qemu.nsi | 3 +++
33
1 file changed, 17 insertions(+), 1 deletion(-)
18
1 file changed, 3 insertions(+)
34
19
35
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
20
diff --git a/qemu.nsi b/qemu.nsi
36
index XXXXXXX..XXXXXXX 100755
21
index XXXXXXX..XXXXXXX 100644
37
--- a/scripts/kernel-doc
22
--- a/qemu.nsi
38
+++ b/scripts/kernel-doc
23
+++ b/qemu.nsi
39
@@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) {
24
@@ -XXX,XX +XXX,XX @@
40
    output_highlight_rst($args{'purpose'});
25
!define OUTFILE "qemu-setup.exe"
41
    $start = "\n\n**Syntax**\n\n ``";
26
!endif
42
} else {
27
43
-    print ".. c:function:: ";
28
+; Build a unicode installer
44
+ if ((split(/\./, $sphinx_version))[0] >= 3) {
29
+Unicode true
45
+ # Sphinx 3 and later distinguish macros and functions and
30
+
46
+ # complain if you use c:function with something that's not
31
; Use maximum compression.
47
+ # syntactically valid as a function declaration.
32
SetCompressor /SOLID lzma
48
+ # We assume that anything with a return type is a function
33
49
+ # and anything without is a macro.
50
+ if ($args{'functiontype'} ne "") {
51
+ print ".. c:function:: ";
52
+ } else {
53
+ print ".. c:macro:: ";
54
+ }
55
+ } else {
56
+ # Older Sphinx don't support documenting macros that take
57
+ # arguments with c:macro, and don't complain about the use
58
+ # of c:function for this.
59
+ print ".. c:function:: ";
60
+ }
61
}
62
if ($args{'functiontype'} ne "") {
63
    $start .= $args{'functiontype'} . " " . $args{'function'} . " (";
64
--
34
--
65
2.20.1
35
2.25.1
66
36
67
37
diff view generated by jsdifflib
1
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of
1
We use the nsis.py script to write out an installer script Section
2
libraries for gio-2.0 which don't actually work when compiling
2
for each emulator executable, so the exact set of Sections depends on
3
statically. (Specifically, the returned library string includes
3
which executables were built. However the part of qemu.nsi which
4
-lmount, but not -lblkid which -lmount depends upon, so linking
4
specifies mouse-over descriptions for each Section still has a
5
fails due to missing symbols.)
5
hard-coded and very outdated list (with just i386 and alpha). This
6
causes two problems. Firstly, if you build the installer for a
7
configuration where you didn't build the i386 binaries you get
8
warnings like this:
9
warning 6000: unknown variable/constant "{Section_i386}" detected, ignoring (macro:_==:1)
10
warning 6000: unknown variable/constant "{Section_i386w}" detected, ignoring (macro:_==:1)
11
(this happens in our gitlab CI jobs, for instance).
12
Secondly, most of the emulators in the generated installer don't have
13
any mouseover text.
6
14
7
Check that the libraries work, and don't enable gio if they don't,
15
Make nsis.py generate a second output file which has the necessary
8
in the same way we do for gnutls.
16
MUI_DESCRIPTION_TEXT lines for each Section it creates, so we can
17
include that at the right point in qemu.nsi to set the mouse-over
18
text.
9
19
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
21
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
22
Reviewed-by: John Snow <jsnow@redhat.com>
13
Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
23
Message-id: 20220305105743.2384766-4-peter.maydell@linaro.org
14
---
24
---
15
configure | 10 +++++++++-
25
qemu.nsi | 5 +----
16
1 file changed, 9 insertions(+), 1 deletion(-)
26
scripts/nsis.py | 13 ++++++++++++-
27
2 files changed, 13 insertions(+), 5 deletions(-)
17
28
18
diff --git a/configure b/configure
29
diff --git a/qemu.nsi b/qemu.nsi
19
index XXXXXXX..XXXXXXX 100755
30
index XXXXXXX..XXXXXXX 100644
20
--- a/configure
31
--- a/qemu.nsi
21
+++ b/configure
32
+++ b/qemu.nsi
22
@@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then
33
@@ -XXX,XX +XXX,XX @@ SectionEnd
23
fi
34
; Descriptions (mouse-over).
24
35
!insertmacro MUI_FUNCTION_DESCRIPTION_BEGIN
25
if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then
36
!insertmacro MUI_DESCRIPTION_TEXT ${SectionSystem} "System emulation."
26
- gio=yes
37
- !insertmacro MUI_DESCRIPTION_TEXT ${Section_alpha} "Alpha system emulation."
27
gio_cflags=$($pkg_config --cflags gio-2.0)
38
- !insertmacro MUI_DESCRIPTION_TEXT ${Section_alphaw} "Alpha system emulation (GUI)."
28
gio_libs=$($pkg_config --libs gio-2.0)
39
- !insertmacro MUI_DESCRIPTION_TEXT ${Section_i386} "PC i386 system emulation."
29
gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0)
40
- !insertmacro MUI_DESCRIPTION_TEXT ${Section_i386w} "PC i386 system emulation (GUI)."
30
if [ ! -x "$gdbus_codegen" ]; then
41
+!include "${BINDIR}\system-mui-text.nsh"
31
gdbus_codegen=
42
!insertmacro MUI_DESCRIPTION_TEXT ${SectionTools} "Tools."
32
fi
43
!ifdef DLLDIR
33
+ # Check that the libraries actually work -- Ubuntu 18.04 ships
44
!insertmacro MUI_DESCRIPTION_TEXT ${SectionDll} "Runtime Libraries (DLL)."
34
+ # with pkg-config --static --libs data for gio-2.0 that is missing
45
diff --git a/scripts/nsis.py b/scripts/nsis.py
35
+ # -lblkid and will give a link error.
46
index XXXXXXX..XXXXXXX 100644
36
+ write_c_skeleton
47
--- a/scripts/nsis.py
37
+ if compile_prog "" "gio_libs" ; then
48
+++ b/scripts/nsis.py
38
+ gio=yes
49
@@ -XXX,XX +XXX,XX @@ def main():
39
+ else
50
subprocess.run(["make", "install", "DESTDIR=" + destdir + os.path.sep])
40
+ gio=no
51
with open(
41
+ fi
52
os.path.join(destdir + args.prefix, "system-emulations.nsh"), "w"
42
else
53
- ) as nsh:
43
gio=no
54
+ ) as nsh, open(
44
fi
55
+ os.path.join(destdir + args.prefix, "system-mui-text.nsh"), "w"
56
+ ) as muinsh:
57
for exe in sorted(glob.glob(
58
os.path.join(destdir + args.prefix, "qemu-system-*.exe")
59
)):
60
@@ -XXX,XX +XXX,XX @@ def main():
61
arch, exe
62
)
63
)
64
+ if arch.endswith('w'):
65
+ desc = arch[:-1] + " emulation (GUI)."
66
+ else:
67
+ desc = arch + " emulation."
68
+
69
+ muinsh.write(
70
+ """
71
+ !insertmacro MUI_DESCRIPTION_TEXT ${{Section_{0}}} "{1}"
72
+ """.format(arch, desc))
73
74
for exe in glob.glob(os.path.join(destdir + args.prefix, "*.exe")):
75
signcode(exe)
45
--
76
--
46
2.20.1
77
2.25.1
47
78
48
79
diff view generated by jsdifflib
1
From: AlexChen <alex.chen@huawei.com>
1
From: Eric Auger <eric.auger@redhat.com>
2
2
3
In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before
3
CONFIG_ARM_GIC_TCG actually guards the compilation of TCG GICv3
4
being checked for validity, which may lead to a NULL pointer dereference.
4
specific files. So let's rename it to CONFIG_ARM_GICV3_TCG.
5
So move the assignment to surface after checking that omap_lcd is valid
6
and move surface_bits_per_pixel(surface) to after the surface assignment.
7
5
8
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Eric Auger <eric.auger@redhat.com>
9
Signed-off-by: AlexChen <alex.chen@huawei.com>
7
Reviewed-by: Andrew Jones <drjones@redhat.com>
10
Message-id: 5F9CDB8A.9000001@huawei.com
8
Message-id: 20220308182452.223473-2-eric.auger@redhat.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/display/omap_lcdc.c | 10 +++++++---
12
hw/intc/Kconfig | 2 +-
15
1 file changed, 7 insertions(+), 3 deletions(-)
13
hw/intc/meson.build | 4 ++--
14
2 files changed, 3 insertions(+), 3 deletions(-)
16
15
17
diff --git a/hw/display/omap_lcdc.c b/hw/display/omap_lcdc.c
16
diff --git a/hw/intc/Kconfig b/hw/intc/Kconfig
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/display/omap_lcdc.c
18
--- a/hw/intc/Kconfig
20
+++ b/hw/display/omap_lcdc.c
19
+++ b/hw/intc/Kconfig
21
@@ -XXX,XX +XXX,XX @@ static void omap_lcd_interrupts(struct omap_lcd_panel_s *s)
20
@@ -XXX,XX +XXX,XX @@ config APIC
22
static void omap_update_display(void *opaque)
21
select MSI_NONBROKEN
23
{
22
select I8259
24
struct omap_lcd_panel_s *omap_lcd = (struct omap_lcd_panel_s *) opaque;
23
25
- DisplaySurface *surface = qemu_console_surface(omap_lcd->con);
24
-config ARM_GIC_TCG
26
+ DisplaySurface *surface;
25
+config ARM_GICV3_TCG
27
draw_line_func draw_line;
26
bool
28
int size, height, first, last;
27
default y
29
int width, linesize, step, bpp, frame_offset;
28
depends on ARM_GIC && TCG
30
hwaddr frame_base;
29
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
31
30
index XXXXXXX..XXXXXXX 100644
32
- if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable ||
31
--- a/hw/intc/meson.build
33
- !surface_bits_per_pixel(surface)) {
32
+++ b/hw/intc/meson.build
34
+ if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable) {
33
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ARM_GIC', if_true: files(
35
+ return;
34
'arm_gicv3_common.c',
36
+ }
35
'arm_gicv3_its_common.c',
37
+
36
))
38
+ surface = qemu_console_surface(omap_lcd->con);
37
-softmmu_ss.add(when: 'CONFIG_ARM_GIC_TCG', if_true: files(
39
+ if (!surface_bits_per_pixel(surface)) {
38
+softmmu_ss.add(when: 'CONFIG_ARM_GICV3_TCG', if_true: files(
40
return;
39
'arm_gicv3.c',
41
}
40
'arm_gicv3_dist.c',
42
41
'arm_gicv3_its.c',
42
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP_PMU', if_true: files('xlnx-pmu-iomod-in
43
specific_ss.add(when: 'CONFIG_ALLWINNER_A10_PIC', if_true: files('allwinner-a10-pic.c'))
44
specific_ss.add(when: 'CONFIG_APIC', if_true: files('apic.c', 'apic_common.c'))
45
specific_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common.c'))
46
-specific_ss.add(when: 'CONFIG_ARM_GIC_TCG', if_true: files('arm_gicv3_cpuif.c'))
47
+specific_ss.add(when: 'CONFIG_ARM_GICV3_TCG', if_true: files('arm_gicv3_cpuif.c'))
48
specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
49
specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
50
specific_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))
43
--
51
--
44
2.20.1
52
2.25.1
45
46
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Eric Auger <eric.auger@redhat.com>
2
2
3
In both cases, we can sink the write-back and perform
3
In TCG mode, if gic-version=max we always select GICv3 even if
4
the accumulate into the normal destination temps.
4
CONFIG_ARM_GICV3_TCG is unset. We should select GICv2 instead.
5
This also has the benefit of fixing qos test errors for tests
6
using gic-version=max with CONFIG_ARM_GICV3_TCG unset.
5
7
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Eric Auger <eric.auger@redhat.com>
7
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
9
Reviewed-by: Andrew Jones <drjones@redhat.com>
10
Message-id: 20220308182452.223473-3-eric.auger@redhat.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
target/arm/translate-neon.c.inc | 23 +++++++++--------------
14
hw/arm/virt.c | 7 ++++++-
12
1 file changed, 9 insertions(+), 14 deletions(-)
15
1 file changed, 6 insertions(+), 1 deletion(-)
13
16
14
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
17
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-neon.c.inc
19
--- a/hw/arm/virt.c
17
+++ b/target/arm/translate-neon.c.inc
20
+++ b/hw/arm/virt.c
18
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
21
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
19
if (accfn) {
22
vms->gic_version = VIRT_GIC_VERSION_2;
20
tmp = tcg_temp_new_i64();
23
break;
21
read_neon_element64(tmp, a->vd, 0, MO_64);
24
case VIRT_GIC_VERSION_MAX:
22
- accfn(tmp, tmp, rd0);
25
- vms->gic_version = VIRT_GIC_VERSION_3;
23
- write_neon_element64(tmp, a->vd, 0, MO_64);
26
+ if (module_object_class_by_name("arm-gicv3")) {
24
+ accfn(rd0, tmp, rd0);
27
+ /* CONFIG_ARM_GICV3_TCG was set */
25
read_neon_element64(tmp, a->vd, 1, MO_64);
28
+ vms->gic_version = VIRT_GIC_VERSION_3;
26
- accfn(tmp, tmp, rd1);
29
+ } else {
27
- write_neon_element64(tmp, a->vd, 1, MO_64);
30
+ vms->gic_version = VIRT_GIC_VERSION_2;
28
+ accfn(rd1, tmp, rd1);
31
+ }
29
tcg_temp_free_i64(tmp);
32
break;
30
- } else {
33
case VIRT_GIC_VERSION_HOST:
31
- write_neon_element64(rd0, a->vd, 0, MO_64);
34
error_report("gic-version=host requires KVM");
32
- write_neon_element64(rd1, a->vd, 1, MO_64);
33
}
34
35
+ write_neon_element64(rd0, a->vd, 0, MO_64);
36
+ write_neon_element64(rd1, a->vd, 1, MO_64);
37
tcg_temp_free_i64(rd0);
38
tcg_temp_free_i64(rd1);
39
40
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
41
if (accfn) {
42
TCGv_i64 t64 = tcg_temp_new_i64();
43
read_neon_element64(t64, a->vd, 0, MO_64);
44
- accfn(t64, t64, rn0_64);
45
- write_neon_element64(t64, a->vd, 0, MO_64);
46
+ accfn(rn0_64, t64, rn0_64);
47
read_neon_element64(t64, a->vd, 1, MO_64);
48
- accfn(t64, t64, rn1_64);
49
- write_neon_element64(t64, a->vd, 1, MO_64);
50
+ accfn(rn1_64, t64, rn1_64);
51
tcg_temp_free_i64(t64);
52
- } else {
53
- write_neon_element64(rn0_64, a->vd, 0, MO_64);
54
- write_neon_element64(rn1_64, a->vd, 1, MO_64);
55
}
56
+
57
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
58
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
59
tcg_temp_free_i64(rn0_64);
60
tcg_temp_free_i64(rn1_64);
61
return true;
62
--
35
--
63
2.20.1
36
2.25.1
64
65
diff view generated by jsdifflib
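
As a plain-C illustration of the "sink the write-back" reshaping described
above (nothing below is TCG; acc_add(), widen_before() and widen_after()
are made-up names used only to show the shape of the change), once the
accumulate step writes into the same temporaries the no-accumulate path
would store, the two element write-backs become unconditional and can move
out of the if/else:

#include <stdint.h>

/* Stand-in for the accumulate step; illustrative only. */
static void acc_add(int64_t *dst, int64_t a, int64_t b)
{
    *dst = a + b;
}

/* Before: each branch ends with its own pair of write-backs. */
static void widen_before(int64_t vd[2], int64_t r0, int64_t r1, int acc)
{
    if (acc) {
        int64_t t;
        t = vd[0]; acc_add(&t, t, r0); vd[0] = t;
        t = vd[1]; acc_add(&t, t, r1); vd[1] = t;
    } else {
        vd[0] = r0;
        vd[1] = r1;
    }
}

/* After: accumulate into r0/r1, then a single unconditional write-back. */
static void widen_after(int64_t vd[2], int64_t r0, int64_t r1, int acc)
{
    if (acc) {
        acc_add(&r0, vd[0], r0);
        acc_add(&r1, vd[1], r1);
    }
    vd[0] = r0;
    vd[1] = r1;
}

The result is the same either way; the second form simply keeps one copy of
the write-back instead of duplicating it in both branches.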
1
If we're using the capstone disassembler, disassembly of a run of
1
Currently the CPU_LOG_INT logging misses some useful information
2
instructions more than 32 bytes long disassembles the wrong data for
2
about loads from the vector table. Add logging where we load vector
3
instructions beyond the 32 byte mark:
3
table entries. This is particularly helpful for cases where the user
4
has accidentally not put a vector table in their image at all, which
5
can result in confusing guest crashes at startup.
4
6
5
(qemu) xp /16x 0x100
7
Here's an example of the new logging for a case where
6
0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
8
the vector table contains garbage:
7
0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
8
0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
9
0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
10
(qemu) xp /16i 0x100
11
0x00000100: 00000005 andeq r0, r0, r5
12
0x00000104: 54410001 strbpl r0, [r1], #-1
13
0x00000108: 00000001 andeq r0, r0, r1
14
0x0000010c: 00001000 andeq r1, r0, r0
15
0x00000110: 00000000 andeq r0, r0, r0
16
0x00000114: 00000004 andeq r0, r0, r4
17
0x00000118: 54410002 strbpl r0, [r1], #-2
18
0x0000011c: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
19
0x00000120: 54410001 strbpl r0, [r1], #-1
20
0x00000124: 00000001 andeq r0, r0, r1
21
0x00000128: 00001000 andeq r1, r0, r0
22
0x0000012c: 00000000 andeq r0, r0, r0
23
0x00000130: 00000004 andeq r0, r0, r4
24
0x00000134: 54410002 strbpl r0, [r1], #-2
25
0x00000138: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
26
0x0000013c: 00000000 andeq r0, r0, r0
27
9
28
Here the disassembly of 0x120..0x13f is using the data that is in
10
Loaded reset SP 0x0 PC 0x0 from vector table
29
0x104..0x123.
11
Loaded reset SP 0xd008f8df PC 0xf000bf00 from vector table
12
Taking exception 3 [Prefetch Abort] on CPU 0
13
...with CFSR.IACCVIOL
14
...BusFault with BFSR.STKERR
15
...taking pending nonsecure exception 3
16
...loading from element 3 of non-secure vector table at 0xc
17
...loaded new PC 0x20000558
18
----------------
19
IN:
20
0x20000558: 08000079 stmdaeq r0, {r0, r3, r4, r5, r6}
30
21
31
This is caused by passing the wrong value to the read_memory_func().
22
(The double reset logging is the result of our long-standing
32
The intention is that at this point in the loop the 'cap_buf' buffer
23
"CPUs all get reset twice" weirdness; it looks a bit ugly
33
already contains 'csize' bytes of data for the instruction at guest
24
but it'll go away if we ever fix that :-))
34
addr 'pc', and we want to read in an extra 'tsize' bytes. Those
35
extra bytes are therefore at 'pc + csize', not 'pc'. On the first
36
time through the loop 'csize' happens to be zero, so the initial read
37
of 32 bytes into cap_buf is correct and as long as the disassembly
38
never needs to read more data we return the correct information.
39
25
40
Use the correct guest address in the call to read_memory_func().
41
42
Cc: qemu-stable@nongnu.org
43
Fixes: https://bugs.launchpad.net/qemu/+bug/1900779
44
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
27
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
46
Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
30
Message-id: 20220315204306.2797684-2-peter.maydell@linaro.org
47
---
31
---
48
disas/capstone.c | 2 +-
32
target/arm/cpu.c | 5 +++++
49
1 file changed, 1 insertion(+), 1 deletion(-)
33
target/arm/m_helper.c | 5 +++++
34
2 files changed, 10 insertions(+)
50
35
51
diff --git a/disas/capstone.c b/disas/capstone.c
36
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
52
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
53
--- a/disas/capstone.c
38
--- a/target/arm/cpu.c
54
+++ b/disas/capstone.c
39
+++ b/target/arm/cpu.c
55
@@ -XXX,XX +XXX,XX @@ bool cap_disas_monitor(disassemble_info *info, uint64_t pc, int count)
40
@@ -XXX,XX +XXX,XX @@
56
41
#include "qemu/osdep.h"
57
/* Make certain that we can make progress. */
42
#include "qemu/qemu-print.h"
58
assert(tsize != 0);
43
#include "qemu/timer.h"
59
- info->read_memory_func(pc, cap_buf + csize, tsize, info);
44
+#include "qemu/log.h"
60
+ info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
45
#include "qemu-common.h"
61
csize += tsize;
46
#include "target/arm/idau.h"
62
47
#include "qemu/module.h"
63
if (cs_disasm_iter(handle, &cbuf, &csize, &pc, insn)) {
48
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
49
initial_pc = ldl_phys(s->as, vecbase + 4);
50
}
51
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "Loaded reset SP 0x%x PC 0x%x from vector table\n",
54
+ initial_msp, initial_pc);
55
+
56
env->regs[13] = initial_msp & 0xFFFFFFFC;
57
env->regs[15] = initial_pc & ~1;
58
env->thumb = initial_pc & 1;
59
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/m_helper.c
62
+++ b/target/arm/m_helper.c
63
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
64
ARMMMUIdx mmu_idx;
65
bool exc_secure;
66
67
+ qemu_log_mask(CPU_LOG_INT,
68
+ "...loading from element %d of %s vector table at 0x%x\n",
69
+ exc, targets_secure ? "secure" : "non-secure", addr);
70
+
71
mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
72
73
/*
74
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
75
goto load_fail;
76
}
77
*pvec = vector_entry;
78
+ qemu_log_mask(CPU_LOG_INT, "...loaded new PC 0x%x\n", *pvec);
79
return true;
80
81
load_fail:
64
--
82
--
65
2.20.1
83
2.25.1
66
84
67
85
diff view generated by jsdifflib
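
To make the capstone off-by-base bug above concrete, here is a tiny
self-contained sketch of the refill step (guest_ram, read_guest() and
refill() are toy stand-ins, not the real disas/capstone.c code): the buffer
already holds csize bytes fetched from guest address pc, so the next tsize
bytes must come from pc + csize:

#include <stdint.h>
#include <string.h>

static uint8_t guest_ram[256];

/* Toy stand-in for info->read_memory_func(). */
static void read_guest(uint64_t addr, uint8_t *dst, size_t len)
{
    memcpy(dst, &guest_ram[addr], len);
}

/*
 * Refill pattern from the fix: cap_buf already holds 'csize' bytes for
 * guest address 'pc', so the next 'tsize' bytes live at pc + csize.
 */
static size_t refill(uint64_t pc, uint8_t *cap_buf, size_t csize, size_t tsize)
{
    read_guest(pc + csize, cap_buf + csize, tsize);
    return csize + tsize;
}

With the old call passing plain pc, refills beyond the initial 32 bytes
fetched data from the wrong guest addresses, which is the symptom shown in
the commit message.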
1
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
1
For M-profile, the fault address is not always exposed to the guest
2
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
2
in a fault register (for instance the BFAR bus fault address register
3
This is incorrect when the security state being queried is not the
3
is only updated for bus faults on data accesses, not instruction
4
current one, because arm_current_el() uses the current security state
4
accesses). Currently we log the address only if we're putting it
5
to determine which of the banked CONTROL.nPRIV bits to look at.
5
into a particular guest-visible register. Since we always have it,
6
The effect was that if (for instance) Secure state was in privileged
6
log it generically, to make logs of i-side faults a bit clearer.
7
mode but Non-Secure was not then we would return the wrong MMU index.
8
9
The only places where we are using this function in a way that could
10
trigger this bug are for the stack loads during a v8M function-return
11
and for the instruction fetch of a v8M SG insn.
12
13
Fix the bug by expanding out the M-profile version of the
14
arm_current_el() logic inline so it can use the passed in secstate
15
rather than env->v7m.secure.
16
7
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Message-id: 20220315204306.2797684-3-peter.maydell@linaro.org
20
---
13
---
21
target/arm/m_helper.c | 3 ++-
14
target/arm/m_helper.c | 6 ++++++
22
1 file changed, 2 insertions(+), 1 deletion(-)
15
1 file changed, 6 insertions(+)
23
16
24
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
25
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/m_helper.c
19
--- a/target/arm/m_helper.c
27
+++ b/target/arm/m_helper.c
20
+++ b/target/arm/m_helper.c
28
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
21
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
29
/* Return the MMU index for a v7M CPU in the specified security state */
22
* Note that for M profile we don't have a guest facing FSR, but
30
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
23
* the env->exception.fsr will be populated by the code that
31
{
24
* raises the fault, in the A profile short-descriptor format.
32
- bool priv = arm_current_el(env) != 0;
25
+ *
33
+ bool priv = arm_v7m_is_handler_mode(env) ||
26
+ * Log the exception.vaddress now regardless of subtype, because
34
+ !(env->v7m.control[secstate] & 1);
27
+ * logging below only logs it when it goes into a guest visible
35
28
+ * register.
36
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
29
*/
37
}
30
+ qemu_log_mask(CPU_LOG_INT, "...at fault address 0x%x\n",
31
+ (uint32_t)env->exception.vaddress);
32
switch (env->exception.fsr & 0xf) {
33
case M_FAKE_FSR_NSC_EXEC:
34
/*
38
--
35
--
39
2.20.1
36
2.25.1
40
37
41
38
diff view generated by jsdifflib
1
From: AlexChen <alex.chen@huawei.com>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
In exynos4210_fimd_update(), the pointer s is dereferenced before
3
Add an unimplemented SERDES (Serializer/Deserializer) area.
4
being checked for validity, which may lead to a NULL pointer dereference.
5
So move the assignment to global_width after checking that s is valid.
6
4
7
Reported-by: Euler Robot <euler.robot@huawei.com>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Alex Chen <alex.chen@huawei.com>
6
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 5F9F8D88.9030102@huawei.com
8
Message-id: 20220316164645.2303510-2-edgar.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
hw/display/exynos4210_fimd.c | 4 +++-
11
include/hw/arm/xlnx-zynqmp.h | 2 +-
14
1 file changed, 3 insertions(+), 1 deletion(-)
12
hw/arm/xlnx-zynqmp.c | 5 +++++
13
2 files changed, 6 insertions(+), 1 deletion(-)
15
14
16
diff --git a/hw/display/exynos4210_fimd.c b/hw/display/exynos4210_fimd.c
15
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/display/exynos4210_fimd.c
17
--- a/include/hw/arm/xlnx-zynqmp.h
19
+++ b/hw/display/exynos4210_fimd.c
18
+++ b/include/hw/arm/xlnx-zynqmp.h
20
@@ -XXX,XX +XXX,XX @@ static void exynos4210_fimd_update(void *opaque)
19
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
21
bool blend = false;
20
/*
22
uint8_t *host_fb_addr;
21
* Unimplemented mmio regions needed to boot some images.
23
bool is_dirty = false;
22
*/
24
- const int global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
23
-#define XLNX_ZYNQMP_NUM_UNIMP_AREAS 1
25
+ int global_width;
24
+#define XLNX_ZYNQMP_NUM_UNIMP_AREAS 2
26
25
27
if (!s || !s->console || !s->enabled ||
26
struct XlnxZynqMPState {
28
surface_bits_per_pixel(qemu_console_surface(s->console)) == 0) {
27
/*< private >*/
29
return;
28
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
30
}
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/arm/xlnx-zynqmp.c
31
+++ b/hw/arm/xlnx-zynqmp.c
32
@@ -XXX,XX +XXX,XX @@
33
#define QSPI_DMA_ADDR 0xff0f0800
34
#define NUM_QSPI_IRQ_LINES 2
35
36
+/* Serializer/Deserializer. */
37
+#define SERDES_ADDR 0xfd400000
38
+#define SERDES_SIZE 0x20000
31
+
39
+
32
+ global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
40
#define DP_ADDR 0xfd4a0000
33
exynos4210_update_resolution(s);
41
#define DP_IRQ 113
34
surface = qemu_console_surface(s->console);
42
43
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_unimp_mmio(XlnxZynqMPState *s)
44
hwaddr size;
45
} unimp_areas[ARRAY_SIZE(s->mr_unimp)] = {
46
{ .name = "apu", APU_ADDR, APU_SIZE },
47
+ { .name = "serdes", SERDES_ADDR, SERDES_SIZE },
48
};
49
unsigned int nr;
35
50
36
--
51
--
37
2.20.1
52
2.25.1
38
53
39
54
diff view generated by jsdifflib
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
HCR should be applied when NS is set, not when it is cleared.
3
Make the rvbar property settable after realize. This is done
4
in preparation for modelling the ZynqMP's runtime-configurable rvbar.
4
5
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20220316164645.2303510-3-edgar.iglesias@gmail.com
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/helper.c | 5 ++---
11
target/arm/cpu.h | 3 ++-
10
1 file changed, 2 insertions(+), 3 deletions(-)
12
target/arm/cpu.c | 12 +++++++-----
13
target/arm/helper.c | 10 +++++++---
14
3 files changed, 16 insertions(+), 9 deletions(-)
11
15
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
21
uint64_t vbar_el[4];
22
};
23
uint32_t mvbar; /* (monitor) vector base address register */
24
+ uint64_t rvbar; /* rvbar sampled from rvbar property at reset */
25
struct { /* FCSE PID. */
26
uint32_t fcseidr_ns;
27
uint32_t fcseidr_s;
28
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
29
30
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
31
uint32_t dcz_blocksize;
32
- uint64_t rvbar;
33
+ uint64_t rvbar_prop; /* Property/input signals. */
34
35
/* Configurable aspects of GIC cpu interface (which is part of the CPU) */
36
int gic_num_lrs; /* number of list registers */
37
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/cpu.c
40
+++ b/target/arm/cpu.c
41
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
42
} else {
43
env->pstate = PSTATE_MODE_EL1h;
44
}
45
- env->pc = cpu->rvbar;
46
+
47
+ /* Sample rvbar at reset. */
48
+ env->cp15.rvbar = cpu->rvbar_prop;
49
+ env->pc = env->cp15.rvbar;
50
#endif
51
} else {
52
#if defined(CONFIG_USER_ONLY)
53
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_reset_cbar_property =
54
static Property arm_cpu_reset_hivecs_property =
55
DEFINE_PROP_BOOL("reset-hivecs", ARMCPU, reset_hivecs, false);
56
57
-static Property arm_cpu_rvbar_property =
58
- DEFINE_PROP_UINT64("rvbar", ARMCPU, rvbar, 0);
59
-
60
#ifndef CONFIG_USER_ONLY
61
static Property arm_cpu_has_el2_property =
62
DEFINE_PROP_BOOL("has_el2", ARMCPU, has_el2, true);
63
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
64
}
65
66
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
67
- qdev_property_add_static(DEVICE(obj), &arm_cpu_rvbar_property);
68
+ object_property_add_uint64_ptr(obj, "rvbar",
69
+ &cpu->rvbar_prop,
70
+ OBJ_PROP_FLAG_READWRITE);
71
}
72
73
#ifndef CONFIG_USER_ONLY
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
74
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
75
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
76
--- a/target/arm/helper.c
15
+++ b/target/arm/helper.c
77
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
78
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
17
79
ARMCPRegInfo rvbar = {
18
/*
80
.name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64,
19
* Non-IS variants of TLB operations are upgraded to
81
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
20
- * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
82
- .type = ARM_CP_CONST, .access = PL1_R, .resetvalue = cpu->rvbar
21
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
83
+ .access = PL1_R,
22
* force broadcast of these operations.
84
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
23
*/
85
};
24
static bool tlb_force_broadcast(CPUARMState *env)
86
define_one_arm_cp_reg(cpu, &rvbar);
25
{
87
}
26
- return (env->cp15.hcr_el2 & HCR_FB) &&
88
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
27
- arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
89
ARMCPRegInfo rvbar = {
28
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
90
.name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
29
}
91
.opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
30
92
- .type = ARM_CP_CONST, .access = PL2_R, .resetvalue = cpu->rvbar
31
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
93
+ .access = PL2_R,
94
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
95
};
96
define_one_arm_cp_reg(cpu, &rvbar);
97
}
98
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
99
ARMCPRegInfo el3_regs[] = {
100
{ .name = "RVBAR_EL3", .state = ARM_CP_STATE_AA64,
101
.opc0 = 3, .opc1 = 6, .crn = 12, .crm = 0, .opc2 = 1,
102
- .type = ARM_CP_CONST, .access = PL3_R, .resetvalue = cpu->rvbar },
103
+ .access = PL3_R,
104
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
105
+ },
106
{ .name = "SCTLR_EL3", .state = ARM_CP_STATE_AA64,
107
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 0,
108
.access = PL3_RW,
32
--
109
--
33
2.20.1
110
2.25.1
34
35
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
The only uses of this function are for loading VFP
3
Add a model of the Xilinx ZynqMP CRF. At the moment this
4
single-precision values, and have nothing to do with NEON.
4
is mostly a stub model.
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Message-id: 20220316164645.2303510-4-edgar.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/translate.c | 4 +-
12
include/hw/misc/xlnx-zynqmp-crf.h | 211 ++++++++++++++++++++++++
12
target/arm/translate-vfp.c.inc | 184 ++++++++++++++++-----------------
13
hw/misc/xlnx-zynqmp-crf.c | 266 ++++++++++++++++++++++++++++++
13
2 files changed, 94 insertions(+), 94 deletions(-)
14
hw/misc/meson.build | 1 +
15
3 files changed, 478 insertions(+)
16
create mode 100644 include/hw/misc/xlnx-zynqmp-crf.h
17
create mode 100644 hw/misc/xlnx-zynqmp-crf.c
14
18
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
diff --git a/include/hw/misc/xlnx-zynqmp-crf.h b/include/hw/misc/xlnx-zynqmp-crf.h
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/include/hw/misc/xlnx-zynqmp-crf.h
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * QEMU model of the CRF - Clock Reset FPD.
27
+ *
28
+ * Copyright (c) 2022 Xilinx Inc.
29
+ * SPDX-License-Identifier: GPL-2.0-or-later
30
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>
31
+ */
32
+#ifndef HW_MISC_XLNX_ZYNQMP_CRF_H
33
+#define HW_MISC_XLNX_ZYNQMP_CRF_H
34
+
35
+#include "hw/sysbus.h"
36
+#include "hw/register.h"
37
+
38
+#define TYPE_XLNX_ZYNQMP_CRF "xlnx.zynqmp_crf"
39
+OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPCRF, XLNX_ZYNQMP_CRF)
40
+
41
+REG32(ERR_CTRL, 0x0)
42
+ FIELD(ERR_CTRL, SLVERR_ENABLE, 0, 1)
43
+REG32(IR_STATUS, 0x4)
44
+ FIELD(IR_STATUS, ADDR_DECODE_ERR, 0, 1)
45
+REG32(IR_MASK, 0x8)
46
+ FIELD(IR_MASK, ADDR_DECODE_ERR, 0, 1)
47
+REG32(IR_ENABLE, 0xc)
48
+ FIELD(IR_ENABLE, ADDR_DECODE_ERR, 0, 1)
49
+REG32(IR_DISABLE, 0x10)
50
+ FIELD(IR_DISABLE, ADDR_DECODE_ERR, 0, 1)
51
+REG32(CRF_WPROT, 0x1c)
52
+ FIELD(CRF_WPROT, ACTIVE, 0, 1)
53
+REG32(APLL_CTRL, 0x20)
54
+ FIELD(APLL_CTRL, POST_SRC, 24, 3)
55
+ FIELD(APLL_CTRL, PRE_SRC, 20, 3)
56
+ FIELD(APLL_CTRL, CLKOUTDIV, 17, 1)
57
+ FIELD(APLL_CTRL, DIV2, 16, 1)
58
+ FIELD(APLL_CTRL, FBDIV, 8, 7)
59
+ FIELD(APLL_CTRL, BYPASS, 3, 1)
60
+ FIELD(APLL_CTRL, RESET, 0, 1)
61
+REG32(APLL_CFG, 0x24)
62
+ FIELD(APLL_CFG, LOCK_DLY, 25, 7)
63
+ FIELD(APLL_CFG, LOCK_CNT, 13, 10)
64
+ FIELD(APLL_CFG, LFHF, 10, 2)
65
+ FIELD(APLL_CFG, CP, 5, 4)
66
+ FIELD(APLL_CFG, RES, 0, 4)
67
+REG32(APLL_FRAC_CFG, 0x28)
68
+ FIELD(APLL_FRAC_CFG, ENABLED, 31, 1)
69
+ FIELD(APLL_FRAC_CFG, SEED, 22, 3)
70
+ FIELD(APLL_FRAC_CFG, ALGRTHM, 19, 1)
71
+ FIELD(APLL_FRAC_CFG, ORDER, 18, 1)
72
+ FIELD(APLL_FRAC_CFG, DATA, 0, 16)
73
+REG32(DPLL_CTRL, 0x2c)
74
+ FIELD(DPLL_CTRL, POST_SRC, 24, 3)
75
+ FIELD(DPLL_CTRL, PRE_SRC, 20, 3)
76
+ FIELD(DPLL_CTRL, CLKOUTDIV, 17, 1)
77
+ FIELD(DPLL_CTRL, DIV2, 16, 1)
78
+ FIELD(DPLL_CTRL, FBDIV, 8, 7)
79
+ FIELD(DPLL_CTRL, BYPASS, 3, 1)
80
+ FIELD(DPLL_CTRL, RESET, 0, 1)
81
+REG32(DPLL_CFG, 0x30)
82
+ FIELD(DPLL_CFG, LOCK_DLY, 25, 7)
83
+ FIELD(DPLL_CFG, LOCK_CNT, 13, 10)
84
+ FIELD(DPLL_CFG, LFHF, 10, 2)
85
+ FIELD(DPLL_CFG, CP, 5, 4)
86
+ FIELD(DPLL_CFG, RES, 0, 4)
87
+REG32(DPLL_FRAC_CFG, 0x34)
88
+ FIELD(DPLL_FRAC_CFG, ENABLED, 31, 1)
89
+ FIELD(DPLL_FRAC_CFG, SEED, 22, 3)
90
+ FIELD(DPLL_FRAC_CFG, ALGRTHM, 19, 1)
91
+ FIELD(DPLL_FRAC_CFG, ORDER, 18, 1)
92
+ FIELD(DPLL_FRAC_CFG, DATA, 0, 16)
93
+REG32(VPLL_CTRL, 0x38)
94
+ FIELD(VPLL_CTRL, POST_SRC, 24, 3)
95
+ FIELD(VPLL_CTRL, PRE_SRC, 20, 3)
96
+ FIELD(VPLL_CTRL, CLKOUTDIV, 17, 1)
97
+ FIELD(VPLL_CTRL, DIV2, 16, 1)
98
+ FIELD(VPLL_CTRL, FBDIV, 8, 7)
99
+ FIELD(VPLL_CTRL, BYPASS, 3, 1)
100
+ FIELD(VPLL_CTRL, RESET, 0, 1)
101
+REG32(VPLL_CFG, 0x3c)
102
+ FIELD(VPLL_CFG, LOCK_DLY, 25, 7)
103
+ FIELD(VPLL_CFG, LOCK_CNT, 13, 10)
104
+ FIELD(VPLL_CFG, LFHF, 10, 2)
105
+ FIELD(VPLL_CFG, CP, 5, 4)
106
+ FIELD(VPLL_CFG, RES, 0, 4)
107
+REG32(VPLL_FRAC_CFG, 0x40)
108
+ FIELD(VPLL_FRAC_CFG, ENABLED, 31, 1)
109
+ FIELD(VPLL_FRAC_CFG, SEED, 22, 3)
110
+ FIELD(VPLL_FRAC_CFG, ALGRTHM, 19, 1)
111
+ FIELD(VPLL_FRAC_CFG, ORDER, 18, 1)
112
+ FIELD(VPLL_FRAC_CFG, DATA, 0, 16)
113
+REG32(PLL_STATUS, 0x44)
114
+ FIELD(PLL_STATUS, VPLL_STABLE, 5, 1)
115
+ FIELD(PLL_STATUS, DPLL_STABLE, 4, 1)
116
+ FIELD(PLL_STATUS, APLL_STABLE, 3, 1)
117
+ FIELD(PLL_STATUS, VPLL_LOCK, 2, 1)
118
+ FIELD(PLL_STATUS, DPLL_LOCK, 1, 1)
119
+ FIELD(PLL_STATUS, APLL_LOCK, 0, 1)
120
+REG32(APLL_TO_LPD_CTRL, 0x48)
121
+ FIELD(APLL_TO_LPD_CTRL, DIVISOR0, 8, 6)
122
+REG32(DPLL_TO_LPD_CTRL, 0x4c)
123
+ FIELD(DPLL_TO_LPD_CTRL, DIVISOR0, 8, 6)
124
+REG32(VPLL_TO_LPD_CTRL, 0x50)
125
+ FIELD(VPLL_TO_LPD_CTRL, DIVISOR0, 8, 6)
126
+REG32(ACPU_CTRL, 0x60)
127
+ FIELD(ACPU_CTRL, CLKACT_HALF, 25, 1)
128
+ FIELD(ACPU_CTRL, CLKACT_FULL, 24, 1)
129
+ FIELD(ACPU_CTRL, DIVISOR0, 8, 6)
130
+ FIELD(ACPU_CTRL, SRCSEL, 0, 3)
131
+REG32(DBG_TRACE_CTRL, 0x64)
132
+ FIELD(DBG_TRACE_CTRL, CLKACT, 24, 1)
133
+ FIELD(DBG_TRACE_CTRL, DIVISOR0, 8, 6)
134
+ FIELD(DBG_TRACE_CTRL, SRCSEL, 0, 3)
135
+REG32(DBG_FPD_CTRL, 0x68)
136
+ FIELD(DBG_FPD_CTRL, CLKACT, 24, 1)
137
+ FIELD(DBG_FPD_CTRL, DIVISOR0, 8, 6)
138
+ FIELD(DBG_FPD_CTRL, SRCSEL, 0, 3)
139
+REG32(DP_VIDEO_REF_CTRL, 0x70)
140
+ FIELD(DP_VIDEO_REF_CTRL, CLKACT, 24, 1)
141
+ FIELD(DP_VIDEO_REF_CTRL, DIVISOR1, 16, 6)
142
+ FIELD(DP_VIDEO_REF_CTRL, DIVISOR0, 8, 6)
143
+ FIELD(DP_VIDEO_REF_CTRL, SRCSEL, 0, 3)
144
+REG32(DP_AUDIO_REF_CTRL, 0x74)
145
+ FIELD(DP_AUDIO_REF_CTRL, CLKACT, 24, 1)
146
+ FIELD(DP_AUDIO_REF_CTRL, DIVISOR1, 16, 6)
147
+ FIELD(DP_AUDIO_REF_CTRL, DIVISOR0, 8, 6)
148
+ FIELD(DP_AUDIO_REF_CTRL, SRCSEL, 0, 3)
149
+REG32(DP_STC_REF_CTRL, 0x7c)
150
+ FIELD(DP_STC_REF_CTRL, CLKACT, 24, 1)
151
+ FIELD(DP_STC_REF_CTRL, DIVISOR1, 16, 6)
152
+ FIELD(DP_STC_REF_CTRL, DIVISOR0, 8, 6)
153
+ FIELD(DP_STC_REF_CTRL, SRCSEL, 0, 3)
154
+REG32(DDR_CTRL, 0x80)
155
+ FIELD(DDR_CTRL, CLKACT, 24, 1)
156
+ FIELD(DDR_CTRL, DIVISOR0, 8, 6)
157
+ FIELD(DDR_CTRL, SRCSEL, 0, 3)
158
+REG32(GPU_REF_CTRL, 0x84)
159
+ FIELD(GPU_REF_CTRL, PP1_CLKACT, 26, 1)
160
+ FIELD(GPU_REF_CTRL, PP0_CLKACT, 25, 1)
161
+ FIELD(GPU_REF_CTRL, CLKACT, 24, 1)
162
+ FIELD(GPU_REF_CTRL, DIVISOR0, 8, 6)
163
+ FIELD(GPU_REF_CTRL, SRCSEL, 0, 3)
164
+REG32(SATA_REF_CTRL, 0xa0)
165
+ FIELD(SATA_REF_CTRL, CLKACT, 24, 1)
166
+ FIELD(SATA_REF_CTRL, DIVISOR0, 8, 6)
167
+ FIELD(SATA_REF_CTRL, SRCSEL, 0, 3)
168
+REG32(PCIE_REF_CTRL, 0xb4)
169
+ FIELD(PCIE_REF_CTRL, CLKACT, 24, 1)
170
+ FIELD(PCIE_REF_CTRL, DIVISOR0, 8, 6)
171
+ FIELD(PCIE_REF_CTRL, SRCSEL, 0, 3)
172
+REG32(GDMA_REF_CTRL, 0xb8)
173
+ FIELD(GDMA_REF_CTRL, CLKACT, 24, 1)
174
+ FIELD(GDMA_REF_CTRL, DIVISOR0, 8, 6)
175
+ FIELD(GDMA_REF_CTRL, SRCSEL, 0, 3)
176
+REG32(DPDMA_REF_CTRL, 0xbc)
177
+ FIELD(DPDMA_REF_CTRL, CLKACT, 24, 1)
178
+ FIELD(DPDMA_REF_CTRL, DIVISOR0, 8, 6)
179
+ FIELD(DPDMA_REF_CTRL, SRCSEL, 0, 3)
180
+REG32(TOPSW_MAIN_CTRL, 0xc0)
181
+ FIELD(TOPSW_MAIN_CTRL, CLKACT, 24, 1)
182
+ FIELD(TOPSW_MAIN_CTRL, DIVISOR0, 8, 6)
183
+ FIELD(TOPSW_MAIN_CTRL, SRCSEL, 0, 3)
184
+REG32(TOPSW_LSBUS_CTRL, 0xc4)
185
+ FIELD(TOPSW_LSBUS_CTRL, CLKACT, 24, 1)
186
+ FIELD(TOPSW_LSBUS_CTRL, DIVISOR0, 8, 6)
187
+ FIELD(TOPSW_LSBUS_CTRL, SRCSEL, 0, 3)
188
+REG32(DBG_TSTMP_CTRL, 0xf8)
189
+ FIELD(DBG_TSTMP_CTRL, DIVISOR0, 8, 6)
190
+ FIELD(DBG_TSTMP_CTRL, SRCSEL, 0, 3)
191
+REG32(RST_FPD_TOP, 0x100)
192
+ FIELD(RST_FPD_TOP, PCIE_CFG_RESET, 19, 1)
193
+ FIELD(RST_FPD_TOP, PCIE_BRIDGE_RESET, 18, 1)
194
+ FIELD(RST_FPD_TOP, PCIE_CTRL_RESET, 17, 1)
195
+ FIELD(RST_FPD_TOP, DP_RESET, 16, 1)
196
+ FIELD(RST_FPD_TOP, SWDT_RESET, 15, 1)
197
+ FIELD(RST_FPD_TOP, AFI_FM5_RESET, 12, 1)
198
+ FIELD(RST_FPD_TOP, AFI_FM4_RESET, 11, 1)
199
+ FIELD(RST_FPD_TOP, AFI_FM3_RESET, 10, 1)
200
+ FIELD(RST_FPD_TOP, AFI_FM2_RESET, 9, 1)
201
+ FIELD(RST_FPD_TOP, AFI_FM1_RESET, 8, 1)
202
+ FIELD(RST_FPD_TOP, AFI_FM0_RESET, 7, 1)
203
+ FIELD(RST_FPD_TOP, GDMA_RESET, 6, 1)
204
+ FIELD(RST_FPD_TOP, GPU_PP1_RESET, 5, 1)
205
+ FIELD(RST_FPD_TOP, GPU_PP0_RESET, 4, 1)
206
+ FIELD(RST_FPD_TOP, GPU_RESET, 3, 1)
207
+ FIELD(RST_FPD_TOP, GT_RESET, 2, 1)
208
+ FIELD(RST_FPD_TOP, SATA_RESET, 1, 1)
209
+REG32(RST_FPD_APU, 0x104)
210
+ FIELD(RST_FPD_APU, ACPU3_PWRON_RESET, 13, 1)
211
+ FIELD(RST_FPD_APU, ACPU2_PWRON_RESET, 12, 1)
212
+ FIELD(RST_FPD_APU, ACPU1_PWRON_RESET, 11, 1)
213
+ FIELD(RST_FPD_APU, ACPU0_PWRON_RESET, 10, 1)
214
+ FIELD(RST_FPD_APU, APU_L2_RESET, 8, 1)
215
+ FIELD(RST_FPD_APU, ACPU3_RESET, 3, 1)
216
+ FIELD(RST_FPD_APU, ACPU2_RESET, 2, 1)
217
+ FIELD(RST_FPD_APU, ACPU1_RESET, 1, 1)
218
+ FIELD(RST_FPD_APU, ACPU0_RESET, 0, 1)
219
+REG32(RST_DDR_SS, 0x108)
220
+ FIELD(RST_DDR_SS, DDR_RESET, 3, 1)
221
+ FIELD(RST_DDR_SS, APM_RESET, 2, 1)
222
+
223
+#define CRF_R_MAX (R_RST_DDR_SS + 1)
224
+
225
+struct XlnxZynqMPCRF {
226
+ SysBusDevice parent_obj;
227
+ MemoryRegion iomem;
228
+ qemu_irq irq_ir;
229
+
230
+ RegisterInfoArray *reg_array;
231
+ uint32_t regs[CRF_R_MAX];
232
+ RegisterInfo regs_info[CRF_R_MAX];
233
+};
234
+
235
+#endif
236
diff --git a/hw/misc/xlnx-zynqmp-crf.c b/hw/misc/xlnx-zynqmp-crf.c
237
new file mode 100644
238
index XXXXXXX..XXXXXXX
239
--- /dev/null
240
+++ b/hw/misc/xlnx-zynqmp-crf.c
241
@@ -XXX,XX +XXX,XX @@
242
+/*
243
+ * QEMU model of the CRF - Clock Reset FPD.
244
+ *
245
+ * Copyright (c) 2022 Xilinx Inc.
246
+ * SPDX-License-Identifier: GPL-2.0-or-later
247
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>
248
+ */
249
+
250
+#include "qemu/osdep.h"
251
+#include "hw/sysbus.h"
252
+#include "hw/register.h"
253
+#include "qemu/bitops.h"
254
+#include "qemu/log.h"
255
+#include "migration/vmstate.h"
256
+#include "hw/irq.h"
257
+#include "hw/misc/xlnx-zynqmp-crf.h"
258
+#include "target/arm/arm-powerctl.h"
259
+
260
+#ifndef XLNX_ZYNQMP_CRF_ERR_DEBUG
261
+#define XLNX_ZYNQMP_CRF_ERR_DEBUG 0
262
+#endif
263
+
264
+#define CRF_MAX_CPU 4
265
+
266
+static void ir_update_irq(XlnxZynqMPCRF *s)
267
+{
268
+ bool pending = s->regs[R_IR_STATUS] & ~s->regs[R_IR_MASK];
269
+ qemu_set_irq(s->irq_ir, pending);
270
+}
271
+
272
+static void ir_status_postw(RegisterInfo *reg, uint64_t val64)
273
+{
274
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(reg->opaque);
275
+ ir_update_irq(s);
276
+}
277
+
278
+static uint64_t ir_enable_prew(RegisterInfo *reg, uint64_t val64)
279
+{
280
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(reg->opaque);
281
+ uint32_t val = val64;
282
+
283
+ s->regs[R_IR_MASK] &= ~val;
284
+ ir_update_irq(s);
285
+ return 0;
286
+}
287
+
288
+static uint64_t ir_disable_prew(RegisterInfo *reg, uint64_t val64)
289
+{
290
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(reg->opaque);
291
+ uint32_t val = val64;
292
+
293
+ s->regs[R_IR_MASK] |= val;
294
+ ir_update_irq(s);
295
+ return 0;
296
+}
297
+
298
+static uint64_t rst_fpd_apu_prew(RegisterInfo *reg, uint64_t val64)
299
+{
300
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(reg->opaque);
301
+ uint32_t val = val64;
302
+ uint32_t val_old = s->regs[R_RST_FPD_APU];
303
+ unsigned int i;
304
+
305
+ for (i = 0; i < CRF_MAX_CPU; i++) {
306
+ uint32_t mask = (1 << (R_RST_FPD_APU_ACPU0_RESET_SHIFT + i));
307
+
308
+ if ((val ^ val_old) & mask) {
309
+ if (val & mask) {
310
+ arm_set_cpu_off(i);
311
+ } else {
312
+ arm_set_cpu_on_and_reset(i);
313
+ }
314
+ }
315
+ }
316
+ return val64;
317
+}
318
+
319
+static const RegisterAccessInfo crf_regs_info[] = {
320
+ { .name = "ERR_CTRL", .addr = A_ERR_CTRL,
321
+ },{ .name = "IR_STATUS", .addr = A_IR_STATUS,
322
+ .w1c = 0x1,
323
+ .post_write = ir_status_postw,
324
+ },{ .name = "IR_MASK", .addr = A_IR_MASK,
325
+ .reset = 0x1,
326
+ .ro = 0x1,
327
+ },{ .name = "IR_ENABLE", .addr = A_IR_ENABLE,
328
+ .pre_write = ir_enable_prew,
329
+ },{ .name = "IR_DISABLE", .addr = A_IR_DISABLE,
330
+ .pre_write = ir_disable_prew,
331
+ },{ .name = "CRF_WPROT", .addr = A_CRF_WPROT,
332
+ },{ .name = "APLL_CTRL", .addr = A_APLL_CTRL,
333
+ .reset = 0x12c09,
334
+ .rsvd = 0xf88c80f6,
335
+ },{ .name = "APLL_CFG", .addr = A_APLL_CFG,
336
+ .rsvd = 0x1801210,
337
+ },{ .name = "APLL_FRAC_CFG", .addr = A_APLL_FRAC_CFG,
338
+ .rsvd = 0x7e330000,
339
+ },{ .name = "DPLL_CTRL", .addr = A_DPLL_CTRL,
340
+ .reset = 0x2c09,
341
+ .rsvd = 0xf88c80f6,
342
+ },{ .name = "DPLL_CFG", .addr = A_DPLL_CFG,
343
+ .rsvd = 0x1801210,
344
+ },{ .name = "DPLL_FRAC_CFG", .addr = A_DPLL_FRAC_CFG,
345
+ .rsvd = 0x7e330000,
346
+ },{ .name = "VPLL_CTRL", .addr = A_VPLL_CTRL,
347
+ .reset = 0x12809,
348
+ .rsvd = 0xf88c80f6,
349
+ },{ .name = "VPLL_CFG", .addr = A_VPLL_CFG,
350
+ .rsvd = 0x1801210,
351
+ },{ .name = "VPLL_FRAC_CFG", .addr = A_VPLL_FRAC_CFG,
352
+ .rsvd = 0x7e330000,
353
+ },{ .name = "PLL_STATUS", .addr = A_PLL_STATUS,
354
+ .reset = 0x3f,
355
+ .rsvd = 0xc0,
356
+ .ro = 0x3f,
357
+ },{ .name = "APLL_TO_LPD_CTRL", .addr = A_APLL_TO_LPD_CTRL,
358
+ .reset = 0x400,
359
+ .rsvd = 0xc0ff,
360
+ },{ .name = "DPLL_TO_LPD_CTRL", .addr = A_DPLL_TO_LPD_CTRL,
361
+ .reset = 0x400,
362
+ .rsvd = 0xc0ff,
363
+ },{ .name = "VPLL_TO_LPD_CTRL", .addr = A_VPLL_TO_LPD_CTRL,
364
+ .reset = 0x400,
365
+ .rsvd = 0xc0ff,
366
+ },{ .name = "ACPU_CTRL", .addr = A_ACPU_CTRL,
367
+ .reset = 0x3000400,
368
+ .rsvd = 0xfcffc0f8,
369
+ },{ .name = "DBG_TRACE_CTRL", .addr = A_DBG_TRACE_CTRL,
370
+ .reset = 0x2500,
371
+ .rsvd = 0xfeffc0f8,
372
+ },{ .name = "DBG_FPD_CTRL", .addr = A_DBG_FPD_CTRL,
373
+ .reset = 0x1002500,
374
+ .rsvd = 0xfeffc0f8,
375
+ },{ .name = "DP_VIDEO_REF_CTRL", .addr = A_DP_VIDEO_REF_CTRL,
376
+ .reset = 0x1002300,
377
+ .rsvd = 0xfec0c0f8,
378
+ },{ .name = "DP_AUDIO_REF_CTRL", .addr = A_DP_AUDIO_REF_CTRL,
379
+ .reset = 0x1032300,
380
+ .rsvd = 0xfec0c0f8,
381
+ },{ .name = "DP_STC_REF_CTRL", .addr = A_DP_STC_REF_CTRL,
382
+ .reset = 0x1203200,
383
+ .rsvd = 0xfec0c0f8,
384
+ },{ .name = "DDR_CTRL", .addr = A_DDR_CTRL,
385
+ .reset = 0x1000500,
386
+ .rsvd = 0xfeffc0f8,
387
+ },{ .name = "GPU_REF_CTRL", .addr = A_GPU_REF_CTRL,
388
+ .reset = 0x1500,
389
+ .rsvd = 0xf8ffc0f8,
390
+ },{ .name = "SATA_REF_CTRL", .addr = A_SATA_REF_CTRL,
391
+ .reset = 0x1001600,
392
+ .rsvd = 0xfeffc0f8,
393
+ },{ .name = "PCIE_REF_CTRL", .addr = A_PCIE_REF_CTRL,
394
+ .reset = 0x1500,
395
+ .rsvd = 0xfeffc0f8,
396
+ },{ .name = "GDMA_REF_CTRL", .addr = A_GDMA_REF_CTRL,
397
+ .reset = 0x1000500,
398
+ .rsvd = 0xfeffc0f8,
399
+ },{ .name = "DPDMA_REF_CTRL", .addr = A_DPDMA_REF_CTRL,
400
+ .reset = 0x1000500,
401
+ .rsvd = 0xfeffc0f8,
402
+ },{ .name = "TOPSW_MAIN_CTRL", .addr = A_TOPSW_MAIN_CTRL,
403
+ .reset = 0x1000400,
404
+ .rsvd = 0xfeffc0f8,
405
+ },{ .name = "TOPSW_LSBUS_CTRL", .addr = A_TOPSW_LSBUS_CTRL,
406
+ .reset = 0x1000800,
407
+ .rsvd = 0xfeffc0f8,
408
+ },{ .name = "DBG_TSTMP_CTRL", .addr = A_DBG_TSTMP_CTRL,
409
+ .reset = 0xa00,
410
+ .rsvd = 0xffffc0f8,
411
+ },
412
+ { .name = "RST_FPD_TOP", .addr = A_RST_FPD_TOP,
413
+ .reset = 0xf9ffe,
414
+ .rsvd = 0xf06001,
415
+ },{ .name = "RST_FPD_APU", .addr = A_RST_FPD_APU,
416
+ .reset = 0x3d0f,
417
+ .rsvd = 0xc2f0,
418
+ .pre_write = rst_fpd_apu_prew,
419
+ },{ .name = "RST_DDR_SS", .addr = A_RST_DDR_SS,
420
+ .reset = 0xf,
421
+ .rsvd = 0xf3,
422
+ }
423
+};
424
+
425
+static void crf_reset_enter(Object *obj, ResetType type)
426
+{
427
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
428
+ unsigned int i;
429
+
430
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
431
+ register_reset(&s->regs_info[i]);
432
+ }
433
+}
434
+
435
+static void crf_reset_hold(Object *obj)
436
+{
437
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
438
+ ir_update_irq(s);
439
+}
440
+
441
+static const MemoryRegionOps crf_ops = {
442
+ .read = register_read_memory,
443
+ .write = register_write_memory,
444
+ .endianness = DEVICE_LITTLE_ENDIAN,
445
+ .valid = {
446
+ .min_access_size = 4,
447
+ .max_access_size = 4,
448
+ },
449
+};
450
+
451
+static void crf_init(Object *obj)
452
+{
453
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
454
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
455
+
456
+ s->reg_array =
457
+ register_init_block32(DEVICE(obj), crf_regs_info,
458
+ ARRAY_SIZE(crf_regs_info),
459
+ s->regs_info, s->regs,
460
+ &crf_ops,
461
+ XLNX_ZYNQMP_CRF_ERR_DEBUG,
462
+ CRF_R_MAX * 4);
463
+ sysbus_init_mmio(sbd, &s->reg_array->mem);
464
+ sysbus_init_irq(sbd, &s->irq_ir);
465
+}
466
+
467
+static void crf_finalize(Object *obj)
468
+{
469
+ XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
470
+ register_finalize_block(s->reg_array);
471
+}
472
+
473
+static const VMStateDescription vmstate_crf = {
474
+ .name = TYPE_XLNX_ZYNQMP_CRF,
475
+ .version_id = 1,
476
+ .minimum_version_id = 1,
477
+ .fields = (VMStateField[]) {
478
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCRF, CRF_R_MAX),
479
+ VMSTATE_END_OF_LIST(),
480
+ }
481
+};
482
+
483
+static void crf_class_init(ObjectClass *klass, void *data)
484
+{
485
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
486
+ DeviceClass *dc = DEVICE_CLASS(klass);
487
+
488
+ dc->vmsd = &vmstate_crf;
489
+ rc->phases.enter = crf_reset_enter;
490
+ rc->phases.hold = crf_reset_hold;
491
+}
492
+
493
+static const TypeInfo crf_info = {
494
+ .name = TYPE_XLNX_ZYNQMP_CRF,
495
+ .parent = TYPE_SYS_BUS_DEVICE,
496
+ .instance_size = sizeof(XlnxZynqMPCRF),
497
+ .class_init = crf_class_init,
498
+ .instance_init = crf_init,
499
+ .instance_finalize = crf_finalize,
500
+};
501
+
502
+static void crf_register_types(void)
503
+{
504
+ type_register_static(&crf_info);
505
+}
506
+
507
+type_init(crf_register_types)
508
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
16
index XXXXXXX..XXXXXXX 100644
509
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
510
--- a/hw/misc/meson.build
18
+++ b/target/arm/translate.c
511
+++ b/hw/misc/meson.build
19
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg)
512
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
20
tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
513
))
21
}
514
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
22
515
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c'))
23
-static inline void neon_load_reg32(TCGv_i32 var, int reg)
516
+specific_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-crf.c'))
24
+static inline void vfp_load_reg32(TCGv_i32 var, int reg)
517
softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files(
25
{
518
'xlnx-versal-xramc.c',
26
tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
519
'xlnx-versal-pmc-iou-slcr.c',
27
}
28
29
-static inline void neon_store_reg32(TCGv_i32 var, int reg)
30
+static inline void vfp_store_reg32(TCGv_i32 var, int reg)
31
{
32
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
33
}
34
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/translate-vfp.c.inc
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
39
frn = tcg_temp_new_i32();
40
frm = tcg_temp_new_i32();
41
dest = tcg_temp_new_i32();
42
- neon_load_reg32(frn, rn);
43
- neon_load_reg32(frm, rm);
44
+ vfp_load_reg32(frn, rn);
45
+ vfp_load_reg32(frm, rm);
46
switch (a->cc) {
47
case 0: /* eq: Z */
48
tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero,
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
50
if (sz == 1) {
51
tcg_gen_andi_i32(dest, dest, 0xffff);
52
}
53
- neon_store_reg32(dest, rd);
54
+ vfp_store_reg32(dest, rd);
55
tcg_temp_free_i32(frn);
56
tcg_temp_free_i32(frm);
57
tcg_temp_free_i32(dest);
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
59
TCGv_i32 tcg_res;
60
tcg_op = tcg_temp_new_i32();
61
tcg_res = tcg_temp_new_i32();
62
- neon_load_reg32(tcg_op, rm);
63
+ vfp_load_reg32(tcg_op, rm);
64
if (sz == 1) {
65
gen_helper_rinth(tcg_res, tcg_op, fpst);
66
} else {
67
gen_helper_rints(tcg_res, tcg_op, fpst);
68
}
69
- neon_store_reg32(tcg_res, rd);
70
+ vfp_store_reg32(tcg_res, rd);
71
tcg_temp_free_i32(tcg_op);
72
tcg_temp_free_i32(tcg_res);
73
}
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst);
76
}
77
tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res);
78
- neon_store_reg32(tcg_tmp, rd);
79
+ vfp_store_reg32(tcg_tmp, rd);
80
tcg_temp_free_i32(tcg_tmp);
81
tcg_temp_free_i64(tcg_res);
82
tcg_temp_free_i64(tcg_double);
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
84
TCGv_i32 tcg_single, tcg_res;
85
tcg_single = tcg_temp_new_i32();
86
tcg_res = tcg_temp_new_i32();
87
- neon_load_reg32(tcg_single, rm);
88
+ vfp_load_reg32(tcg_single, rm);
89
if (sz == 1) {
90
if (is_signed) {
91
gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst);
92
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
93
gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst);
94
}
95
}
96
- neon_store_reg32(tcg_res, rd);
97
+ vfp_store_reg32(tcg_res, rd);
98
tcg_temp_free_i32(tcg_res);
99
tcg_temp_free_i32(tcg_single);
100
}
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
102
if (a->l) {
103
/* VFP to general purpose register */
104
tmp = tcg_temp_new_i32();
105
- neon_load_reg32(tmp, a->vn);
106
+ vfp_load_reg32(tmp, a->vn);
107
tcg_gen_andi_i32(tmp, tmp, 0xffff);
108
store_reg(s, a->rt, tmp);
109
} else {
110
/* general purpose register to VFP */
111
tmp = load_reg(s, a->rt);
112
tcg_gen_andi_i32(tmp, tmp, 0xffff);
113
- neon_store_reg32(tmp, a->vn);
114
+ vfp_store_reg32(tmp, a->vn);
115
tcg_temp_free_i32(tmp);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
119
if (a->l) {
120
/* VFP to general purpose register */
121
tmp = tcg_temp_new_i32();
122
- neon_load_reg32(tmp, a->vn);
123
+ vfp_load_reg32(tmp, a->vn);
124
if (a->rt == 15) {
125
/* Set the 4 flag bits in the CPSR. */
126
gen_set_nzcv(tmp);
127
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
128
} else {
129
/* general purpose register to VFP */
130
tmp = load_reg(s, a->rt);
131
- neon_store_reg32(tmp, a->vn);
132
+ vfp_store_reg32(tmp, a->vn);
133
tcg_temp_free_i32(tmp);
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
137
if (a->op) {
138
/* fpreg to gpreg */
139
tmp = tcg_temp_new_i32();
140
- neon_load_reg32(tmp, a->vm);
141
+ vfp_load_reg32(tmp, a->vm);
142
store_reg(s, a->rt, tmp);
143
tmp = tcg_temp_new_i32();
144
- neon_load_reg32(tmp, a->vm + 1);
145
+ vfp_load_reg32(tmp, a->vm + 1);
146
store_reg(s, a->rt2, tmp);
147
} else {
148
/* gpreg to fpreg */
149
tmp = load_reg(s, a->rt);
150
- neon_store_reg32(tmp, a->vm);
151
+ vfp_store_reg32(tmp, a->vm);
152
tcg_temp_free_i32(tmp);
153
tmp = load_reg(s, a->rt2);
154
- neon_store_reg32(tmp, a->vm + 1);
155
+ vfp_store_reg32(tmp, a->vm + 1);
156
tcg_temp_free_i32(tmp);
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
160
if (a->op) {
161
/* fpreg to gpreg */
162
tmp = tcg_temp_new_i32();
163
- neon_load_reg32(tmp, a->vm * 2);
164
+ vfp_load_reg32(tmp, a->vm * 2);
165
store_reg(s, a->rt, tmp);
166
tmp = tcg_temp_new_i32();
167
- neon_load_reg32(tmp, a->vm * 2 + 1);
168
+ vfp_load_reg32(tmp, a->vm * 2 + 1);
169
store_reg(s, a->rt2, tmp);
170
} else {
171
/* gpreg to fpreg */
172
tmp = load_reg(s, a->rt);
173
- neon_store_reg32(tmp, a->vm * 2);
174
+ vfp_store_reg32(tmp, a->vm * 2);
175
tcg_temp_free_i32(tmp);
176
tmp = load_reg(s, a->rt2);
177
- neon_store_reg32(tmp, a->vm * 2 + 1);
178
+ vfp_store_reg32(tmp, a->vm * 2 + 1);
179
tcg_temp_free_i32(tmp);
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
183
tmp = tcg_temp_new_i32();
184
if (a->l) {
185
gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
186
- neon_store_reg32(tmp, a->vd);
187
+ vfp_store_reg32(tmp, a->vd);
188
} else {
189
- neon_load_reg32(tmp, a->vd);
190
+ vfp_load_reg32(tmp, a->vd);
191
gen_aa32_st16(s, tmp, addr, get_mem_index(s));
192
}
193
tcg_temp_free_i32(tmp);
194
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
195
tmp = tcg_temp_new_i32();
196
if (a->l) {
197
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
198
- neon_store_reg32(tmp, a->vd);
199
+ vfp_store_reg32(tmp, a->vd);
200
} else {
201
- neon_load_reg32(tmp, a->vd);
202
+ vfp_load_reg32(tmp, a->vd);
203
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
204
}
205
tcg_temp_free_i32(tmp);
206
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
207
if (a->l) {
208
/* load */
209
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
210
- neon_store_reg32(tmp, a->vd + i);
211
+ vfp_store_reg32(tmp, a->vd + i);
212
} else {
213
/* store */
214
- neon_load_reg32(tmp, a->vd + i);
215
+ vfp_load_reg32(tmp, a->vd + i);
216
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
217
}
218
tcg_gen_addi_i32(addr, addr, offset);
219
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
220
fd = tcg_temp_new_i32();
221
fpst = fpstatus_ptr(FPST_FPCR);
222
223
- neon_load_reg32(f0, vn);
224
- neon_load_reg32(f1, vm);
225
+ vfp_load_reg32(f0, vn);
226
+ vfp_load_reg32(f1, vm);
227
228
for (;;) {
229
if (reads_vd) {
230
- neon_load_reg32(fd, vd);
231
+ vfp_load_reg32(fd, vd);
232
}
233
fn(fd, f0, f1, fpst);
234
- neon_store_reg32(fd, vd);
235
+ vfp_store_reg32(fd, vd);
236
237
if (veclen == 0) {
238
break;
239
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
240
veclen--;
241
vd = vfp_advance_sreg(vd, delta_d);
242
vn = vfp_advance_sreg(vn, delta_d);
243
- neon_load_reg32(f0, vn);
244
+ vfp_load_reg32(f0, vn);
245
if (delta_m) {
246
vm = vfp_advance_sreg(vm, delta_m);
247
- neon_load_reg32(f1, vm);
248
+ vfp_load_reg32(f1, vm);
249
}
250
}
251
252
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
253
fd = tcg_temp_new_i32();
254
fpst = fpstatus_ptr(FPST_FPCR_F16);
255
256
- neon_load_reg32(f0, vn);
257
- neon_load_reg32(f1, vm);
258
+ vfp_load_reg32(f0, vn);
259
+ vfp_load_reg32(f1, vm);
260
261
if (reads_vd) {
262
- neon_load_reg32(fd, vd);
263
+ vfp_load_reg32(fd, vd);
264
}
265
fn(fd, f0, f1, fpst);
266
- neon_store_reg32(fd, vd);
267
+ vfp_store_reg32(fd, vd);
268
269
tcg_temp_free_i32(f0);
270
tcg_temp_free_i32(f1);
271
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
272
f0 = tcg_temp_new_i32();
273
fd = tcg_temp_new_i32();
274
275
- neon_load_reg32(f0, vm);
276
+ vfp_load_reg32(f0, vm);
277
278
for (;;) {
279
fn(fd, f0);
280
- neon_store_reg32(fd, vd);
281
+ vfp_store_reg32(fd, vd);
282
283
if (veclen == 0) {
284
break;
285
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
286
/* single source one-many */
287
while (veclen--) {
288
vd = vfp_advance_sreg(vd, delta_d);
289
- neon_store_reg32(fd, vd);
290
+ vfp_store_reg32(fd, vd);
291
}
292
break;
293
}
294
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
295
veclen--;
296
vd = vfp_advance_sreg(vd, delta_d);
297
vm = vfp_advance_sreg(vm, delta_m);
298
- neon_load_reg32(f0, vm);
299
+ vfp_load_reg32(f0, vm);
300
}
301
302
tcg_temp_free_i32(f0);
303
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
304
}
305
306
f0 = tcg_temp_new_i32();
307
- neon_load_reg32(f0, vm);
308
+ vfp_load_reg32(f0, vm);
309
fn(f0, f0);
310
- neon_store_reg32(f0, vd);
311
+ vfp_store_reg32(f0, vd);
312
tcg_temp_free_i32(f0);
313
314
return true;
315
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
316
vm = tcg_temp_new_i32();
317
vd = tcg_temp_new_i32();
318
319
- neon_load_reg32(vn, a->vn);
320
- neon_load_reg32(vm, a->vm);
321
+ vfp_load_reg32(vn, a->vn);
322
+ vfp_load_reg32(vm, a->vm);
323
if (neg_n) {
324
/* VFNMS, VFMS */
325
gen_helper_vfp_negh(vn, vn);
326
}
327
- neon_load_reg32(vd, a->vd);
328
+ vfp_load_reg32(vd, a->vd);
329
if (neg_d) {
330
/* VFNMA, VFNMS */
331
gen_helper_vfp_negh(vd, vd);
332
}
333
fpst = fpstatus_ptr(FPST_FPCR_F16);
334
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
335
- neon_store_reg32(vd, a->vd);
336
+ vfp_store_reg32(vd, a->vd);
337
338
tcg_temp_free_ptr(fpst);
339
tcg_temp_free_i32(vn);
340
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
341
vm = tcg_temp_new_i32();
342
vd = tcg_temp_new_i32();
343
344
- neon_load_reg32(vn, a->vn);
345
- neon_load_reg32(vm, a->vm);
346
+ vfp_load_reg32(vn, a->vn);
347
+ vfp_load_reg32(vm, a->vm);
348
if (neg_n) {
349
/* VFNMS, VFMS */
350
gen_helper_vfp_negs(vn, vn);
351
}
352
- neon_load_reg32(vd, a->vd);
353
+ vfp_load_reg32(vd, a->vd);
354
if (neg_d) {
355
/* VFNMA, VFNMS */
356
gen_helper_vfp_negs(vd, vd);
357
}
358
fpst = fpstatus_ptr(FPST_FPCR);
359
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
360
- neon_store_reg32(vd, a->vd);
361
+ vfp_store_reg32(vd, a->vd);
362
363
tcg_temp_free_ptr(fpst);
364
tcg_temp_free_i32(vn);
365
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a)
366
}
367
368
fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm));
369
- neon_store_reg32(fd, a->vd);
370
+ vfp_store_reg32(fd, a->vd);
371
tcg_temp_free_i32(fd);
372
return true;
373
}
374
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
375
fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
376
377
for (;;) {
378
- neon_store_reg32(fd, vd);
379
+ vfp_store_reg32(fd, vd);
380
381
if (veclen == 0) {
382
break;
383
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
384
vd = tcg_temp_new_i32();
385
vm = tcg_temp_new_i32();
386
387
- neon_load_reg32(vd, a->vd);
388
+ vfp_load_reg32(vd, a->vd);
389
if (a->z) {
390
tcg_gen_movi_i32(vm, 0);
391
} else {
392
- neon_load_reg32(vm, a->vm);
393
+ vfp_load_reg32(vm, a->vm);
394
}
395
396
if (a->e) {
397
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
398
vd = tcg_temp_new_i32();
399
vm = tcg_temp_new_i32();
400
401
- neon_load_reg32(vd, a->vd);
402
+ vfp_load_reg32(vd, a->vd);
403
if (a->z) {
404
tcg_gen_movi_i32(vm, 0);
405
} else {
406
- neon_load_reg32(vm, a->vm);
407
+ vfp_load_reg32(vm, a->vm);
408
}
409
410
if (a->e) {
411
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
412
/* The T bit tells us if we want the low or high 16 bits of Vm */
413
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
414
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode);
415
- neon_store_reg32(tmp, a->vd);
416
+ vfp_store_reg32(tmp, a->vd);
417
tcg_temp_free_i32(ahp_mode);
418
tcg_temp_free_ptr(fpst);
419
tcg_temp_free_i32(tmp);
420
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
421
ahp_mode = get_ahp_flag();
422
tmp = tcg_temp_new_i32();
423
424
- neon_load_reg32(tmp, a->vm);
425
+ vfp_load_reg32(tmp, a->vm);
426
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode);
427
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
428
tcg_temp_free_i32(ahp_mode);
429
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
430
}
431
432
tmp = tcg_temp_new_i32();
433
- neon_load_reg32(tmp, a->vm);
434
+ vfp_load_reg32(tmp, a->vm);
435
fpst = fpstatus_ptr(FPST_FPCR_F16);
436
gen_helper_rinth(tmp, tmp, fpst);
437
- neon_store_reg32(tmp, a->vd);
438
+ vfp_store_reg32(tmp, a->vd);
439
tcg_temp_free_ptr(fpst);
440
tcg_temp_free_i32(tmp);
441
return true;
442
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
443
}
444
445
tmp = tcg_temp_new_i32();
446
- neon_load_reg32(tmp, a->vm);
447
+ vfp_load_reg32(tmp, a->vm);
448
fpst = fpstatus_ptr(FPST_FPCR);
449
gen_helper_rints(tmp, tmp, fpst);
450
- neon_store_reg32(tmp, a->vd);
451
+ vfp_store_reg32(tmp, a->vd);
452
tcg_temp_free_ptr(fpst);
453
tcg_temp_free_i32(tmp);
454
return true;
455
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
456
}
457
458
tmp = tcg_temp_new_i32();
459
- neon_load_reg32(tmp, a->vm);
460
+ vfp_load_reg32(tmp, a->vm);
461
fpst = fpstatus_ptr(FPST_FPCR_F16);
462
tcg_rmode = tcg_const_i32(float_round_to_zero);
463
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
464
gen_helper_rinth(tmp, tmp, fpst);
465
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
466
- neon_store_reg32(tmp, a->vd);
467
+ vfp_store_reg32(tmp, a->vd);
468
tcg_temp_free_ptr(fpst);
469
tcg_temp_free_i32(tcg_rmode);
470
tcg_temp_free_i32(tmp);
471
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
472
}
473
474
tmp = tcg_temp_new_i32();
475
- neon_load_reg32(tmp, a->vm);
476
+ vfp_load_reg32(tmp, a->vm);
477
fpst = fpstatus_ptr(FPST_FPCR);
478
tcg_rmode = tcg_const_i32(float_round_to_zero);
479
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
480
gen_helper_rints(tmp, tmp, fpst);
481
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
482
- neon_store_reg32(tmp, a->vd);
483
+ vfp_store_reg32(tmp, a->vd);
484
tcg_temp_free_ptr(fpst);
485
tcg_temp_free_i32(tcg_rmode);
486
tcg_temp_free_i32(tmp);
487
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
488
}
489
490
tmp = tcg_temp_new_i32();
491
- neon_load_reg32(tmp, a->vm);
492
+ vfp_load_reg32(tmp, a->vm);
493
fpst = fpstatus_ptr(FPST_FPCR_F16);
494
gen_helper_rinth_exact(tmp, tmp, fpst);
495
- neon_store_reg32(tmp, a->vd);
496
+ vfp_store_reg32(tmp, a->vd);
497
tcg_temp_free_ptr(fpst);
498
tcg_temp_free_i32(tmp);
499
return true;
500
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
501
}
502
503
tmp = tcg_temp_new_i32();
504
- neon_load_reg32(tmp, a->vm);
505
+ vfp_load_reg32(tmp, a->vm);
506
fpst = fpstatus_ptr(FPST_FPCR);
507
gen_helper_rints_exact(tmp, tmp, fpst);
508
- neon_store_reg32(tmp, a->vd);
509
+ vfp_store_reg32(tmp, a->vd);
510
tcg_temp_free_ptr(fpst);
511
tcg_temp_free_i32(tmp);
512
return true;
513
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
514
515
vm = tcg_temp_new_i32();
516
vd = tcg_temp_new_i64();
517
- neon_load_reg32(vm, a->vm);
518
+ vfp_load_reg32(vm, a->vm);
519
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
520
neon_store_reg64(vd, a->vd);
521
tcg_temp_free_i32(vm);
522
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
523
vm = tcg_temp_new_i64();
524
neon_load_reg64(vm, a->vm);
525
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
526
- neon_store_reg32(vd, a->vd);
527
+ vfp_store_reg32(vd, a->vd);
528
tcg_temp_free_i32(vd);
529
tcg_temp_free_i64(vm);
530
return true;
531
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
532
}
533
534
vm = tcg_temp_new_i32();
535
- neon_load_reg32(vm, a->vm);
536
+ vfp_load_reg32(vm, a->vm);
537
fpst = fpstatus_ptr(FPST_FPCR_F16);
538
if (a->s) {
539
/* i32 -> f16 */
540
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
541
/* u32 -> f16 */
542
gen_helper_vfp_uitoh(vm, vm, fpst);
543
}
544
- neon_store_reg32(vm, a->vd);
545
+ vfp_store_reg32(vm, a->vd);
546
tcg_temp_free_i32(vm);
547
tcg_temp_free_ptr(fpst);
548
return true;
549
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
550
}
551
552
vm = tcg_temp_new_i32();
553
- neon_load_reg32(vm, a->vm);
554
+ vfp_load_reg32(vm, a->vm);
555
fpst = fpstatus_ptr(FPST_FPCR);
556
if (a->s) {
557
/* i32 -> f32 */
558
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
559
/* u32 -> f32 */
560
gen_helper_vfp_uitos(vm, vm, fpst);
561
}
562
- neon_store_reg32(vm, a->vd);
563
+ vfp_store_reg32(vm, a->vd);
564
tcg_temp_free_i32(vm);
565
tcg_temp_free_ptr(fpst);
566
return true;
567
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
568
569
vm = tcg_temp_new_i32();
570
vd = tcg_temp_new_i64();
571
- neon_load_reg32(vm, a->vm);
572
+ vfp_load_reg32(vm, a->vm);
573
fpst = fpstatus_ptr(FPST_FPCR);
574
if (a->s) {
575
/* i32 -> f64 */
576
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
577
vd = tcg_temp_new_i32();
578
neon_load_reg64(vm, a->vm);
579
gen_helper_vjcvt(vd, vm, cpu_env);
580
- neon_store_reg32(vd, a->vd);
581
+ vfp_store_reg32(vd, a->vd);
582
tcg_temp_free_i64(vm);
583
tcg_temp_free_i32(vd);
584
return true;
585
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
586
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
587
588
vd = tcg_temp_new_i32();
589
- neon_load_reg32(vd, a->vd);
590
+ vfp_load_reg32(vd, a->vd);
591
592
fpst = fpstatus_ptr(FPST_FPCR_F16);
593
shift = tcg_const_i32(frac_bits);
594
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
595
g_assert_not_reached();
596
}
597
598
- neon_store_reg32(vd, a->vd);
599
+ vfp_store_reg32(vd, a->vd);
600
tcg_temp_free_i32(vd);
601
tcg_temp_free_i32(shift);
602
tcg_temp_free_ptr(fpst);
603
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
604
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
605
606
vd = tcg_temp_new_i32();
607
- neon_load_reg32(vd, a->vd);
608
+ vfp_load_reg32(vd, a->vd);
609
610
fpst = fpstatus_ptr(FPST_FPCR);
611
shift = tcg_const_i32(frac_bits);
612
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
613
g_assert_not_reached();
614
}
615
616
- neon_store_reg32(vd, a->vd);
617
+ vfp_store_reg32(vd, a->vd);
618
tcg_temp_free_i32(vd);
619
tcg_temp_free_i32(shift);
620
tcg_temp_free_ptr(fpst);
621
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
622
623
fpst = fpstatus_ptr(FPST_FPCR_F16);
624
vm = tcg_temp_new_i32();
625
- neon_load_reg32(vm, a->vm);
626
+ vfp_load_reg32(vm, a->vm);
627
628
if (a->s) {
629
if (a->rz) {
630
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
631
gen_helper_vfp_touih(vm, vm, fpst);
632
}
633
}
634
- neon_store_reg32(vm, a->vd);
635
+ vfp_store_reg32(vm, a->vd);
636
tcg_temp_free_i32(vm);
637
tcg_temp_free_ptr(fpst);
638
return true;
639
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
640
641
fpst = fpstatus_ptr(FPST_FPCR);
642
vm = tcg_temp_new_i32();
643
- neon_load_reg32(vm, a->vm);
644
+ vfp_load_reg32(vm, a->vm);
645
646
if (a->s) {
647
if (a->rz) {
648
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
649
gen_helper_vfp_touis(vm, vm, fpst);
650
}
651
}
652
- neon_store_reg32(vm, a->vd);
653
+ vfp_store_reg32(vm, a->vd);
654
tcg_temp_free_i32(vm);
655
tcg_temp_free_ptr(fpst);
656
return true;
657
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
658
gen_helper_vfp_touid(vd, vm, fpst);
659
}
660
}
661
- neon_store_reg32(vd, a->vd);
662
+ vfp_store_reg32(vd, a->vd);
663
tcg_temp_free_i32(vd);
664
tcg_temp_free_i64(vm);
665
tcg_temp_free_ptr(fpst);
666
@@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
667
/* Insert low half of Vm into high half of Vd */
668
rm = tcg_temp_new_i32();
669
rd = tcg_temp_new_i32();
670
- neon_load_reg32(rm, a->vm);
671
- neon_load_reg32(rd, a->vd);
672
+ vfp_load_reg32(rm, a->vm);
673
+ vfp_load_reg32(rd, a->vd);
674
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
675
- neon_store_reg32(rd, a->vd);
676
+ vfp_store_reg32(rd, a->vd);
677
tcg_temp_free_i32(rm);
678
tcg_temp_free_i32(rd);
679
return true;
680
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a)
681
682
/* Set Vd to high half of Vm */
683
rm = tcg_temp_new_i32();
684
- neon_load_reg32(rm, a->vm);
685
+ vfp_load_reg32(rm, a->vm);
686
tcg_gen_shri_i32(rm, rm, 16);
687
- neon_store_reg32(rm, a->vd);
688
+ vfp_store_reg32(rm, a->vd);
689
tcg_temp_free_i32(rm);
690
return true;
691
}
692
--
520
--
693
2.20.1
521
2.25.1
694
522
695
523
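The CRF model in the patch above splits reset handling into two Resettable phases: crf_reset_enter() puts every register back to its reset value, and crf_reset_hold() re-derives the interrupt output from that state. As a rough standalone illustration of that split (plain C rather than the QEMU Resettable API; the register names and the status-and-not-mask pending rule are assumptions made for the sketch, not taken from the CRF code):

/*
 * Standalone sketch of a two-phase reset: "enter" restores register
 * reset values, "hold" recomputes outputs that depend on that state.
 * Register names and the pending rule are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_dev {
    unsigned int ir_status;   /* latched events */
    unsigned int ir_mask;     /* 1 = masked */
    bool irq_line;            /* output towards the interrupt controller */
};

static void toy_reset_enter(struct toy_dev *d)
{
    d->ir_status = 0;
    d->ir_mask = ~0u;         /* everything masked out of reset */
}

static void toy_reset_hold(struct toy_dev *d)
{
    d->irq_line = d->ir_status & ~d->ir_mask;
}

int main(void)
{
    struct toy_dev d = { .ir_status = 1, .ir_mask = 0, .irq_line = true };

    toy_reset_enter(&d);      /* register state back to defaults */
    toy_reset_hold(&d);       /* outputs follow the new state */
    printf("irq_line=%d\n", d.irq_line);
    return 0;
}

Keeping the output update in the hold phase means every register has already been reset before anything derived from the registers is recomputed.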
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.
3
Connect the ZynqMP CRF (Clock Reset FPD) device.
4
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
6
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Luc Michel <luc@lmichel.fr>
8
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Message-id: 20220316164645.2303510-5-edgar.iglesias@gmail.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/translate.c | 26 +++++++++
12
include/hw/arm/xlnx-zynqmp.h | 2 ++
11
target/arm/translate-neon.c.inc | 94 ++++++++++++++++-----------------
13
hw/arm/xlnx-zynqmp.c | 16 ++++++++++++++++
12
2 files changed, 73 insertions(+), 47 deletions(-)
14
2 files changed, 18 insertions(+)
13
15
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
18
--- a/include/hw/arm/xlnx-zynqmp.h
17
+++ b/target/arm/translate.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
18
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
20
@@ -XXX,XX +XXX,XX @@
19
}
21
#include "hw/nvram/xlnx-bbram.h"
22
#include "hw/nvram/xlnx-zynqmp-efuse.h"
23
#include "hw/or-irq.h"
24
+#include "hw/misc/xlnx-zynqmp-crf.h"
25
26
#define TYPE_XLNX_ZYNQMP "xlnx-zynqmp"
27
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
28
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
29
XlnxZDMA adma[XLNX_ZYNQMP_NUM_ADMA_CH];
30
XlnxCSUDMA qspi_dma;
31
qemu_or_irq qspi_irq_orgate;
32
+ XlnxZynqMPCRF crf;
33
34
char *boot_cpu;
35
ARMCPU *boot_cpu_ptr;
36
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/arm/xlnx-zynqmp.c
39
+++ b/hw/arm/xlnx-zynqmp.c
40
@@ -XXX,XX +XXX,XX @@
41
#define QSPI_DMA_ADDR 0xff0f0800
42
#define NUM_QSPI_IRQ_LINES 2
43
44
+#define CRF_ADDR 0xfd1a0000
45
+#define CRF_IRQ 120
46
+
47
/* Serializer/Deserializer. */
48
#define SERDES_ADDR 0xfd400000
49
#define SERDES_SIZE 0x20000
50
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_efuse(XlnxZynqMPState *s, qemu_irq *gic)
51
sysbus_connect_irq(sbd, 0, gic[EFUSE_IRQ]);
20
}
52
}
21
53
22
+static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
54
+static void xlnx_zynqmp_create_crf(XlnxZynqMPState *s, qemu_irq *gic)
23
+{
55
+{
24
+ long off = neon_element_offset(reg, ele, memop);
56
+ SysBusDevice *sbd;
25
+
57
+
26
+ switch (memop) {
58
+ object_initialize_child(OBJECT(s), "crf", &s->crf, TYPE_XLNX_ZYNQMP_CRF);
27
+ case MO_Q:
59
+ sbd = SYS_BUS_DEVICE(&s->crf);
28
+ tcg_gen_ld_i64(dest, cpu_env, off);
60
+
29
+ break;
61
+ sysbus_realize(sbd, &error_fatal);
30
+ default:
62
+ sysbus_mmio_map(sbd, 0, CRF_ADDR);
31
+ g_assert_not_reached();
63
+ sysbus_connect_irq(sbd, 0, gic[CRF_IRQ]);
32
+ }
33
+}
64
+}
34
+
65
+
35
static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
66
static void xlnx_zynqmp_create_unimp_mmio(XlnxZynqMPState *s)
36
{
67
{
37
long off = neon_element_offset(reg, ele, memop);
68
static const struct UnimpInfo {
38
@@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
69
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
39
}
70
40
}
71
xlnx_zynqmp_create_bbram(s, gic_spi);
41
72
xlnx_zynqmp_create_efuse(s, gic_spi);
42
+static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
73
+ xlnx_zynqmp_create_crf(s, gic_spi);
43
+{
74
xlnx_zynqmp_create_unimp_mmio(s);
44
+ long off = neon_element_offset(reg, ele, memop);
75
45
+
76
for (i = 0; i < XLNX_ZYNQMP_NUM_GDMA_CH; i++) {
46
+ switch (memop) {
47
+ case MO_64:
48
+ tcg_gen_st_i64(src, cpu_env, off);
49
+ break;
50
+ default:
51
+ g_assert_not_reached();
52
+ }
53
+}
54
+
55
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
56
{
57
TCGv_ptr ret = tcg_temp_new_ptr();
58
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.c.inc
61
+++ b/target/arm/translate-neon.c.inc
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
63
for (pass = 0; pass < a->q + 1; pass++) {
64
TCGv_i64 tmp = tcg_temp_new_i64();
65
66
- neon_load_reg64(tmp, a->vm + pass);
67
+ read_neon_element64(tmp, a->vm, pass, MO_64);
68
fn(tmp, cpu_env, tmp, constimm);
69
- neon_store_reg64(tmp, a->vd + pass);
70
+ write_neon_element64(tmp, a->vd, pass, MO_64);
71
tcg_temp_free_i64(tmp);
72
}
73
tcg_temp_free_i64(constimm);
74
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
75
rd = tcg_temp_new_i32();
76
77
/* Load both inputs first to avoid potential overwrite if rm == rd */
78
- neon_load_reg64(rm1, a->vm);
79
- neon_load_reg64(rm2, a->vm + 1);
80
+ read_neon_element64(rm1, a->vm, 0, MO_64);
81
+ read_neon_element64(rm2, a->vm, 1, MO_64);
82
83
shiftfn(rm1, rm1, constimm);
84
narrowfn(rd, cpu_env, rm1);
85
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
86
tcg_gen_shli_i64(tmp, tmp, a->shift);
87
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
88
}
89
- neon_store_reg64(tmp, a->vd);
90
+ write_neon_element64(tmp, a->vd, 0, MO_64);
91
92
widenfn(tmp, rm1);
93
tcg_temp_free_i32(rm1);
94
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
95
tcg_gen_shli_i64(tmp, tmp, a->shift);
96
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
97
}
98
- neon_store_reg64(tmp, a->vd + 1);
99
+ write_neon_element64(tmp, a->vd, 1, MO_64);
100
tcg_temp_free_i64(tmp);
101
return true;
102
}
103
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
104
rm_64 = tcg_temp_new_i64();
105
106
if (src1_wide) {
107
- neon_load_reg64(rn0_64, a->vn);
108
+ read_neon_element64(rn0_64, a->vn, 0, MO_64);
109
} else {
110
TCGv_i32 tmp = tcg_temp_new_i32();
111
read_neon_element32(tmp, a->vn, 0, MO_32);
112
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
113
* avoid incorrect results if a narrow input overlaps with the result.
114
*/
115
if (src1_wide) {
116
- neon_load_reg64(rn1_64, a->vn + 1);
117
+ read_neon_element64(rn1_64, a->vn, 1, MO_64);
118
} else {
119
TCGv_i32 tmp = tcg_temp_new_i32();
120
read_neon_element32(tmp, a->vn, 1, MO_32);
121
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
122
rm = tcg_temp_new_i32();
123
read_neon_element32(rm, a->vm, 1, MO_32);
124
125
- neon_store_reg64(rn0_64, a->vd);
126
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
127
128
widenfn(rm_64, rm);
129
tcg_temp_free_i32(rm);
130
opfn(rn1_64, rn1_64, rm_64);
131
- neon_store_reg64(rn1_64, a->vd + 1);
132
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
133
134
tcg_temp_free_i64(rn0_64);
135
tcg_temp_free_i64(rn1_64);
136
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
137
rd0 = tcg_temp_new_i32();
138
rd1 = tcg_temp_new_i32();
139
140
- neon_load_reg64(rn_64, a->vn);
141
- neon_load_reg64(rm_64, a->vm);
142
+ read_neon_element64(rn_64, a->vn, 0, MO_64);
143
+ read_neon_element64(rm_64, a->vm, 0, MO_64);
144
145
opfn(rn_64, rn_64, rm_64);
146
147
narrowfn(rd0, rn_64);
148
149
- neon_load_reg64(rn_64, a->vn + 1);
150
- neon_load_reg64(rm_64, a->vm + 1);
151
+ read_neon_element64(rn_64, a->vn, 1, MO_64);
152
+ read_neon_element64(rm_64, a->vm, 1, MO_64);
153
154
opfn(rn_64, rn_64, rm_64);
155
156
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
157
/* Don't store results until after all loads: they might overlap */
158
if (accfn) {
159
tmp = tcg_temp_new_i64();
160
- neon_load_reg64(tmp, a->vd);
161
+ read_neon_element64(tmp, a->vd, 0, MO_64);
162
accfn(tmp, tmp, rd0);
163
- neon_store_reg64(tmp, a->vd);
164
- neon_load_reg64(tmp, a->vd + 1);
165
+ write_neon_element64(tmp, a->vd, 0, MO_64);
166
+ read_neon_element64(tmp, a->vd, 1, MO_64);
167
accfn(tmp, tmp, rd1);
168
- neon_store_reg64(tmp, a->vd + 1);
169
+ write_neon_element64(tmp, a->vd, 1, MO_64);
170
tcg_temp_free_i64(tmp);
171
} else {
172
- neon_store_reg64(rd0, a->vd);
173
- neon_store_reg64(rd1, a->vd + 1);
174
+ write_neon_element64(rd0, a->vd, 0, MO_64);
175
+ write_neon_element64(rd1, a->vd, 1, MO_64);
176
}
177
178
tcg_temp_free_i64(rd0);
179
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
180
181
if (accfn) {
182
TCGv_i64 t64 = tcg_temp_new_i64();
183
- neon_load_reg64(t64, a->vd);
184
+ read_neon_element64(t64, a->vd, 0, MO_64);
185
accfn(t64, t64, rn0_64);
186
- neon_store_reg64(t64, a->vd);
187
- neon_load_reg64(t64, a->vd + 1);
188
+ write_neon_element64(t64, a->vd, 0, MO_64);
189
+ read_neon_element64(t64, a->vd, 1, MO_64);
190
accfn(t64, t64, rn1_64);
191
- neon_store_reg64(t64, a->vd + 1);
192
+ write_neon_element64(t64, a->vd, 1, MO_64);
193
tcg_temp_free_i64(t64);
194
} else {
195
- neon_store_reg64(rn0_64, a->vd);
196
- neon_store_reg64(rn1_64, a->vd + 1);
197
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
198
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
199
}
200
tcg_temp_free_i64(rn0_64);
201
tcg_temp_free_i64(rn1_64);
202
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
203
right = tcg_temp_new_i64();
204
dest = tcg_temp_new_i64();
205
206
- neon_load_reg64(right, a->vn);
207
- neon_load_reg64(left, a->vm);
208
+ read_neon_element64(right, a->vn, 0, MO_64);
209
+ read_neon_element64(left, a->vm, 0, MO_64);
210
tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
211
- neon_store_reg64(dest, a->vd);
212
+ write_neon_element64(dest, a->vd, 0, MO_64);
213
214
tcg_temp_free_i64(left);
215
tcg_temp_free_i64(right);
216
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
217
destright = tcg_temp_new_i64();
218
219
if (a->imm < 8) {
220
- neon_load_reg64(right, a->vn);
221
- neon_load_reg64(middle, a->vn + 1);
222
+ read_neon_element64(right, a->vn, 0, MO_64);
223
+ read_neon_element64(middle, a->vn, 1, MO_64);
224
tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
225
- neon_load_reg64(left, a->vm);
226
+ read_neon_element64(left, a->vm, 0, MO_64);
227
tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
228
} else {
229
- neon_load_reg64(right, a->vn + 1);
230
- neon_load_reg64(middle, a->vm);
231
+ read_neon_element64(right, a->vn, 1, MO_64);
232
+ read_neon_element64(middle, a->vm, 0, MO_64);
233
tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
234
- neon_load_reg64(left, a->vm + 1);
235
+ read_neon_element64(left, a->vm, 1, MO_64);
236
tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
237
}
238
239
- neon_store_reg64(destright, a->vd);
240
- neon_store_reg64(destleft, a->vd + 1);
241
+ write_neon_element64(destright, a->vd, 0, MO_64);
242
+ write_neon_element64(destleft, a->vd, 1, MO_64);
243
244
tcg_temp_free_i64(destright);
245
tcg_temp_free_i64(destleft);
246
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
247
248
if (accfn) {
249
TCGv_i64 tmp64 = tcg_temp_new_i64();
250
- neon_load_reg64(tmp64, a->vd + pass);
251
+ read_neon_element64(tmp64, a->vd, pass, MO_64);
252
accfn(rd_64, tmp64, rd_64);
253
tcg_temp_free_i64(tmp64);
254
}
255
- neon_store_reg64(rd_64, a->vd + pass);
256
+ write_neon_element64(rd_64, a->vd, pass, MO_64);
257
tcg_temp_free_i64(rd_64);
258
}
259
return true;
260
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
261
rd0 = tcg_temp_new_i32();
262
rd1 = tcg_temp_new_i32();
263
264
- neon_load_reg64(rm, a->vm);
265
+ read_neon_element64(rm, a->vm, 0, MO_64);
266
narrowfn(rd0, cpu_env, rm);
267
- neon_load_reg64(rm, a->vm + 1);
268
+ read_neon_element64(rm, a->vm, 1, MO_64);
269
narrowfn(rd1, cpu_env, rm);
270
write_neon_element32(rd0, a->vd, 0, MO_32);
271
write_neon_element32(rd1, a->vd, 1, MO_32);
272
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
273
274
widenfn(rd, rm0);
275
tcg_gen_shli_i64(rd, rd, 8 << a->size);
276
- neon_store_reg64(rd, a->vd);
277
+ write_neon_element64(rd, a->vd, 0, MO_64);
278
widenfn(rd, rm1);
279
tcg_gen_shli_i64(rd, rd, 8 << a->size);
280
- neon_store_reg64(rd, a->vd + 1);
281
+ write_neon_element64(rd, a->vd, 1, MO_64);
282
283
tcg_temp_free_i64(rd);
284
tcg_temp_free_i32(rm0);
285
@@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a)
286
rm = tcg_temp_new_i64();
287
rd = tcg_temp_new_i64();
288
for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
289
- neon_load_reg64(rm, a->vm + pass);
290
- neon_load_reg64(rd, a->vd + pass);
291
- neon_store_reg64(rm, a->vd + pass);
292
- neon_store_reg64(rd, a->vm + pass);
293
+ read_neon_element64(rm, a->vm, pass, MO_64);
294
+ read_neon_element64(rd, a->vd, pass, MO_64);
295
+ write_neon_element64(rm, a->vd, pass, MO_64);
296
+ write_neon_element64(rd, a->vm, pass, MO_64);
297
}
298
tcg_temp_free_i64(rm);
299
tcg_temp_free_i64(rd);
300
--
77
--
301
2.20.1
78
2.25.1
302
79
303
80
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
These are the only users of neon_reg_offset, so remove that.
3
Add a model of the Xilinx ZynqMP APU Control.
4
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Luc Michel <luc@lmichel.fr>
6
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20220316164645.2303510-6-edgar.iglesias@gmail.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
9
---
10
target/arm/translate.c | 14 ++------------
10
include/hw/misc/xlnx-zynqmp-apu-ctrl.h | 93 +++++++++
11
1 file changed, 2 insertions(+), 12 deletions(-)
11
hw/misc/xlnx-zynqmp-apu-ctrl.c | 253 +++++++++++++++++++++++++
12
hw/misc/meson.build | 1 +
13
3 files changed, 347 insertions(+)
14
create mode 100644 include/hw/misc/xlnx-zynqmp-apu-ctrl.h
15
create mode 100644 hw/misc/xlnx-zynqmp-apu-ctrl.c
12
16
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
17
diff --git a/include/hw/misc/xlnx-zynqmp-apu-ctrl.h b/include/hw/misc/xlnx-zynqmp-apu-ctrl.h
18
new file mode 100644
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/include/hw/misc/xlnx-zynqmp-apu-ctrl.h
22
@@ -XXX,XX +XXX,XX @@
23
+/*
24
+ * QEMU model of ZynqMP APU Control.
25
+ *
26
+ * Copyright (c) 2013-2022 Xilinx Inc
27
+ * SPDX-License-Identifier: GPL-2.0-or-later
28
+ *
29
+ * Written by Peter Crosthwaite <peter.crosthwaite@xilinx.com> and
30
+ * Edgar E. Iglesias <edgar.iglesias@xilinx.com>
31
+ *
32
+ */
33
+#ifndef HW_MISC_XLNX_ZYNQMP_APU_CTRL_H
34
+#define HW_MISC_XLNX_ZYNQMP_APU_CTRL_H
35
+
36
+#include "hw/sysbus.h"
37
+#include "hw/register.h"
38
+#include "target/arm/cpu.h"
39
+
40
+#define TYPE_XLNX_ZYNQMP_APU_CTRL "xlnx.apu-ctrl"
41
+OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPAPUCtrl, XLNX_ZYNQMP_APU_CTRL)
42
+
43
+REG32(APU_ERR_CTRL, 0x0)
44
+ FIELD(APU_ERR_CTRL, PSLVERR, 0, 1)
45
+REG32(ISR, 0x10)
46
+ FIELD(ISR, INV_APB, 0, 1)
47
+REG32(IMR, 0x14)
48
+ FIELD(IMR, INV_APB, 0, 1)
49
+REG32(IEN, 0x18)
50
+ FIELD(IEN, INV_APB, 0, 1)
51
+REG32(IDS, 0x1c)
52
+ FIELD(IDS, INV_APB, 0, 1)
53
+REG32(CONFIG_0, 0x20)
54
+ FIELD(CONFIG_0, CFGTE, 24, 4)
55
+ FIELD(CONFIG_0, CFGEND, 16, 4)
56
+ FIELD(CONFIG_0, VINITHI, 8, 4)
57
+ FIELD(CONFIG_0, AA64NAA32, 0, 4)
58
+REG32(CONFIG_1, 0x24)
59
+ FIELD(CONFIG_1, L2RSTDISABLE, 29, 1)
60
+ FIELD(CONFIG_1, L1RSTDISABLE, 28, 1)
61
+ FIELD(CONFIG_1, CP15DISABLE, 0, 4)
62
+REG32(RVBARADDR0L, 0x40)
63
+ FIELD(RVBARADDR0L, ADDR, 2, 30)
64
+REG32(RVBARADDR0H, 0x44)
65
+ FIELD(RVBARADDR0H, ADDR, 0, 8)
66
+REG32(RVBARADDR1L, 0x48)
67
+ FIELD(RVBARADDR1L, ADDR, 2, 30)
68
+REG32(RVBARADDR1H, 0x4c)
69
+ FIELD(RVBARADDR1H, ADDR, 0, 8)
70
+REG32(RVBARADDR2L, 0x50)
71
+ FIELD(RVBARADDR2L, ADDR, 2, 30)
72
+REG32(RVBARADDR2H, 0x54)
73
+ FIELD(RVBARADDR2H, ADDR, 0, 8)
74
+REG32(RVBARADDR3L, 0x58)
75
+ FIELD(RVBARADDR3L, ADDR, 2, 30)
76
+REG32(RVBARADDR3H, 0x5c)
77
+ FIELD(RVBARADDR3H, ADDR, 0, 8)
78
+REG32(ACE_CTRL, 0x60)
79
+ FIELD(ACE_CTRL, AWQOS, 16, 4)
80
+ FIELD(ACE_CTRL, ARQOS, 0, 4)
81
+REG32(SNOOP_CTRL, 0x80)
82
+ FIELD(SNOOP_CTRL, ACE_INACT, 4, 1)
83
+ FIELD(SNOOP_CTRL, ACP_INACT, 0, 1)
84
+REG32(PWRCTL, 0x90)
85
+ FIELD(PWRCTL, CLREXMONREQ, 17, 1)
86
+ FIELD(PWRCTL, L2FLUSHREQ, 16, 1)
87
+ FIELD(PWRCTL, CPUPWRDWNREQ, 0, 4)
88
+REG32(PWRSTAT, 0x94)
89
+ FIELD(PWRSTAT, CLREXMONACK, 17, 1)
90
+ FIELD(PWRSTAT, L2FLUSHDONE, 16, 1)
91
+ FIELD(PWRSTAT, DBGNOPWRDWN, 0, 4)
92
+
93
+#define APU_R_MAX ((R_PWRSTAT) + 1)
94
+
95
+#define APU_MAX_CPU 4
96
+
97
+struct XlnxZynqMPAPUCtrl {
98
+ SysBusDevice busdev;
99
+
100
+ ARMCPU *cpus[APU_MAX_CPU];
101
+ /* WFIs towards PMU. */
102
+ qemu_irq wfi_out[4];
103
+ /* CPU Power status towards INTC Redirect. */
104
+ qemu_irq cpu_power_status[4];
105
+ qemu_irq irq_imr;
106
+
107
+ uint8_t cpu_pwrdwn_req;
108
+ uint8_t cpu_in_wfi;
109
+
110
+ RegisterInfoArray *reg_array;
111
+ uint32_t regs[APU_R_MAX];
112
+ RegisterInfo regs_info[APU_R_MAX];
113
+};
114
+
115
+#endif
116
diff --git a/hw/misc/xlnx-zynqmp-apu-ctrl.c b/hw/misc/xlnx-zynqmp-apu-ctrl.c
117
new file mode 100644
118
index XXXXXXX..XXXXXXX
119
--- /dev/null
120
+++ b/hw/misc/xlnx-zynqmp-apu-ctrl.c
121
@@ -XXX,XX +XXX,XX @@
122
+/*
123
+ * QEMU model of the ZynqMP APU Control.
124
+ *
125
+ * Copyright (c) 2013-2022 Xilinx Inc
126
+ * SPDX-License-Identifier: GPL-2.0-or-later
127
+ *
128
+ * Written by Peter Crosthwaite <peter.crosthwaite@xilinx.com> and
129
+ * Edgar E. Iglesias <edgar.iglesias@xilinx.com>
130
+ */
131
+
132
+#include "qemu/osdep.h"
133
+#include "qapi/error.h"
134
+#include "qemu/log.h"
135
+#include "migration/vmstate.h"
136
+#include "hw/qdev-properties.h"
137
+#include "hw/sysbus.h"
138
+#include "hw/irq.h"
139
+#include "hw/register.h"
140
+
141
+#include "qemu/bitops.h"
142
+#include "qapi/qmp/qerror.h"
143
+
144
+#include "hw/misc/xlnx-zynqmp-apu-ctrl.h"
145
+
146
+#ifndef XILINX_ZYNQMP_APU_ERR_DEBUG
147
+#define XILINX_ZYNQMP_APU_ERR_DEBUG 0
148
+#endif
149
+
150
+static void update_wfi_out(void *opaque)
151
+{
152
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(opaque);
153
+ unsigned int i, wfi_pending;
154
+
155
+ wfi_pending = s->cpu_pwrdwn_req & s->cpu_in_wfi;
156
+ for (i = 0; i < APU_MAX_CPU; i++) {
157
+ qemu_set_irq(s->wfi_out[i], !!(wfi_pending & (1 << i)));
158
+ }
159
+}
160
+
161
+static void zynqmp_apu_rvbar_post_write(RegisterInfo *reg, uint64_t val)
162
+{
163
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(reg->opaque);
164
+ int i;
165
+
166
+ for (i = 0; i < APU_MAX_CPU; ++i) {
167
+ uint64_t rvbar = s->regs[R_RVBARADDR0L + 2 * i] +
168
+ ((uint64_t)s->regs[R_RVBARADDR0H + 2 * i] << 32);
169
+ if (s->cpus[i]) {
170
+ object_property_set_int(OBJECT(s->cpus[i]), "rvbar", rvbar,
171
+ &error_abort);
172
+ }
173
+ }
174
+}
175
+
176
+static void zynqmp_apu_pwrctl_post_write(RegisterInfo *reg, uint64_t val)
177
+{
178
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(reg->opaque);
179
+ unsigned int i, new;
180
+
181
+ for (i = 0; i < APU_MAX_CPU; i++) {
182
+ new = val & (1 << i);
183
+ /* Check if CPU's CPUPWRDNREQ has changed. If yes, update GPIOs. */
184
+ if (new != (s->cpu_pwrdwn_req & (1 << i))) {
185
+ qemu_set_irq(s->cpu_power_status[i], !!new);
186
+ }
187
+ s->cpu_pwrdwn_req &= ~(1 << i);
188
+ s->cpu_pwrdwn_req |= new;
189
+ }
190
+ update_wfi_out(s);
191
+}
192
+
193
+static void imr_update_irq(XlnxZynqMPAPUCtrl *s)
194
+{
195
+ bool pending = s->regs[R_ISR] & ~s->regs[R_IMR];
196
+ qemu_set_irq(s->irq_imr, pending);
197
+}
198
+
199
+static void isr_postw(RegisterInfo *reg, uint64_t val64)
200
+{
201
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(reg->opaque);
202
+ imr_update_irq(s);
203
+}
204
+
205
+static uint64_t ien_prew(RegisterInfo *reg, uint64_t val64)
206
+{
207
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(reg->opaque);
208
+ uint32_t val = val64;
209
+
210
+ s->regs[R_IMR] &= ~val;
211
+ imr_update_irq(s);
212
+ return 0;
213
+}
214
+
215
+static uint64_t ids_prew(RegisterInfo *reg, uint64_t val64)
216
+{
217
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(reg->opaque);
218
+ uint32_t val = val64;
219
+
220
+ s->regs[R_IMR] |= val;
221
+ imr_update_irq(s);
222
+ return 0;
223
+}
224
+
225
+static const RegisterAccessInfo zynqmp_apu_regs_info[] = {
226
+#define RVBAR_REGDEF(n) \
227
+ { .name = "RVBAR CPU " #n " Low", .addr = A_RVBARADDR ## n ## L, \
228
+ .reset = 0xffff0000ul, \
229
+ .post_write = zynqmp_apu_rvbar_post_write, \
230
+ },{ .name = "RVBAR CPU " #n " High", .addr = A_RVBARADDR ## n ## H, \
231
+ .post_write = zynqmp_apu_rvbar_post_write, \
232
+ }
233
+ { .name = "ERR_CTRL", .addr = A_APU_ERR_CTRL,
234
+ },{ .name = "ISR", .addr = A_ISR,
235
+ .w1c = 0x1,
236
+ .post_write = isr_postw,
237
+ },{ .name = "IMR", .addr = A_IMR,
238
+ .reset = 0x1,
239
+ .ro = 0x1,
240
+ },{ .name = "IEN", .addr = A_IEN,
241
+ .pre_write = ien_prew,
242
+ },{ .name = "IDS", .addr = A_IDS,
243
+ .pre_write = ids_prew,
244
+ },{ .name = "CONFIG_0", .addr = A_CONFIG_0,
245
+ .reset = 0xf0f,
246
+ },{ .name = "CONFIG_1", .addr = A_CONFIG_1,
247
+ },
248
+ RVBAR_REGDEF(0),
249
+ RVBAR_REGDEF(1),
250
+ RVBAR_REGDEF(2),
251
+ RVBAR_REGDEF(3),
252
+ { .name = "ACE_CTRL", .addr = A_ACE_CTRL,
253
+ .reset = 0xf000f,
254
+ },{ .name = "SNOOP_CTRL", .addr = A_SNOOP_CTRL,
255
+ },{ .name = "PWRCTL", .addr = A_PWRCTL,
256
+ .post_write = zynqmp_apu_pwrctl_post_write,
257
+ },{ .name = "PWRSTAT", .addr = A_PWRSTAT,
258
+ .ro = 0x3000f,
259
+ }
260
+};
261
+
262
+static void zynqmp_apu_reset_enter(Object *obj, ResetType type)
263
+{
264
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
265
+ int i;
266
+
267
+ for (i = 0; i < APU_R_MAX; ++i) {
268
+ register_reset(&s->regs_info[i]);
269
+ }
270
+
271
+ s->cpu_pwrdwn_req = 0;
272
+ s->cpu_in_wfi = 0;
273
+}
274
+
275
+static void zynqmp_apu_reset_hold(Object *obj)
276
+{
277
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
278
+
279
+ update_wfi_out(s);
280
+ imr_update_irq(s);
281
+}
282
+
283
+static const MemoryRegionOps zynqmp_apu_ops = {
284
+ .read = register_read_memory,
285
+ .write = register_write_memory,
286
+ .endianness = DEVICE_LITTLE_ENDIAN,
287
+ .valid = {
288
+ .min_access_size = 4,
289
+ .max_access_size = 4,
290
+ }
291
+};
292
+
293
+static void zynqmp_apu_handle_wfi(void *opaque, int irq, int level)
294
+{
295
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(opaque);
296
+
297
+ s->cpu_in_wfi = deposit32(s->cpu_in_wfi, irq, 1, level);
298
+ update_wfi_out(s);
299
+}
300
+
301
+static void zynqmp_apu_init(Object *obj)
302
+{
303
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
304
+ int i;
305
+
306
+ s->reg_array =
307
+ register_init_block32(DEVICE(obj), zynqmp_apu_regs_info,
308
+ ARRAY_SIZE(zynqmp_apu_regs_info),
309
+ s->regs_info, s->regs,
310
+ &zynqmp_apu_ops,
311
+ XILINX_ZYNQMP_APU_ERR_DEBUG,
312
+ APU_R_MAX * 4);
313
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->reg_array->mem);
314
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq_imr);
315
+
316
+ for (i = 0; i < APU_MAX_CPU; ++i) {
317
+ g_autofree gchar *prop_name = g_strdup_printf("cpu%d", i);
318
+ object_property_add_link(obj, prop_name, TYPE_ARM_CPU,
319
+ (Object **)&s->cpus[i],
320
+ qdev_prop_allow_set_link_before_realize,
321
+ OBJ_PROP_LINK_STRONG);
322
+ }
323
+
324
+ /* wfi_out is used to connect to PMU GPIs. */
325
+ qdev_init_gpio_out_named(DEVICE(obj), s->wfi_out, "wfi_out", 4);
326
+ /* CPU_POWER_STATUS is used to connect to INTC redirect. */
327
+ qdev_init_gpio_out_named(DEVICE(obj), s->cpu_power_status,
328
+ "CPU_POWER_STATUS", 4);
329
+ /* wfi_in is used as input from CPUs as wfi request. */
330
+ qdev_init_gpio_in_named(DEVICE(obj), zynqmp_apu_handle_wfi, "wfi_in", 4);
331
+}
332
+
333
+static void zynqmp_apu_finalize(Object *obj)
334
+{
335
+ XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
336
+ register_finalize_block(s->reg_array);
337
+}
338
+
339
+static const VMStateDescription vmstate_zynqmp_apu = {
340
+ .name = TYPE_XLNX_ZYNQMP_APU_CTRL,
341
+ .version_id = 1,
342
+ .minimum_version_id = 1,
343
+ .fields = (VMStateField[]) {
344
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPAPUCtrl, APU_R_MAX),
345
+ VMSTATE_END_OF_LIST(),
346
+ }
347
+};
348
+
349
+static void zynqmp_apu_class_init(ObjectClass *klass, void *data)
350
+{
351
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
352
+ DeviceClass *dc = DEVICE_CLASS(klass);
353
+
354
+ dc->vmsd = &vmstate_zynqmp_apu;
355
+
356
+ rc->phases.enter = zynqmp_apu_reset_enter;
357
+ rc->phases.hold = zynqmp_apu_reset_hold;
358
+}
359
+
360
+static const TypeInfo zynqmp_apu_info = {
361
+ .name = TYPE_XLNX_ZYNQMP_APU_CTRL,
362
+ .parent = TYPE_SYS_BUS_DEVICE,
363
+ .instance_size = sizeof(XlnxZynqMPAPUCtrl),
364
+ .class_init = zynqmp_apu_class_init,
365
+ .instance_init = zynqmp_apu_init,
366
+ .instance_finalize = zynqmp_apu_finalize,
367
+};
368
+
369
+static void zynqmp_apu_register_types(void)
370
+{
371
+ type_register_static(&zynqmp_apu_info);
372
+}
373
+
374
+type_init(zynqmp_apu_register_types)
375
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
14
index XXXXXXX..XXXXXXX 100644
376
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
377
--- a/hw/misc/meson.build
16
+++ b/target/arm/translate.c
378
+++ b/hw/misc/meson.build
17
@@ -XXX,XX +XXX,XX @@ static inline long vfp_reg_offset(bool dp, unsigned reg)
379
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
18
}
380
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
19
}
381
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c'))
20
382
specific_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-crf.c'))
21
-/* Return the offset of a 32-bit piece of a NEON register.
383
+specific_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-apu-ctrl.c'))
22
- zero is the least significant end of the register. */
384
softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files(
23
-static inline long
385
'xlnx-versal-xramc.c',
24
-neon_reg_offset (int reg, int n)
386
'xlnx-versal-pmc-iou-slcr.c',
25
-{
26
- int sreg;
27
- sreg = reg * 2 + n;
28
- return vfp_reg_offset(0, sreg);
29
-}
30
-
31
static TCGv_i32 neon_load_reg(int reg, int pass)
32
{
33
TCGv_i32 tmp = tcg_temp_new_i32();
34
- tcg_gen_ld_i32(tmp, cpu_env, neon_reg_offset(reg, pass));
35
+ tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
36
return tmp;
37
}
38
39
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
40
{
41
- tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
42
+ tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
43
tcg_temp_free_i32(var);
44
}
45
46
--
387
--
47
2.20.1
388
2.25.1
48
49
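A detail of the APU control model above that is easy to miss: the per-CPU wfi_out lines towards the PMU only assert when the guest has also set the corresponding CPUPWRDWNREQ bit in PWRCTL, so a core sitting in WFI without a pending powerdown request is never reported. A minimal standalone sketch of that gating, detached from QEMU's qemu_irq/GPIO plumbing (the printf output stands in for the GPIO lines):

/*
 * Standalone sketch of the PWRCTL/WFI gating in the APU control model:
 * a CPU's wfi_out line asserts only when that CPU is both in WFI and
 * has its powerdown request bit set.
 */
#include <stdio.h>
#include <stdint.h>

#define NUM_CPUS 4

static uint8_t cpu_pwrdwn_req;   /* PWRCTL.CPUPWRDWNREQ bits */
static uint8_t cpu_in_wfi;       /* wfi_in GPIO state per CPU */

static void update_wfi_out(void)
{
    uint8_t wfi_pending = cpu_pwrdwn_req & cpu_in_wfi;

    for (int i = 0; i < NUM_CPUS; i++) {
        printf("cpu%d wfi_out=%d\n", i, !!(wfi_pending & (1 << i)));
    }
}

int main(void)
{
    cpu_in_wfi = 1 << 1;        /* CPU1 executes WFI: line stays low */
    update_wfi_out();

    cpu_pwrdwn_req = 1 << 1;    /* software requests CPU1 powerdown */
    update_wfi_out();           /* now cpu1's wfi_out goes high */
    return 0;
}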
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
This function makes it clear that we're talking about the whole
3
Connect the ZynqMP APU Control device.
4
register, and not the 32-bit piece at index 0. This fixes a bug
5
when running on a big-endian host.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
6
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Luc Michel <luc@lmichel.fr>
8
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Message-id: 20220316164645.2303510-7-edgar.iglesias@gmail.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/translate.c | 8 ++++++
12
include/hw/arm/xlnx-zynqmp.h | 4 +++-
13
target/arm/translate-neon.c.inc | 44 ++++++++++++++++-----------------
13
hw/arm/xlnx-zynqmp.c | 25 +++++++++++++++++++++++--
14
target/arm/translate-vfp.c.inc | 2 +-
14
2 files changed, 26 insertions(+), 3 deletions(-)
15
3 files changed, 31 insertions(+), 23 deletions(-)
16
15
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate.c
18
--- a/include/hw/arm/xlnx-zynqmp.h
20
+++ b/target/arm/translate.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
21
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
20
@@ -XXX,XX +XXX,XX @@
22
unallocated_encoding(s);
21
#include "hw/nvram/xlnx-bbram.h"
22
#include "hw/nvram/xlnx-zynqmp-efuse.h"
23
#include "hw/or-irq.h"
24
+#include "hw/misc/xlnx-zynqmp-apu-ctrl.h"
25
#include "hw/misc/xlnx-zynqmp-crf.h"
26
27
#define TYPE_XLNX_ZYNQMP "xlnx-zynqmp"
28
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
29
/*
30
* Unimplemented mmio regions needed to boot some images.
31
*/
32
-#define XLNX_ZYNQMP_NUM_UNIMP_AREAS 2
33
+#define XLNX_ZYNQMP_NUM_UNIMP_AREAS 1
34
35
struct XlnxZynqMPState {
36
/*< private >*/
37
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
38
XlnxZDMA adma[XLNX_ZYNQMP_NUM_ADMA_CH];
39
XlnxCSUDMA qspi_dma;
40
qemu_or_irq qspi_irq_orgate;
41
+ XlnxZynqMPAPUCtrl apu_ctrl;
42
XlnxZynqMPCRF crf;
43
44
char *boot_cpu;
45
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/arm/xlnx-zynqmp.c
48
+++ b/hw/arm/xlnx-zynqmp.c
49
@@ -XXX,XX +XXX,XX @@
50
#define DPDMA_IRQ 116
51
52
#define APU_ADDR 0xfd5c0000
53
-#define APU_SIZE 0x100
54
+#define APU_IRQ 153
55
56
#define IPI_ADDR 0xFF300000
57
#define IPI_IRQ 64
58
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_efuse(XlnxZynqMPState *s, qemu_irq *gic)
59
sysbus_connect_irq(sbd, 0, gic[EFUSE_IRQ]);
23
}
60
}
24
61
25
+/*
62
+static void xlnx_zynqmp_create_apu_ctrl(XlnxZynqMPState *s, qemu_irq *gic)
26
+ * Return the offset of a "full" NEON Dreg.
27
+ */
28
+static long neon_full_reg_offset(unsigned reg)
29
+{
63
+{
30
+ return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
64
+ SysBusDevice *sbd;
65
+ int i;
66
+
67
+ object_initialize_child(OBJECT(s), "apu-ctrl", &s->apu_ctrl,
68
+ TYPE_XLNX_ZYNQMP_APU_CTRL);
69
+ sbd = SYS_BUS_DEVICE(&s->apu_ctrl);
70
+
71
+ for (i = 0; i < XLNX_ZYNQMP_NUM_APU_CPUS; i++) {
72
+ g_autofree gchar *name = g_strdup_printf("cpu%d", i);
73
+
74
+ object_property_set_link(OBJECT(&s->apu_ctrl), name,
75
+ OBJECT(&s->apu_cpu[i]), &error_abort);
76
+ }
77
+
78
+ sysbus_realize(sbd, &error_fatal);
79
+ sysbus_mmio_map(sbd, 0, APU_ADDR);
80
+ sysbus_connect_irq(sbd, 0, gic[APU_IRQ]);
31
+}
81
+}
32
+
82
+
33
static inline long vfp_reg_offset(bool dp, unsigned reg)
83
static void xlnx_zynqmp_create_crf(XlnxZynqMPState *s, qemu_irq *gic)
34
{
84
{
35
if (dp) {
85
SysBusDevice *sbd;
36
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
86
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_unimp_mmio(XlnxZynqMPState *s)
37
index XXXXXXX..XXXXXXX 100644
87
hwaddr base;
38
--- a/target/arm/translate-neon.c.inc
88
hwaddr size;
39
+++ b/target/arm/translate-neon.c.inc
89
} unimp_areas[ARRAY_SIZE(s->mr_unimp)] = {
40
@@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size)
90
- { .name = "apu", APU_ADDR, APU_SIZE },
41
ofs ^= 8 - element_size;
91
{ .name = "serdes", SERDES_ADDR, SERDES_SIZE },
42
}
92
};
43
#endif
93
unsigned int nr;
44
- return neon_reg_offset(reg, 0) + ofs;
94
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
45
+ return neon_full_reg_offset(reg) + ofs;
95
46
}
96
xlnx_zynqmp_create_bbram(s, gic_spi);
47
97
xlnx_zynqmp_create_efuse(s, gic_spi);
48
static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
98
+ xlnx_zynqmp_create_apu_ctrl(s, gic_spi);
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
99
xlnx_zynqmp_create_crf(s, gic_spi);
50
* We cannot write 16 bytes at once because the
100
xlnx_zynqmp_create_unimp_mmio(s);
51
* destination is unaligned.
52
*/
53
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
54
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
55
8, 8, tmp);
56
- tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
57
- neon_reg_offset(vd, 0), 8, 8);
58
+ tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1),
59
+ neon_full_reg_offset(vd), 8, 8);
60
} else {
61
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
62
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
63
vec_size, vec_size, tmp);
64
}
65
tcg_gen_addi_i32(addr, addr, 1 << size);
66
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
67
static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
68
{
69
int vec_size = a->q ? 16 : 8;
70
- int rd_ofs = neon_reg_offset(a->vd, 0);
71
- int rn_ofs = neon_reg_offset(a->vn, 0);
72
- int rm_ofs = neon_reg_offset(a->vm, 0);
73
+ int rd_ofs = neon_full_reg_offset(a->vd);
74
+ int rn_ofs = neon_full_reg_offset(a->vn);
75
+ int rm_ofs = neon_full_reg_offset(a->vm);
76
77
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
78
return false;
79
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
80
{
81
/* Handle a 2-reg-shift insn which can be vectorized. */
82
int vec_size = a->q ? 16 : 8;
83
- int rd_ofs = neon_reg_offset(a->vd, 0);
84
- int rm_ofs = neon_reg_offset(a->vm, 0);
85
+ int rd_ofs = neon_full_reg_offset(a->vd);
86
+ int rm_ofs = neon_full_reg_offset(a->vm);
87
88
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
89
return false;
90
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
91
{
92
/* FP operations in 2-reg-and-shift group */
93
int vec_size = a->q ? 16 : 8;
94
- int rd_ofs = neon_reg_offset(a->vd, 0);
95
- int rm_ofs = neon_reg_offset(a->vm, 0);
96
+ int rd_ofs = neon_full_reg_offset(a->vd);
97
+ int rm_ofs = neon_full_reg_offset(a->vm);
98
TCGv_ptr fpst;
99
100
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
101
@@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
102
return true;
103
}
104
105
- reg_ofs = neon_reg_offset(a->vd, 0);
106
+ reg_ofs = neon_full_reg_offset(a->vd);
107
vec_size = a->q ? 16 : 8;
108
imm = asimd_imm_const(a->imm, a->cmode, a->op);
109
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
111
return true;
112
}
113
114
- tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
115
- neon_reg_offset(a->vn, 0),
116
- neon_reg_offset(a->vm, 0),
117
+ tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd),
118
+ neon_full_reg_offset(a->vn),
119
+ neon_full_reg_offset(a->vm),
120
16, 16, 0, fn_gvec);
121
return true;
122
}
123
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
124
{
125
/* Two registers and a scalar, using gvec */
126
int vec_size = a->q ? 16 : 8;
127
- int rd_ofs = neon_reg_offset(a->vd, 0);
128
- int rn_ofs = neon_reg_offset(a->vn, 0);
129
+ int rd_ofs = neon_full_reg_offset(a->vd);
130
+ int rn_ofs = neon_full_reg_offset(a->vn);
131
int rm_ofs;
132
int idx;
133
TCGv_ptr fpstatus;
134
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
135
/* a->vm is M:Vm, which encodes both register and index */
136
idx = extract32(a->vm, a->size + 2, 2);
137
a->vm = extract32(a->vm, 0, a->size + 2);
138
- rm_ofs = neon_reg_offset(a->vm, 0);
139
+ rm_ofs = neon_full_reg_offset(a->vm);
140
141
fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD);
142
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus,
143
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
144
return true;
145
}
146
147
- tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
148
+ tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd),
149
neon_element_offset(a->vm, a->index, a->size),
150
a->q ? 16 : 8, a->q ? 16 : 8);
151
return true;
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
153
static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn)
154
{
155
int vec_size = a->q ? 16 : 8;
156
- int rd_ofs = neon_reg_offset(a->vd, 0);
157
- int rm_ofs = neon_reg_offset(a->vm, 0);
158
+ int rd_ofs = neon_full_reg_offset(a->vd);
159
+ int rm_ofs = neon_full_reg_offset(a->vm);
160
161
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
162
return false;
163
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
164
index XXXXXXX..XXXXXXX 100644
165
--- a/target/arm/translate-vfp.c.inc
166
+++ b/target/arm/translate-vfp.c.inc
167
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
168
}
169
170
tmp = load_reg(s, a->rt);
171
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0),
172
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn),
173
vec_size, vec_size, tmp);
174
tcg_temp_free_i32(tmp);
175
101
176
--
102
--
177
2.20.1
103
2.25.1
178
104
179
105
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

This will shortly have users outside of translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c          | 20 ++++++++++++++++++++
 target/arm/translate-neon.c.inc | 19 -------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
     return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
 }
 
+/*
+ * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static long neon_element_offset(int reg, int element, MemOp size)
+{
+    int element_size = 1 << size;
+    int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+    /*
+     * Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     */
+    if (element_size < 8) {
+        ofs ^= 8 - element_size;
+    }
+#endif
+    return neon_full_reg_offset(reg) + ofs;
+}
+
 static inline long vfp_reg_offset(bool dp, unsigned reg)
 {
     if (dp) {
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x)
 #include "decode-neon-ls.c.inc"
 #include "decode-neon-shared.c.inc"
 
-/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
- * where 0 is the least significant end of the register.
- */
-static inline long
-neon_element_offset(int reg, int element, MemOp size)
-{
-    int element_size = 1 << size;
-    int ofs = element * element_size;
-#ifdef HOST_WORDS_BIGENDIAN
-    /* Calculate the offset assuming fully little-endian,
-     * then XOR to account for the order of the 8-byte units.
-     */
-    if (element_size < 8) {
-        ofs ^= 8 - element_size;
-    }
-#endif
-    return neon_full_reg_offset(reg) + ofs;
-}
-
 static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
--
2.20.1
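
For anyone puzzling over the big-endian XOR in neon_element_offset(), here is a
small standalone sketch (not part of the patch) that mirrors the arithmetic: it
prints the byte offset of each 16-bit element relative to the start of the
vector register for a little-endian and a big-endian host layout. The MemOp
constants and the helper name are simplified stand-ins for the QEMU ones.

    #include <stdio.h>

    /* Simplified stand-in for QEMU's MemOp size codes. */
    enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3 };

    /*
     * Offset of element 'element' of size 2**size bytes, relative to the
     * start of the vector register: compute the little-endian offset,
     * then XOR within each 8-byte unit when the host stores the 64-bit
     * halves in big-endian order.
     */
    static long element_offset(int element, int size, int host_bigendian)
    {
        int element_size = 1 << size;
        int ofs = element * element_size;

        if (host_bigendian && element_size < 8) {
            ofs ^= 8 - element_size;
        }
        return ofs;
    }

    int main(void)
    {
        for (int e = 0; e < 4; e++) {
            printf("16-bit element %d: LE offset %ld, BE offset %ld\n",
                   e, element_offset(e, MO_16, 0), element_offset(e, MO_16, 1));
        }
        return 0;
    }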
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

This seems a bit more readable than using offsetof CPU_DoubleU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size)
     return neon_full_reg_offset(reg) + ofs;
 }
 
-static inline long vfp_reg_offset(bool dp, unsigned reg)
+/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */
+static long vfp_reg_offset(bool dp, unsigned reg)
 {
     if (dp) {
-        return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
+        return neon_element_offset(reg, 0, MO_64);
     } else {
-        long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]);
-        if (reg & 1) {
-            ofs += offsetof(CPU_DoubleU, l.upper);
-        } else {
-            ofs += offsetof(CPU_DoubleU, l.lower);
-        }
-        return ofs;
+        return neon_element_offset(reg >> 1, reg & 1, MO_32);
     }
 }
 
--
2.20.1
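
As a quick worked example of the equivalence (not part of the patch): S5 is the
top 32-bit word of D2, so the old code resolved to the offset of zregs[1].d[0]
plus offsetof(CPU_DoubleU, l.upper), while the new neon_element_offset(2, 1,
MO_32) resolves to the same zregs[1].d[0] plus 4 on a little-endian host and
plus 0 on a big-endian one, which is exactly where l.upper lives in either
case.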
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
We can then use this to improve VMOV (scalar to gp) and
4
VMOV (gp to scalar) so that we simply perform the memory
5
operation that we wanted, rather than inserting or
6
extracting from a 32-bit quantity.
7
8
These were the last uses of neon_load/store_reg, so remove them.
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
target/arm/translate.c | 50 +++++++++++++-----------
16
target/arm/translate-vfp.c.inc | 71 +++++-----------------------------
17
2 files changed, 37 insertions(+), 84 deletions(-)
18
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
24
* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
25
* where 0 is the least significant end of the register.
26
*/
27
-static long neon_element_offset(int reg, int element, MemOp size)
28
+static long neon_element_offset(int reg, int element, MemOp memop)
29
{
30
- int element_size = 1 << size;
31
+ int element_size = 1 << (memop & MO_SIZE);
32
int ofs = element * element_size;
33
#ifdef HOST_WORDS_BIGENDIAN
34
/*
35
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
36
}
37
}
38
39
-static TCGv_i32 neon_load_reg(int reg, int pass)
40
-{
41
- TCGv_i32 tmp = tcg_temp_new_i32();
42
- tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
43
- return tmp;
44
-}
45
-
46
-static void neon_store_reg(int reg, int pass, TCGv_i32 var)
47
-{
48
- tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
49
- tcg_temp_free_i32(var);
50
-}
51
-
52
static inline void neon_load_reg64(TCGv_i64 var, int reg)
53
{
54
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
55
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
56
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
57
}
58
59
-static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
60
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
61
{
62
- long off = neon_element_offset(reg, ele, size);
63
+ long off = neon_element_offset(reg, ele, memop);
64
65
- switch (size) {
66
- case MO_32:
67
+ switch (memop) {
68
+ case MO_SB:
69
+ tcg_gen_ld8s_i32(dest, cpu_env, off);
70
+ break;
71
+ case MO_UB:
72
+ tcg_gen_ld8u_i32(dest, cpu_env, off);
73
+ break;
74
+ case MO_SW:
75
+ tcg_gen_ld16s_i32(dest, cpu_env, off);
76
+ break;
77
+ case MO_UW:
78
+ tcg_gen_ld16u_i32(dest, cpu_env, off);
79
+ break;
80
+ case MO_UL:
81
+ case MO_SL:
82
tcg_gen_ld_i32(dest, cpu_env, off);
83
break;
84
default:
85
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
86
}
87
}
88
89
-static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
90
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
91
{
92
- long off = neon_element_offset(reg, ele, size);
93
+ long off = neon_element_offset(reg, ele, memop);
94
95
- switch (size) {
96
+ switch (memop) {
97
+ case MO_8:
98
+ tcg_gen_st8_i32(src, cpu_env, off);
99
+ break;
100
+ case MO_16:
101
+ tcg_gen_st16_i32(src, cpu_env, off);
102
+ break;
103
case MO_32:
104
tcg_gen_st_i32(src, cpu_env, off);
105
break;
106
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-vfp.c.inc
109
+++ b/target/arm/translate-vfp.c.inc
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
111
{
112
/* VMOV scalar to general purpose register */
113
TCGv_i32 tmp;
114
- int pass;
115
- uint32_t offset;
116
117
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
118
- if (a->size == 2
119
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
120
+ if (a->size == MO_32
121
? !dc_isar_feature(aa32_fpsp_v2, s)
122
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
123
return false;
124
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
125
return false;
126
}
127
128
- offset = a->index << a->size;
129
- pass = extract32(offset, 2, 1);
130
- offset = extract32(offset, 0, 2) * 8;
131
-
132
if (!vfp_access_check(s)) {
133
return true;
134
}
135
136
- tmp = neon_load_reg(a->vn, pass);
137
- switch (a->size) {
138
- case 0:
139
- if (offset) {
140
- tcg_gen_shri_i32(tmp, tmp, offset);
141
- }
142
- if (a->u) {
143
- gen_uxtb(tmp);
144
- } else {
145
- gen_sxtb(tmp);
146
- }
147
- break;
148
- case 1:
149
- if (a->u) {
150
- if (offset) {
151
- tcg_gen_shri_i32(tmp, tmp, 16);
152
- } else {
153
- gen_uxth(tmp);
154
- }
155
- } else {
156
- if (offset) {
157
- tcg_gen_sari_i32(tmp, tmp, 16);
158
- } else {
159
- gen_sxth(tmp);
160
- }
161
- }
162
- break;
163
- case 2:
164
- break;
165
- }
166
+ tmp = tcg_temp_new_i32();
167
+ read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
168
store_reg(s, a->rt, tmp);
169
170
return true;
171
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
172
static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
173
{
174
/* VMOV general purpose register to scalar */
175
- TCGv_i32 tmp, tmp2;
176
- int pass;
177
- uint32_t offset;
178
+ TCGv_i32 tmp;
179
180
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
181
- if (a->size == 2
182
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
183
+ if (a->size == MO_32
184
? !dc_isar_feature(aa32_fpsp_v2, s)
185
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
186
return false;
187
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
188
return false;
189
}
190
191
- offset = a->index << a->size;
192
- pass = extract32(offset, 2, 1);
193
- offset = extract32(offset, 0, 2) * 8;
194
-
195
if (!vfp_access_check(s)) {
196
return true;
197
}
198
199
tmp = load_reg(s, a->rt);
200
- switch (a->size) {
201
- case 0:
202
- tmp2 = neon_load_reg(a->vn, pass);
203
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8);
204
- tcg_temp_free_i32(tmp2);
205
- break;
206
- case 1:
207
- tmp2 = neon_load_reg(a->vn, pass);
208
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16);
209
- tcg_temp_free_i32(tmp2);
210
- break;
211
- case 2:
212
- break;
213
- }
214
- neon_store_reg(a->vn, pass, tmp);
215
+ write_neon_element32(tmp, a->vn, a->index, a->size);
216
+ tcg_temp_free_i32(tmp);
217
218
return true;
219
}
220
--
221
2.20.1
222
223
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Andrew Deason <adeason@sinenomine.net>
2
2
3
The only uses of this function are for loading VFP
3
On older Solaris releases (before Solaris 11), we didn't get a
4
double-precision values, and nothing to do with NEON.
4
prototype for madvise, and so util/osdep.c provides its own prototype.
5
Some time between the public Solaris 11.4 release and Solaris 11.4.42
6
CBE, we started getting an madvise prototype that looks like this:
5
7
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
extern int madvise(void *, size_t, int);
7
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
9
10
which conflicts with the prototype in util/osdeps.c. Instead of always
11
declaring this prototype, check if we're missing the madvise()
12
prototype, and only declare it ourselves if the prototype is missing.
13
Move the prototype to include/qemu/osdep.h, the normal place to handle
14
platform-specific header quirks.
15
16
The 'missing_madvise_proto' meson check contains an obviously wrong
17
prototype for madvise. So if that code compiles and links, we must be
18
missing the actual prototype for madvise.
19
20
Signed-off-by: Andrew Deason <adeason@sinenomine.net>
21
Message-id: 20220316035227.3702-2-adeason@sinenomine.net
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
24
---
11
target/arm/translate.c | 8 ++--
25
meson.build | 23 +++++++++++++++++++++--
12
target/arm/translate-vfp.c.inc | 84 +++++++++++++++++-----------------
26
include/qemu/osdep.h | 8 ++++++++
13
2 files changed, 46 insertions(+), 46 deletions(-)
27
util/osdep.c | 3 ---
28
3 files changed, 29 insertions(+), 5 deletions(-)
14
29
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
30
diff --git a/meson.build b/meson.build
16
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
32
--- a/meson.build
18
+++ b/target/arm/translate.c
33
+++ b/meson.build
19
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
34
@@ -XXX,XX +XXX,XX @@ config_host_data.set('CONFIG_FDATASYNC', cc.links(gnu_source_prefix + '''
20
}
35
#error Not supported
21
}
36
#endif
22
37
}'''))
23
-static inline void neon_load_reg64(TCGv_i64 var, int reg)
38
-config_host_data.set('CONFIG_MADVISE', cc.links(gnu_source_prefix + '''
24
+static inline void vfp_load_reg64(TCGv_i64 var, int reg)
39
+
25
{
40
+has_madvise = cc.links(gnu_source_prefix + '''
26
- tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
41
#include <sys/types.h>
27
+ tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(true, reg));
42
#include <sys/mman.h>
28
}
43
#include <stddef.h>
29
44
- int main(void) { return madvise(NULL, 0, MADV_DONTNEED); }'''))
30
-static inline void neon_store_reg64(TCGv_i64 var, int reg)
45
+ int main(void) { return madvise(NULL, 0, MADV_DONTNEED); }''')
31
+static inline void vfp_store_reg64(TCGv_i64 var, int reg)
46
+missing_madvise_proto = false
32
{
47
+if has_madvise
33
- tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
48
+ # Some platforms (illumos and Solaris before Solaris 11) provide madvise()
34
+ tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(true, reg));
49
+ # but forget to prototype it. In this case, has_madvise will be true (the
35
}
50
+ # test program links despite a compile warning). To detect the
36
51
+ # missing-prototype case, we try again with a definitely-bogus prototype.
37
static inline void vfp_load_reg32(TCGv_i32 var, int reg)
52
+ # This will only compile if the system headers don't provide the prototype;
38
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
53
+ # otherwise the conflicting prototypes will cause a compiler error.
54
+ missing_madvise_proto = cc.links(gnu_source_prefix + '''
55
+ #include <sys/types.h>
56
+ #include <sys/mman.h>
57
+ #include <stddef.h>
58
+ extern int madvise(int);
59
+ int main(void) { return madvise(0); }''')
60
+endif
61
+config_host_data.set('CONFIG_MADVISE', has_madvise)
62
+config_host_data.set('HAVE_MADVISE_WITHOUT_PROTOTYPE', missing_madvise_proto)
63
+
64
config_host_data.set('CONFIG_MEMFD', cc.links(gnu_source_prefix + '''
65
#include <sys/mman.h>
66
int main(void) { return memfd_create("foo", MFD_ALLOW_SEALING); }'''))
67
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
39
index XXXXXXX..XXXXXXX 100644
68
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-vfp.c.inc
69
--- a/include/qemu/osdep.h
41
+++ b/target/arm/translate-vfp.c.inc
70
+++ b/include/qemu/osdep.h
42
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
71
@@ -XXX,XX +XXX,XX @@ void qemu_anon_ram_free(void *ptr, size_t size);
43
tcg_gen_ext_i32_i64(nf, cpu_NF);
72
#define SIGIO SIGPOLL
44
tcg_gen_ext_i32_i64(vf, cpu_VF);
73
#endif
45
74
46
- neon_load_reg64(frn, rn);
75
+#ifdef HAVE_MADVISE_WITHOUT_PROTOTYPE
47
- neon_load_reg64(frm, rm);
76
+/*
48
+ vfp_load_reg64(frn, rn);
77
+ * See MySQL bug #7156 (http://bugs.mysql.com/bug.php?id=7156) for discussion
49
+ vfp_load_reg64(frm, rm);
78
+ * about Solaris missing the madvise() prototype.
50
switch (a->cc) {
79
+ */
51
case 0: /* eq: Z */
80
+extern int madvise(char *, size_t, int);
52
tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero,
81
+#endif
53
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
82
+
54
tcg_temp_free_i64(tmp);
83
#if defined(CONFIG_LINUX)
55
break;
84
#ifndef BUS_MCEERR_AR
56
}
85
#define BUS_MCEERR_AR 4
57
- neon_store_reg64(dest, rd);
86
diff --git a/util/osdep.c b/util/osdep.c
58
+ vfp_store_reg64(dest, rd);
87
index XXXXXXX..XXXXXXX 100644
59
tcg_temp_free_i64(frn);
88
--- a/util/osdep.c
60
tcg_temp_free_i64(frm);
89
+++ b/util/osdep.c
61
tcg_temp_free_i64(dest);
90
@@ -XXX,XX +XXX,XX @@
62
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
91
63
TCGv_i64 tcg_res;
92
#ifdef CONFIG_SOLARIS
64
tcg_op = tcg_temp_new_i64();
93
#include <sys/statvfs.h>
65
tcg_res = tcg_temp_new_i64();
94
-/* See MySQL bug #7156 (http://bugs.mysql.com/bug.php?id=7156) for
66
- neon_load_reg64(tcg_op, rm);
95
- discussion about Solaris header problems */
67
+ vfp_load_reg64(tcg_op, rm);
96
-extern int madvise(char *, size_t, int);
68
gen_helper_rintd(tcg_res, tcg_op, fpst);
97
#endif
69
- neon_store_reg64(tcg_res, rd);
98
70
+ vfp_store_reg64(tcg_res, rd);
99
#include "qemu-common.h"
71
tcg_temp_free_i64(tcg_op);
72
tcg_temp_free_i64(tcg_res);
73
} else {
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
tcg_double = tcg_temp_new_i64();
76
tcg_res = tcg_temp_new_i64();
77
tcg_tmp = tcg_temp_new_i32();
78
- neon_load_reg64(tcg_double, rm);
79
+ vfp_load_reg64(tcg_double, rm);
80
if (is_signed) {
81
gen_helper_vfp_tosld(tcg_res, tcg_double, tcg_shift, fpst);
82
} else {
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
84
tmp = tcg_temp_new_i64();
85
if (a->l) {
86
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
87
- neon_store_reg64(tmp, a->vd);
88
+ vfp_store_reg64(tmp, a->vd);
89
} else {
90
- neon_load_reg64(tmp, a->vd);
91
+ vfp_load_reg64(tmp, a->vd);
92
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
93
}
94
tcg_temp_free_i64(tmp);
95
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
96
if (a->l) {
97
/* load */
98
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
99
- neon_store_reg64(tmp, a->vd + i);
100
+ vfp_store_reg64(tmp, a->vd + i);
101
} else {
102
/* store */
103
- neon_load_reg64(tmp, a->vd + i);
104
+ vfp_load_reg64(tmp, a->vd + i);
105
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
106
}
107
tcg_gen_addi_i32(addr, addr, offset);
108
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
109
fd = tcg_temp_new_i64();
110
fpst = fpstatus_ptr(FPST_FPCR);
111
112
- neon_load_reg64(f0, vn);
113
- neon_load_reg64(f1, vm);
114
+ vfp_load_reg64(f0, vn);
115
+ vfp_load_reg64(f1, vm);
116
117
for (;;) {
118
if (reads_vd) {
119
- neon_load_reg64(fd, vd);
120
+ vfp_load_reg64(fd, vd);
121
}
122
fn(fd, f0, f1, fpst);
123
- neon_store_reg64(fd, vd);
124
+ vfp_store_reg64(fd, vd);
125
126
if (veclen == 0) {
127
break;
128
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
129
veclen--;
130
vd = vfp_advance_dreg(vd, delta_d);
131
vn = vfp_advance_dreg(vn, delta_d);
132
- neon_load_reg64(f0, vn);
133
+ vfp_load_reg64(f0, vn);
134
if (delta_m) {
135
vm = vfp_advance_dreg(vm, delta_m);
136
- neon_load_reg64(f1, vm);
137
+ vfp_load_reg64(f1, vm);
138
}
139
}
140
141
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
142
f0 = tcg_temp_new_i64();
143
fd = tcg_temp_new_i64();
144
145
- neon_load_reg64(f0, vm);
146
+ vfp_load_reg64(f0, vm);
147
148
for (;;) {
149
fn(fd, f0);
150
- neon_store_reg64(fd, vd);
151
+ vfp_store_reg64(fd, vd);
152
153
if (veclen == 0) {
154
break;
155
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
156
/* single source one-many */
157
while (veclen--) {
158
vd = vfp_advance_dreg(vd, delta_d);
159
- neon_store_reg64(fd, vd);
160
+ vfp_store_reg64(fd, vd);
161
}
162
break;
163
}
164
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
165
veclen--;
166
vd = vfp_advance_dreg(vd, delta_d);
167
vd = vfp_advance_dreg(vm, delta_m);
168
- neon_load_reg64(f0, vm);
169
+ vfp_load_reg64(f0, vm);
170
}
171
172
tcg_temp_free_i64(f0);
173
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
174
vm = tcg_temp_new_i64();
175
vd = tcg_temp_new_i64();
176
177
- neon_load_reg64(vn, a->vn);
178
- neon_load_reg64(vm, a->vm);
179
+ vfp_load_reg64(vn, a->vn);
180
+ vfp_load_reg64(vm, a->vm);
181
if (neg_n) {
182
/* VFNMS, VFMS */
183
gen_helper_vfp_negd(vn, vn);
184
}
185
- neon_load_reg64(vd, a->vd);
186
+ vfp_load_reg64(vd, a->vd);
187
if (neg_d) {
188
/* VFNMA, VFNMS */
189
gen_helper_vfp_negd(vd, vd);
190
}
191
fpst = fpstatus_ptr(FPST_FPCR);
192
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
193
- neon_store_reg64(vd, a->vd);
194
+ vfp_store_reg64(vd, a->vd);
195
196
tcg_temp_free_ptr(fpst);
197
tcg_temp_free_i64(vn);
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
199
fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
200
201
for (;;) {
202
- neon_store_reg64(fd, vd);
203
+ vfp_store_reg64(fd, vd);
204
205
if (veclen == 0) {
206
break;
207
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
208
vd = tcg_temp_new_i64();
209
vm = tcg_temp_new_i64();
210
211
- neon_load_reg64(vd, a->vd);
212
+ vfp_load_reg64(vd, a->vd);
213
if (a->z) {
214
tcg_gen_movi_i64(vm, 0);
215
} else {
216
- neon_load_reg64(vm, a->vm);
217
+ vfp_load_reg64(vm, a->vm);
218
}
219
220
if (a->e) {
221
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
222
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
223
vd = tcg_temp_new_i64();
224
gen_helper_vfp_fcvt_f16_to_f64(vd, tmp, fpst, ahp_mode);
225
- neon_store_reg64(vd, a->vd);
226
+ vfp_store_reg64(vd, a->vd);
227
tcg_temp_free_i32(ahp_mode);
228
tcg_temp_free_ptr(fpst);
229
tcg_temp_free_i32(tmp);
230
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
231
tmp = tcg_temp_new_i32();
232
vm = tcg_temp_new_i64();
233
234
- neon_load_reg64(vm, a->vm);
235
+ vfp_load_reg64(vm, a->vm);
236
gen_helper_vfp_fcvt_f64_to_f16(tmp, vm, fpst, ahp_mode);
237
tcg_temp_free_i64(vm);
238
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
239
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
240
}
241
242
tmp = tcg_temp_new_i64();
243
- neon_load_reg64(tmp, a->vm);
244
+ vfp_load_reg64(tmp, a->vm);
245
fpst = fpstatus_ptr(FPST_FPCR);
246
gen_helper_rintd(tmp, tmp, fpst);
247
- neon_store_reg64(tmp, a->vd);
248
+ vfp_store_reg64(tmp, a->vd);
249
tcg_temp_free_ptr(fpst);
250
tcg_temp_free_i64(tmp);
251
return true;
252
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
253
}
254
255
tmp = tcg_temp_new_i64();
256
- neon_load_reg64(tmp, a->vm);
257
+ vfp_load_reg64(tmp, a->vm);
258
fpst = fpstatus_ptr(FPST_FPCR);
259
tcg_rmode = tcg_const_i32(float_round_to_zero);
260
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
261
gen_helper_rintd(tmp, tmp, fpst);
262
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
263
- neon_store_reg64(tmp, a->vd);
264
+ vfp_store_reg64(tmp, a->vd);
265
tcg_temp_free_ptr(fpst);
266
tcg_temp_free_i64(tmp);
267
tcg_temp_free_i32(tcg_rmode);
268
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
269
}
270
271
tmp = tcg_temp_new_i64();
272
- neon_load_reg64(tmp, a->vm);
273
+ vfp_load_reg64(tmp, a->vm);
274
fpst = fpstatus_ptr(FPST_FPCR);
275
gen_helper_rintd_exact(tmp, tmp, fpst);
276
- neon_store_reg64(tmp, a->vd);
277
+ vfp_store_reg64(tmp, a->vd);
278
tcg_temp_free_ptr(fpst);
279
tcg_temp_free_i64(tmp);
280
return true;
281
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
282
vd = tcg_temp_new_i64();
283
vfp_load_reg32(vm, a->vm);
284
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
285
- neon_store_reg64(vd, a->vd);
286
+ vfp_store_reg64(vd, a->vd);
287
tcg_temp_free_i32(vm);
288
tcg_temp_free_i64(vd);
289
return true;
290
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
291
292
vd = tcg_temp_new_i32();
293
vm = tcg_temp_new_i64();
294
- neon_load_reg64(vm, a->vm);
295
+ vfp_load_reg64(vm, a->vm);
296
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
297
vfp_store_reg32(vd, a->vd);
298
tcg_temp_free_i32(vd);
299
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
300
/* u32 -> f64 */
301
gen_helper_vfp_uitod(vd, vm, fpst);
302
}
303
- neon_store_reg64(vd, a->vd);
304
+ vfp_store_reg64(vd, a->vd);
305
tcg_temp_free_i32(vm);
306
tcg_temp_free_i64(vd);
307
tcg_temp_free_ptr(fpst);
308
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
309
310
vm = tcg_temp_new_i64();
311
vd = tcg_temp_new_i32();
312
- neon_load_reg64(vm, a->vm);
313
+ vfp_load_reg64(vm, a->vm);
314
gen_helper_vjcvt(vd, vm, cpu_env);
315
vfp_store_reg32(vd, a->vd);
316
tcg_temp_free_i64(vm);
317
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
318
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
319
320
vd = tcg_temp_new_i64();
321
- neon_load_reg64(vd, a->vd);
322
+ vfp_load_reg64(vd, a->vd);
323
324
fpst = fpstatus_ptr(FPST_FPCR);
325
shift = tcg_const_i32(frac_bits);
326
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
327
g_assert_not_reached();
328
}
329
330
- neon_store_reg64(vd, a->vd);
331
+ vfp_store_reg64(vd, a->vd);
332
tcg_temp_free_i64(vd);
333
tcg_temp_free_i32(shift);
334
tcg_temp_free_ptr(fpst);
335
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
336
fpst = fpstatus_ptr(FPST_FPCR);
337
vm = tcg_temp_new_i64();
338
vd = tcg_temp_new_i32();
339
- neon_load_reg64(vm, a->vm);
340
+ vfp_load_reg64(vm, a->vm);
341
342
if (a->s) {
343
if (a->rz) {
344
--
100
--
345
2.20.1
101
2.25.1
346
347
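
To make the probe logic above easier to follow, this is roughly what the second
meson test boils down to as a standalone C file (illustration only; the real
check lives in meson.build):

    /*
     * If the system headers already declare madvise(), the deliberately
     * wrong prototype below conflicts with the real one and this file
     * fails to compile.  If it does compile and link, the headers must
     * be missing the prototype, so QEMU has to supply its own.
     */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <stddef.h>

    extern int madvise(int);   /* bogus on purpose */

    int main(void)
    {
        return madvise(0);
    }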
Deleted patch

In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
meant we were using the H4() address swizzler macro rather than the
H2() which is required for 2-byte data. This had no effect on
little-endian hosts but meant we put the result data into the
destination Dreg in the wrong order on big-endian hosts.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
---
 target/arm/vec_helper.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
         r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
         r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
                                                      \
-        d[H4(0)] = r0;                               \
-        d[H4(1)] = r1;                               \
-        d[H4(2)] = r2;                               \
-        d[H4(3)] = r3;                               \
+        d[H2(0)] = r0;                               \
+        d[H2(1)] = r1;                               \
+        d[H2(2)] = r2;                               \
+        d[H2(3)] = r3;                               \
 }
 
 DO_NEON_PAIRWISE(neon_padd, add)
--
2.20.1
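
If it helps to see why H4() scrambled the halfwords, here is a tiny standalone
demo (not QEMU code). The two macros below are written out the way the
big-endian variants in target/arm/vec_internal.h are defined, to the best of my
recollection; on a little-endian host both collapse to the identity, which is
why the bug went unnoticed there.

    #include <stdio.h>

    /*
     * Index swizzlers for a big-endian host.  HN() is meant for arrays of
     * N-byte elements; the bug was using H4() to index an array of 2-byte
     * float16 results.
     */
    #define H2(x)  ((x) ^ 3)   /* correct for uint16_t elements */
    #define H4(x)  ((x) ^ 1)   /* only correct for uint32_t elements */

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            printf("result r%d: H2 stores it at d[%d], H4 stored it at d[%d]\n",
                   i, H2(i), H4(i));
        }
        return 0;
    }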
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so
that SVE will not trap to EL3.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030151541.11976-1-remi@remlab.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             if (cpu_isar_feature(aa64_mte, cpu)) {
                 env->cp15.scr_el3 |= SCR_ATA;
             }
+            if (cpu_isar_feature(aa64_sve, cpu)) {
+                env->cp15.cptr_el[3] |= CPTR_EZ;
+            }
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
             /* This hook is only supported for AArch32 currently:
--
2.20.1

From: Andrew Deason <adeason@sinenomine.net>

On Solaris, 'sun' is #define'd to 1, which causes errors if a variable
is named 'sun'. Slightly change the name of the var for the Slot User
Number so we can build on Solaris.

Reviewed-by: Ani Sinha <ani@anisinha.ca>
Signed-off-by: Andrew Deason <adeason@sinenomine.net>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20220316035227.3702-3-adeason@sinenomine.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/i386/acpi-build.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -XXX,XX +XXX,XX @@ Aml *aml_pci_device_dsm(void)
     Aml *bnum = aml_arg(4);
     Aml *func = aml_arg(2);
     Aml *rev = aml_arg(1);
-    Aml *sun = aml_arg(5);
+    Aml *sunum = aml_arg(5);
 
     method = aml_method("PDSM", 6, AML_SERIALIZED);
 
@@ -XXX,XX +XXX,XX @@ Aml *aml_pci_device_dsm(void)
         UUID = aml_touuid("E5C937D0-3553-4D7A-9117-EA4D19C3434D");
         ifctx = aml_if(aml_equal(aml_arg(0), UUID));
         {
-            aml_append(ifctx, aml_store(aml_call2("AIDX", bnum, sun), acpi_index));
+            aml_append(ifctx, aml_store(aml_call2("AIDX", bnum, sunum), acpi_index));
             ifctx1 = aml_if(aml_equal(func, zero));
             {
                 uint8_t byte_list[1];
--
2.25.1
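
For reference, the failure mode being avoided is easy to reproduce outside
QEMU. In the sketch below the #define stands in for the macro that Solaris
toolchains predefine, and Aml/aml_arg are dummy stand-ins for the ACPI builder
types, not the real ones:

    /* Minimal illustration, not QEMU code. */
    #define sun 1   /* Solaris compilers effectively predefine this */

    typedef struct Aml Aml;                       /* stand-in for QEMU's Aml */
    static Aml *aml_arg(int n) { (void)n; return 0; }

    void example(void)
    {
        /* Aml *sun = aml_arg(5);   expands to "Aml *1 = ..." and breaks */
        Aml *sunum = aml_arg(5);    /* the renamed variable is unaffected */
        (void)sunum;
    }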
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):

  CID 1432363 (#1 of 1): Unintentional integer overflow:

  overflow_before_widen:
    Potentially overflowing expression 1 << scale with type int
    (32 bits, signed) is evaluated using 32-bit arithmetic, and
    then used in a context that expects an expression of type
    hwaddr (64 bits, unsigned).

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20201030144617.1535064-1-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/bitops.h"
 #include "hw/irq.h"
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
         scale = CMD_SCALE(cmd);
         num = CMD_NUM(cmd);
         ttl = CMD_TTL(cmd);
-        num_pages = (num + 1) * (1 << (scale));
+        num_pages = (num + 1) * BIT_ULL(scale);
     }
 
     if (type == SMMU_CMD_TLBI_NH_VA) {
--
2.20.1

From: Andrew Deason <adeason@sinenomine.net>

The include for statvfs.h has not been needed since all statvfs calls
were removed in commit 4a1418e07bdc ("Unbreak large mem support by
removing kqemu").

The comment mentioning CONFIG_BSD hasn't made sense since an include
for config-host.h was removed in commit aafd75841001 ("util: Clean up
includes").

Remove this cruft.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andrew Deason <adeason@sinenomine.net>
Message-id: 20220316035227.3702-4-adeason@sinenomine.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 util/osdep.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/util/osdep.c b/util/osdep.c
index XXXXXXX..XXXXXXX 100644
--- a/util/osdep.c
+++ b/util/osdep.c
@@ -XXX,XX +XXX,XX @@
  */
 #include "qemu/osdep.h"
 #include "qapi/error.h"
-
-/* Needed early for CONFIG_BSD etc. */
-
-#ifdef CONFIG_SOLARIS
-#include <sys/statvfs.h>
-#endif
-
 #include "qemu-common.h"
 #include "qemu/cutils.h"
 #include "qemu/sockets.h"
--
2.25.1
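
A standalone illustration of the overflow class Coverity is complaining about
(not QEMU code; BIT_ULL() is spelled out here the same way qemu/bitops.h
defines it, as far as I know, and the num/scale values are made up):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define BIT_ULL(nr) (1ULL << (nr))   /* as in include/qemu/bitops.h */

    typedef uint64_t hwaddr;

    int main(void)
    {
        int num = 1023, scale = 25;      /* values a guest could supply */
        hwaddr num_pages;

        /*
         * With the old "(num + 1) * (1 << scale)" both operands are int,
         * so 1024 * 2^25 = 2^35 would be computed in 32 bits and overflow
         * before being widened to hwaddr.  BIT_ULL() forces the whole
         * multiplication to be done in 64-bit arithmetic.
         */
        num_pages = (num + 1) * BIT_ULL(scale);
        printf("num_pages = 0x%" PRIx64 "\n", (uint64_t)num_pages);
        return 0;
    }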
Deleted patch

In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt
into the GICv3CPUState struct's maintenance_irq field. This will
only work if the board happens to have already wired up the CPU
maintenance IRQ before the GIC was realized. Unfortunately this is
not the case for the 'virt' board, and so the value that gets copied
is NULL (since a qemu_irq is really a pointer to an IRQState struct
under the hood). The effect is that the CPU interface code never
actually raises the maintenance interrupt line.

Instead, since the GICv3CPUState has a pointer to the CPUState, make
the dereference at the point where we want to raise the interrupt, to
avoid an implicit requirement on board code to wire things up in a
particular order.

Reported-by: Jose Martins <josemartins90@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201009153904.28529-1-peter.maydell@linaro.org
Reviewed-by: Luc Michel <luc@lmichel.fr>
---
 include/hw/intc/arm_gicv3_common.h | 1 -
 hw/intc/arm_gicv3_cpuif.c          | 5 ++---
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
     qemu_irq parent_fiq;
     qemu_irq parent_virq;
     qemu_irq parent_vfiq;
-    qemu_irq maintenance_irq;
 
     /* Redistributor */
     uint32_t level;          /* Current IRQ level */
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
     int irqlevel = 0;
     int fiqlevel = 0;
     int maintlevel = 0;
+    ARMCPU *cpu = ARM_CPU(cs->cpu);
 
     idx = hppvi_index(cs);
     trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx);
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
 
     qemu_set_irq(cs->parent_vfiq, fiqlevel);
     qemu_set_irq(cs->parent_virq, irqlevel);
-    qemu_set_irq(cs->maintenance_irq, maintlevel);
+    qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel);
 }
 
 static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
             && cpu->gic_num_lrs) {
             int j;
 
-            cs->maintenance_irq = cpu->gicv3_maintenance_interrupt;
-
             cs->num_list_regs = cpu->gic_num_lrs;
             cs->vpribits = cpu->gic_vpribits;
             cs->vprebits = cpu->gic_vprebits;
--
2.20.1
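
The wiring-order trap described above is just pointer copying; here is a
minimal sketch of it (the type and field names are simplified stand-ins, not
the real QEMU structures):

    #include <stdio.h>

    typedef struct IRQState { int n; } IRQState;
    typedef IRQState *qemu_irq;          /* a qemu_irq is a pointer */

    struct cpu  { qemu_irq maintenance_interrupt; };
    struct gcpu { struct cpu *cpu; qemu_irq maintenance_irq; };

    int main(void)
    {
        static IRQState wired_later;     /* stands in for the board's IRQ */
        struct cpu cpu = { 0 };
        struct gcpu cs = { .cpu = &cpu };

        /* GIC realize: snapshot the line before the board wires it up. */
        cs.maintenance_irq = cpu.maintenance_interrupt;   /* copies NULL */

        /* Board wiring happens afterwards. */
        cpu.maintenance_interrupt = &wired_later;

        printf("snapshot: %p, looked up at use time: %p\n",
               (void *)cs.maintenance_irq,
               (void *)cs.cpu->maintenance_interrupt);
        return 0;
    }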