Small pile of bug fixes for rc1. I've included my patches to get
our docs building with Sphinx 3, just for convenience...

-- PMM

The following changes since commit b149dea55cce97cb226683d06af61984a1c11e96:

  Merge remote-tracking branch 'remotes/cschoenebeck/tags/pull-9p-20201102' into staging (2020-11-02 10:57:48 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201102

for you to fetch changes up to ffb4fbf90a2f63c9cb33e4bb9f854c79bf04ca4a:

  tests/qtest/npcm7xx_rng-test: Disable randomness tests (2020-11-02 16:52:18 +0000)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Fix Neon emulation bugs on big-endian hosts
 * target/arm: fix handling of HCR.FB
 * target/arm: fix LORID_EL1 access check
 * disas/capstone: Fix monitor disassembly of >32 bytes
 * hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)
 * hw/arm/boot: fix SVE for EL3 direct kernel boot
 * hw/display/omap_lcdc: Fix potential NULL pointer dereference
 * hw/display/exynos4210_fimd: Fix potential NULL pointer dereference
 * target/arm: Get correct MMU index for other-security-state
 * configure: Test that gio libs from pkg-config work
 * hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
 * docs: Fix building with Sphinx 3
 * tests/qtest/npcm7xx_rng-test: Disable randomness tests

----------------------------------------------------------------
AlexChen (2):
      hw/display/omap_lcdc: Fix potential NULL pointer dereference
      hw/display/exynos4210_fimd: Fix potential NULL pointer dereference

Peter Maydell (9):
      target/arm: Fix float16 pairwise Neon ops on big-endian hosts
      target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts
      disas/capstone: Fix monitor disassembly of >32 bytes
      target/arm: Get correct MMU index for other-security-state
      configure: Test that gio libs from pkg-config work
      hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
      scripts/kerneldoc: For Sphinx 3 use c:macro for macros with arguments
      qemu-option-trace.rst.inc: Don't use option:: markup
      tests/qtest/npcm7xx_rng-test: Disable randomness tests

Philippe Mathieu-Daudé (1):
      hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)

Richard Henderson (11):
      target/arm: Introduce neon_full_reg_offset
      target/arm: Move neon_element_offset to translate.c
      target/arm: Use neon_element_offset in neon_load/store_reg
      target/arm: Use neon_element_offset in vfp_reg_offset
      target/arm: Add read/write_neon_element32
      target/arm: Expand read/write_neon_element32 to all MemOp
      target/arm: Rename neon_load_reg32 to vfp_load_reg32
      target/arm: Add read/write_neon_element64
      target/arm: Rename neon_load_reg64 to vfp_load_reg64
      target/arm: Simplify do_long_3d and do_2scalar_long
      target/arm: Improve do_prewiden_3d

Rémi Denis-Courmont (3):
      target/arm: fix handling of HCR.FB
      target/arm: fix LORID_EL1 access check
      hw/arm/boot: fix SVE for EL3 direct kernel boot

 docs/qemu-option-trace.rst.inc     |   6 +-
 configure                          |  10 +-
 include/hw/intc/arm_gicv3_common.h |   1 -
 disas/capstone.c                   |   2 +-
 hw/arm/boot.c                      |   3 +
 hw/arm/smmuv3.c                    |   3 +-
 hw/display/exynos4210_fimd.c       |   4 +-
 hw/display/omap_lcdc.c             |  10 +-
 hw/intc/arm_gicv3_cpuif.c          |   5 +-
 target/arm/helper.c                |  24 +-
 target/arm/m_helper.c              |   3 +-
 target/arm/translate.c             | 153 +++++++++---
 target/arm/vec_helper.c            |  12 +-
 tests/qtest/npcm7xx_rng-test.c     |  14 +-
 scripts/kernel-doc                 |  18 +-
 target/arm/translate-neon.c.inc    | 472 ++++++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc     | 341 +++++++++++----------------
 17 files changed, 588 insertions(+), 493 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

This function makes it clear that we're talking about the whole
register, and not the 32-bit piece at index 0. This fixes a bug
when running on a big-endian host.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 8 ++++++
 target/arm/translate-neon.c.inc | 44 ++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc | 2 +-
 3 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
 unallocated_encoding(s);
 }

+/*
+ * Return the offset of a "full" NEON Dreg.
+ */
+static long neon_full_reg_offset(unsigned reg)
+{
+ return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
+}
+
 static inline long vfp_reg_offset(bool dp, unsigned reg)
 {
 if (dp) {
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size)
 ofs ^= 8 - element_size;
 }
 #endif
- return neon_reg_offset(reg, 0) + ofs;
+ return neon_full_reg_offset(reg) + ofs;
 }

 static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
 * We cannot write 16 bytes at once because the
 * destination is unaligned.
 */
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
 8, 8, tmp);
- tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
- neon_reg_offset(vd, 0), 8, 8);
+ tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1),
+ neon_full_reg_offset(vd), 8, 8);
 } else {
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
 vec_size, vec_size, tmp);
 }
 tcg_gen_addi_i32(addr, addr, 1 << size);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
 static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
 {
 int vec_size = a->q ? 16 : 8;
- int rd_ofs = neon_reg_offset(a->vd, 0);
- int rn_ofs = neon_reg_offset(a->vn, 0);
- int rm_ofs = neon_reg_offset(a->vm, 0);
+ int rd_ofs = neon_full_reg_offset(a->vd);
+ int rn_ofs = neon_full_reg_offset(a->vn);
+ int rm_ofs = neon_full_reg_offset(a->vm);

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
 {
 /* Handle a 2-reg-shift insn which can be vectorized. */
 int vec_size = a->q ? 16 : 8;
- int rd_ofs = neon_reg_offset(a->vd, 0);
- int rm_ofs = neon_reg_offset(a->vm, 0);
+ int rd_ofs = neon_full_reg_offset(a->vd);
+ int rm_ofs = neon_full_reg_offset(a->vm);

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
 {
 /* FP operations in 2-reg-and-shift group */
 int vec_size = a->q ? 16 : 8;
- int rd_ofs = neon_reg_offset(a->vd, 0);
- int rm_ofs = neon_reg_offset(a->vm, 0);
+ int rd_ofs = neon_full_reg_offset(a->vd);
+ int rm_ofs = neon_full_reg_offset(a->vm);
 TCGv_ptr fpst;

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
 return true;
 }

- reg_ofs = neon_reg_offset(a->vd, 0);
+ reg_ofs = neon_full_reg_offset(a->vd);
 vec_size = a->q ? 16 : 8;
 imm = asimd_imm_const(a->imm, a->cmode, a->op);

@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
 return true;
 }

- tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
- neon_reg_offset(a->vn, 0),
- neon_reg_offset(a->vm, 0),
+ tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd),
+ neon_full_reg_offset(a->vn),
+ neon_full_reg_offset(a->vm),
 16, 16, 0, fn_gvec);
 return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
 {
 /* Two registers and a scalar, using gvec */
 int vec_size = a->q ? 16 : 8;
- int rd_ofs = neon_reg_offset(a->vd, 0);
- int rn_ofs = neon_reg_offset(a->vn, 0);
+ int rd_ofs = neon_full_reg_offset(a->vd);
+ int rn_ofs = neon_full_reg_offset(a->vn);
 int rm_ofs;
 int idx;
 TCGv_ptr fpstatus;
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
 /* a->vm is M:Vm, which encodes both register and index */
 idx = extract32(a->vm, a->size + 2, 2);
 a->vm = extract32(a->vm, 0, a->size + 2);
- rm_ofs = neon_reg_offset(a->vm, 0);
+ rm_ofs = neon_full_reg_offset(a->vm);

 fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD);
 tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus,
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
 return true;
 }

- tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
+ tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd),
 neon_element_offset(a->vm, a->index, a->size),
 a->q ? 16 : 8, a->q ? 16 : 8);
 return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
 static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn)
 {
 int vec_size = a->q ? 16 : 8;
- int rd_ofs = neon_reg_offset(a->vd, 0);
- int rm_ofs = neon_reg_offset(a->vm, 0);
+ int rd_ofs = neon_full_reg_offset(a->vd);
+ int rm_ofs = neon_full_reg_offset(a->vm);

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 }

 tmp = load_reg(s, a->rt);
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0),
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn),
 vec_size, vec_size, tmp);
 tcg_temp_free_i32(tmp);

--
2.20.1

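To illustrate the bug class this patch fixes, here is a minimal standalone C sketch (not QEMU code; element32_offset() is a hypothetical stand-in for neon_element_offset()). On a big-endian host the 32-bit element at index 0 sits 4 bytes into the 64-bit register, so an offset meaning "element 0" is not interchangeable with one meaning "the whole register", which is what the gvec expansions expect:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A Dreg modeled the way QEMU stores it: one 64-bit host word. */
    static uint64_t dreg;

    /* Byte offset of 32-bit element ELE within the 64-bit unit. */
    static size_t element32_offset(int ele)
    {
        size_t ofs = ele * 4;
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        ofs ^= 4;                  /* the XOR trick, for 4-byte elements */
    #endif
        return ofs;
    }

    int main(void)
    {
        uint32_t elt0;

        dreg = 0x8877665544332211ull;   /* element 0 is 0x44332211 */

        /* The full register always starts at byte offset 0 of 'dreg',
         * but element 0 starts at offset 0 (LE host) or 4 (BE host). */
        memcpy(&elt0, (char *)&dreg + element32_offset(0), sizeof(elt0));
        printf("element 0 = 0x%08x at offset %zu\n",
               (unsigned)elt0, element32_offset(0));
        return 0;
    }

Run on either endianness this prints 0x44332211; only the offset differs, which is exactly the distinction neon_full_reg_offset() makes explicit.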
From: Richard Henderson <richard.henderson@linaro.org>

This will shortly have users outside of translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 20 ++++++++++++++++++++
 target/arm/translate-neon.c.inc | 19 -------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
 return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
 }

+/*
+ * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static long neon_element_offset(int reg, int element, MemOp size)
+{
+ int element_size = 1 << size;
+ int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+ /*
+ * Calculate the offset assuming fully little-endian,
+ * then XOR to account for the order of the 8-byte units.
+ */
+ if (element_size < 8) {
+ ofs ^= 8 - element_size;
+ }
+#endif
+ return neon_full_reg_offset(reg) + ofs;
+}
+
 static inline long vfp_reg_offset(bool dp, unsigned reg)
 {
 if (dp) {
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x)
 #include "decode-neon-ls.c.inc"
 #include "decode-neon-shared.c.inc"

-/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
- * where 0 is the least significant end of the register.
- */
-static inline long
-neon_element_offset(int reg, int element, MemOp size)
-{
- int element_size = 1 << size;
- int ofs = element * element_size;
-#ifdef HOST_WORDS_BIGENDIAN
- /* Calculate the offset assuming fully little-endian,
- * then XOR to account for the order of the 8-byte units.
- */
- if (element_size < 8) {
- ofs ^= 8 - element_size;
- }
-#endif
- return neon_full_reg_offset(reg) + ofs;
-}
-
 static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
 long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
--
2.20.1

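The XOR trick in the moved comment is worth spelling out. A standalone sketch (plain C, not QEMU code; element_offset() here is a hypothetical model of the function above) that prints the little-endian and big-endian offset tables it produces for a 64-bit register:

    #include <stdio.h>

    static int element_offset(int element, int element_size, int big_endian)
    {
        int ofs = element * element_size;
        if (big_endian && element_size < 8) {
            ofs ^= 8 - element_size;    /* swizzle within the 8-byte unit */
        }
        return ofs;
    }

    int main(void)
    {
        for (int size = 1; size <= 4; size <<= 1) {
            printf("element size %d:\n", size);
            for (int ele = 0; ele < 8 / size; ele++) {
                printf("  element %d: LE offset %d, BE offset %d\n", ele,
                       element_offset(ele, size, 0),
                       element_offset(ele, size, 1));
            }
        }
        return 0;
    }

Because the XOR only touches the low three bits, elements in the second 8-byte unit of a Qreg keep their unit but get the same within-unit reversal, matching how the d[] array is laid out in host words.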
From: Richard Henderson <richard.henderson@linaro.org>

These are the only users of neon_reg_offset, so remove that.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_reg_offset(bool dp, unsigned reg)
 }
 }

-/* Return the offset of a 32-bit piece of a NEON register.
- zero is the least significant end of the register. */
-static inline long
-neon_reg_offset (int reg, int n)
-{
- int sreg;
- sreg = reg * 2 + n;
- return vfp_reg_offset(0, sreg);
-}
-
 static TCGv_i32 neon_load_reg(int reg, int pass)
 {
 TCGv_i32 tmp = tcg_temp_new_i32();
- tcg_gen_ld_i32(tmp, cpu_env, neon_reg_offset(reg, pass));
+ tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
 return tmp;
 }

 static void neon_store_reg(int reg, int pass, TCGv_i32 var)
 {
- tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
+ tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
 tcg_temp_free_i32(var);
 }

--
2.20.1

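As a sanity check on the equivalence, here is a standalone sketch (assumptions: a flattened register file and a simplified model of the CPU_DoubleU upper/lower split; none of these helpers are QEMU APIs) asserting that the removed sreg arithmetic and the new element indexing agree for every Dreg/pass pair on both host endiannesses:

    #include <assert.h>
    #include <stdio.h>

    #define BIG_ENDIAN_HOST 0   /* flip to 1 to model a big-endian host */

    static long full_reg_offset(unsigned reg)
    {
        return reg * 8;         /* zregs[reg >> 1].d[reg & 1], flattened */
    }

    static long element_offset(int reg, int element, int esize)
    {
        int ofs = element * esize;
        if (BIG_ENDIAN_HOST && esize < 8) {
            ofs ^= 8 - esize;
        }
        return full_reg_offset(reg) + ofs;
    }

    /* Old scheme: Sreg number, with the upper/lower word split. */
    static long old_reg_offset(int reg, int n)
    {
        int sreg = reg * 2 + n;
        long ofs = (sreg >> 1) * 8;          /* containing Dreg */
        int upper = sreg & 1;
        if (BIG_ENDIAN_HOST) {
            upper = !upper;                  /* CPU_DoubleU swaps halves */
        }
        return ofs + (upper ? 4 : 0);
    }

    int main(void)
    {
        for (int reg = 0; reg < 32; reg++) {
            for (int n = 0; n < 2; n++) {
                assert(old_reg_offset(reg, n) == element_offset(reg, n, 4));
            }
        }
        printf("old and new NEON element offsets agree\n");
        return 0;
    }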
From: Richard Henderson <richard.henderson@linaro.org>

This seems a bit more readable than using offsetof CPU_DoubleU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size)
 return neon_full_reg_offset(reg) + ofs;
 }

-static inline long vfp_reg_offset(bool dp, unsigned reg)
+/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */
+static long vfp_reg_offset(bool dp, unsigned reg)
 {
 if (dp) {
- return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
+ return neon_element_offset(reg, 0, MO_64);
 } else {
- long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]);
- if (reg & 1) {
- ofs += offsetof(CPU_DoubleU, l.upper);
- } else {
- ofs += offsetof(CPU_DoubleU, l.lower);
- }
- return ofs;
+ return neon_element_offset(reg >> 1, reg & 1, MO_32);
 }
 }

--
2.20.1

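The register-file mapping this relies on, sketched as a throwaway program (the flattened offsets are illustrative, relative to zregs[0], little-endian host assumed): AArch32 Dreg d is the low 64 bits of zregs[d >> 1] (its d[d & 1] word), and Sreg s is 32-bit element (s & 1) of Dreg (s >> 1).

    #include <stdio.h>

    int main(void)
    {
        for (int s = 0; s < 8; s++) {
            int dreg = s >> 1;   /* containing Dreg */
            int ele = s & 1;     /* 32-bit element within it */
            printf("s%-2d -> d%d element %d -> zregs[%d].d[%d] + %d (LE)\n",
                   s, dreg, ele, dreg >> 1, dreg & 1, ele * 4);
        }
        return 0;
    }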
From: Richard Henderson <richard.henderson@linaro.org>

Model these off the aa64 read/write_vec_element functions.
Use it within translate-neon.c.inc. The new functions do
not allocate or free temps, so this rearranges the calling
code a bit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 26 ++++
 target/arm/translate-neon.c.inc | 256 ++++++++++++++++++++------------
 2 files changed, 183 insertions(+), 99 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
 tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
 }

+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
+{
+ long off = neon_element_offset(reg, ele, size);
+
+ switch (size) {
+ case MO_32:
+ tcg_gen_ld_i32(dest, cpu_env, off);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
+{
+ long off = neon_element_offset(reg, ele, size);
+
+ switch (size) {
+ case MO_32:
+ tcg_gen_st_i32(src, cpu_env, off);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+}
+
 static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
 {
 TCGv_ptr ret = tcg_temp_new_ptr();
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
 * early. Since Q is 0 there are always just two passes, so instead
 * of a complicated loop over each pass we just unroll.
 */
- tmp = neon_load_reg(a->vn, 0);
- tmp2 = neon_load_reg(a->vn, 1);
+ tmp = tcg_temp_new_i32();
+ tmp2 = tcg_temp_new_i32();
+ tmp3 = tcg_temp_new_i32();
+
+ read_neon_element32(tmp, a->vn, 0, MO_32);
+ read_neon_element32(tmp2, a->vn, 1, MO_32);
 fn(tmp, tmp, tmp2);
- tcg_temp_free_i32(tmp2);

- tmp3 = neon_load_reg(a->vm, 0);
- tmp2 = neon_load_reg(a->vm, 1);
+ read_neon_element32(tmp3, a->vm, 0, MO_32);
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
 fn(tmp3, tmp3, tmp2);
- tcg_temp_free_i32(tmp2);

- neon_store_reg(a->vd, 0, tmp);
- neon_store_reg(a->vd, 1, tmp3);
+ write_neon_element32(tmp, a->vd, 0, MO_32);
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
+
+ tcg_temp_free_i32(tmp);
+ tcg_temp_free_i32(tmp2);
+ tcg_temp_free_i32(tmp3);
 return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
 * 2-reg-and-shift operations, size < 3 case, where the
 * helper needs to be passed cpu_env.
 */
- TCGv_i32 constimm;
+ TCGv_i32 constimm, tmp;
 int pass;

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
 * by immediate using the variable shift operations.
 */
 constimm = tcg_const_i32(dup_const(a->size, a->shift));
+ tmp = tcg_temp_new_i32();

 for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
+ read_neon_element32(tmp, a->vm, pass, MO_32);
 fn(tmp, cpu_env, tmp, constimm);
- neon_store_reg(a->vd, pass, tmp);
+ write_neon_element32(tmp, a->vd, pass, MO_32);
 }
+ tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(constimm);
 return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
 constimm = tcg_const_i64(-a->shift);
 rm1 = tcg_temp_new_i64();
 rm2 = tcg_temp_new_i64();
+ rd = tcg_temp_new_i32();

 /* Load both inputs first to avoid potential overwrite if rm == rd */
 neon_load_reg64(rm1, a->vm);
 neon_load_reg64(rm2, a->vm + 1);

 shiftfn(rm1, rm1, constimm);
- rd = tcg_temp_new_i32();
 narrowfn(rd, cpu_env, rm1);
- neon_store_reg(a->vd, 0, rd);
+ write_neon_element32(rd, a->vd, 0, MO_32);

 shiftfn(rm2, rm2, constimm);
- rd = tcg_temp_new_i32();
 narrowfn(rd, cpu_env, rm2);
- neon_store_reg(a->vd, 1, rd);
+ write_neon_element32(rd, a->vd, 1, MO_32);

+ tcg_temp_free_i32(rd);
 tcg_temp_free_i64(rm1);
 tcg_temp_free_i64(rm2);
 tcg_temp_free_i64(constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
 constimm = tcg_const_i32(imm);

 /* Load all inputs first to avoid potential overwrite */
- rm1 = neon_load_reg(a->vm, 0);
- rm2 = neon_load_reg(a->vm, 1);
- rm3 = neon_load_reg(a->vm + 1, 0);
- rm4 = neon_load_reg(a->vm + 1, 1);
+ rm1 = tcg_temp_new_i32();
+ rm2 = tcg_temp_new_i32();
+ rm3 = tcg_temp_new_i32();
+ rm4 = tcg_temp_new_i32();
+ read_neon_element32(rm1, a->vm, 0, MO_32);
+ read_neon_element32(rm2, a->vm, 1, MO_32);
+ read_neon_element32(rm3, a->vm, 2, MO_32);
+ read_neon_element32(rm4, a->vm, 3, MO_32);
 rtmp = tcg_temp_new_i64();

 shiftfn(rm1, rm1, constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
 tcg_temp_free_i32(rm2);

 narrowfn(rm1, cpu_env, rtmp);
- neon_store_reg(a->vd, 0, rm1);
+ write_neon_element32(rm1, a->vd, 0, MO_32);
+ tcg_temp_free_i32(rm1);

 shiftfn(rm3, rm3, constimm);
 shiftfn(rm4, rm4, constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,

 narrowfn(rm3, cpu_env, rtmp);
 tcg_temp_free_i64(rtmp);
- neon_store_reg(a->vd, 1, rm3);
+ write_neon_element32(rm3, a->vd, 1, MO_32);
+ tcg_temp_free_i32(rm3);
 return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
 widen_mask = dup_const(a->size + 1, widen_mask);
 }

- rm0 = neon_load_reg(a->vm, 0);
- rm1 = neon_load_reg(a->vm, 1);
+ rm0 = tcg_temp_new_i32();
+ rm1 = tcg_temp_new_i32();
+ read_neon_element32(rm0, a->vm, 0, MO_32);
+ read_neon_element32(rm1, a->vm, 1, MO_32);
 tmp = tcg_temp_new_i64();

 widenfn(tmp, rm0);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
 if (src1_wide) {
 neon_load_reg64(rn0_64, a->vn);
 } else {
- TCGv_i32 tmp = neon_load_reg(a->vn, 0);
+ TCGv_i32 tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vn, 0, MO_32);
 widenfn(rn0_64, tmp);
 tcg_temp_free_i32(tmp);
 }
- rm = neon_load_reg(a->vm, 0);
+ rm = tcg_temp_new_i32();
+ read_neon_element32(rm, a->vm, 0, MO_32);

 widenfn(rm_64, rm);
 tcg_temp_free_i32(rm);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
 if (src1_wide) {
 neon_load_reg64(rn1_64, a->vn + 1);
 } else {
- TCGv_i32 tmp = neon_load_reg(a->vn, 1);
+ TCGv_i32 tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vn, 1, MO_32);
 widenfn(rn1_64, tmp);
 tcg_temp_free_i32(tmp);
 }
- rm = neon_load_reg(a->vm, 1);
+ rm = tcg_temp_new_i32();
+ read_neon_element32(rm, a->vm, 1, MO_32);

 neon_store_reg64(rn0_64, a->vd);

@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,

 narrowfn(rd1, rn_64);

- neon_store_reg(a->vd, 0, rd0);
- neon_store_reg(a->vd, 1, rd1);
+ write_neon_element32(rd0, a->vd, 0, MO_32);
+ write_neon_element32(rd1, a->vd, 1, MO_32);

+ tcg_temp_free_i32(rd0);
+ tcg_temp_free_i32(rd1);
 tcg_temp_free_i64(rn_64);
 tcg_temp_free_i64(rm_64);

@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
 rd0 = tcg_temp_new_i64();
 rd1 = tcg_temp_new_i64();

- rn = neon_load_reg(a->vn, 0);
- rm = neon_load_reg(a->vm, 0);
+ rn = tcg_temp_new_i32();
+ rm = tcg_temp_new_i32();
+ read_neon_element32(rn, a->vn, 0, MO_32);
+ read_neon_element32(rm, a->vm, 0, MO_32);
 opfn(rd0, rn, rm);
- tcg_temp_free_i32(rn);
- tcg_temp_free_i32(rm);

- rn = neon_load_reg(a->vn, 1);
- rm = neon_load_reg(a->vm, 1);
+ read_neon_element32(rn, a->vn, 1, MO_32);
+ read_neon_element32(rm, a->vm, 1, MO_32);
 opfn(rd1, rn, rm);
 tcg_temp_free_i32(rn);
 tcg_temp_free_i32(rm);
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)

 static inline TCGv_i32 neon_get_scalar(int size, int reg)
 {
- TCGv_i32 tmp;
- if (size == 1) {
- tmp = neon_load_reg(reg & 7, reg >> 4);
+ TCGv_i32 tmp = tcg_temp_new_i32();
+ if (size == MO_16) {
+ read_neon_element32(tmp, reg & 7, reg >> 4, MO_32);
 if (reg & 8) {
 gen_neon_dup_high16(tmp);
 } else {
 gen_neon_dup_low16(tmp);
 }
 } else {
- tmp = neon_load_reg(reg & 15, reg >> 4);
+ read_neon_element32(tmp, reg & 15, reg >> 4, MO_32);
 }
 return tmp;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
 * perform an accumulation operation of that result into the
 * destination.
 */
- TCGv_i32 scalar;
+ TCGv_i32 scalar, tmp;
 int pass;

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
 }

 scalar = neon_get_scalar(a->size, a->vm);
+ tmp = tcg_temp_new_i32();

 for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
- TCGv_i32 tmp = neon_load_reg(a->vn, pass);
+ read_neon_element32(tmp, a->vn, pass, MO_32);
 opfn(tmp, tmp, scalar);
 if (accfn) {
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
+ TCGv_i32 rd = tcg_temp_new_i32();
+ read_neon_element32(rd, a->vd, pass, MO_32);
 accfn(tmp, rd, tmp);
 tcg_temp_free_i32(rd);
 }
- neon_store_reg(a->vd, pass, tmp);
+ write_neon_element32(tmp, a->vd, pass, MO_32);
 }
+ tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(scalar);
 return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
 * performs a kind of fused op-then-accumulate using a helper
 * function that takes all of rd, rn and the scalar at once.
 */
- TCGv_i32 scalar;
+ TCGv_i32 scalar, rn, rd;
 int pass;

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
 }

 scalar = neon_get_scalar(a->size, a->vm);
+ rn = tcg_temp_new_i32();
+ rd = tcg_temp_new_i32();

 for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
- TCGv_i32 rn = neon_load_reg(a->vn, pass);
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
+ read_neon_element32(rn, a->vn, pass, MO_32);
+ read_neon_element32(rd, a->vd, pass, MO_32);
 opfn(rd, cpu_env, rn, scalar, rd);
- tcg_temp_free_i32(rn);
- neon_store_reg(a->vd, pass, rd);
+ write_neon_element32(rd, a->vd, pass, MO_32);
 }
+ tcg_temp_free_i32(rn);
+ tcg_temp_free_i32(rd);
 tcg_temp_free_i32(scalar);

 return true;
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
 scalar = neon_get_scalar(a->size, a->vm);

 /* Load all inputs before writing any outputs, in case of overlap */
- rn = neon_load_reg(a->vn, 0);
+ rn = tcg_temp_new_i32();
+ read_neon_element32(rn, a->vn, 0, MO_32);
 rn0_64 = tcg_temp_new_i64();
 opfn(rn0_64, rn, scalar);
- tcg_temp_free_i32(rn);

- rn = neon_load_reg(a->vn, 1);
+ read_neon_element32(rn, a->vn, 1, MO_32);
 rn1_64 = tcg_temp_new_i64();
 opfn(rn1_64, rn, scalar);
 tcg_temp_free_i32(rn);
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
 return false;
 }
 n <<= 3;
+ tmp = tcg_temp_new_i32();
 if (a->op) {
- tmp = neon_load_reg(a->vd, 0);
+ read_neon_element32(tmp, a->vd, 0, MO_32);
 } else {
- tmp = tcg_temp_new_i32();
 tcg_gen_movi_i32(tmp, 0);
 }
- tmp2 = neon_load_reg(a->vm, 0);
+ tmp2 = tcg_temp_new_i32();
+ read_neon_element32(tmp2, a->vm, 0, MO_32);
 ptr1 = vfp_reg_ptr(true, a->vn);
 tmp4 = tcg_const_i32(n);
 gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
- tcg_temp_free_i32(tmp);
+
 if (a->op) {
- tmp = neon_load_reg(a->vd, 1);
+ read_neon_element32(tmp, a->vd, 1, MO_32);
 } else {
- tmp = tcg_temp_new_i32();
 tcg_gen_movi_i32(tmp, 0);
 }
- tmp3 = neon_load_reg(a->vm, 1);
+ tmp3 = tcg_temp_new_i32();
+ read_neon_element32(tmp3, a->vm, 1, MO_32);
 gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
+ tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(tmp4);
 tcg_temp_free_ptr(ptr1);
- neon_store_reg(a->vd, 0, tmp2);
- neon_store_reg(a->vd, 1, tmp3);
- tcg_temp_free_i32(tmp);
+
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
+ tcg_temp_free_i32(tmp2);
+ tcg_temp_free_i32(tmp3);
 return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
 static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
 {
 int pass, half;
+ TCGv_i32 tmp[2];

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
 return true;
 }

- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
- TCGv_i32 tmp[2];
+ tmp[0] = tcg_temp_new_i32();
+ tmp[1] = tcg_temp_new_i32();

+ for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
 for (half = 0; half < 2; half++) {
- tmp[half] = neon_load_reg(a->vm, pass * 2 + half);
+ read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
 switch (a->size) {
 case 0:
 tcg_gen_bswap32_i32(tmp[half], tmp[half]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
 g_assert_not_reached();
 }
 }
- neon_store_reg(a->vd, pass * 2, tmp[1]);
- neon_store_reg(a->vd, pass * 2 + 1, tmp[0]);
+ write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
+ write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
 }
+
+ tcg_temp_free_i32(tmp[0]);
+ tcg_temp_free_i32(tmp[1]);
 return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
 rm0_64 = tcg_temp_new_i64();
 rm1_64 = tcg_temp_new_i64();
 rd_64 = tcg_temp_new_i64();
- tmp = neon_load_reg(a->vm, pass * 2);
+
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vm, pass * 2, MO_32);
 widenfn(rm0_64, tmp);
- tcg_temp_free_i32(tmp);
- tmp = neon_load_reg(a->vm, pass * 2 + 1);
+ read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
 widenfn(rm1_64, tmp);
 tcg_temp_free_i32(tmp);
+
 opfn(rd_64, rm0_64, rm1_64);
 tcg_temp_free_i64(rm0_64);
 tcg_temp_free_i64(rm1_64);
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
 narrowfn(rd0, cpu_env, rm);
 neon_load_reg64(rm, a->vm + 1);
 narrowfn(rd1, cpu_env, rm);
- neon_store_reg(a->vd, 0, rd0);
- neon_store_reg(a->vd, 1, rd1);
+ write_neon_element32(rd0, a->vd, 0, MO_32);
+ write_neon_element32(rd1, a->vd, 1, MO_32);
+ tcg_temp_free_i32(rd0);
+ tcg_temp_free_i32(rd1);
 tcg_temp_free_i64(rm);
 return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
 }

 rd = tcg_temp_new_i64();
+ rm0 = tcg_temp_new_i32();
+ rm1 = tcg_temp_new_i32();

- rm0 = neon_load_reg(a->vm, 0);
- rm1 = neon_load_reg(a->vm, 1);
+ read_neon_element32(rm0, a->vm, 0, MO_32);
+ read_neon_element32(rm1, a->vm, 1, MO_32);

 widenfn(rd, rm0);
 tcg_gen_shli_i64(rd, rd, 8 << a->size);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)

 fpst = fpstatus_ptr(FPST_STD);
 ahp = get_ahp_flag();
- tmp = neon_load_reg(a->vm, 0);
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vm, 0, MO_32);
 gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
- tmp2 = neon_load_reg(a->vm, 1);
+ tmp2 = tcg_temp_new_i32();
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
 gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
 tcg_gen_shli_i32(tmp2, tmp2, 16);
 tcg_gen_or_i32(tmp2, tmp2, tmp);
- tcg_temp_free_i32(tmp);
- tmp = neon_load_reg(a->vm, 2);
+ read_neon_element32(tmp, a->vm, 2, MO_32);
 gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
- tmp3 = neon_load_reg(a->vm, 3);
- neon_store_reg(a->vd, 0, tmp2);
+ tmp3 = tcg_temp_new_i32();
+ read_neon_element32(tmp3, a->vm, 3, MO_32);
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
+ tcg_temp_free_i32(tmp2);
 gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
 tcg_gen_shli_i32(tmp3, tmp3, 16);
 tcg_gen_or_i32(tmp3, tmp3, tmp);
- neon_store_reg(a->vd, 1, tmp3);
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
+ tcg_temp_free_i32(tmp3);
 tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(ahp);
 tcg_temp_free_ptr(fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
 fpst = fpstatus_ptr(FPST_STD);
 ahp = get_ahp_flag();
 tmp3 = tcg_temp_new_i32();
- tmp = neon_load_reg(a->vm, 0);
- tmp2 = neon_load_reg(a->vm, 1);
+ tmp2 = tcg_temp_new_i32();
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vm, 0, MO_32);
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
 tcg_gen_ext16u_i32(tmp3, tmp);
 gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
- neon_store_reg(a->vd, 0, tmp3);
+ write_neon_element32(tmp3, a->vd, 0, MO_32);
 tcg_gen_shri_i32(tmp, tmp, 16);
 gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
- neon_store_reg(a->vd, 1, tmp);
- tmp3 = tcg_temp_new_i32();
+ write_neon_element32(tmp, a->vd, 1, MO_32);
+ tcg_temp_free_i32(tmp);
 tcg_gen_ext16u_i32(tmp3, tmp2);
 gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
- neon_store_reg(a->vd, 2, tmp3);
+ write_neon_element32(tmp3, a->vd, 2, MO_32);
+ tcg_temp_free_i32(tmp3);
 tcg_gen_shri_i32(tmp2, tmp2, 16);
 gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
- neon_store_reg(a->vd, 3, tmp2);
+ write_neon_element32(tmp2, a->vd, 3, MO_32);
+ tcg_temp_free_i32(tmp2);
 tcg_temp_free_i32(ahp);
 tcg_temp_free_ptr(fpst);

@@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2)

 static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
 {
+ TCGv_i32 tmp;
 int pass;

 /* Handle a 2-reg-misc operation by iterating 32 bits at a time */
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
 return true;
 }

+ tmp = tcg_temp_new_i32();
 for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
+ read_neon_element32(tmp, a->vm, pass, MO_32);
 fn(tmp, tmp);
- neon_store_reg(a->vd, pass, tmp);
+ write_neon_element32(tmp, a->vd, pass, MO_32);
 }
+ tcg_temp_free_i32(tmp);

 return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a)
 return true;
 }

- if (a->size == 2) {
+ tmp = tcg_temp_new_i32();
+ tmp2 = tcg_temp_new_i32();
+ if (a->size == MO_32) {
 for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) {
- tmp = neon_load_reg(a->vm, pass);
- tmp2 = neon_load_reg(a->vd, pass + 1);
- neon_store_reg(a->vm, pass, tmp2);
- neon_store_reg(a->vd, pass + 1, tmp);
+ read_neon_element32(tmp, a->vm, pass, MO_32);
+ read_neon_element32(tmp2, a->vd, pass + 1, MO_32);
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
+ write_neon_element32(tmp, a->vd, pass + 1, MO_32);
 }
 } else {
 for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
- tmp = neon_load_reg(a->vm, pass);
- tmp2 = neon_load_reg(a->vd, pass);
- if (a->size == 0) {
+ read_neon_element32(tmp, a->vm, pass, MO_32);
+ read_neon_element32(tmp2, a->vd, pass, MO_32);
+ if (a->size == MO_8) {
 gen_neon_trn_u8(tmp, tmp2);
 } else {
 gen_neon_trn_u16(tmp, tmp2);
 }
- neon_store_reg(a->vm, pass, tmp2);
- neon_store_reg(a->vd, pass, tmp);
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
+ write_neon_element32(tmp, a->vd, pass, MO_32);
 }
 }
+ tcg_temp_free_i32(tmp);
+ tcg_temp_free_i32(tmp2);
 return true;
 }
--
2.20.1

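The ownership change is easier to see outside TCG. A plain-C model (Temp, temp_new(), read_element() and friends are stand-ins, not QEMU APIs) of the new caller-allocates convention, which lets loops hoist a single temp instead of allocating and freeing one per pass:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int val; } Temp;          /* stand-in for TCGv_i32 */

    static Temp *temp_new(void) { return calloc(1, sizeof(Temp)); }
    static void temp_free(Temp *t) { free(t); }

    /* Model helpers: write into a caller-owned temp, never allocate. */
    static void read_element(Temp *dest, int reg, int pass) { dest->val = reg * 2 + pass; }
    static void write_element(Temp *src, int reg, int pass) { (void)src; (void)reg; (void)pass; }

    int main(void)
    {
        /* New style: one temp, allocated once, reused across passes. */
        Temp *tmp = temp_new();
        for (int pass = 0; pass < 4; pass++) {
            read_element(tmp, 0, pass);
            /* ... operate on tmp ... */
            write_element(tmp, 1, pass);
        }
        temp_free(tmp);
        printf("one allocation for four passes\n");
        return 0;
    }

The old neon_load_reg() returned a freshly allocated temp (callee allocates), which is why the diff above moves tcg_temp_new_i32()/tcg_temp_free_i32() calls out to the callers.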
From: Richard Henderson <richard.henderson@linaro.org>

We can then use this to improve VMOV (scalar to gp) and
VMOV (gp to scalar) so that we simply perform the memory
operation that we wanted, rather than inserting or
extracting from a 32-bit quantity.

These were the last uses of neon_load/store_reg, so remove them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 50 +++++++++++++-----------
 target/arm/translate-vfp.c.inc | 71 +++++-----------------------------
 2 files changed, 37 insertions(+), 84 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
 * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
 * where 0 is the least significant end of the register.
 */
-static long neon_element_offset(int reg, int element, MemOp size)
+static long neon_element_offset(int reg, int element, MemOp memop)
 {
- int element_size = 1 << size;
+ int element_size = 1 << (memop & MO_SIZE);
 int ofs = element * element_size;
 #ifdef HOST_WORDS_BIGENDIAN
 /*
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
 }
 }

-static TCGv_i32 neon_load_reg(int reg, int pass)
-{
- TCGv_i32 tmp = tcg_temp_new_i32();
- tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
- return tmp;
-}
-
-static void neon_store_reg(int reg, int pass, TCGv_i32 var)
-{
- tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
- tcg_temp_free_i32(var);
-}
-
 static inline void neon_load_reg64(TCGv_i64 var, int reg)
 {
 tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
 tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
 }

-static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
 {
- long off = neon_element_offset(reg, ele, size);
+ long off = neon_element_offset(reg, ele, memop);

- switch (size) {
- case MO_32:
+ switch (memop) {
+ case MO_SB:
+ tcg_gen_ld8s_i32(dest, cpu_env, off);
+ break;
+ case MO_UB:
+ tcg_gen_ld8u_i32(dest, cpu_env, off);
+ break;
+ case MO_SW:
+ tcg_gen_ld16s_i32(dest, cpu_env, off);
+ break;
+ case MO_UW:
+ tcg_gen_ld16u_i32(dest, cpu_env, off);
+ break;
+ case MO_UL:
+ case MO_SL:
 tcg_gen_ld_i32(dest, cpu_env, off);
 break;
 default:
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
 }
 }

-static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
 {
- long off = neon_element_offset(reg, ele, size);
+ long off = neon_element_offset(reg, ele, memop);

- switch (size) {
+ switch (memop) {
+ case MO_8:
+ tcg_gen_st8_i32(src, cpu_env, off);
+ break;
+ case MO_16:
+ tcg_gen_st16_i32(src, cpu_env, off);
+ break;
 case MO_32:
 tcg_gen_st_i32(src, cpu_env, off);
 break;
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
 {
 /* VMOV scalar to general purpose register */
 TCGv_i32 tmp;
- int pass;
- uint32_t offset;

- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
- if (a->size == 2
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
+ if (a->size == MO_32
 ? !dc_isar_feature(aa32_fpsp_v2, s)
 : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
 return false;
 }

- offset = a->index << a->size;
- pass = extract32(offset, 2, 1);
- offset = extract32(offset, 0, 2) * 8;
-
 if (!vfp_access_check(s)) {
 return true;
 }

- tmp = neon_load_reg(a->vn, pass);
- switch (a->size) {
- case 0:
- if (offset) {
- tcg_gen_shri_i32(tmp, tmp, offset);
- }
- if (a->u) {
- gen_uxtb(tmp);
- } else {
- gen_sxtb(tmp);
- }
- break;
- case 1:
- if (a->u) {
- if (offset) {
- tcg_gen_shri_i32(tmp, tmp, 16);
- } else {
- gen_uxth(tmp);
- }
- } else {
- if (offset) {
- tcg_gen_sari_i32(tmp, tmp, 16);
- } else {
- gen_sxth(tmp);
- }
- }
- break;
- case 2:
- break;
- }
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
 store_reg(s, a->rt, tmp);

 return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
 static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
 {
 /* VMOV general purpose register to scalar */
- TCGv_i32 tmp, tmp2;
- int pass;
- uint32_t offset;
+ TCGv_i32 tmp;

- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
- if (a->size == 2
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
+ if (a->size == MO_32
 ? !dc_isar_feature(aa32_fpsp_v2, s)
 : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
 return false;
 }

- offset = a->index << a->size;
- pass = extract32(offset, 2, 1);
- offset = extract32(offset, 0, 2) * 8;
-
 if (!vfp_access_check(s)) {
 return true;
 }

 tmp = load_reg(s, a->rt);
- switch (a->size) {
- case 0:
- tmp2 = neon_load_reg(a->vn, pass);
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8);
- tcg_temp_free_i32(tmp2);
- break;
- case 1:
- tmp2 = neon_load_reg(a->vn, pass);
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16);
- tcg_temp_free_i32(tmp2);
- break;
- case 2:
- break;
- }
- neon_store_reg(a->vn, pass, tmp);
+ write_neon_element32(tmp, a->vn, a->index, a->size);
+ tcg_temp_free_i32(tmp);

 return true;
 }
--
2.20.1

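How the widened MemOp drives the load is worth a worked example. A standalone sketch (the MO_* constants mirror QEMU's MemOp encoding, assuming MO_SIGN == 4; load32() is a hypothetical model, not a QEMU API) of why a->size | (a->u ? 0 : MO_SIGN) gives VMOV the right extension directly, with no shift/extract sequence:

    #include <stdint.h>
    #include <stdio.h>

    enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_SIGN = 4 };
    enum { MO_UB = MO_8, MO_SB = MO_8 | MO_SIGN,
           MO_UW = MO_16, MO_SW = MO_16 | MO_SIGN, MO_UL = MO_32 };

    /* Model of read_neon_element32(): size picks the width, MO_SIGN
     * picks sign vs zero extension into the 32-bit destination. */
    static int32_t load32(const void *p, int memop)
    {
        switch (memop) {
        case MO_SB: return *(const int8_t *)p;
        case MO_UB: return *(const uint8_t *)p;
        case MO_SW: return *(const int16_t *)p;
        case MO_UW: return *(const uint16_t *)p;
        case MO_UL:
        case MO_UL | MO_SIGN: return *(const int32_t *)p;
        default: return 0;
        }
    }

    int main(void)
    {
        uint16_t elt = 0x8001;
        int u = 0;                                /* signed VMOV.S16 */
        int memop = MO_16 | (u ? 0 : MO_SIGN);
        printf("0x%04x -> %d\n", elt, load32(&elt, memop));  /* -32767 */
        return 0;
    }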
From: Richard Henderson <richard.henderson@linaro.org>

The only uses of this function are for loading VFP
single-precision values, and nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 +-
 target/arm/translate-vfp.c.inc | 184 ++++++++++++++++-----------------
 2 files changed, 94 insertions(+), 94 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg)
 tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
 }

-static inline void neon_load_reg32(TCGv_i32 var, int reg)
+static inline void vfp_load_reg32(TCGv_i32 var, int reg)
 {
 tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
 }

-static inline void neon_store_reg32(TCGv_i32 var, int reg)
+static inline void vfp_store_reg32(TCGv_i32 var, int reg)
 {
 tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
 }
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
 frn = tcg_temp_new_i32();
 frm = tcg_temp_new_i32();
 dest = tcg_temp_new_i32();
- neon_load_reg32(frn, rn);
- neon_load_reg32(frm, rm);
+ vfp_load_reg32(frn, rn);
+ vfp_load_reg32(frm, rm);
 switch (a->cc) {
 case 0: /* eq: Z */
 tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero,
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
 if (sz == 1) {
 tcg_gen_andi_i32(dest, dest, 0xffff);
 }
- neon_store_reg32(dest, rd);
+ vfp_store_reg32(dest, rd);
 tcg_temp_free_i32(frn);
 tcg_temp_free_i32(frm);
 tcg_temp_free_i32(dest);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
 TCGv_i32 tcg_res;
 tcg_op = tcg_temp_new_i32();
 tcg_res = tcg_temp_new_i32();
- neon_load_reg32(tcg_op, rm);
+ vfp_load_reg32(tcg_op, rm);
 if (sz == 1) {
 gen_helper_rinth(tcg_res, tcg_op, fpst);
 } else {
 gen_helper_rints(tcg_res, tcg_op, fpst);
 }
- neon_store_reg32(tcg_res, rd);
+ vfp_store_reg32(tcg_res, rd);
 tcg_temp_free_i32(tcg_op);
 tcg_temp_free_i32(tcg_res);
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
 gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst);
 }
 tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res);
- neon_store_reg32(tcg_tmp, rd);
+ vfp_store_reg32(tcg_tmp, rd);
 tcg_temp_free_i32(tcg_tmp);
 tcg_temp_free_i64(tcg_res);
 tcg_temp_free_i64(tcg_double);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
 TCGv_i32 tcg_single, tcg_res;
 tcg_single = tcg_temp_new_i32();
 tcg_res = tcg_temp_new_i32();
- neon_load_reg32(tcg_single, rm);
+ vfp_load_reg32(tcg_single, rm);
 if (sz == 1) {
 if (is_signed) {
 gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
 gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst);
 }
 }
- neon_store_reg32(tcg_res, rd);
+ vfp_store_reg32(tcg_res, rd);
 tcg_temp_free_i32(tcg_res);
 tcg_temp_free_i32(tcg_single);
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
 if (a->l) {
 /* VFP to general purpose register */
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vn);
+ vfp_load_reg32(tmp, a->vn);
 tcg_gen_andi_i32(tmp, tmp, 0xffff);
 store_reg(s, a->rt, tmp);
 } else {
 /* general purpose register to VFP */
 tmp = load_reg(s, a->rt);
 tcg_gen_andi_i32(tmp, tmp, 0xffff);
- neon_store_reg32(tmp, a->vn);
+ vfp_store_reg32(tmp, a->vn);
 tcg_temp_free_i32(tmp);
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
 if (a->l) {
 /* VFP to general purpose register */
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vn);
+ vfp_load_reg32(tmp, a->vn);
 if (a->rt == 15) {
 /* Set the 4 flag bits in the CPSR. */
 gen_set_nzcv(tmp);
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
 } else {
 /* general purpose register to VFP */
 tmp = load_reg(s, a->rt);
- neon_store_reg32(tmp, a->vn);
+ vfp_store_reg32(tmp, a->vn);
 tcg_temp_free_i32(tmp);
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
 if (a->op) {
 /* fpreg to gpreg */
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vm);
+ vfp_load_reg32(tmp, a->vm);
 store_reg(s, a->rt, tmp);
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vm + 1);
+ vfp_load_reg32(tmp, a->vm + 1);
 store_reg(s, a->rt2, tmp);
 } else {
 /* gpreg to fpreg */
 tmp = load_reg(s, a->rt);
- neon_store_reg32(tmp, a->vm);
+ vfp_store_reg32(tmp, a->vm);
 tcg_temp_free_i32(tmp);
 tmp = load_reg(s, a->rt2);
- neon_store_reg32(tmp, a->vm + 1);
+ vfp_store_reg32(tmp, a->vm + 1);
 tcg_temp_free_i32(tmp);
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
 if (a->op) {
 /* fpreg to gpreg */
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vm * 2);
+ vfp_load_reg32(tmp, a->vm * 2);
 store_reg(s, a->rt, tmp);
 tmp = tcg_temp_new_i32();
- neon_load_reg32(tmp, a->vm * 2 + 1);
+ vfp_load_reg32(tmp, a->vm * 2 + 1);
 store_reg(s, a->rt2, tmp);
 } else {
 /* gpreg to fpreg */
 tmp = load_reg(s, a->rt);
- neon_store_reg32(tmp, a->vm * 2);
+ vfp_store_reg32(tmp, a->vm * 2);
 tcg_temp_free_i32(tmp);
 tmp = load_reg(s, a->rt2);
- neon_store_reg32(tmp, a->vm * 2 + 1);
+ vfp_store_reg32(tmp, a->vm * 2 + 1);
 tcg_temp_free_i32(tmp);
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
 tmp = tcg_temp_new_i32();
 if (a->l) {
 gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
- neon_store_reg32(tmp, a->vd);
+ vfp_store_reg32(tmp, a->vd);
 } else {
- neon_load_reg32(tmp, a->vd);
+ vfp_load_reg32(tmp, a->vd);
 gen_aa32_st16(s, tmp, addr, get_mem_index(s));
 }
 tcg_temp_free_i32(tmp);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
 tmp = tcg_temp_new_i32();
 if (a->l) {
 gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- neon_store_reg32(tmp, a->vd);
+ vfp_store_reg32(tmp, a->vd);
 } else {
- neon_load_reg32(tmp, a->vd);
+ vfp_load_reg32(tmp, a->vd);
 gen_aa32_st32(s, tmp, addr, get_mem_index(s));
 }
 tcg_temp_free_i32(tmp);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
 if (a->l) {
 /* load */
 gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- neon_store_reg32(tmp, a->vd + i);
+ vfp_store_reg32(tmp, a->vd + i);
 } else {
 /* store */
- neon_load_reg32(tmp, a->vd + i);
+ vfp_load_reg32(tmp, a->vd + i);
 gen_aa32_st32(s, tmp, addr, get_mem_index(s));
 }
 tcg_gen_addi_i32(addr, addr, offset);
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
 fd = tcg_temp_new_i32();
 fpst = fpstatus_ptr(FPST_FPCR);

- neon_load_reg32(f0, vn);
- neon_load_reg32(f1, vm);
+ vfp_load_reg32(f0, vn);
+ vfp_load_reg32(f1, vm);

 for (;;) {
 if (reads_vd) {
- neon_load_reg32(fd, vd);
+ vfp_load_reg32(fd, vd);
 }
 fn(fd, f0, f1, fpst);
- neon_store_reg32(fd, vd);
+ vfp_store_reg32(fd, vd);

 if (veclen == 0) {
 break;
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
 veclen--;
 vd = vfp_advance_sreg(vd, delta_d);
 vn = vfp_advance_sreg(vn, delta_d);
- neon_load_reg32(f0, vn);
+ vfp_load_reg32(f0, vn);
 if (delta_m) {
 vm = vfp_advance_sreg(vm, delta_m);
- neon_load_reg32(f1, vm);
+ vfp_load_reg32(f1, vm);
 }
 }

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
 fd = tcg_temp_new_i32();
 fpst = fpstatus_ptr(FPST_FPCR_F16);

- neon_load_reg32(f0, vn);
- neon_load_reg32(f1, vm);
+ vfp_load_reg32(f0, vn);
+ vfp_load_reg32(f1, vm);

 if (reads_vd) {
- neon_load_reg32(fd, vd);
+ vfp_load_reg32(fd, vd);
 }
 fn(fd, f0, f1, fpst);
- neon_store_reg32(fd, vd);
+ vfp_store_reg32(fd, vd);

 tcg_temp_free_i32(f0);
 tcg_temp_free_i32(f1);
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
 f0 = tcg_temp_new_i32();
 fd = tcg_temp_new_i32();
275
- neon_load_reg32(f0, vm);
276
+ vfp_load_reg32(f0, vm);
277
278
for (;;) {
279
fn(fd, f0);
280
- neon_store_reg32(fd, vd);
281
+ vfp_store_reg32(fd, vd);
282
283
if (veclen == 0) {
284
break;
285
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
286
/* single source one-many */
287
while (veclen--) {
288
vd = vfp_advance_sreg(vd, delta_d);
289
- neon_store_reg32(fd, vd);
290
+ vfp_store_reg32(fd, vd);
291
}
292
break;
293
}
294
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
295
veclen--;
296
vd = vfp_advance_sreg(vd, delta_d);
297
vm = vfp_advance_sreg(vm, delta_m);
298
- neon_load_reg32(f0, vm);
299
+ vfp_load_reg32(f0, vm);
300
}
301
302
tcg_temp_free_i32(f0);
303
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
304
}
305
306
f0 = tcg_temp_new_i32();
307
- neon_load_reg32(f0, vm);
308
+ vfp_load_reg32(f0, vm);
309
fn(f0, f0);
310
- neon_store_reg32(f0, vd);
311
+ vfp_store_reg32(f0, vd);
312
tcg_temp_free_i32(f0);
313
314
return true;
315
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
316
vm = tcg_temp_new_i32();
317
vd = tcg_temp_new_i32();
318
319
- neon_load_reg32(vn, a->vn);
320
- neon_load_reg32(vm, a->vm);
321
+ vfp_load_reg32(vn, a->vn);
322
+ vfp_load_reg32(vm, a->vm);
323
if (neg_n) {
324
/* VFNMS, VFMS */
325
gen_helper_vfp_negh(vn, vn);
326
}
327
- neon_load_reg32(vd, a->vd);
328
+ vfp_load_reg32(vd, a->vd);
329
if (neg_d) {
330
/* VFNMA, VFNMS */
331
gen_helper_vfp_negh(vd, vd);
332
}
333
fpst = fpstatus_ptr(FPST_FPCR_F16);
334
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
335
- neon_store_reg32(vd, a->vd);
336
+ vfp_store_reg32(vd, a->vd);
337
338
tcg_temp_free_ptr(fpst);
339
tcg_temp_free_i32(vn);
340
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
341
vm = tcg_temp_new_i32();
342
vd = tcg_temp_new_i32();
343
344
- neon_load_reg32(vn, a->vn);
345
- neon_load_reg32(vm, a->vm);
346
+ vfp_load_reg32(vn, a->vn);
347
+ vfp_load_reg32(vm, a->vm);
348
if (neg_n) {
349
/* VFNMS, VFMS */
350
gen_helper_vfp_negs(vn, vn);
351
}
352
- neon_load_reg32(vd, a->vd);
353
+ vfp_load_reg32(vd, a->vd);
354
if (neg_d) {
355
/* VFNMA, VFNMS */
356
gen_helper_vfp_negs(vd, vd);
357
}
358
fpst = fpstatus_ptr(FPST_FPCR);
359
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
360
- neon_store_reg32(vd, a->vd);
361
+ vfp_store_reg32(vd, a->vd);
362
363
tcg_temp_free_ptr(fpst);
364
tcg_temp_free_i32(vn);
365
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a)
366
}
367
368
fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm));
369
- neon_store_reg32(fd, a->vd);
370
+ vfp_store_reg32(fd, a->vd);
371
tcg_temp_free_i32(fd);
372
return true;
373
}
374
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
375
fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
376
377
for (;;) {
378
- neon_store_reg32(fd, vd);
379
+ vfp_store_reg32(fd, vd);
380
381
if (veclen == 0) {
382
break;
383
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
384
vd = tcg_temp_new_i32();
385
vm = tcg_temp_new_i32();
386
387
- neon_load_reg32(vd, a->vd);
388
+ vfp_load_reg32(vd, a->vd);
389
if (a->z) {
390
tcg_gen_movi_i32(vm, 0);
391
} else {
392
- neon_load_reg32(vm, a->vm);
393
+ vfp_load_reg32(vm, a->vm);
394
}
395
396
if (a->e) {
397
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
398
vd = tcg_temp_new_i32();
399
vm = tcg_temp_new_i32();
400
401
- neon_load_reg32(vd, a->vd);
402
+ vfp_load_reg32(vd, a->vd);
403
if (a->z) {
404
tcg_gen_movi_i32(vm, 0);
405
} else {
406
- neon_load_reg32(vm, a->vm);
407
+ vfp_load_reg32(vm, a->vm);
408
}
409
410
if (a->e) {
411
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
412
/* The T bit tells us if we want the low or high 16 bits of Vm */
413
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
414
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode);
415
- neon_store_reg32(tmp, a->vd);
416
+ vfp_store_reg32(tmp, a->vd);
417
tcg_temp_free_i32(ahp_mode);
418
tcg_temp_free_ptr(fpst);
419
tcg_temp_free_i32(tmp);
420
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
421
ahp_mode = get_ahp_flag();
422
tmp = tcg_temp_new_i32();
423
424
- neon_load_reg32(tmp, a->vm);
425
+ vfp_load_reg32(tmp, a->vm);
426
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode);
427
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
428
tcg_temp_free_i32(ahp_mode);
429
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
430
}
431
432
tmp = tcg_temp_new_i32();
433
- neon_load_reg32(tmp, a->vm);
434
+ vfp_load_reg32(tmp, a->vm);
435
fpst = fpstatus_ptr(FPST_FPCR_F16);
436
gen_helper_rinth(tmp, tmp, fpst);
437
- neon_store_reg32(tmp, a->vd);
438
+ vfp_store_reg32(tmp, a->vd);
439
tcg_temp_free_ptr(fpst);
440
tcg_temp_free_i32(tmp);
441
return true;
442
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
443
}
444
445
tmp = tcg_temp_new_i32();
446
- neon_load_reg32(tmp, a->vm);
447
+ vfp_load_reg32(tmp, a->vm);
448
fpst = fpstatus_ptr(FPST_FPCR);
449
gen_helper_rints(tmp, tmp, fpst);
450
- neon_store_reg32(tmp, a->vd);
451
+ vfp_store_reg32(tmp, a->vd);
452
tcg_temp_free_ptr(fpst);
453
tcg_temp_free_i32(tmp);
454
return true;
455
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
456
}
457
458
tmp = tcg_temp_new_i32();
459
- neon_load_reg32(tmp, a->vm);
460
+ vfp_load_reg32(tmp, a->vm);
461
fpst = fpstatus_ptr(FPST_FPCR_F16);
462
tcg_rmode = tcg_const_i32(float_round_to_zero);
463
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
464
gen_helper_rinth(tmp, tmp, fpst);
465
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
466
- neon_store_reg32(tmp, a->vd);
467
+ vfp_store_reg32(tmp, a->vd);
468
tcg_temp_free_ptr(fpst);
469
tcg_temp_free_i32(tcg_rmode);
470
tcg_temp_free_i32(tmp);
471
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
472
}
473
474
tmp = tcg_temp_new_i32();
475
- neon_load_reg32(tmp, a->vm);
476
+ vfp_load_reg32(tmp, a->vm);
477
fpst = fpstatus_ptr(FPST_FPCR);
478
tcg_rmode = tcg_const_i32(float_round_to_zero);
479
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
480
gen_helper_rints(tmp, tmp, fpst);
481
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
482
- neon_store_reg32(tmp, a->vd);
483
+ vfp_store_reg32(tmp, a->vd);
484
tcg_temp_free_ptr(fpst);
485
tcg_temp_free_i32(tcg_rmode);
486
tcg_temp_free_i32(tmp);
487
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
488
}
489
490
tmp = tcg_temp_new_i32();
491
- neon_load_reg32(tmp, a->vm);
492
+ vfp_load_reg32(tmp, a->vm);
493
fpst = fpstatus_ptr(FPST_FPCR_F16);
494
gen_helper_rinth_exact(tmp, tmp, fpst);
495
- neon_store_reg32(tmp, a->vd);
496
+ vfp_store_reg32(tmp, a->vd);
497
tcg_temp_free_ptr(fpst);
498
tcg_temp_free_i32(tmp);
499
return true;
500
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
501
}
502
503
tmp = tcg_temp_new_i32();
504
- neon_load_reg32(tmp, a->vm);
505
+ vfp_load_reg32(tmp, a->vm);
506
fpst = fpstatus_ptr(FPST_FPCR);
507
gen_helper_rints_exact(tmp, tmp, fpst);
508
- neon_store_reg32(tmp, a->vd);
509
+ vfp_store_reg32(tmp, a->vd);
510
tcg_temp_free_ptr(fpst);
511
tcg_temp_free_i32(tmp);
512
return true;
513
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
514
515
vm = tcg_temp_new_i32();
516
vd = tcg_temp_new_i64();
517
- neon_load_reg32(vm, a->vm);
518
+ vfp_load_reg32(vm, a->vm);
519
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
520
neon_store_reg64(vd, a->vd);
521
tcg_temp_free_i32(vm);
522
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
523
vm = tcg_temp_new_i64();
524
neon_load_reg64(vm, a->vm);
525
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
526
- neon_store_reg32(vd, a->vd);
527
+ vfp_store_reg32(vd, a->vd);
528
tcg_temp_free_i32(vd);
529
tcg_temp_free_i64(vm);
530
return true;
531
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
532
}
533
534
vm = tcg_temp_new_i32();
535
- neon_load_reg32(vm, a->vm);
536
+ vfp_load_reg32(vm, a->vm);
537
fpst = fpstatus_ptr(FPST_FPCR_F16);
538
if (a->s) {
539
/* i32 -> f16 */
540
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
541
/* u32 -> f16 */
542
gen_helper_vfp_uitoh(vm, vm, fpst);
543
}
544
- neon_store_reg32(vm, a->vd);
545
+ vfp_store_reg32(vm, a->vd);
546
tcg_temp_free_i32(vm);
547
tcg_temp_free_ptr(fpst);
548
return true;
549
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
550
}
551
552
vm = tcg_temp_new_i32();
553
- neon_load_reg32(vm, a->vm);
554
+ vfp_load_reg32(vm, a->vm);
555
fpst = fpstatus_ptr(FPST_FPCR);
556
if (a->s) {
557
/* i32 -> f32 */
558
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
559
/* u32 -> f32 */
560
gen_helper_vfp_uitos(vm, vm, fpst);
561
}
562
- neon_store_reg32(vm, a->vd);
563
+ vfp_store_reg32(vm, a->vd);
564
tcg_temp_free_i32(vm);
565
tcg_temp_free_ptr(fpst);
566
return true;
567
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
568
569
vm = tcg_temp_new_i32();
570
vd = tcg_temp_new_i64();
571
- neon_load_reg32(vm, a->vm);
572
+ vfp_load_reg32(vm, a->vm);
573
fpst = fpstatus_ptr(FPST_FPCR);
574
if (a->s) {
575
/* i32 -> f64 */
576
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
577
vd = tcg_temp_new_i32();
578
neon_load_reg64(vm, a->vm);
579
gen_helper_vjcvt(vd, vm, cpu_env);
580
- neon_store_reg32(vd, a->vd);
581
+ vfp_store_reg32(vd, a->vd);
582
tcg_temp_free_i64(vm);
583
tcg_temp_free_i32(vd);
584
return true;
585
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
586
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
587
588
vd = tcg_temp_new_i32();
589
- neon_load_reg32(vd, a->vd);
590
+ vfp_load_reg32(vd, a->vd);
591
592
fpst = fpstatus_ptr(FPST_FPCR_F16);
593
shift = tcg_const_i32(frac_bits);
594
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
595
g_assert_not_reached();
596
}
597
598
- neon_store_reg32(vd, a->vd);
599
+ vfp_store_reg32(vd, a->vd);
600
tcg_temp_free_i32(vd);
601
tcg_temp_free_i32(shift);
602
tcg_temp_free_ptr(fpst);
603
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
604
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
605
606
vd = tcg_temp_new_i32();
607
- neon_load_reg32(vd, a->vd);
608
+ vfp_load_reg32(vd, a->vd);
609
610
fpst = fpstatus_ptr(FPST_FPCR);
611
shift = tcg_const_i32(frac_bits);
612
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
613
g_assert_not_reached();
614
}
615
616
- neon_store_reg32(vd, a->vd);
617
+ vfp_store_reg32(vd, a->vd);
618
tcg_temp_free_i32(vd);
619
tcg_temp_free_i32(shift);
620
tcg_temp_free_ptr(fpst);
621
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
622
623
fpst = fpstatus_ptr(FPST_FPCR_F16);
624
vm = tcg_temp_new_i32();
625
- neon_load_reg32(vm, a->vm);
626
+ vfp_load_reg32(vm, a->vm);
627
628
if (a->s) {
629
if (a->rz) {
630
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
631
gen_helper_vfp_touih(vm, vm, fpst);
632
}
633
}
634
- neon_store_reg32(vm, a->vd);
635
+ vfp_store_reg32(vm, a->vd);
636
tcg_temp_free_i32(vm);
637
tcg_temp_free_ptr(fpst);
638
return true;
639
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
640
641
fpst = fpstatus_ptr(FPST_FPCR);
642
vm = tcg_temp_new_i32();
643
- neon_load_reg32(vm, a->vm);
644
+ vfp_load_reg32(vm, a->vm);
645
646
if (a->s) {
647
if (a->rz) {
648
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
649
gen_helper_vfp_touis(vm, vm, fpst);
650
}
651
}
652
- neon_store_reg32(vm, a->vd);
653
+ vfp_store_reg32(vm, a->vd);
654
tcg_temp_free_i32(vm);
655
tcg_temp_free_ptr(fpst);
656
return true;
657
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
658
gen_helper_vfp_touid(vd, vm, fpst);
659
}
660
}
661
- neon_store_reg32(vd, a->vd);
662
+ vfp_store_reg32(vd, a->vd);
663
tcg_temp_free_i32(vd);
664
tcg_temp_free_i64(vm);
665
tcg_temp_free_ptr(fpst);
666
@@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
667
/* Insert low half of Vm into high half of Vd */
668
rm = tcg_temp_new_i32();
669
rd = tcg_temp_new_i32();
670
- neon_load_reg32(rm, a->vm);
671
- neon_load_reg32(rd, a->vd);
672
+ vfp_load_reg32(rm, a->vm);
673
+ vfp_load_reg32(rd, a->vd);
674
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
675
- neon_store_reg32(rd, a->vd);
676
+ vfp_store_reg32(rd, a->vd);
677
tcg_temp_free_i32(rm);
678
tcg_temp_free_i32(rd);
679
return true;
680
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a)
681
682
/* Set Vd to high half of Vm */
683
rm = tcg_temp_new_i32();
684
- neon_load_reg32(rm, a->vm);
685
+ vfp_load_reg32(rm, a->vm);
686
tcg_gen_shri_i32(rm, rm, 16);
687
- neon_store_reg32(rm, a->vd);
688
+ vfp_store_reg32(rm, a->vd);
689
tcg_temp_free_i32(rm);
690
return true;
691
}
692
--
693
2.20.1

From: Andrew Jones <drjones@redhat.com>

We'll add more to this new function in coming patches, so we also
state that the GIC must be created first and call the function below
create_gic().

No functional change intended.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20201001061718.101915-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt.c | 43 +++++++++++++++++++++----------------
1 file changed, 27 insertions(+), 16 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
}
}

+/*
+ * virt_cpu_post_init() must be called after the CPUs have
+ * been realized and the GIC has been created.
+ */
+static void virt_cpu_post_init(VirtMachineState *vms)
+{
+ bool aarch64;
+
+ aarch64 = object_property_get_bool(OBJECT(first_cpu), "aarch64", NULL);
+
+ if (!kvm_enabled()) {
+ if (aarch64 && vms->highmem) {
+ int requested_pa_size = 64 - clz64(vms->highest_gpa);
+ int pamax = arm_pamax(ARM_CPU(first_cpu));
+
+ if (pamax < requested_pa_size) {
+ error_report("VCPU supports less PA bits (%d) than "
+ "requested by the memory map (%d)",
+ pamax, requested_pa_size);
+ exit(1);
+ }
+ }
+ }
+}
+
static void machvirt_init(MachineState *machine)
{
VirtMachineState *vms = VIRT_MACHINE(machine);
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
fdt_add_timer_nodes(vms);
fdt_add_cpu_nodes(vms);

- if (!kvm_enabled()) {
- ARMCPU *cpu = ARM_CPU(first_cpu);
- bool aarch64 = object_property_get_bool(OBJECT(cpu), "aarch64", NULL);
-
- if (aarch64 && vms->highmem) {
- int requested_pa_size, pamax = arm_pamax(cpu);
-
- requested_pa_size = 64 - clz64(vms->highest_gpa);
- if (pamax < requested_pa_size) {
- error_report("VCPU supports less PA bits (%d) than requested "
- "by the memory map (%d)", pamax, requested_pa_size);
- exit(1);
- }
- }
- }
-
memory_region_add_subregion(sysmem, vms->memmap[VIRT_MEM].base,
machine->ram);
if (machine->device_memory) {
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)

create_gic(vms);

+ virt_cpu_post_init(vms);
+
fdt_add_pmu_nodes(vms);

create_uart(vms, VIRT_UART, sysmem, serial_hd(0));
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 26 +++++++++
target/arm/translate-neon.c.inc | 94 ++++++++++++++++-----------------
2 files changed, 73 insertions(+), 47 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
}
}

+static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
+{
+ long off = neon_element_offset(reg, ele, memop);
+
+ switch (memop) {
+ case MO_Q:
+ tcg_gen_ld_i64(dest, cpu_env, off);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+}
+
static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
{
long off = neon_element_offset(reg, ele, memop);
@@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
}
}

+static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
+{
+ long off = neon_element_offset(reg, ele, memop);
+
+ switch (memop) {
+ case MO_64:
+ tcg_gen_st_i64(src, cpu_env, off);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+}
+
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
{
TCGv_ptr ret = tcg_temp_new_ptr();
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
for (pass = 0; pass < a->q + 1; pass++) {
TCGv_i64 tmp = tcg_temp_new_i64();

- neon_load_reg64(tmp, a->vm + pass);
+ read_neon_element64(tmp, a->vm, pass, MO_64);
fn(tmp, cpu_env, tmp, constimm);
- neon_store_reg64(tmp, a->vd + pass);
+ write_neon_element64(tmp, a->vd, pass, MO_64);
tcg_temp_free_i64(tmp);
}
tcg_temp_free_i64(constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
rd = tcg_temp_new_i32();

/* Load both inputs first to avoid potential overwrite if rm == rd */
- neon_load_reg64(rm1, a->vm);
- neon_load_reg64(rm2, a->vm + 1);
+ read_neon_element64(rm1, a->vm, 0, MO_64);
+ read_neon_element64(rm2, a->vm, 1, MO_64);

shiftfn(rm1, rm1, constimm);
narrowfn(rd, cpu_env, rm1);
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
tcg_gen_shli_i64(tmp, tmp, a->shift);
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
}
- neon_store_reg64(tmp, a->vd);
+ write_neon_element64(tmp, a->vd, 0, MO_64);

widenfn(tmp, rm1);
tcg_temp_free_i32(rm1);
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
tcg_gen_shli_i64(tmp, tmp, a->shift);
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
}
- neon_store_reg64(tmp, a->vd + 1);
+ write_neon_element64(tmp, a->vd, 1, MO_64);
tcg_temp_free_i64(tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
rm_64 = tcg_temp_new_i64();

if (src1_wide) {
- neon_load_reg64(rn0_64, a->vn);
+ read_neon_element64(rn0_64, a->vn, 0, MO_64);
} else {
TCGv_i32 tmp = tcg_temp_new_i32();
read_neon_element32(tmp, a->vn, 0, MO_32);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
* avoid incorrect results if a narrow input overlaps with the result.
*/
if (src1_wide) {
- neon_load_reg64(rn1_64, a->vn + 1);
+ read_neon_element64(rn1_64, a->vn, 1, MO_64);
} else {
TCGv_i32 tmp = tcg_temp_new_i32();
read_neon_element32(tmp, a->vn, 1, MO_32);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
rm = tcg_temp_new_i32();
read_neon_element32(rm, a->vm, 1, MO_32);

- neon_store_reg64(rn0_64, a->vd);
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);

widenfn(rm_64, rm);
tcg_temp_free_i32(rm);
opfn(rn1_64, rn1_64, rm_64);
- neon_store_reg64(rn1_64, a->vd + 1);
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);

tcg_temp_free_i64(rn0_64);
tcg_temp_free_i64(rn1_64);
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
rd0 = tcg_temp_new_i32();
rd1 = tcg_temp_new_i32();

- neon_load_reg64(rn_64, a->vn);
- neon_load_reg64(rm_64, a->vm);
+ read_neon_element64(rn_64, a->vn, 0, MO_64);
+ read_neon_element64(rm_64, a->vm, 0, MO_64);

opfn(rn_64, rn_64, rm_64);

narrowfn(rd0, rn_64);

- neon_load_reg64(rn_64, a->vn + 1);
- neon_load_reg64(rm_64, a->vm + 1);
+ read_neon_element64(rn_64, a->vn, 1, MO_64);
+ read_neon_element64(rm_64, a->vm, 1, MO_64);

opfn(rn_64, rn_64, rm_64);

@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
/* Don't store results until after all loads: they might overlap */
if (accfn) {
tmp = tcg_temp_new_i64();
- neon_load_reg64(tmp, a->vd);
+ read_neon_element64(tmp, a->vd, 0, MO_64);
accfn(tmp, tmp, rd0);
- neon_store_reg64(tmp, a->vd);
- neon_load_reg64(tmp, a->vd + 1);
+ write_neon_element64(tmp, a->vd, 0, MO_64);
+ read_neon_element64(tmp, a->vd, 1, MO_64);
accfn(tmp, tmp, rd1);
- neon_store_reg64(tmp, a->vd + 1);
+ write_neon_element64(tmp, a->vd, 1, MO_64);
tcg_temp_free_i64(tmp);
} else {
- neon_store_reg64(rd0, a->vd);
- neon_store_reg64(rd1, a->vd + 1);
+ write_neon_element64(rd0, a->vd, 0, MO_64);
+ write_neon_element64(rd1, a->vd, 1, MO_64);
}

tcg_temp_free_i64(rd0);
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,

if (accfn) {
TCGv_i64 t64 = tcg_temp_new_i64();
- neon_load_reg64(t64, a->vd);
+ read_neon_element64(t64, a->vd, 0, MO_64);
accfn(t64, t64, rn0_64);
- neon_store_reg64(t64, a->vd);
- neon_load_reg64(t64, a->vd + 1);
+ write_neon_element64(t64, a->vd, 0, MO_64);
+ read_neon_element64(t64, a->vd, 1, MO_64);
accfn(t64, t64, rn1_64);
- neon_store_reg64(t64, a->vd + 1);
+ write_neon_element64(t64, a->vd, 1, MO_64);
tcg_temp_free_i64(t64);
} else {
- neon_store_reg64(rn0_64, a->vd);
- neon_store_reg64(rn1_64, a->vd + 1);
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
}
tcg_temp_free_i64(rn0_64);
tcg_temp_free_i64(rn1_64);
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
right = tcg_temp_new_i64();
dest = tcg_temp_new_i64();

- neon_load_reg64(right, a->vn);
- neon_load_reg64(left, a->vm);
+ read_neon_element64(right, a->vn, 0, MO_64);
+ read_neon_element64(left, a->vm, 0, MO_64);
tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
- neon_store_reg64(dest, a->vd);
+ write_neon_element64(dest, a->vd, 0, MO_64);

tcg_temp_free_i64(left);
tcg_temp_free_i64(right);
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
destright = tcg_temp_new_i64();

if (a->imm < 8) {
- neon_load_reg64(right, a->vn);
- neon_load_reg64(middle, a->vn + 1);
+ read_neon_element64(right, a->vn, 0, MO_64);
+ read_neon_element64(middle, a->vn, 1, MO_64);
tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
- neon_load_reg64(left, a->vm);
+ read_neon_element64(left, a->vm, 0, MO_64);
tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
} else {
- neon_load_reg64(right, a->vn + 1);
- neon_load_reg64(middle, a->vm);
+ read_neon_element64(right, a->vn, 1, MO_64);
+ read_neon_element64(middle, a->vm, 0, MO_64);
tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
- neon_load_reg64(left, a->vm + 1);
+ read_neon_element64(left, a->vm, 1, MO_64);
tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
}

- neon_store_reg64(destright, a->vd);
- neon_store_reg64(destleft, a->vd + 1);
+ write_neon_element64(destright, a->vd, 0, MO_64);
+ write_neon_element64(destleft, a->vd, 1, MO_64);

tcg_temp_free_i64(destright);
tcg_temp_free_i64(destleft);
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,

if (accfn) {
TCGv_i64 tmp64 = tcg_temp_new_i64();
- neon_load_reg64(tmp64, a->vd + pass);
+ read_neon_element64(tmp64, a->vd, pass, MO_64);
accfn(rd_64, tmp64, rd_64);
tcg_temp_free_i64(tmp64);
}
- neon_store_reg64(rd_64, a->vd + pass);
+ write_neon_element64(rd_64, a->vd, pass, MO_64);
tcg_temp_free_i64(rd_64);
}
return true;
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
rd0 = tcg_temp_new_i32();
rd1 = tcg_temp_new_i32();

- neon_load_reg64(rm, a->vm);
+ read_neon_element64(rm, a->vm, 0, MO_64);
narrowfn(rd0, cpu_env, rm);
- neon_load_reg64(rm, a->vm + 1);
+ read_neon_element64(rm, a->vm, 1, MO_64);
narrowfn(rd1, cpu_env, rm);
write_neon_element32(rd0, a->vd, 0, MO_32);
write_neon_element32(rd1, a->vd, 1, MO_32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)

widenfn(rd, rm0);
tcg_gen_shli_i64(rd, rd, 8 << a->size);
- neon_store_reg64(rd, a->vd);
+ write_neon_element64(rd, a->vd, 0, MO_64);
widenfn(rd, rm1);
tcg_gen_shli_i64(rd, rd, 8 << a->size);
- neon_store_reg64(rd, a->vd + 1);
+ write_neon_element64(rd, a->vd, 1, MO_64);

tcg_temp_free_i64(rd);
tcg_temp_free_i32(rm0);
@@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a)
rm = tcg_temp_new_i64();
rd = tcg_temp_new_i64();
for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
- neon_load_reg64(rm, a->vm + pass);
- neon_load_reg64(rd, a->vd + pass);
- neon_store_reg64(rm, a->vd + pass);
- neon_store_reg64(rd, a->vm + pass);
+ read_neon_element64(rm, a->vm, pass, MO_64);
+ read_neon_element64(rd, a->vd, pass, MO_64);
+ write_neon_element64(rm, a->vd, pass, MO_64);
+ write_neon_element64(rd, a->vm, pass, MO_64);
}
tcg_temp_free_i64(rm);
tcg_temp_free_i64(rd);
--
2.20.1
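As an aside on the API in the patch above: the (reg, element, memop)
addressing that read_neon_element64()/write_neon_element64() introduce
can be modelled in plain C. The sketch below is illustrative only -- it
mirrors the indexing convention, not QEMU's actual register file or TCG
plumbing, and everything outside the two helper names is an assumption
made for the example:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy model of the register file: 32 D registers of 8 bytes each. */
    static uint8_t vfp_regs[32 * 8];

    /* Byte offset of 64-bit element 'ele' of D register 'reg'.
     * For 64-bit elements no host-endian correction is needed. */
    static size_t neon_element_offset_64(int reg, int ele)
    {
        /* A Q register is a pair of D registers, so reg + ele names
         * the D register that holds the requested 64-bit element. */
        return (size_t)(reg + ele) * 8;
    }

    static uint64_t read_neon_element64(int reg, int ele)
    {
        uint64_t v;
        memcpy(&v, vfp_regs + neon_element_offset_64(reg, ele), 8);
        return v;
    }

    static void write_neon_element64(int reg, int ele, uint64_t v)
    {
        memcpy(vfp_regs + neon_element_offset_64(reg, ele), &v, 8);
    }

    int main(void)
    {
        /* Old style addressed "a->vd + pass"; new style is (a->vd, pass). */
        write_neon_element64(4, 1, 0x1122334455667788ull);
        assert(read_neon_element64(5, 0) == read_neon_element64(4, 1));
        printf("element (4,1) aliases D5: %016llx\n",
               (unsigned long long)read_neon_element64(4, 1));
        return 0;
    }

The point of the interface change is that callers say which element of
which register they want instead of doing the "a->vd + pass" arithmetic
themselves, which gives the helpers one place to handle element sizes
and any host-endian corrections.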

From: Richard Henderson <richard.henderson@linaro.org>

The only uses of this function are for loading VFP
double-precision values, and nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 8 ++--
target/arm/translate-vfp.c.inc | 84 +++++++++++++++++-----------------
2 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
}
}

-static inline void neon_load_reg64(TCGv_i64 var, int reg)
+static inline void vfp_load_reg64(TCGv_i64 var, int reg)
{
- tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
+ tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(true, reg));
}

-static inline void neon_store_reg64(TCGv_i64 var, int reg)
+static inline void vfp_store_reg64(TCGv_i64 var, int reg)
{
- tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
+ tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(true, reg));
}

static inline void vfp_load_reg32(TCGv_i32 var, int reg)
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
tcg_gen_ext_i32_i64(nf, cpu_NF);
tcg_gen_ext_i32_i64(vf, cpu_VF);

- neon_load_reg64(frn, rn);
- neon_load_reg64(frm, rm);
+ vfp_load_reg64(frn, rn);
+ vfp_load_reg64(frm, rm);
switch (a->cc) {
case 0: /* eq: Z */
tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero,
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
tcg_temp_free_i64(tmp);
break;
}
- neon_store_reg64(dest, rd);
+ vfp_store_reg64(dest, rd);
tcg_temp_free_i64(frn);
tcg_temp_free_i64(frm);
tcg_temp_free_i64(dest);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
TCGv_i64 tcg_res;
tcg_op = tcg_temp_new_i64();
tcg_res = tcg_temp_new_i64();
- neon_load_reg64(tcg_op, rm);
+ vfp_load_reg64(tcg_op, rm);
gen_helper_rintd(tcg_res, tcg_op, fpst);
- neon_store_reg64(tcg_res, rd);
+ vfp_store_reg64(tcg_res, rd);
tcg_temp_free_i64(tcg_op);
tcg_temp_free_i64(tcg_res);
} else {
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
tcg_double = tcg_temp_new_i64();
tcg_res = tcg_temp_new_i64();
tcg_tmp = tcg_temp_new_i32();
- neon_load_reg64(tcg_double, rm);
+ vfp_load_reg64(tcg_double, rm);
if (is_signed) {
gen_helper_vfp_tosld(tcg_res, tcg_double, tcg_shift, fpst);
} else {
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
tmp = tcg_temp_new_i64();
if (a->l) {
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
- neon_store_reg64(tmp, a->vd);
+ vfp_store_reg64(tmp, a->vd);
} else {
- neon_load_reg64(tmp, a->vd);
+ vfp_load_reg64(tmp, a->vd);
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
}
tcg_temp_free_i64(tmp);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
if (a->l) {
/* load */
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
- neon_store_reg64(tmp, a->vd + i);
+ vfp_store_reg64(tmp, a->vd + i);
} else {
/* store */
- neon_load_reg64(tmp, a->vd + i);
+ vfp_load_reg64(tmp, a->vd + i);
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
}
tcg_gen_addi_i32(addr, addr, offset);
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
fd = tcg_temp_new_i64();
fpst = fpstatus_ptr(FPST_FPCR);

- neon_load_reg64(f0, vn);
- neon_load_reg64(f1, vm);
+ vfp_load_reg64(f0, vn);
+ vfp_load_reg64(f1, vm);

for (;;) {
if (reads_vd) {
- neon_load_reg64(fd, vd);
+ vfp_load_reg64(fd, vd);
}
fn(fd, f0, f1, fpst);
- neon_store_reg64(fd, vd);
+ vfp_store_reg64(fd, vd);

if (veclen == 0) {
break;
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
veclen--;
vd = vfp_advance_dreg(vd, delta_d);
vn = vfp_advance_dreg(vn, delta_d);
- neon_load_reg64(f0, vn);
+ vfp_load_reg64(f0, vn);
if (delta_m) {
vm = vfp_advance_dreg(vm, delta_m);
- neon_load_reg64(f1, vm);
+ vfp_load_reg64(f1, vm);
}
}

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
f0 = tcg_temp_new_i64();
fd = tcg_temp_new_i64();

- neon_load_reg64(f0, vm);
+ vfp_load_reg64(f0, vm);

for (;;) {
fn(fd, f0);
- neon_store_reg64(fd, vd);
+ vfp_store_reg64(fd, vd);

if (veclen == 0) {
break;
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
/* single source one-many */
while (veclen--) {
vd = vfp_advance_dreg(vd, delta_d);
- neon_store_reg64(fd, vd);
+ vfp_store_reg64(fd, vd);
}
break;
}
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
veclen--;
vd = vfp_advance_dreg(vd, delta_d);
vd = vfp_advance_dreg(vm, delta_m);
- neon_load_reg64(f0, vm);
+ vfp_load_reg64(f0, vm);
}

tcg_temp_free_i64(f0);
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
vm = tcg_temp_new_i64();
vd = tcg_temp_new_i64();

- neon_load_reg64(vn, a->vn);
- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vn, a->vn);
+ vfp_load_reg64(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
gen_helper_vfp_negd(vn, vn);
}
- neon_load_reg64(vd, a->vd);
+ vfp_load_reg64(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
gen_helper_vfp_negd(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
- neon_store_reg64(vd, a->vd);
+ vfp_store_reg64(vd, a->vd);

tcg_temp_free_ptr(fpst);
tcg_temp_free_i64(vn);
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));

for (;;) {
- neon_store_reg64(fd, vd);
+ vfp_store_reg64(fd, vd);

if (veclen == 0) {
break;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
vd = tcg_temp_new_i64();
vm = tcg_temp_new_i64();

- neon_load_reg64(vd, a->vd);
+ vfp_load_reg64(vd, a->vd);
if (a->z) {
tcg_gen_movi_i64(vm, 0);
} else {
- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vm, a->vm);
}

if (a->e) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
vd = tcg_temp_new_i64();
gen_helper_vfp_fcvt_f16_to_f64(vd, tmp, fpst, ahp_mode);
- neon_store_reg64(vd, a->vd);
+ vfp_store_reg64(vd, a->vd);
tcg_temp_free_i32(ahp_mode);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i32(tmp);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
tmp = tcg_temp_new_i32();
vm = tcg_temp_new_i64();

- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vm, a->vm);
gen_helper_vfp_fcvt_f64_to_f16(tmp, vm, fpst, ahp_mode);
tcg_temp_free_i64(vm);
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
}

tmp = tcg_temp_new_i64();
- neon_load_reg64(tmp, a->vm);
+ vfp_load_reg64(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rintd(tmp, tmp, fpst);
- neon_store_reg64(tmp, a->vd);
+ vfp_store_reg64(tmp, a->vd);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i64(tmp);
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
}

tmp = tcg_temp_new_i64();
- neon_load_reg64(tmp, a->vm);
+ vfp_load_reg64(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR);
tcg_rmode = tcg_const_i32(float_round_to_zero);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
gen_helper_rintd(tmp, tmp, fpst);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
- neon_store_reg64(tmp, a->vd);
+ vfp_store_reg64(tmp, a->vd);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i64(tmp);
tcg_temp_free_i32(tcg_rmode);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
}

tmp = tcg_temp_new_i64();
- neon_load_reg64(tmp, a->vm);
+ vfp_load_reg64(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_rintd_exact(tmp, tmp, fpst);
- neon_store_reg64(tmp, a->vd);
+ vfp_store_reg64(tmp, a->vd);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i64(tmp);
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
vd = tcg_temp_new_i64();
vfp_load_reg32(vm, a->vm);
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
- neon_store_reg64(vd, a->vd);
+ vfp_store_reg64(vd, a->vd);
tcg_temp_free_i32(vm);
tcg_temp_free_i64(vd);
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)

vd = tcg_temp_new_i32();
vm = tcg_temp_new_i64();
- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vm, a->vm);
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
vfp_store_reg32(vd, a->vd);
tcg_temp_free_i32(vd);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
/* u32 -> f64 */
gen_helper_vfp_uitod(vd, vm, fpst);
}
- neon_store_reg64(vd, a->vd);
+ vfp_store_reg64(vd, a->vd);
tcg_temp_free_i32(vm);
tcg_temp_free_i64(vd);
tcg_temp_free_ptr(fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)

vm = tcg_temp_new_i64();
vd = tcg_temp_new_i32();
- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vm, a->vm);
gen_helper_vjcvt(vd, vm, cpu_env);
vfp_store_reg32(vd, a->vd);
tcg_temp_free_i64(vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);

vd = tcg_temp_new_i64();
- neon_load_reg64(vd, a->vd);
+ vfp_load_reg64(vd, a->vd);

fpst = fpstatus_ptr(FPST_FPCR);
shift = tcg_const_i32(frac_bits);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
g_assert_not_reached();
}

- neon_store_reg64(vd, a->vd);
+ vfp_store_reg64(vd, a->vd);
tcg_temp_free_i64(vd);
tcg_temp_free_i32(shift);
tcg_temp_free_ptr(fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
fpst = fpstatus_ptr(FPST_FPCR);
vm = tcg_temp_new_i64();
vd = tcg_temp_new_i32();
- neon_load_reg64(vm, a->vm);
+ vfp_load_reg64(vm, a->vm);

if (a->s) {
if (a->rz) {
--
2.20.1
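For readers not steeped in the VFP register file: the dp flag that
vfp_reg_offset() takes (and that the renamed vfp_load_reg64()/
vfp_store_reg64() now pass as "true") selects between the D and S
views of the same storage. A rough standalone model of the offset
math follows; it assumes a little-endian host and an illustrative
flat layout, so treat it as a sketch rather than QEMU's actual code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Byte offset of VFP register 'reg' within a flat register file. */
    static size_t vfp_reg_offset(bool dp, unsigned reg)
    {
        if (dp) {
            return (size_t)reg * 8;        /* D registers: 8 bytes apart */
        }
        /* Two S registers live in each D register; on a little-endian
         * host the even-numbered S register is the low half. */
        return (size_t)(reg >> 1) * 8 + (reg & 1) * 4;
    }

    int main(void)
    {
        printf("D1 -> offset %zu\n", vfp_reg_offset(true, 1));   /* 8 */
        printf("S2 -> offset %zu\n", vfp_reg_offset(false, 2));  /* 8 */
        printf("S3 -> offset %zu\n", vfp_reg_offset(false, 3));  /* 12 */
        return 0;
    }

This is also why the rename is safe: the 64-bit loads and stores only
ever touch whole D registers, so they were never NEON-specific.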

From: Andrew Jones <drjones@redhat.com>

Move the KVM PMU setup part of fdt_add_pmu_nodes() to
virt_cpu_post_init(), which is a more appropriate location. Now
fdt_add_pmu_nodes() is also named more appropriately, because it
no longer does anything but fdt node creation.

No functional change intended.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20201001061718.101915-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt.c | 34 ++++++++++++++++----------------
1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)

static void fdt_add_pmu_nodes(const VirtMachineState *vms)
{
- CPUState *cpu;
- ARMCPU *armcpu;
+ ARMCPU *armcpu = ARM_CPU(first_cpu);
uint32_t irqflags = GIC_FDT_IRQ_FLAGS_LEVEL_HI;

- CPU_FOREACH(cpu) {
- armcpu = ARM_CPU(cpu);
- if (!arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
- return;
- }
- if (kvm_enabled()) {
- if (kvm_irqchip_in_kernel()) {
- kvm_arm_pmu_set_irq(cpu, PPI(VIRTUAL_PMU_IRQ));
- }
- kvm_arm_pmu_init(cpu);
- }
+ if (!arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
+ assert(!object_property_get_bool(OBJECT(armcpu), "pmu", NULL));
+ return;
}

if (vms->gic_version == VIRT_GIC_VERSION_2) {
@@ -XXX,XX +XXX,XX @@ static void fdt_add_pmu_nodes(const VirtMachineState *vms)
(1 << vms->smp_cpus) - 1);
}

- armcpu = ARM_CPU(qemu_get_cpu(0));
qemu_fdt_add_subnode(vms->fdt, "/pmu");
if (arm_feature(&armcpu->env, ARM_FEATURE_V8)) {
const char compat[] = "arm,armv8-pmuv3";
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
*/
static void virt_cpu_post_init(VirtMachineState *vms)
{
- bool aarch64;
+ bool aarch64, pmu;
+ CPUState *cpu;

aarch64 = object_property_get_bool(OBJECT(first_cpu), "aarch64", NULL);
+ pmu = object_property_get_bool(OBJECT(first_cpu), "pmu", NULL);

- if (!kvm_enabled()) {
+ if (kvm_enabled()) {
+ CPU_FOREACH(cpu) {
+ if (pmu) {
+ assert(arm_feature(&ARM_CPU(cpu)->env, ARM_FEATURE_PMU));
+ if (kvm_irqchip_in_kernel()) {
+ kvm_arm_pmu_set_irq(cpu, PPI(VIRTUAL_PMU_IRQ));
+ }
+ kvm_arm_pmu_init(cpu);
+ }
+ }
+ } else {
if (aarch64 && vms->highmem) {
int requested_pa_size = 64 - clz64(vms->highest_gpa);
int pamax = arm_pamax(ARM_CPU(first_cpu));
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

In both cases, we can sink the write-back and perform
the accumulate into the normal destination temps.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-neon.c.inc | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
if (accfn) {
tmp = tcg_temp_new_i64();
read_neon_element64(tmp, a->vd, 0, MO_64);
- accfn(tmp, tmp, rd0);
- write_neon_element64(tmp, a->vd, 0, MO_64);
+ accfn(rd0, tmp, rd0);
read_neon_element64(tmp, a->vd, 1, MO_64);
- accfn(tmp, tmp, rd1);
- write_neon_element64(tmp, a->vd, 1, MO_64);
+ accfn(rd1, tmp, rd1);
tcg_temp_free_i64(tmp);
- } else {
- write_neon_element64(rd0, a->vd, 0, MO_64);
- write_neon_element64(rd1, a->vd, 1, MO_64);
}

+ write_neon_element64(rd0, a->vd, 0, MO_64);
+ write_neon_element64(rd1, a->vd, 1, MO_64);
tcg_temp_free_i64(rd0);
tcg_temp_free_i64(rd1);

@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
if (accfn) {
TCGv_i64 t64 = tcg_temp_new_i64();
read_neon_element64(t64, a->vd, 0, MO_64);
- accfn(t64, t64, rn0_64);
- write_neon_element64(t64, a->vd, 0, MO_64);
+ accfn(rn0_64, t64, rn0_64);
read_neon_element64(t64, a->vd, 1, MO_64);
- accfn(t64, t64, rn1_64);
- write_neon_element64(t64, a->vd, 1, MO_64);
+ accfn(rn1_64, t64, rn1_64);
tcg_temp_free_i64(t64);
- } else {
- write_neon_element64(rn0_64, a->vd, 0, MO_64);
- write_neon_element64(rn1_64, a->vd, 1, MO_64);
}

+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
tcg_temp_free_i64(rn0_64);
tcg_temp_free_i64(rn1_64);
return true;
--
2.20.1
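The transformation in the patch above is easiest to see outside TCG.
In the hedged sketch below, "before" mirrors the old shape (each branch
writes the destination itself) and "after" mirrors the new one (the
accumulate lands in the temp that is stored unconditionally); names and
types are illustrative, not taken from QEMU:

    #include <stdint.h>
    #include <stdio.h>

    typedef void (*accfn_t)(int64_t *d, int64_t a, int64_t b);

    static void acc_add(int64_t *d, int64_t a, int64_t b) { *d = a + b; }

    /* Before: each branch does its own write-back of the destination. */
    static void before(int64_t *vd, int64_t rd0, accfn_t accfn)
    {
        if (accfn) {
            int64_t tmp = *vd;        /* read_neon_element64() */
            accfn(&tmp, tmp, rd0);
            *vd = tmp;                /* write_neon_element64() */
        } else {
            *vd = rd0;                /* write_neon_element64() */
        }
    }

    /* After: accumulate into rd0 itself, then one unconditional store. */
    static void after(int64_t *vd, int64_t rd0, accfn_t accfn)
    {
        if (accfn) {
            accfn(&rd0, *vd, rd0);    /* accfn(rd0, tmp, rd0) in the patch */
        }
        *vd = rd0;                    /* single write_neon_element64() */
    }

    int main(void)
    {
        int64_t a = 5, b = 5;
        before(&a, 7, acc_add);
        after(&b, 7, acc_add);
        printf("before=%lld after=%lld\n", (long long)a, (long long)b);
        return 0;
    }

Sinking the store this way removes the duplicated write-back calls and
leaves exactly one place where the destination register is updated.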

From: Richard Henderson <richard.henderson@linaro.org>

We can use proper widening loads to extend 32-bit inputs,
and skip the "widenfn" step.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 6 +++
target/arm/translate-neon.c.inc | 66 ++++++++++++++++++---------------
2 files changed, 43 insertions(+), 29 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
long off = neon_element_offset(reg, ele, memop);

switch (memop) {
+ case MO_SL:
+ tcg_gen_ld32s_i64(dest, cpu_env, off);
+ break;
+ case MO_UL:
+ tcg_gen_ld32u_i64(dest, cpu_env, off);
+ break;
case MO_Q:
tcg_gen_ld_i64(dest, cpu_env, off);
break;
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
NeonGenWidenFn *widenfn,
NeonGenTwo64OpFn *opfn,
- bool src1_wide)
+ int src1_mop, int src2_mop)
{
/* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
TCGv_i64 rn0_64, rn1_64, rm_64;
- TCGv_i32 rm;

if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
return false;
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,

From: Andrew Jones <drjones@redhat.com>

We add the kvm-steal-time CPU property and implement it for machvirt.
A tiny bit of refactoring was also done to allow pmu and pvtime to
use the same vcpu device helper functions.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20201001061718.101915-7-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/cpu-features.rst | 11 ++++++
include/hw/arm/virt.h | 5 +++
target/arm/cpu.h | 4 ++
target/arm/kvm_arm.h | 43 +++++++++++++++++++++
hw/arm/virt.c | 43 +++++++++++++++++++--
target/arm/cpu.c | 8 ++++
target/arm/kvm.c | 16 ++++++++
target/arm/kvm64.c | 64 +++++++++++++++++++++++++++++---
target/arm/monitor.c | 2 +-
tests/qtest/arm-cpu-features.c | 25 +++++++++++--
10 files changed, 208 insertions(+), 13 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ the list of KVM VCPU features and their descriptions.
adjustment, also restoring the legacy (pre-5.0)
behavior.

+ kvm-steal-time Since v5.2, kvm-steal-time is enabled by
+ default when KVM is enabled, the feature is
+ supported, and the guest is 64-bit.
+
+ When kvm-steal-time is enabled a 64-bit guest
+ can account for time its CPUs were not running
+ due to the host not scheduling the corresponding
+ VCPU threads. The accounting statistics may
+ influence the guest scheduler behavior and/or be
+ exposed to the guest userspace.
+
SVE CPU Properties
==================

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@

#define PPI(irq) ((irq) + 16)

+/* See Linux kernel arch/arm64/include/asm/pvclock-abi.h */
+#define PVTIME_SIZE_PER_CPU 64
+
enum {
VIRT_FLASH,
VIRT_MEM,
@@ -XXX,XX +XXX,XX @@ enum {
VIRT_PCDIMM_ACPI,
VIRT_ACPI_GED,
VIRT_NVDIMM_ACPI,
+ VIRT_PVTIME,
VIRT_LOWMEMMAP_LAST,
};

@@ -XXX,XX +XXX,XX @@ struct VirtMachineClass {
bool no_highmem_ecam;
bool no_ged; /* Machines < 4.2 has no support for ACPI GED device */
bool kvm_no_adjvtime;
+ bool no_kvm_steal_time;
bool acpi_expose_flash;
};

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
#include "hw/registerfields.h"
#include "cpu-qom.h"
#include "exec/cpu-defs.h"
+#include "qapi/qapi-types-common.h"

/* ARM processors have a weak memory model */
#define TCG_GUEST_DEFAULT_MO (0)
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
bool kvm_vtime_dirty;
uint64_t kvm_vtime;

+ /* KVM steal time */
+ OnOffAuto kvm_steal_time;
+
/* Uniprocessor system with MP extensions */
bool mp_is_up;

diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
*/
void kvm_arm_add_vcpu_properties(Object *obj);

+/**
+ * kvm_arm_steal_time_finalize:
+ * @cpu: ARMCPU for which to finalize kvm-steal-time
+ * @errp: Pointer to Error* for error propagation
+ *
+ * Validate the kvm-steal-time property selection and set its default
+ * based on KVM support and guest configuration.
+ */
+void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp);
+
+/**
+ * kvm_arm_steal_time_supported:
+ *
+ * Returns: true if KVM can enable steal time reporting
+ * and false otherwise.
+ */
+bool kvm_arm_steal_time_supported(void);
+
/**
* kvm_arm_aarch32_supported:
*
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vgic_probe(void);

void kvm_arm_pmu_set_irq(CPUState *cs, int irq);
void kvm_arm_pmu_init(CPUState *cs);
+
+/**
+ * kvm_arm_pvtime_init:
+ * @cs: CPUState
+ * @ipa: Per-vcpu guest physical base address of the pvtime structures
+ *
+ * Initializes PVTIME for the VCPU, setting the PVTIME IPA to @ipa.
+ */
+void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa);
+
int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level);

#else
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_sve_supported(void)
return false;
}

+static inline bool kvm_arm_steal_time_supported(void)
+{
+ return false;
+}
+
/*
* These functions should never actually be called without KVM support.
*/
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_pmu_init(CPUState *cs)
g_assert_not_reached();
}

+static inline void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
+{
+ g_assert_not_reached();
+}
+
+static inline void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
+{
+ g_assert_not_reached();
+}
+
static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
{
g_assert_not_reached();
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry base_memmap[] = {
[VIRT_PCDIMM_ACPI] = { 0x09070000, MEMORY_HOTPLUG_IO_LEN },
[VIRT_ACPI_GED] = { 0x09080000, ACPI_GED_EVT_SEL_LEN },
[VIRT_NVDIMM_ACPI] = { 0x09090000, NVDIMM_ACPI_IO_LEN},
+ [VIRT_PVTIME] = { 0x090a0000, 0x00010000 },
[VIRT_MMIO] = { 0x0a000000, 0x00000200 },
/* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
[VIRT_PLATFORM_BUS] = { 0x0c000000, 0x02000000 },
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
* virt_cpu_post_init() must be called after the CPUs have
* been realized and the GIC has been created.
*/
-static void virt_cpu_post_init(VirtMachineState *vms)
+static void virt_cpu_post_init(VirtMachineState *vms, int max_cpus,
+ MemoryRegion *sysmem)
{
- bool aarch64, pmu;
+ bool aarch64, pmu, steal_time;
CPUState *cpu;

aarch64 = object_property_get_bool(OBJECT(first_cpu), "aarch64", NULL);
pmu = object_property_get_bool(OBJECT(first_cpu), "pmu", NULL);
+ steal_time = object_property_get_bool(OBJECT(first_cpu),
+ "kvm-steal-time", NULL);

if (kvm_enabled()) {
+ hwaddr pvtime_reg_base = vms->memmap[VIRT_PVTIME].base;
+ hwaddr pvtime_reg_size = vms->memmap[VIRT_PVTIME].size;
+
+ if (steal_time) {
+ MemoryRegion *pvtime = g_new(MemoryRegion, 1);
+ hwaddr pvtime_size = max_cpus * PVTIME_SIZE_PER_CPU;
+
+ /* The memory region size must be a multiple of host page size. */
+ pvtime_size = REAL_HOST_PAGE_ALIGN(pvtime_size);
+
+ if (pvtime_size > pvtime_reg_size) {
+ error_report("pvtime requires a %ld byte memory region for "
+ "%d CPUs, but only %ld has been reserved",
+ pvtime_size, max_cpus, pvtime_reg_size);
+ exit(1);
+ }
+
+ memory_region_init_ram(pvtime, NULL, "pvtime", pvtime_size, NULL);
+ memory_region_add_subregion(sysmem, pvtime_reg_base, pvtime);
+ }
+
CPU_FOREACH(cpu) {
if (pmu) {
assert(arm_feature(&ARM_CPU(cpu)->env, ARM_FEATURE_PMU));
@@ -XXX,XX +XXX,XX @@ static void virt_cpu_post_init(VirtMachineState *vms)
}
kvm_arm_pmu_init(cpu);
}
+ if (steal_time) {
+ kvm_arm_pvtime_init(cpu, pvtime_reg_base +
+ cpu->cpu_index * PVTIME_SIZE_PER_CPU);
+ }
}
} else {
if (aarch64 && vms->highmem) {
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
object_property_set_bool(cpuobj, "kvm-no-adjvtime", true, NULL);
}

+ if (vmc->no_kvm_steal_time &&
+ object_property_find(cpuobj, "kvm-steal-time")) {
+ object_property_set_bool(cpuobj, "kvm-steal-time", false, NULL);
+ }
+
if (vmc->no_pmu && object_property_find(cpuobj, "pmu")) {
object_property_set_bool(cpuobj, "pmu", false, NULL);
}
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)

create_gic(vms);

- virt_cpu_post_init(vms);
+ virt_cpu_post_init(vms, possible_cpus->len, sysmem);

fdt_add_pmu_nodes(vms);

@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(5, 2)

static void virt_machine_5_1_options(MachineClass *mc)
{
+ VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
+
virt_machine_5_2_options(mc);
compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
+ vmc->no_kvm_steal_time = true;
}
DEFINE_VIRT_MACHINE(5, 1)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
return;
}
}
+
+ if (kvm_enabled()) {
+ kvm_arm_steal_time_finalize(cpu, &local_err);
+ if (local_err != NULL) {
+ error_propagate(errp, local_err);
+ return;
+ }
+ }
}

static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_no_adjvtime_set(Object *obj, bool value, Error **errp)
ARM_CPU(obj)->kvm_adjvtime = !value;
}

+static bool kvm_steal_time_get(Object *obj, Error **errp)
+{
+ return ARM_CPU(obj)->kvm_steal_time != ON_OFF_AUTO_OFF;
+}
+
+static void kvm_steal_time_set(Object *obj, bool value, Error **errp)
+{
+ ARM_CPU(obj)->kvm_steal_time = value ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
+}
+
/* KVM VCPU properties should be prefixed with "kvm-". */
void kvm_arm_add_vcpu_properties(Object *obj)
{
@@ -XXX,XX +XXX,XX @@ void kvm_arm_add_vcpu_properties(Object *obj)
"the virtual counter. VM stopped time "
"will be counted.");
}
+
+ cpu->kvm_steal_time = ON_OFF_AUTO_AUTO;
+ object_property_add_bool(obj, "kvm-steal-time", kvm_steal_time_get,
+ kvm_steal_time_set);
+ object_property_set_description(obj, "kvm-steal-time",
+ "Set off to disable KVM steal time.");
}

bool kvm_arm_pmu_supported(void)
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@
#include <linux/kvm.h>

#include "qemu-common.h"
+#include "qapi/error.h"
#include "cpu.h"
#include "qemu/timer.h"
#include "qemu/error-report.h"
@@ -XXX,XX +XXX,XX @@ static CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
return NULL;
}

-static bool kvm_arm_pmu_set_attr(CPUState *cs, struct kvm_device_attr *attr)
+static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
+ const char *name)
{
int err;

err = kvm_vcpu_ioctl(cs, KVM_HAS_DEVICE_ATTR, attr);
+
210
+ /* The memory region size must be a multiple of host page size. */
211
+ pvtime_size = REAL_HOST_PAGE_ALIGN(pvtime_size);
212
+
213
+ if (pvtime_size > pvtime_reg_size) {
214
+ error_report("pvtime requires a %ld byte memory region for "
215
+ "%d CPUs, but only %ld has been reserved",
216
+ pvtime_size, max_cpus, pvtime_reg_size);
217
+ exit(1);
218
+ }
219
+
220
+ memory_region_init_ram(pvtime, NULL, "pvtime", pvtime_size, NULL);
221
+ memory_region_add_subregion(sysmem, pvtime_reg_base, pvtime);
222
+ }
223
+
224
CPU_FOREACH(cpu) {
225
if (pmu) {
226
assert(arm_feature(&ARM_CPU(cpu)->env, ARM_FEATURE_PMU));
227
@@ -XXX,XX +XXX,XX @@ static void virt_cpu_post_init(VirtMachineState *vms)
228
}
229
kvm_arm_pmu_init(cpu);
230
}
231
+ if (steal_time) {
232
+ kvm_arm_pvtime_init(cpu, pvtime_reg_base +
233
+ cpu->cpu_index * PVTIME_SIZE_PER_CPU);
234
+ }
235
}
236
} else {
237
if (aarch64 && vms->highmem) {
238
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
239
object_property_set_bool(cpuobj, "kvm-no-adjvtime", true, NULL);
240
}
241
242
+ if (vmc->no_kvm_steal_time &&
243
+ object_property_find(cpuobj, "kvm-steal-time")) {
244
+ object_property_set_bool(cpuobj, "kvm-steal-time", false, NULL);
245
+ }
246
+
247
if (vmc->no_pmu && object_property_find(cpuobj, "pmu")) {
248
object_property_set_bool(cpuobj, "pmu", false, NULL);
249
}
250
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
251
252
create_gic(vms);
253
254
- virt_cpu_post_init(vms);
255
+ virt_cpu_post_init(vms, possible_cpus->len, sysmem);
256
257
fdt_add_pmu_nodes(vms);
258
259
@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(5, 2)
260
261
static void virt_machine_5_1_options(MachineClass *mc)
262
{
263
+ VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
264
+
265
virt_machine_5_2_options(mc);
266
compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
267
+ vmc->no_kvm_steal_time = true;
268
}
269
DEFINE_VIRT_MACHINE(5, 1)
270
271
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
272
index XXXXXXX..XXXXXXX 100644
273
--- a/target/arm/cpu.c
274
+++ b/target/arm/cpu.c
275
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
276
return;
277
}
278
}
279
+
280
+ if (kvm_enabled()) {
281
+ kvm_arm_steal_time_finalize(cpu, &local_err);
282
+ if (local_err != NULL) {
283
+ error_propagate(errp, local_err);
284
+ return;
285
+ }
286
+ }
287
}
288
289
static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
290
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/kvm.c
293
+++ b/target/arm/kvm.c
294
@@ -XXX,XX +XXX,XX @@ static void kvm_no_adjvtime_set(Object *obj, bool value, Error **errp)
295
ARM_CPU(obj)->kvm_adjvtime = !value;
296
}
297
298
+static bool kvm_steal_time_get(Object *obj, Error **errp)
299
+{
300
+ return ARM_CPU(obj)->kvm_steal_time != ON_OFF_AUTO_OFF;
301
+}
302
+
303
+static void kvm_steal_time_set(Object *obj, bool value, Error **errp)
304
+{
305
+ ARM_CPU(obj)->kvm_steal_time = value ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
306
+}
307
+
308
/* KVM VCPU properties should be prefixed with "kvm-". */
309
void kvm_arm_add_vcpu_properties(Object *obj)
310
{
311
@@ -XXX,XX +XXX,XX @@ void kvm_arm_add_vcpu_properties(Object *obj)
312
"the virtual counter. VM stopped time "
313
"will be counted.");
314
}
315
+
316
+ cpu->kvm_steal_time = ON_OFF_AUTO_AUTO;
317
+ object_property_add_bool(obj, "kvm-steal-time", kvm_steal_time_get,
318
+ kvm_steal_time_set);
319
+ object_property_set_description(obj, "kvm-steal-time",
320
+ "Set off to disable KVM steal time.");
321
}
322
323
bool kvm_arm_pmu_supported(void)
324
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
325
index XXXXXXX..XXXXXXX 100644
326
--- a/target/arm/kvm64.c
327
+++ b/target/arm/kvm64.c
328
@@ -XXX,XX +XXX,XX @@
329
#include <linux/kvm.h>
330
331
#include "qemu-common.h"
332
+#include "qapi/error.h"
333
#include "cpu.h"
334
#include "qemu/timer.h"
335
#include "qemu/error-report.h"
336
@@ -XXX,XX +XXX,XX @@ static CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
337
return NULL;
338
}
339
340
-static bool kvm_arm_pmu_set_attr(CPUState *cs, struct kvm_device_attr *attr)
341
+static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
342
+ const char *name)
343
{
344
int err;
345
346
err = kvm_vcpu_ioctl(cs, KVM_HAS_DEVICE_ATTR, attr);
347
if (err != 0) {
348
- error_report("PMU: KVM_HAS_DEVICE_ATTR: %s", strerror(-err));
349
+ error_report("%s: KVM_HAS_DEVICE_ATTR: %s", name, strerror(-err));
350
return false;
50
return false;
351
}
51
}
352
52
353
err = kvm_vcpu_ioctl(cs, KVM_SET_DEVICE_ATTR, attr);
53
- if (!widenfn || !opfn) {
354
if (err != 0) {
54
+ if (!opfn) {
355
- error_report("PMU: KVM_SET_DEVICE_ATTR: %s", strerror(-err));
55
/* size == 3 case, which is an entirely different insn group */
356
+ error_report("%s: KVM_SET_DEVICE_ATTR: %s", name, strerror(-err));
357
return false;
56
return false;
358
}
57
}
359
58
360
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_init(CPUState *cs)
59
- if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
361
if (!ARM_CPU(cs)->has_pmu) {
60
+ if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
362
return;
61
return false;
363
}
62
}
364
- if (!kvm_arm_pmu_set_attr(cs, &attr)) {
63
365
+ if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
64
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
366
error_report("failed to init PMU");
65
rn1_64 = tcg_temp_new_i64();
367
abort();
66
rm_64 = tcg_temp_new_i64();
67
68
- if (src1_wide) {
69
- read_neon_element64(rn0_64, a->vn, 0, MO_64);
70
+ if (src1_mop >= 0) {
71
+ read_neon_element64(rn0_64, a->vn, 0, src1_mop);
72
} else {
73
TCGv_i32 tmp = tcg_temp_new_i32();
74
read_neon_element32(tmp, a->vn, 0, MO_32);
75
widenfn(rn0_64, tmp);
76
tcg_temp_free_i32(tmp);
368
}
77
}
369
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
78
- rm = tcg_temp_new_i32();
370
if (!ARM_CPU(cs)->has_pmu) {
79
- read_neon_element32(rm, a->vm, 0, MO_32);
371
return;
80
+ if (src2_mop >= 0) {
81
+ read_neon_element64(rm_64, a->vm, 0, src2_mop);
82
+ } else {
83
+ TCGv_i32 tmp = tcg_temp_new_i32();
84
+ read_neon_element32(tmp, a->vm, 0, MO_32);
85
+ widenfn(rm_64, tmp);
86
+ tcg_temp_free_i32(tmp);
87
+ }
88
89
- widenfn(rm_64, rm);
90
- tcg_temp_free_i32(rm);
91
opfn(rn0_64, rn0_64, rm_64);
92
93
/*
94
* Load second pass inputs before storing the first pass result, to
95
* avoid incorrect results if a narrow input overlaps with the result.
96
*/
97
- if (src1_wide) {
98
- read_neon_element64(rn1_64, a->vn, 1, MO_64);
99
+ if (src1_mop >= 0) {
100
+ read_neon_element64(rn1_64, a->vn, 1, src1_mop);
101
} else {
102
TCGv_i32 tmp = tcg_temp_new_i32();
103
read_neon_element32(tmp, a->vn, 1, MO_32);
104
widenfn(rn1_64, tmp);
105
tcg_temp_free_i32(tmp);
372
}
106
}
373
- if (!kvm_arm_pmu_set_attr(cs, &attr)) {
107
- rm = tcg_temp_new_i32();
374
+ if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
108
- read_neon_element32(rm, a->vm, 1, MO_32);
375
error_report("failed to set irq for PMU");
109
+ if (src2_mop >= 0) {
376
abort();
110
+ read_neon_element64(rm_64, a->vm, 1, src2_mop);
377
}
111
+ } else {
378
}
112
+ TCGv_i32 tmp = tcg_temp_new_i32();
379
113
+ read_neon_element32(tmp, a->vm, 1, MO_32);
380
+void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
114
+ widenfn(rm_64, tmp);
381
+{
115
+ tcg_temp_free_i32(tmp);
382
+ struct kvm_device_attr attr = {
383
+ .group = KVM_ARM_VCPU_PVTIME_CTRL,
384
+ .attr = KVM_ARM_VCPU_PVTIME_IPA,
385
+ .addr = (uint64_t)&ipa,
386
+ };
387
+
388
+ if (ARM_CPU(cs)->kvm_steal_time == ON_OFF_AUTO_OFF) {
389
+ return;
390
+ }
116
+ }
391
+ if (!kvm_arm_set_device_attr(cs, &attr, "PVTIME IPA")) {
117
392
+ error_report("failed to init PVTIME IPA");
118
write_neon_element64(rn0_64, a->vd, 0, MO_64);
393
+ abort();
119
394
+ }
120
- widenfn(rm_64, rm);
395
+}
121
- tcg_temp_free_i32(rm);
396
+
122
opfn(rn1_64, rn1_64, rm_64);
397
static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
123
write_neon_element64(rn1_64, a->vd, 1, MO_64);
398
{
124
399
uint64_t ret;
125
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
400
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
401
return true;
126
return true;
402
}
127
}
403
128
404
+void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
129
-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
405
+{
130
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
406
+ bool has_steal_time = kvm_arm_steal_time_supported();
131
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
407
+
132
{ \
408
+ if (cpu->kvm_steal_time == ON_OFF_AUTO_AUTO) {
133
static NeonGenWidenFn * const widenfn[] = { \
409
+ if (!has_steal_time || !arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
134
gen_helper_neon_widen_##S##8, \
410
+ cpu->kvm_steal_time = ON_OFF_AUTO_OFF;
135
gen_helper_neon_widen_##S##16, \
411
+ } else {
136
- tcg_gen_##EXT##_i32_i64, \
412
+ cpu->kvm_steal_time = ON_OFF_AUTO_ON;
137
- NULL, \
413
+ }
138
+ NULL, NULL, \
414
+ } else if (cpu->kvm_steal_time == ON_OFF_AUTO_ON) {
139
}; \
415
+ if (!has_steal_time) {
140
static NeonGenTwo64OpFn * const addfn[] = { \
416
+ error_setg(errp, "'kvm-steal-time' cannot be enabled "
141
gen_helper_neon_##OP##l_u16, \
417
+ "on this host");
142
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
418
+ return;
143
tcg_gen_##OP##_i64, \
419
+ } else if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
144
NULL, \
420
+ /*
145
}; \
421
+ * DEN0057A chapter 2 says "This specification only covers
146
- return do_prewiden_3d(s, a, widenfn[a->size], \
422
+ * systems in which the Execution state of the hypervisor
147
- addfn[a->size], SRC1WIDE); \
423
+ * as well as EL1 of virtual machines is AArch64.". And,
148
+ int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
424
+ * to ensure that, the smc/hvc calls are only specified as
149
+ return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
425
+ * smc64/hvc64.
150
+ SRC1WIDE ? MO_Q : narrow_mop, \
426
+ */
151
+ narrow_mop); \
427
+ error_setg(errp, "'kvm-steal-time' cannot be enabled "
428
+ "for AArch32 guests");
429
+ return;
430
+ }
431
+ }
432
+}
433
+
434
bool kvm_arm_aarch32_supported(void)
435
{
436
return kvm_check_extension(kvm_state, KVM_CAP_ARM_EL1_32BIT);
437
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void)
438
return kvm_check_extension(kvm_state, KVM_CAP_ARM_SVE);
439
}
440
441
+bool kvm_arm_steal_time_supported(void)
442
+{
443
+ return kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
444
+}
445
+
446
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
447
448
void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
449
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
450
index XXXXXXX..XXXXXXX 100644
451
--- a/target/arm/monitor.c
452
+++ b/target/arm/monitor.c
453
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model_advertised_features[] = {
454
"sve128", "sve256", "sve384", "sve512",
455
"sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
456
"sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
457
- "kvm-no-adjvtime",
458
+ "kvm-no-adjvtime", "kvm-steal-time",
459
NULL
460
};
461
462
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
463
index XXXXXXX..XXXXXXX 100644
464
--- a/tests/qtest/arm-cpu-features.c
465
+++ b/tests/qtest/arm-cpu-features.c
466
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
467
assert_set_feature(qts, "max", "pmu", true);
468
469
assert_has_not_feature(qts, "max", "kvm-no-adjvtime");
470
+ assert_has_not_feature(qts, "max", "kvm-steal-time");
471
472
if (g_str_equal(qtest_get_arch(), "aarch64")) {
473
assert_has_feature_enabled(qts, "max", "aarch64");
474
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
475
assert_set_feature(qts, "host", "kvm-no-adjvtime", false);
476
477
if (g_str_equal(qtest_get_arch(), "aarch64")) {
478
+ bool kvm_supports_steal_time;
479
bool kvm_supports_sve;
480
char max_name[8], name[8];
481
uint32_t max_vq, vq;
482
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
483
QDict *resp;
484
char *error;
485
486
+ assert_error(qts, "cortex-a15",
487
+ "We cannot guarantee the CPU type 'cortex-a15' works "
488
+ "with KVM on this host", NULL);
489
+
490
assert_has_feature_enabled(qts, "host", "aarch64");
491
492
/* Enabling and disabling pmu should always work. */
493
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
494
assert_set_feature(qts, "host", "pmu", false);
495
assert_set_feature(qts, "host", "pmu", true);
496
497
- assert_error(qts, "cortex-a15",
498
- "We cannot guarantee the CPU type 'cortex-a15' works "
499
- "with KVM on this host", NULL);
500
-
501
+ /*
502
+ * Some features would be enabled by default, but they're disabled
503
+ * because this instance of KVM doesn't support them. Test that the
504
+ * features are present, and, when enabled, issue further tests.
505
+ */
506
+ assert_has_feature(qts, "host", "kvm-steal-time");
507
assert_has_feature(qts, "host", "sve");
508
+
509
resp = do_query_no_props(qts, "host");
510
+ kvm_supports_steal_time = resp_get_feature(resp, "kvm-steal-time");
511
kvm_supports_sve = resp_get_feature(resp, "sve");
512
vls = resp_get_sve_vls(resp);
513
qobject_unref(resp);
514
515
+ if (kvm_supports_steal_time) {
516
+ /* If we have steal-time then we should be able to toggle it. */
517
+ assert_set_feature(qts, "host", "kvm-steal-time", false);
518
+ assert_set_feature(qts, "host", "kvm-steal-time", true);
519
+ }
520
+
521
if (kvm_supports_sve) {
522
g_assert(vls != 0);
523
max_vq = 64 - __builtin_clzll(vls);
524
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
525
assert_has_not_feature(qts, "host", "aarch64");
526
assert_has_not_feature(qts, "host", "pmu");
527
assert_has_not_feature(qts, "host", "sve");
528
+ assert_has_not_feature(qts, "host", "kvm-steal-time");
529
}
152
}
530
153
531
qtest_quit(qts);
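kvm_arm_steal_time_finalize() above resolves the property with QEMU's usual
ON_OFF_AUTO pattern: AUTO silently degrades to off when the host or guest
cannot support steal time, while an explicit "on" that cannot be honoured is
a hard error. A self-contained sketch of that decision logic, with
hypothetical stand-ins for the KVM capability and AArch64 feature checks:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { AUTO, ON, OFF } OnOffAuto;

    /* Hypothetical stand-ins; QEMU queries KVM and the CPU model here. */
    static bool host_has_steal_time(void) { return true; }
    static bool guest_is_aarch64(void)    { return true; }

    /* Resolve AUTO silently; reject an explicit ON we cannot honour. */
    static int steal_time_finalize(OnOffAuto *v, const char **err)
    {
        if (*v == AUTO) {
            *v = (host_has_steal_time() && guest_is_aarch64()) ? ON : OFF;
        } else if (*v == ON) {
            if (!host_has_steal_time()) {
                *err = "'kvm-steal-time' cannot be enabled on this host";
                return -1;
            }
            if (!guest_is_aarch64()) {
                *err = "'kvm-steal-time' cannot be enabled for AArch32 guests";
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        OnOffAuto v = AUTO;
        const char *err = NULL;

        if (steal_time_finalize(&v, &err) < 0) {
            fprintf(stderr, "%s\n", err);
            return 1;
        }
        printf("kvm-steal-time resolved to %s\n", v == ON ? "on" : "off");
        return 0;
    }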
 {
     /* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
     TCGv_i64 rn0_64, rn1_64, rm_64;
-    TCGv_i32 rm;

     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
         return false;
     }

-    if (!widenfn || !opfn) {
+    if (!opfn) {
         /* size == 3 case, which is an entirely different insn group */
         return false;
     }

-    if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
+    if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
     rn1_64 = tcg_temp_new_i64();
     rm_64 = tcg_temp_new_i64();

-    if (src1_wide) {
-        read_neon_element64(rn0_64, a->vn, 0, MO_64);
+    if (src1_mop >= 0) {
+        read_neon_element64(rn0_64, a->vn, 0, src1_mop);
     } else {
         TCGv_i32 tmp = tcg_temp_new_i32();
         read_neon_element32(tmp, a->vn, 0, MO_32);
         widenfn(rn0_64, tmp);
         tcg_temp_free_i32(tmp);
     }
-    rm = tcg_temp_new_i32();
-    read_neon_element32(rm, a->vm, 0, MO_32);
+    if (src2_mop >= 0) {
+        read_neon_element64(rm_64, a->vm, 0, src2_mop);
+    } else {
+        TCGv_i32 tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, a->vm, 0, MO_32);
+        widenfn(rm_64, tmp);
+        tcg_temp_free_i32(tmp);
+    }

-    widenfn(rm_64, rm);
-    tcg_temp_free_i32(rm);
     opfn(rn0_64, rn0_64, rm_64);

     /*
      * Load second pass inputs before storing the first pass result, to
      * avoid incorrect results if a narrow input overlaps with the result.
      */
-    if (src1_wide) {
-        read_neon_element64(rn1_64, a->vn, 1, MO_64);
+    if (src1_mop >= 0) {
+        read_neon_element64(rn1_64, a->vn, 1, src1_mop);
     } else {
         TCGv_i32 tmp = tcg_temp_new_i32();
         read_neon_element32(tmp, a->vn, 1, MO_32);
         widenfn(rn1_64, tmp);
         tcg_temp_free_i32(tmp);
     }
-    rm = tcg_temp_new_i32();
-    read_neon_element32(rm, a->vm, 1, MO_32);
+    if (src2_mop >= 0) {
+        read_neon_element64(rm_64, a->vm, 1, src2_mop);
+    } else {
+        TCGv_i32 tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, a->vm, 1, MO_32);
+        widenfn(rm_64, tmp);
+        tcg_temp_free_i32(tmp);
+    }

     write_neon_element64(rn0_64, a->vd, 0, MO_64);

-    widenfn(rm_64, rm);
-    tcg_temp_free_i32(rm);
     opfn(rn1_64, rn1_64, rm_64);
     write_neon_element64(rn1_64, a->vd, 1, MO_64);

@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
     return true;
 }

-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
     static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
     { \
         static NeonGenWidenFn * const widenfn[] = { \
             gen_helper_neon_widen_##S##8, \
             gen_helper_neon_widen_##S##16, \
-            tcg_gen_##EXT##_i32_i64, \
-            NULL, \
+            NULL, NULL, \
         }; \
         static NeonGenTwo64OpFn * const addfn[] = { \
             gen_helper_neon_##OP##l_u16, \
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
             tcg_gen_##OP##_i64, \
             NULL, \
         }; \
-        return do_prewiden_3d(s, a, widenfn[a->size], \
-                              addfn[a->size], SRC1WIDE); \
+        int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
+        return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
+                              SRC1WIDE ? MO_Q : narrow_mop, \
+                              narrow_mop); \
     }

-DO_PREWIDEN(VADDL_S, s, ext, add, false)
-DO_PREWIDEN(VADDL_U, u, extu, add, false)
-DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
-DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
-DO_PREWIDEN(VADDW_S, s, ext, add, true)
-DO_PREWIDEN(VADDW_U, u, extu, add, true)
-DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
-DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
+DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN)
+DO_PREWIDEN(VADDL_U, u, add, false, 0)
+DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN)
+DO_PREWIDEN(VSUBL_U, u, sub, false, 0)
+DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN)
+DO_PREWIDEN(VADDW_U, u, add, true, 0)
+DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN)
+DO_PREWIDEN(VSUBW_U, u, sub, true, 0)

 static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
                          NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
--
2.20.1
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
meant we were using the H4() address swizzler macro rather than the
H2() which is required for 2-byte data. This had no effect on
little-endian hosts but meant we put the result data into the
destination Dreg in the wrong order on big-endian hosts.
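For reference, the address swizzlers compensate for the host byte order by
XORing the low bits of the element index; a sketch along the lines of the
definitions in target/arm/vec_helper.c (constants assume 64-bit chunks on a
big-endian host):

    /*
     * HN(x) adjusts the index of an N-byte element within a 64-bit
     * chunk so that element 0 is the least significant regardless of
     * host byte order.
     */
    #ifdef HOST_WORDS_BIGENDIAN
    #define H1(x)  ((x) ^ 7)   /* 8 one-byte elements per 64 bits */
    #define H2(x)  ((x) ^ 3)   /* 4 two-byte elements per 64 bits */
    #define H4(x)  ((x) ^ 1)   /* 2 four-byte elements per 64 bits */
    #else
    #define H1(x)  (x)
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif
    /*
     * Indexing a uint16_t lane with H4() therefore scrambles results
     * on big-endian hosts: H4(0) == 1, whereas the correct 2-byte
     * swizzle is H2(0) == 3.
     */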

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
---
 target/arm/vec_helper.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
         r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
         r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
 \
-        d[H4(0)] = r0; \
-        d[H4(1)] = r1; \
-        d[H4(2)] = r2; \
-        d[H4(3)] = r3; \
+        d[H2(0)] = r0; \
+        d[H2(1)] = r1; \
+        d[H2(2)] = r2; \
+        d[H2(3)] = r3; \
     }

 DO_NEON_PAIRWISE(neon_padd, add)
--
2.20.1
The helper functions for performing the udot/sdot operations against
a scalar were not using an address-swizzling macro when converting
the index of the scalar element into a pointer into the vm array.
This had no effect on little-endian hosts but meant we generated
incorrect results on big-endian hosts.

For these insns, the index is indexing over groups of 4 8-bit values,
so 32 bits per indexed entity, and H4() is therefore what we want.
(For Neon the only possible input indexes are 0 and 1.)
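A small standalone illustration of the offset arithmetic, reusing the
big-endian H4() definition sketched in the previous patch (hypothetical demo
code, not the QEMU sources):

    #include <stdint.h>
    #include <stdio.h>

    /* 4-byte element swizzle on a big-endian host (identity on LE). */
    #define H4_BE(x) ((x) ^ 1)

    int main(void)
    {
        /* Each indexed entity is a group of 4 bytes, i.e. 32 bits. */
        for (intptr_t index = 0; index < 2; index++) {
            printf("index %ld: LE offset %ld, BE offset %ld\n",
                   (long)index,
                   (long)(index * 4),          /* no swizzle: wrong on BE */
                   (long)(H4_BE(index) * 4));  /* H4(): correct on BE */
        }
        return 0;
    }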

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
---
 target/arm/vec_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
     intptr_t index = simd_data(desc);
     uint32_t *d = vd;
     int8_t *n = vn;
-    int8_t *m_indexed = (int8_t *)vm + index * 4;
+    int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;

     /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
      * Otherwise opr_sz is a multiple of 16.
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
     intptr_t index = simd_data(desc);
     uint32_t *d = vd;
     uint8_t *n = vn;
-    uint8_t *m_indexed = (uint8_t *)vm + index * 4;
+    uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;

     /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
      * Otherwise opr_sz is a multiple of 16.
--
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201002080935.1660005-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx25.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/arm/fsl-imx25.h b/include/hw/arm/fsl-imx25.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx25.h
+++ b/include/hw/arm/fsl-imx25.h
@@ -XXX,XX +XXX,XX @@ struct FslIMX25State {
  * 0xBB00_0000 0xBB00_0FFF 4 Kbytes NAND flash main area buffer
  * 0xBB00_1000 0xBB00_11FF 512 B NAND flash spare area buffer
  * 0xBB00_1200 0xBB00_1DFF 3 Kbytes Reserved
- * 0xBB00_1E00 0xBB00_1FFF 512 B NAND flash control regisers
+ * 0xBB00_1E00 0xBB00_1FFF 512 B NAND flash control registers
  * 0xBB01_2000 0xBFFF_FFFF 96 Mbytes (minus 8 Kbytes) Reserved
  * 0xC000_0000 0xFFFF_FFFF 1024 Mbytes Reserved
  */
--
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

HCR should be applied when NS is set, not when it is cleared.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,

 /*
  * Non-IS variants of TLB operations are upgraded to
- * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
  * force broadcast of these operations.
  */
 static bool tlb_force_broadcast(CPUARMState *env)
 {
-    return (env->cp15.hcr_el2 & HCR_FB) &&
-        arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
+    return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
 }

 static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Fix integer handling issues reported by Coverity:

hw/ssi/npcm7xx_fiu.c: 162 in npcm7xx_fiu_flash_read()
>>> CID 1432730: Integer handling issues (NEGATIVE_RETURNS)
>>> "npcm7xx_fiu_cs_index(fiu, f)" is passed to a parameter that cannot be negative.
162 npcm7xx_fiu_select(fiu, npcm7xx_fiu_cs_index(fiu, f));

hw/ssi/npcm7xx_fiu.c: 221 in npcm7xx_fiu_flash_write()
218 cs_id = npcm7xx_fiu_cs_index(fiu, f);
219 trace_npcm7xx_fiu_flash_write(DEVICE(fiu)->canonical_path, cs_id, addr,
220 size, v);
>>> CID 1432729: Integer handling issues (NEGATIVE_RETURNS)
>>> "cs_id" is passed to a parameter that cannot be negative.
221 npcm7xx_fiu_select(fiu, cs_id);

Since the index of the flash cannot be negative, return an
unsigned type.

Reported-by: Coverity (CID 1432729 & 1432730: NEGATIVE_RETURNS)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Havard Skinnemoen <hskinnemoen@google.com>
Message-id: 20200919132435.310527-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/npcm7xx_fiu.c | 12 ++++++------
 hw/ssi/trace-events  |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/hw/ssi/npcm7xx_fiu.c b/hw/ssi/npcm7xx_fiu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/npcm7xx_fiu.c
+++ b/hw/ssi/npcm7xx_fiu.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxFIURegister {
  * Returns the index of flash in the fiu->flash array. This corresponds to the
  * chip select ID of the flash.
  */
-static int npcm7xx_fiu_cs_index(NPCM7xxFIUState *fiu, NPCM7xxFIUFlash *flash)
+static unsigned npcm7xx_fiu_cs_index(NPCM7xxFIUState *fiu,
+                                     NPCM7xxFIUFlash *flash)
 {
     int index = flash - fiu->flash;

@@ -XXX,XX +XXX,XX @@ static int npcm7xx_fiu_cs_index(NPCM7xxFIUState *fiu, NPCM7xxFIUFlash *flash)
 }

 /* Assert the chip select specified in the UMA Control/Status Register. */
-static void npcm7xx_fiu_select(NPCM7xxFIUState *s, int cs_id)
+static void npcm7xx_fiu_select(NPCM7xxFIUState *s, unsigned cs_id)
 {
     trace_npcm7xx_fiu_select(DEVICE(s)->canonical_path, cs_id);

     if (cs_id < s->cs_count) {
         qemu_irq_lower(s->cs_lines[cs_id]);
+        s->active_cs = cs_id;
     } else {
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: UMA to CS%d; this module has only %d chip selects",
                       DEVICE(s)->canonical_path, cs_id, s->cs_count);
-        cs_id = -1;
+        s->active_cs = -1;
     }
-
-    s->active_cs = cs_id;
 }

 /* Deassert the currently active chip select. */
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_fiu_flash_write(void *opaque, hwaddr addr, uint64_t v,
     NPCM7xxFIUFlash *f = opaque;
     NPCM7xxFIUState *fiu = f->fiu;
     uint32_t dwr_cfg;
-    int cs_id;
+    unsigned cs_id;
     int i;

     if (fiu->active_cs != -1) {
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/trace-events
+++ b/hw/ssi/trace-events
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_deselect(const char *id, int cs) "%s deselect CS%d"
 npcm7xx_fiu_ctrl_read(const char *id, uint64_t addr, uint32_t data) "%s offset: 0x%04" PRIx64 " value: 0x%08" PRIx32
 npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset: 0x%04" PRIx64 " value: 0x%08" PRIx32
 npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
-npcm7xx_fiu_flash_write(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
+npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
--
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
future HCR_EL2.TLOR when S-EL2 is enabled.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
 #endif

 /* Shared logic between LORID and the rest of the LOR* registers.
- * Secure state has already been delt with.
+ * Secure state exclusion has already been dealt with.
  */
-static CPAccessResult access_lor_ns(CPUARMState *env)
+static CPAccessResult access_lor_ns(CPUARMState *env,
+                                    const ARMCPRegInfo *ri, bool isread)
 {
     int el = arm_current_el(env);

@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env)
     return CP_ACCESS_OK;
 }

-static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri,
-                                   bool isread)
-{
-    if (arm_is_secure_below_el3(env)) {
-        /* Access ok in secure mode. */
-        return CP_ACCESS_OK;
-    }
-    return access_lor_ns(env);
-}
-
 static CPAccessResult access_lor_other(CPUARMState *env,
                                        const ARMCPRegInfo *ri, bool isread)
 {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
         /* Access denied in secure mode. */
         return CP_ACCESS_TRAP;
     }
-
-    return access_lor_ns(env);
+    return access_lor_ns(env, ri, isread);
 }

 /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
       .type = ARM_CP_CONST, .resetvalue = 0 },
     { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
-      .access = PL1_R, .accessfn = access_lorid,
+      .access = PL1_R, .accessfn = access_lor_ns,
       .type = ARM_CP_CONST, .resetvalue = 0 },
     REGINFO_SENTINEL
 };
--
2.20.1
If we're using the capstone disassembler, disassembly of a run of
instructions more than 32 bytes long disassembles the wrong data for
instructions beyond the 32 byte mark:

(qemu) xp /16x 0x100
0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
(qemu) xp /16i 0x100
0x00000100:  00000005  andeq    r0, r0, r5
0x00000104:  54410001  strbpl   r0, [r1], #-1
0x00000108:  00000001  andeq    r0, r0, r1
0x0000010c:  00001000  andeq    r1, r0, r0
0x00000110:  00000000  andeq    r0, r0, r0
0x00000114:  00000004  andeq    r0, r0, r4
0x00000118:  54410002  strbpl   r0, [r1], #-2
0x0000011c:  3c000000  .byte    0x00, 0x00, 0x00, 0x3c
0x00000120:  54410001  strbpl   r0, [r1], #-1
0x00000124:  00000001  andeq    r0, r0, r1
0x00000128:  00001000  andeq    r1, r0, r0
0x0000012c:  00000000  andeq    r0, r0, r0
0x00000130:  00000004  andeq    r0, r0, r4
0x00000134:  54410002  strbpl   r0, [r1], #-2
0x00000138:  3c000000  .byte    0x00, 0x00, 0x00, 0x3c
0x0000013c:  00000000  andeq    r0, r0, r0

Here the disassembly of 0x120..0x13f is using the data that is in
0x104..0x123.

This is caused by passing the wrong value to the read_memory_func().
The intention is that at this point in the loop the 'cap_buf' buffer
already contains 'csize' bytes of data for the instruction at guest
addr 'pc', and we want to read in an extra 'tsize' bytes. Those
extra bytes are therefore at 'pc + csize', not 'pc'. On the first
time through the loop 'csize' happens to be zero, so the initial read
of 32 bytes into cap_buf is correct and as long as the disassembly
never needs to read more data we return the correct information.

Use the correct guest address in the call to read_memory_func().
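The incremental-refill pattern in question, reduced to a standalone sketch
(read_mem() is a hypothetical stand-in for info->read_memory_func()):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical stand-in: fetch 'len' guest bytes at 'addr' into 'dst'. */
    static void read_mem(uint64_t addr, uint8_t *dst, size_t len)
    {
        memset(dst, 0, len); /* placeholder body for the sketch */
    }

    void fill_example(uint64_t pc)
    {
        uint8_t cap_buf[1024];
        size_t csize = 0;      /* guest bytes already buffered */

        while (csize + 32 <= sizeof(cap_buf)) {
            size_t tsize = 32; /* top-up size per round */

            /*
             * cap_buf[0..csize) mirrors guest memory at [pc, pc + csize),
             * so the next tsize bytes live at guest address pc + csize.
             * Reading from 'pc' again would duplicate the first bytes and
             * shift everything after the first refill, which is exactly
             * the bug fixed by this patch.
             */
            read_mem(pc + csize, cap_buf + csize, tsize);
            csize += tsize;

            /* ... hand cap_buf[0..csize) to the disassembler ... */
            break; /* one round is enough for the sketch */
        }
    }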

Cc: qemu-stable@nongnu.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1900779
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
---
 disas/capstone.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/disas/capstone.c b/disas/capstone.c
index XXXXXXX..XXXXXXX 100644
--- a/disas/capstone.c
+++ b/disas/capstone.c
@@ -XXX,XX +XXX,XX @@ bool cap_disas_monitor(disassemble_info *info, uint64_t pc, int count)

         /* Make certain that we can make progress. */
         assert(tsize != 0);
-        info->read_memory_func(pc, cap_buf + csize, tsize, info);
+        info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
         csize += tsize;

         if (cs_disasm_iter(handle, &cbuf, &csize, &pc, insn)) {
--
2.20.1
From: Andrew Jones <drjones@redhat.com>

arm-cpu-features got dropped from the AArch64 tests during the meson
conversion shuffle.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20201001061718.101915-6-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/meson.build | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
   (cpu != 'arm' ? ['bios-tables-test'] : []) + \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
-  ['numa-test',
+  ['arm-cpu-features',
+   'numa-test',
    'boot-serial-test',
    'migration-test']
--
2.20.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):

CID 1432363 (#1 of 1): Unintentional integer overflow:

overflow_before_widen:
Potentially overflowing expression 1 << scale with type int
(32 bits, signed) is evaluated using 32-bit arithmetic, and
then used in a context that expects an expression of type
hwaddr (64 bits, unsigned).

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20201030144617.1535064-1-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@
  */

 #include "qemu/osdep.h"
+#include "qemu/bitops.h"
 #include "hw/irq.h"
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
         scale = CMD_SCALE(cmd);
         num = CMD_NUM(cmd);
         ttl = CMD_TTL(cmd);
-        num_pages = (num + 1) * (1 << (scale));
+        num_pages = (num + 1) * BIT_ULL(scale);
     }

     if (type == SMMU_CMD_TLBI_NH_VA) {
--
2.20.1
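A standalone sketch of the widening bug fixed above (BIT_ULL() reproduced
here as qemu/bitops.h defines it):

    #include <stdint.h>
    #include <stdio.h>

    /* As in qemu/bitops.h: a 64-bit single-bit constant. */
    #define BIT_ULL(nr) (1ULL << (nr))

    int main(void)
    {
        int num = 4095, scale = 20;

        /* The multiply happens in 32-bit int and wraps (undefined
         * behaviour in principle, 0 here in practice) before the
         * result is widened to 64 bits... */
        uint64_t bad = (uint64_t)((num + 1) * (1 << scale));

        /* ...whereas BIT_ULL() promotes the whole expression to
         * 64-bit arithmetic and gives 0x100000000 as intended. */
        uint64_t ok = (num + 1) * BIT_ULL(scale);

        printf("bad=0x%llx ok=0x%llx\n",
               (unsigned long long)bad, (unsigned long long)ok);
        return 0;
    }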
From: Graeme Gregory <graeme@nuviainc.com>

The original commit did not allocate IRQs for the SMMUv3 in the irqmap,
so it effectively used IRQs 0->3 (shared with other devices). Assuming
the original intent was to allocate unique IRQs, add an allocation to
the irqmap.

Fixes: e9fdf453240 ("hw/arm: Add arm SBSA reference machine, devices part")
Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20201007100732.4103790-3-graeme@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
     [SBSA_SECURE_UART_MM] = 9,
     [SBSA_AHCI] = 10,
     [SBSA_EHCI] = 11,
+    [SBSA_SMMU] = 12, /* ... to 15 */
 };

 static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
--
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so
that SVE will not trap to EL3.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030151541.11976-1-remi@remlab.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             if (cpu_isar_feature(aa64_mte, cpu)) {
                 env->cp15.scr_el3 |= SCR_ATA;
             }
+            if (cpu_isar_feature(aa64_sve, cpu)) {
+                env->cp15.cptr_el[3] |= CPTR_EZ;
+            }
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
             /* This hook is only supported for AArch32 currently:
--
2.20.1
From: Andrew Jones <drjones@redhat.com>

Update against Linux 5.9-rc7.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20201001061718.101915-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-headers/linux/kvm.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_resize_hpt {
 #define KVM_VM_PPC_HV 1
 #define KVM_VM_PPC_PR 2

-/* on MIPS, 0 forces trap & emulate, 1 forces VZ ASE */
-#define KVM_VM_MIPS_TE        0
+/* on MIPS, 0 indicates auto, 1 forces VZ ASE, 2 forces trap & emulate */
+#define KVM_VM_MIPS_AUTO    0
 #define KVM_VM_MIPS_VZ        1
+#define KVM_VM_MIPS_TE        2

 #define KVM_S390_SIE_PAGE_OFFSET 1

@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_LAST_CPU 184
 #define KVM_CAP_SMALLER_MAXPHYADDR 185
 #define KVM_CAP_S390_DIAG318 186
+#define KVM_CAP_STEAL_TIME 187

 #ifdef KVM_CAP_IRQ_ROUTING
--
2.20.1

From: AlexChen <alex.chen@huawei.com>

In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before
being checked, which may lead to a NULL pointer dereference. So move
the assignment to surface to after the check that omap_lcd is valid,
and move surface_bits_per_pixel(surface) to after the surface
assignment.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: AlexChen <alex.chen@huawei.com>
Message-id: 5F9CDB8A.9000001@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/display/omap_lcdc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/display/omap_lcdc.c b/hw/display/omap_lcdc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/omap_lcdc.c
+++ b/hw/display/omap_lcdc.c
@@ -XXX,XX +XXX,XX @@ static void omap_lcd_interrupts(struct omap_lcd_panel_s *s)
 static void omap_update_display(void *opaque)
 {
     struct omap_lcd_panel_s *omap_lcd = (struct omap_lcd_panel_s *) opaque;
-    DisplaySurface *surface = qemu_console_surface(omap_lcd->con);
+    DisplaySurface *surface;
     draw_line_func draw_line;
     int size, height, first, last;
     int width, linesize, step, bpp, frame_offset;
     hwaddr frame_base;

-    if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable ||
-        !surface_bits_per_pixel(surface)) {
+    if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable) {
+        return;
+    }
+
+    surface = qemu_console_surface(omap_lcd->con);
+    if (!surface_bits_per_pixel(surface)) {
         return;
     }

--
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The "BCM2835 ARM Peripherals" datasheet [*] chapter 2
("Auxiliaries: UART1 & SPI1, SPI2"), lists the register
sizes as 3/8/16/32 bits. We assume this means this
peripheral allows 8-bit accesses.

This was not an issue until commit 5d971f9e67 which reverted
("memory: accept mismatching sizes in memory_region_access_valid").

The model is implemented as 32-bit accesses (see commit 97398d900c,
all registers are 32-bit) so replace MemoryRegionOps.valid with
MemoryRegionOps.impl, and re-introduce MemoryRegionOps.valid
with an 8/32-bit range.

[*] https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

Fixes: 97398d900c ("bcm2835_aux: add emulation of BCM2835 AUX (aka UART1) block")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201002181032.1899463-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/bcm2835_aux.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/char/bcm2835_aux.c b/hw/char/bcm2835_aux.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/bcm2835_aux.c
+++ b/hw/char/bcm2835_aux.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps bcm2835_aux_ops = {
     .read = bcm2835_aux_read,
     .write = bcm2835_aux_write,
     .endianness = DEVICE_NATIVE_ENDIAN,
-    .valid.min_access_size = 4,
+    .impl.min_access_size = 4,
+    .impl.max_access_size = 4,
+    .valid.min_access_size = 1,
     .valid.max_access_size = 4,
 };
--
2.20.1

From: AlexChen <alex.chen@huawei.com>

In exynos4210_fimd_update(), the pointer s is dereferenced before
being checked, which may lead to a NULL pointer dereference. So move
the assignment to global_width to after the check that s is valid.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 5F9F8D88.9030102@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/display/exynos4210_fimd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/display/exynos4210_fimd.c b/hw/display/exynos4210_fimd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/exynos4210_fimd.c
+++ b/hw/display/exynos4210_fimd.c
@@ -XXX,XX +XXX,XX @@ static void exynos4210_fimd_update(void *opaque)
     bool blend = false;
     uint8_t *host_fb_addr;
     bool is_dirty = false;
-    const int global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
+    int global_width;

     if (!s || !s->console || !s->enabled ||
         surface_bits_per_pixel(qemu_console_surface(s->console)) == 0) {
         return;
     }
+
+    global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
     exynos4210_update_resolution(s);
     surface = qemu_console_surface(s->console);

--
2.20.1
QEMU supports a 48-bit physical address range, but we don't currently
expose it in the '-cpu max' ID registers (you get the same range as
Cortex-A57, which is 44 bits).

Set the ID_AA64MMFR0.PARange field to indicate 48 bits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201001160116.18095-1-peter.maydell@linaro.org
---
 target/arm/cpu64.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2);
     cpu->isar.id_aa64pfr1 = t;

+    t = cpu->isar.id_aa64mmfr0;
+    t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 5); /* PARange: 48 bits */
+    cpu->isar.id_aa64mmfr0 = t;
+
     t = cpu->isar.id_aa64mmfr1;
     t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
     t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
--
2.20.1

In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
This is incorrect when the security state being queried is not the
current one, because arm_current_el() uses the current security state
to determine which of the banked CONTROL.nPRIV bits to look at.
The effect was that if (for instance) Secure state was in privileged
mode but Non-Secure was not then we would return the wrong MMU index.

The only places where we are using this function in a way that could
trigger this bug are for the stack loads during a v8M function-return
and for the instruction fetch of a v8M SG insn.

Fix the bug by expanding out the M-profile version of the
arm_current_el() logic inline so it can use the passed in secstate
rather than env->v7m.secure.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
 /* Return the MMU index for a v7M CPU in the specified security state */
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 {
-    bool priv = arm_current_el(env) != 0;
+    bool priv = arm_v7m_is_handler_mode(env) ||
+        !(env->v7m.control[secstate] & 1);

     return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
 }
--
2.20.1
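The distinction the fix above draws can be seen in a reduced model of the
banked M-profile state (hypothetical demo code, not the QEMU definitions):

    #include <stdbool.h>
    #include <stdio.h>

    /* Reduced model: banked CONTROL, indexed by security state. */
    struct v7m { bool handler_mode; bool secure; unsigned control[2]; };

    /* Buggy: privilege derived from the *current* security state. */
    static bool priv_current(const struct v7m *s)
    {
        return s->handler_mode || !(s->control[s->secure] & 1);
    }

    /* Fixed: privilege derived from the security state being queried. */
    static bool priv_for(const struct v7m *s, bool secstate)
    {
        return s->handler_mode || !(s->control[secstate] & 1);
    }

    int main(void)
    {
        /* Secure thread mode is privileged; Non-secure is not. */
        struct v7m s = { .handler_mode = false, .secure = true,
                         .control = { [0] = 1 /* NS nPRIV=1 */, [1] = 0 } };

        /* Querying Non-secure: the buggy version consults the Secure
         * bank and wrongly reports privileged. */
        printf("query NS: buggy=%d fixed=%d\n",
               priv_current(&s), priv_for(&s, false));
        return 0;
    }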
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of
libraries for gio-2.0 which don't actually work when compiling
statically. (Specifically, the returned library string includes
-lmount, but not -lblkid which -lmount depends upon, so linking
fails due to missing symbols.)

Check that the libraries work, and don't enable gio if they don't,
in the same way we do for gnutls.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
---
 configure | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then
 fi

 if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then
-    gio=yes
     gio_cflags=$($pkg_config --cflags gio-2.0)
     gio_libs=$($pkg_config --libs gio-2.0)
     gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0)
     if [ ! -x "$gdbus_codegen" ]; then
         gdbus_codegen=
     fi
+    # Check that the libraries actually work -- Ubuntu 18.04 ships
+    # with pkg-config --static --libs data for gio-2.0 that is missing
+    # -lblkid and will give a link error.
+    write_c_skeleton
+    if compile_prog "" "$gio_libs" ; then
+        gio=yes
+    else
+        gio=no
+    fi
 else
     gio=no
 fi
--
2.20.1
In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt
into the GICv3CPUState struct's maintenance_irq field. This will
only work if the board happens to have already wired up the CPU
maintenance IRQ before the GIC was realized. Unfortunately this is
not the case for the 'virt' board, and so the value that gets copied
is NULL (since a qemu_irq is really a pointer to an IRQState struct
under the hood). The effect is that the CPU interface code never
actually raises the maintenance interrupt line.

Instead, since the GICv3CPUState has a pointer to the CPUState, make
the dereference at the point where we want to raise the interrupt, to
avoid an implicit requirement on board code to wire things up in a
particular order.
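The underlying failure is a copy-then-initialize ordering hazard; a reduced
standalone sketch with hypothetical types (a qemu_irq is just a pointer, so
a snapshot taken before wiring stays NULL):

    #include <stdio.h>

    typedef struct IRQState { int level; } IRQState;
    typedef IRQState *qemu_irq;

    struct cpu  { qemu_irq maintenance_interrupt; };
    struct gics { struct cpu *cpu; qemu_irq maintenance_irq; };

    int main(void)
    {
        static IRQState wired;
        struct cpu cpu = { 0 };              /* IRQ not wired up yet */
        struct gics cs = { .cpu = &cpu };

        /* GIC realize: snapshot the (still NULL) pointer. */
        cs.maintenance_irq = cpu.maintenance_interrupt;

        /* Board wires the CPU IRQ afterwards... */
        cpu.maintenance_interrupt = &wired;

        /* ...but the snapshot remains NULL, while dereferencing via
         * the CPU pointer at raise time sees the wired-up value. */
        printf("snapshot=%p live=%p\n",
               (void *)cs.maintenance_irq,
               (void *)cs.cpu->maintenance_interrupt);
        return 0;
    }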

Reported-by: Jose Martins <josemartins90@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201009153904.28529-1-peter.maydell@linaro.org
Reviewed-by: Luc Michel <luc@lmichel.fr>
---
 include/hw/intc/arm_gicv3_common.h | 1 -
 hw/intc/arm_gicv3_cpuif.c          | 5 ++---
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
     qemu_irq parent_fiq;
     qemu_irq parent_virq;
     qemu_irq parent_vfiq;
-    qemu_irq maintenance_irq;

     /* Redistributor */
     uint32_t level;                 /* Current IRQ level */
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
     int irqlevel = 0;
     int fiqlevel = 0;
     int maintlevel = 0;
+    ARMCPU *cpu = ARM_CPU(cs->cpu);

     idx = hppvi_index(cs);
     trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx);
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)

     qemu_set_irq(cs->parent_vfiq, fiqlevel);
     qemu_set_irq(cs->parent_virq, irqlevel);
-    qemu_set_irq(cs->maintenance_irq, maintlevel);
+    qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel);
 }

 static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
             && cpu->gic_num_lrs) {
             int j;

-            cs->maintenance_irq = cpu->gicv3_maintenance_interrupt;
-
             cs->num_list_regs = cpu->gic_num_lrs;
             cs->vpribits = cpu->gic_vpribits;
             cs->vprebits = cpu->gic_vprebits;
--
2.20.1
The kerneldoc script currently emits Sphinx markup for a macro with
arguments that uses the c:function directive. This is correct for
Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow
documentation of macros with arguments and c:function is not picky
about the syntax of what it is passed. However, in Sphinx 3 the
c:macro directive was enhanced to support macros with arguments,
and c:function was made more picky about what syntax it accepted.

When kerneldoc is told that it needs to produce output for Sphinx
3 or later, make it emit c:function only for functions and c:macro
for macros with arguments. We assume that anything with a return
type is a function and anything without is a macro.

This fixes the Sphinx error:

/home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator
If declarator-id with parameters (e.g., 'void f(int arg)'):
  Invalid C declaration: Expected identifier in nested name. [error at 25]
    DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
    -------------------------^
If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'):
  Error in declarator or parameters
  Invalid C declaration: Expecting "(" in parameters. [error at 39]
    DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
    ---------------------------------------^

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20201030174700.7204-2-peter.maydell@linaro.org
---
 scripts/kernel-doc | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index XXXXXXX..XXXXXXX 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) {
        output_highlight_rst($args{'purpose'});
        $start = "\n\n**Syntax**\n\n  ``";
     } else {
-        print ".. c:function:: ";
+        if ((split(/\./, $sphinx_version))[0] >= 3) {
+            # Sphinx 3 and later distinguish macros and functions and
+            # complain if you use c:function with something that's not
+            # syntactically valid as a function declaration.
+            # We assume that anything with a return type is a function
+            # and anything without is a macro.
+            if ($args{'functiontype'} ne "") {
+                print ".. c:function:: ";
+            } else {
+                print ".. c:macro:: ";
+            }
+        } else {
+            # Older Sphinx don't support documenting macros that take
+            # arguments with c:macro, and don't complain about the use
+            # of c:function for this.
+            print ".. c:function:: ";
+        }
     }
     if ($args{'functiontype'} ne "") {
        $start .= $args{'functiontype'} . " " . $args{'function'} . " (";
--
2.20.1
Sphinx 3.2 is pickier than earlier versions about the option:: markup,
and complains about our usage in qemu-option-trace.rst:

../../docs/qemu-option-trace.rst.inc:4:Malformed option description
'[enable=]PATTERN', should look like "opt", "-opt args", "--opt args",
"/opt args" or "+opt args"

In this file, we're really trying to document the different parts of
the top-level --trace option, which qemu-nbd.rst and qemu-img.rst
have already introduced with an option:: markup. So it's not right
to use option:: here anyway. Switch to a different markup
(definition lists) which gives about the same formatted output.

(Unlike option::, this markup doesn't produce index entries; but
at the moment we don't do anything much with indexes anyway, and
in any case I think it doesn't make much sense to have individual
index entries for the sub-parts of the --trace option.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20201030174700.7204-3-peter.maydell@linaro.org
---
 docs/qemu-option-trace.rst.inc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/qemu-option-trace.rst.inc b/docs/qemu-option-trace.rst.inc
index XXXXXXX..XXXXXXX 100644
--- a/docs/qemu-option-trace.rst.inc
+++ b/docs/qemu-option-trace.rst.inc
@@ -XXX,XX +XXX,XX @@

 Specify tracing options.

-.. option:: [enable=]PATTERN
+``[enable=]PATTERN``

   Immediately enable events matching *PATTERN*
   (either event name or a globbing pattern). This option is only
@@ -XXX,XX +XXX,XX @@ Specify tracing options.

   Use :option:`-trace help` to print a list of names of trace points.

-.. option:: events=FILE
+``events=FILE``

   Immediately enable events listed in *FILE*.
   The file must contain one event name (as listed in the ``trace-events-all``
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
   available if QEMU has been compiled with the ``simple``, ``log`` or
   ``ftrace`` tracing backend.

-.. option:: file=FILE
+``file=FILE``

   Log output traces to *FILE*.
   This option is only available if QEMU has been compiled with
--
2.20.1
The randomness tests in the NPCM7xx RNG test fail intermittently
but fairly frequently. On my machine running the test in a loop:

 while QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/npcm7xx_rng-test; do true; done

will fail in less than a minute with an error like:
 ERROR:../../tests/qtest/npcm7xx_rng-test.c:256:test_first_byte_runs:
 assertion failed (calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE) > 0.01): (0.00286205989 > 0.01)

(Failures have been observed on all 4 of the randomness tests,
not just first_byte_runs.)

It's not clear why these tests are failing like this, but intermittent
failures make CI and merge testing awkward, so disable running them
unless a developer specifically sets QEMU_TEST_FLAKY_RNG_TESTS when
running the test suite, until we work out the cause.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20201102152454.8287-1-peter.maydell@linaro.org
Reviewed-by: Havard Skinnemoen <hskinnemoen@google.com>
---
 tests/qtest/npcm7xx_rng-test.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/npcm7xx_rng-test.c
+++ b/tests/qtest/npcm7xx_rng-test.c
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)

     qtest_add_func("npcm7xx_rng/enable_disable", test_enable_disable);
     qtest_add_func("npcm7xx_rng/rosel", test_rosel);
-    qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
-    qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
-    qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
-    qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
+    /*
+     * These tests fail intermittently; only run them on explicit
+     * request until we figure out why.
+     */
+    if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
+        qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
+        qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
+        qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
+        qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
+    }

     qtest_start("-machine npcm750-evb");
     ret = g_test_run();
--
2.20.1