target-arm queue for 3.1: mostly bug fixes, but the "turn on
EL2 support for Cortex-A7 and -A15" is technically enabling
of a new feature... I think this is OK since we're only at rc1,
and it's easy to revert that feature bit flip if necessary.

thanks
-- PMM

The following changes since commit 5704c36d25ee84e7129722cb0db53df9faefe943:

  Merge remote-tracking branch 'remotes/kraxel/tags/fixes-31-20181112-pull-request' into staging (2018-11-12 15:55:40 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181112

for you to fetch changes up to 1a4c1a6dbf60aebddd07753f1013ea896c06ad29:

  target/arm/cpu: Give Cortex-A15 and -A7 the EL2 feature (2018-11-12 16:52:29 +0000)

----------------------------------------------------------------
target/arm queue:
 * Remove no-longer-needed workaround for small SAU regions for v8M
 * Remove antique TODO comment
 * MAINTAINERS: Add an entry for the 'collie' machine
 * hw/arm/sysbus-fdt: Only call match_fn callback if the type matches
 * Fix infinite recursion in tlbi_aa64_vmalle1_write()
 * ARM KVM: fix various bugs in handling of guest debugging
 * Correctly implement handling of HCR_EL2.{VI, VF}
 * Hyp mode R14 is shared with User and System
 * Give Cortex-A15 and -A7 the EL2 feature

----------------------------------------------------------------
Alex Bennée (6):
      target/arm64: properly handle DBGVR RESS bits
      target/arm64: hold BQL when calling do_interrupt()
      target/arm64: kvm debug set target_el when passing exception to guest
      tests/guest-debug: fix scoping of failcount
      arm: use symbolic MDCR_TDE in arm_debug_target_el
      arm: fix aa64_generate_debug_exceptions to work with EL2

Eric Auger (1):
      hw/arm/sysbus-fdt: Only call match_fn callback if the type matches

Peter Maydell (7):
      target/arm: Remove workaround for small SAU regions
      target/arm: Remove antique TODO comment
      Revert "target/arm: Implement HCR.VI and VF"
      target/arm: Track the state of our irq lines from the GIC explicitly
      target/arm: Correctly implement handling of HCR_EL2.{VI, VF}
      target/arm: Hyp mode R14 is shared with User and System
      target/arm/cpu: Give Cortex-A15 and -A7 the EL2 feature

Richard Henderson (1):
      target/arm: Fix typo in tlbi_aa64_vmalle1_write

Thomas Huth (1):
      MAINTAINERS: Add an entry for the 'collie' machine

 target/arm/cpu.h                  |  44 +++++++++++------
 target/arm/internals.h            |  34 +++++++++++++
 hw/arm/sysbus-fdt.c               |  12 +++--
 target/arm/cpu.c                  |  66 ++++++++++++++++++++++++-
 target/arm/helper.c               | 101 +++++++++++++-------------------------
 target/arm/kvm32.c                |   4 +-
 target/arm/kvm64.c                |  20 +++++++-
 target/arm/machine.c              |  51 +++++++++++++++++++
 target/arm/op_helper.c            |   4 +-
 MAINTAINERS                       |   7 +++
 tests/guest-debug/test-gdbstub.py |   1 +
 11 files changed, 248 insertions(+), 96 deletions(-)


Small pile of bug fixes for rc1. I've included my patches to get
our docs building with Sphinx 3, just for convenience...

-- PMM

The following changes since commit b149dea55cce97cb226683d06af61984a1c11e96:

  Merge remote-tracking branch 'remotes/cschoenebeck/tags/pull-9p-20201102' into staging (2020-11-02 10:57:48 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201102

for you to fetch changes up to ffb4fbf90a2f63c9cb33e4bb9f854c79bf04ca4a:

  tests/qtest/npcm7xx_rng-test: Disable randomness tests (2020-11-02 16:52:18 +0000)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Fix Neon emulation bugs on big-endian hosts
 * target/arm: fix handling of HCR.FB
 * target/arm: fix LORID_EL1 access check
 * disas/capstone: Fix monitor disassembly of >32 bytes
 * hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)
 * hw/arm/boot: fix SVE for EL3 direct kernel boot
 * hw/display/omap_lcdc: Fix potential NULL pointer dereference
 * hw/display/exynos4210_fimd: Fix potential NULL pointer dereference
 * target/arm: Get correct MMU index for other-security-state
 * configure: Test that gio libs from pkg-config work
 * hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
 * docs: Fix building with Sphinx 3
 * tests/qtest/npcm7xx_rng-test: Disable randomness tests

----------------------------------------------------------------
AlexChen (2):
      hw/display/omap_lcdc: Fix potential NULL pointer dereference
      hw/display/exynos4210_fimd: Fix potential NULL pointer dereference

Peter Maydell (9):
      target/arm: Fix float16 pairwise Neon ops on big-endian hosts
      target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts
      disas/capstone: Fix monitor disassembly of >32 bytes
      target/arm: Get correct MMU index for other-security-state
      configure: Test that gio libs from pkg-config work
      hw/intc/arm_gicv3_cpuif: Make GIC maintenance interrupts work
      scripts/kerneldoc: For Sphinx 3 use c:macro for macros with arguments
      qemu-option-trace.rst.inc: Don't use option:: markup
      tests/qtest/npcm7xx_rng-test: Disable randomness tests

Philippe Mathieu-Daudé (1):
      hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)

Richard Henderson (11):
      target/arm: Introduce neon_full_reg_offset
      target/arm: Move neon_element_offset to translate.c
      target/arm: Use neon_element_offset in neon_load/store_reg
      target/arm: Use neon_element_offset in vfp_reg_offset
      target/arm: Add read/write_neon_element32
      target/arm: Expand read/write_neon_element32 to all MemOp
      target/arm: Rename neon_load_reg32 to vfp_load_reg32
      target/arm: Add read/write_neon_element64
      target/arm: Rename neon_load_reg64 to vfp_load_reg64
      target/arm: Simplify do_long_3d and do_2scalar_long
      target/arm: Improve do_prewiden_3d

Rémi Denis-Courmont (3):
      target/arm: fix handling of HCR.FB
      target/arm: fix LORID_EL1 access check
      hw/arm/boot: fix SVE for EL3 direct kernel boot

 docs/qemu-option-trace.rst.inc     |   6 +-
 configure                          |  10 +-
 include/hw/intc/arm_gicv3_common.h |   1 -
 disas/capstone.c                   |   2 +-
 hw/arm/boot.c                      |   3 +
 hw/arm/smmuv3.c                    |   3 +-
 hw/display/exynos4210_fimd.c       |   4 +-
 hw/display/omap_lcdc.c             |  10 +-
 hw/intc/arm_gicv3_cpuif.c          |   5 +-
 target/arm/helper.c                |  24 +-
 target/arm/m_helper.c              |   3 +-
 target/arm/translate.c             | 153 +++++++++---
 target/arm/vec_helper.c            |  12 +-
 tests/qtest/npcm7xx_rng-test.c     |  14 +-
 scripts/kernel-doc                 |  18 +-
 target/arm/translate-neon.c.inc    | 472 ++++++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc     | 341 +++++++++++----------------
 17 files changed, 588 insertions(+), 493 deletions(-)
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.

We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].

All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
 * switch_mode() -- this is the only place where we fix
   an actual bug
 * aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
   no behavioural change as we already special-cased Hyp R14
 * kvm32.c: no behavioural change since the guest can't ever
   be in Hyp mode, but conceptually the right thing to do
 * msr_banked()/mrs_banked(): we can never get to the case
   that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
   so no behavioural change

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109173553.22341-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 16 ++++++++++++++++
 target/arm/helper.c    | 29 +++++++++++++++--------------
 target/arm/kvm32.c     |  4 ++--
 target/arm/op_helper.c |  4 ++--
 4 files changed, 35 insertions(+), 18 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
     g_assert_not_reached();
 }
 
+/**
+ * r14_bank_number: Map CPU mode onto register bank for r14
+ *
+ * Given an AArch32 CPU mode, return the index into the saved register
+ * banks to use for the R14 (LR) in that mode. This is the same as
+ * bank_number(), except for the special case of Hyp mode, where
+ * R14 is shared with USR and SYS, unlike its R13 and SPSR.
+ * This should be used as the index into env->banked_r14[], and
+ * bank_number() used for the index into env->banked_r13[] and
+ * env->banked_spsr[].
+ */
+static inline int r14_bank_number(int mode)
+{
+    return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
+}
+
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
 void arm_translate_init(void);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
 
     i = bank_number(old_mode);
     env->banked_r13[i] = env->regs[13];
-    env->banked_r14[i] = env->regs[14];
     env->banked_spsr[i] = env->spsr;
 
     i = bank_number(mode);
     env->regs[13] = env->banked_r13[i];
-    env->regs[14] = env->banked_r14[i];
     env->spsr = env->banked_spsr[i];
+
+    env->banked_r14[r14_bank_number(old_mode)] = env->regs[14];
+    env->regs[14] = env->banked_r14[r14_bank_number(mode)];
 }
 
 /* Physical Interrupt Target EL Lookup Table
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
         if (mode == ARM_CPU_MODE_HYP) {
             env->xregs[14] = env->regs[14];
         } else {
-            env->xregs[14] = env->banked_r14[bank_number(ARM_CPU_MODE_USR)];
+            env->xregs[14] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)];
         }
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
         env->xregs[16] = env->regs[14];
         env->xregs[17] = env->regs[13];
     } else {
-        env->xregs[16] = env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)];
+        env->xregs[16] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)];
         env->xregs[17] = env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
         env->xregs[18] = env->regs[14];
         env->xregs[19] = env->regs[13];
     } else {
-        env->xregs[18] = env->banked_r14[bank_number(ARM_CPU_MODE_SVC)];
+        env->xregs[18] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)];
         env->xregs[19] = env->banked_r13[bank_number(ARM_CPU_MODE_SVC)];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
         env->xregs[20] = env->regs[14];
         env->xregs[21] = env->regs[13];
     } else {
-        env->xregs[20] = env->banked_r14[bank_number(ARM_CPU_MODE_ABT)];
+        env->xregs[20] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)];
         env->xregs[21] = env->banked_r13[bank_number(ARM_CPU_MODE_ABT)];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
         env->xregs[22] = env->regs[14];
         env->xregs[23] = env->regs[13];
     } else {
-        env->xregs[22] = env->banked_r14[bank_number(ARM_CPU_MODE_UND)];
+        env->xregs[22] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)];
         env->xregs[23] = env->banked_r13[bank_number(ARM_CPU_MODE_UND)];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
             env->xregs[i] = env->fiq_regs[i - 24];
         }
         env->xregs[29] = env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)];
-        env->xregs[30] = env->banked_r14[bank_number(ARM_CPU_MODE_FIQ)];
+        env->xregs[30] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)];
     }
 
     env->pc = env->regs[15];
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
         if (mode == ARM_CPU_MODE_HYP) {
             env->regs[14] = env->xregs[14];
         } else {
-            env->banked_r14[bank_number(ARM_CPU_MODE_USR)] = env->xregs[14];
+            env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)] = env->xregs[14];
         }
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
         env->regs[14] = env->xregs[16];
         env->regs[13] = env->xregs[17];
     } else {
-        env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16];
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16];
         env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[17];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
         env->regs[14] = env->xregs[18];
         env->regs[13] = env->xregs[19];
     } else {
-        env->banked_r14[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18];
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18];
         env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[19];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
         env->regs[14] = env->xregs[20];
         env->regs[13] = env->xregs[21];
     } else {
-        env->banked_r14[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20];
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20];
         env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[21];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
         env->regs[14] = env->xregs[22];
         env->regs[13] = env->xregs[23];
     } else {
-        env->banked_r14[bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
         env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
     }
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
             env->fiq_regs[i - 24] = env->xregs[i];
         }
         env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[29];
-        env->banked_r14[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30];
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30];
     }
 
     env->regs[15] = env->pc;
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
     }
     env->banked_r13[bn] = env->regs[13];
-    env->banked_r14[bn] = env->regs[14];
     env->banked_spsr[bn] = env->spsr;
+    env->banked_r14[r14_bank_number(mode)] = env->regs[14];
 
     /* Now we can safely copy stuff down to the kernel */
     for (i = 0; i < ARRAY_SIZE(regs); i++) {
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
         memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
     }
     env->regs[13] = env->banked_r13[bn];
-    env->regs[14] = env->banked_r14[bn];
     env->spsr = env->banked_spsr[bn];
+    env->regs[14] = env->banked_r14[r14_bank_number(mode)];
 
     /* VFP registers */
     r.id = KVM_REG_ARM | KVM_REG_SIZE_U64 | KVM_REG_ARM_VFP;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(msr_banked)(CPUARMState *env, uint32_t value, uint32_t tgtmode,
         env->banked_r13[bank_number(tgtmode)] = value;
         break;
     case 14:
-        env->banked_r14[bank_number(tgtmode)] = value;
+        env->banked_r14[r14_bank_number(tgtmode)] = value;
         break;
     case 8 ... 12:
         switch (tgtmode) {
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mrs_banked)(CPUARMState *env, uint32_t tgtmode, uint32_t regno)
     case 13:
         return env->banked_r13[bank_number(tgtmode)];
     case 14:
-        return env->banked_r14[bank_number(tgtmode)];
+        return env->banked_r14[r14_bank_number(tgtmode)];
     case 8 ... 12:
         switch (tgtmode) {
         case ARM_CPU_MODE_USR:
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

This function makes it clear that we're talking about the whole
register, and not the 32-bit piece at index 0. This fixes a bug
when running on a big-endian host.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c          |  8 ++++++
 target/arm/translate-neon.c.inc | 44 ++++++++++++++++-----------------
 target/arm/translate-vfp.c.inc  |  2 +-
 3 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
     unallocated_encoding(s);
 }
 
+/*
+ * Return the offset of a "full" NEON Dreg.
+ */
+static long neon_full_reg_offset(unsigned reg)
+{
+    return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
+}
+
 static inline long vfp_reg_offset(bool dp, unsigned reg)
 {
     if (dp) {
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size)
         ofs ^= 8 - element_size;
     }
 #endif
-    return neon_reg_offset(reg, 0) + ofs;
+    return neon_full_reg_offset(reg) + ofs;
 }
 
 static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
                  * We cannot write 16 bytes at once because the
                  * destination is unaligned.
                  */
-                tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
+                tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
                                      8, 8, tmp);
-                tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
-                                 neon_reg_offset(vd, 0), 8, 8);
+                tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1),
+                                 neon_full_reg_offset(vd), 8, 8);
             } else {
-                tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
+                tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
                                      vec_size, vec_size, tmp);
             }
             tcg_gen_addi_i32(addr, addr, 1 << size);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
 static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
 {
     int vec_size = a->q ? 16 : 8;
-    int rd_ofs = neon_reg_offset(a->vd, 0);
-    int rn_ofs = neon_reg_offset(a->vn, 0);
-    int rm_ofs = neon_reg_offset(a->vm, 0);
+    int rd_ofs = neon_full_reg_offset(a->vd);
+    int rn_ofs = neon_full_reg_offset(a->vn);
+    int rm_ofs = neon_full_reg_offset(a->vm);
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
 {
     /* Handle a 2-reg-shift insn which can be vectorized. */
     int vec_size = a->q ? 16 : 8;
-    int rd_ofs = neon_reg_offset(a->vd, 0);
-    int rm_ofs = neon_reg_offset(a->vm, 0);
+    int rd_ofs = neon_full_reg_offset(a->vd);
+    int rm_ofs = neon_full_reg_offset(a->vm);
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
 {
     /* FP operations in 2-reg-and-shift group */
     int vec_size = a->q ? 16 : 8;
-    int rd_ofs = neon_reg_offset(a->vd, 0);
-    int rm_ofs = neon_reg_offset(a->vm, 0);
+    int rd_ofs = neon_full_reg_offset(a->vd);
+    int rm_ofs = neon_full_reg_offset(a->vm);
     TCGv_ptr fpst;
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
         return true;
     }
 
-    reg_ofs = neon_reg_offset(a->vd, 0);
+    reg_ofs = neon_full_reg_offset(a->vd);
     vec_size = a->q ? 16 : 8;
     imm = asimd_imm_const(a->imm, a->cmode, a->op);
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
         return true;
     }
 
-    tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
-                       neon_reg_offset(a->vn, 0),
-                       neon_reg_offset(a->vm, 0),
+    tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd),
+                       neon_full_reg_offset(a->vn),
+                       neon_full_reg_offset(a->vm),
                        16, 16, 0, fn_gvec);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
 {
     /* Two registers and a scalar, using gvec */
     int vec_size = a->q ? 16 : 8;
-    int rd_ofs = neon_reg_offset(a->vd, 0);
-    int rn_ofs = neon_reg_offset(a->vn, 0);
+    int rd_ofs = neon_full_reg_offset(a->vd);
+    int rn_ofs = neon_full_reg_offset(a->vn);
     int rm_ofs;
     int idx;
     TCGv_ptr fpstatus;
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
     /* a->vm is M:Vm, which encodes both register and index */
     idx = extract32(a->vm, a->size + 2, 2);
     a->vm = extract32(a->vm, 0, a->size + 2);
-    rm_ofs = neon_reg_offset(a->vm, 0);
+    rm_ofs = neon_full_reg_offset(a->vm);
 
     fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD);
     tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus,
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
         return true;
     }
 
-    tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
+    tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd),
                          neon_element_offset(a->vm, a->index, a->size),
                          a->q ? 16 : 8, a->q ? 16 : 8);
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
 static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn)
 {
     int vec_size = a->q ? 16 : 8;
-    int rd_ofs = neon_reg_offset(a->vd, 0);
-    int rm_ofs = neon_reg_offset(a->vm, 0);
+    int rd_ofs = neon_full_reg_offset(a->vd);
+    int rm_ofs = neon_full_reg_offset(a->vm);
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
     }
 
     tmp = load_reg(s, a->rt);
-    tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0),
+    tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn),
                          vec_size, vec_size, tmp);
     tcg_temp_free_i32(tmp);
 
--
2.20.1
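The endianness hazard that the neon_full_reg_offset patch above closes is easy to reproduce in isolation: the "32-bit piece at index 0" of a 64-bit register slot is the low word only on a little-endian host. A minimal standalone sketch (plain C, not QEMU code; the union stands in for the vfp.zregs backing storage):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for one 64-bit D-register backed by host memory. */
union dreg {
    uint64_t d;
    uint32_t w[2];
};

int main(void)
{
    union dreg r;
    r.d = 0x1122334455667788ull;

    /*
     * On a little-endian host the least significant word is w[0];
     * on a big-endian host it is w[1]. Code that assumes byte
     * offset 0 always holds the low word therefore reads the wrong
     * half on big-endian, which is why addressing the whole Dreg
     * (rather than "the piece at index 0") fixes the bug.
     */
    printf("w[0] = %08x, w[1] = %08x\n", r.w[0], r.w[1]);
    return 0;
}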
From: Richard Henderson <richard.henderson@linaro.org>

This will shortly have users outside of translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c          | 20 ++++++++++++++++++++
 target/arm/translate-neon.c.inc | 19 -------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
     return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
 }
 
+/*
+ * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static long neon_element_offset(int reg, int element, MemOp size)
+{
+    int element_size = 1 << size;
+    int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+    /*
+     * Calculate the offset assuming fully little-endian,
+     * then XOR to account for the order of the 8-byte units.
+     */
+    if (element_size < 8) {
+        ofs ^= 8 - element_size;
+    }
+#endif
+    return neon_full_reg_offset(reg) + ofs;
+}
+
 static inline long vfp_reg_offset(bool dp, unsigned reg)
 {
     if (dp) {
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x)
 #include "decode-neon-ls.c.inc"
 #include "decode-neon-shared.c.inc"
 
-/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
- * where 0 is the least significant end of the register.
- */
-static inline long
-neon_element_offset(int reg, int element, MemOp size)
-{
-    int element_size = 1 << size;
-    int ofs = element * element_size;
-#ifdef HOST_WORDS_BIGENDIAN
-    /* Calculate the offset assuming fully little-endian,
-     * then XOR to account for the order of the 8-byte units.
-     */
-    if (element_size < 8) {
-        ofs ^= 8 - element_size;
-    }
-#endif
-    return neon_full_reg_offset(reg) + ofs;
-}
-
 static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
--
2.20.1
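The XOR in neon_element_offset() above is easiest to see with concrete numbers. A minimal standalone sketch that mirrors the patch's arithmetic (the big_endian flag stands in for QEMU's HOST_WORDS_BIGENDIAN):

#include <stdio.h>

/* Mirror of the patch's offset arithmetic; size_log2 is log2 of the element size. */
static long element_offset(int element, int size_log2, int big_endian)
{
    int element_size = 1 << size_log2;
    int ofs = element * element_size;

    /*
     * Within each 8-byte unit a big-endian host stores the bytes in
     * the reverse order, so flip the sub-unit offset: for 2-byte
     * elements, offsets 0,2,4,6 become 6,4,2,0.
     */
    if (big_endian && element_size < 8) {
        ofs ^= 8 - element_size;
    }
    return ofs;
}

int main(void)
{
    for (int ele = 0; ele < 4; ele++) {
        printf("16-bit element %d: LE offset %ld, BE offset %ld\n",
               ele, element_offset(ele, 1, 0), element_offset(ele, 1, 1));
    }
    return 0;
}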
From: Alex Bennée <alex.bennee@linaro.org>

When we are debugging the guest all exceptions come our way but might
be for the guest's own debug exceptions. We use the ->do_interrupt()
infrastructure to inject the exception into the guest. However, we are
missing a full setup of the exception structure, causing an assert
later down the line.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
         cs->exception_index = EXCP_BKPT;
     }
     env->exception.syndrome = debug_exit->hsr;
     env->exception.vaddress = debug_exit->far;
+    env->exception.target_el = 1;
     qemu_mutex_lock_iothread();
     cc->do_interrupt(cs);
     qemu_mutex_unlock_iothread();
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

These are the only users of neon_reg_offset, so remove that.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_reg_offset(bool dp, unsigned reg)
     }
 }
 
-/* Return the offset of a 32-bit piece of a NEON register.
-   zero is the least significant end of the register.  */
-static inline long
-neon_reg_offset (int reg, int n)
-{
-    int sreg;
-    sreg = reg * 2 + n;
-    return vfp_reg_offset(0, sreg);
-}
-
 static TCGv_i32 neon_load_reg(int reg, int pass)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    tcg_gen_ld_i32(tmp, cpu_env, neon_reg_offset(reg, pass));
+    tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
     return tmp;
 }
 
 static void neon_store_reg(int reg, int pass, TCGv_i32 var)
 {
-    tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
+    tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
     tcg_temp_free_i32(var);
 }
 
--
2.20.1
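The kvm64.c fix above is about initialising every field of the exception record before handing it to do_interrupt(). A hedged sketch of that invariant (standalone C with simplified stand-in types, not QEMU's real structures; the syndrome value is an arbitrary placeholder):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the CPU exception state the patch touches. */
struct exception_info {
    uint32_t syndrome;
    uint64_t vaddress;
    int target_el;      /* the field the patch now initialises */
};

static void do_interrupt(const struct exception_info *exc)
{
    /* Exception entry sanity-checks the target EL; an unset 0 trips it. */
    assert(exc->target_el >= 1 && exc->target_el <= 3);
    printf("injecting debug exception at EL%d\n", exc->target_el);
}

int main(void)
{
    struct exception_info exc = { .syndrome = 0x12345678, .vaddress = 0x1000 };
    exc.target_el = 1;  /* without this line the assert above would fire */
    do_interrupt(&exc);
    return 0;
}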
From: Alex Bennée <alex.bennee@linaro.org>

The test was incomplete and incorrectly caused debug exceptions to be
generated when returning to EL2 after a failed attempt to single-step
an EL1 instruction. Fix this while cleaning up the function a little.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-8-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_v7m_csselr_razwi(ARMCPU *cpu)
     return (cpu->clidr & R_V7M_CLIDR_CTYPE_ALL_MASK) != 0;
 }
 
+/* See AArch64.GenerateDebugExceptionsFrom() in ARM ARM pseudocode */
 static inline bool aa64_generate_debug_exceptions(CPUARMState *env)
 {
-    if (arm_is_secure(env)) {
-        /* MDCR_EL3.SDD disables debug events from Secure state */
-        if (extract32(env->cp15.mdcr_el3, 16, 1) != 0
-            || arm_current_el(env) == 3) {
-            return false;
-        }
+    int cur_el = arm_current_el(env);
+    int debug_el;
+
+    if (cur_el == 3) {
+        return false;
     }
 
-    if (arm_current_el(env) == arm_debug_target_el(env)) {
-        if ((extract32(env->cp15.mdscr_el1, 13, 1) == 0)
-            || (env->daif & PSTATE_D)) {
-            return false;
-        }
+    /* MDCR_EL3.SDD disables debug events from Secure state */
+    if (arm_is_secure_below_el3(env)
+        && extract32(env->cp15.mdcr_el3, 16, 1)) {
+        return false;
     }
-    return true;
+
+    /*
+     * Same EL to same EL debug exceptions need MDSCR_KDE enabled
+     * while not masking the (D)ebug bit in DAIF.
+     */
+    debug_el = arm_debug_target_el(env);
+
+    if (cur_el == debug_el) {
+        return extract32(env->cp15.mdscr_el1, 13, 1)
+            && !(env->daif & PSTATE_D);
+    }
+
+    /* Otherwise the debug target needs to be a higher EL */
+    return debug_el > cur_el;
 }
 
 static inline bool aa32_generate_debug_exceptions(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ static inline bool aa32_generate_debug_exceptions(CPUARMState *env)
  * since the pseudocode has it at all callsites except for the one in
  * CheckSoftwareStep(), where it is elided because both branches would
  * always return the same value.
- *
- * Parts of the pseudocode relating to EL2 and EL3 are omitted because we
- * don't yet implement those exception levels or their associated trap bits.
  */
 static inline bool arm_generate_debug_exceptions(CPUARMState *env)
 {
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

This seems a bit more readable than using offsetof CPU_DoubleU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size)
     return neon_full_reg_offset(reg) + ofs;
 }
 
-static inline long vfp_reg_offset(bool dp, unsigned reg)
+/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */
+static long vfp_reg_offset(bool dp, unsigned reg)
 {
     if (dp) {
-        return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
+        return neon_element_offset(reg, 0, MO_64);
     } else {
-        long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]);
-        if (reg & 1) {
-            ofs += offsetof(CPU_DoubleU, l.upper);
-        } else {
-            ofs += offsetof(CPU_DoubleU, l.lower);
-        }
-        return ofs;
+        return neon_element_offset(reg >> 1, reg & 1, MO_32);
     }
 }
 
--
2.20.1
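Boiled down, the rewritten aa64_generate_debug_exceptions() above is a short decision tree. A standalone distillation with the register fields flattened to plain parameters (a sketch, not the QEMU function itself):

#include <stdbool.h>
#include <stdio.h>

static bool generate_debug_exceptions(int cur_el, int debug_el,
                                      bool secure, bool mdcr_sdd,
                                      bool mdscr_kde, bool daif_d)
{
    if (cur_el == 3) {
        return false;                /* never generated from EL3 */
    }
    if (secure && mdcr_sdd) {
        return false;                /* MDCR_EL3.SDD masks Secure-state debug */
    }
    if (cur_el == debug_el) {
        return mdscr_kde && !daif_d; /* same-EL needs KDE set and PSTATE.D clear */
    }
    return debug_el > cur_el;        /* otherwise only toward a higher EL */
}

int main(void)
{
    /* Running at EL2 with debug target EL2 and KDE clear: no exception. */
    printf("EL2 -> EL2, KDE=0: %d\n",
           generate_debug_exceptions(2, 2, false, false, false, false));
    /* Running at EL1 with debug target EL2: exception is generated. */
    printf("EL1 -> EL2:        %d\n",
           generate_debug_exceptions(1, 2, false, false, false, false));
    return 0;
}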
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.

In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.

Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109134731.11605-4-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++
 target/arm/cpu.c       | 48 +++++++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c    | 20 ++++++++++++++++--
 3 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline const char *aarch32_mode_name(uint32_t psr)
     return cpu_mode_names[psr & 0xf];
 }
 
+/**
+ * arm_cpu_update_virq: Update CPU_INTERRUPT_VIRQ bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VIRQ bit in cs->interrupt_request, following
+ * a change to either the input VIRQ line from the GIC or the HCR_EL2.VI bit.
+ * Must be called with the iothread lock held.
+ */
+void arm_cpu_update_virq(ARMCPU *cpu);
+
+/**
+ * arm_cpu_update_vfiq: Update CPU_INTERRUPT_VFIQ bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VFIQ bit in cs->interrupt_request, following
+ * a change to either the input VFIQ line from the GIC or the HCR_EL2.VF bit.
+ * Must be called with the iothread lock held.
+ */
+void arm_cpu_update_vfiq(ARMCPU *cpu);
+
 #endif
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 }
 #endif
 
+void arm_cpu_update_virq(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VIRQ, which is the logical OR of
+     * the HCR_EL2.VI bit and the input line level from the GIC.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
+        (env->irq_line_state & CPU_INTERRUPT_VIRQ);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
+        }
+    }
+}
+
+void arm_cpu_update_vfiq(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VFIQ, which is the logical OR of
+     * the HCR_EL2.VF bit and the input line level from the GIC.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
+        (env->irq_line_state & CPU_INTERRUPT_VFIQ);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
+        }
+    }
+}
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
 
     switch (irq) {
     case ARM_CPU_VIRQ:
+        assert(arm_feature(env, ARM_FEATURE_EL2));
+        arm_cpu_update_virq(cpu);
+        break;
     case ARM_CPU_VFIQ:
         assert(arm_feature(env, ARM_FEATURE_EL2));
-        /* fall through */
+        arm_cpu_update_vfiq(cpu);
+        break;
     case ARM_CPU_IRQ:
     case ARM_CPU_FIQ:
         if (level) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         tlb_flush(CPU(cpu));
     }
     env->cp15.hcr_el2 = value;
+
+    /*
+     * Updates to VI and VF require us to update the status of
+     * virtual interrupts, which are the logical OR of these bits
+     * and the state of the input lines from the GIC. (This requires
+     * that we have the iothread lock, which is done by marking the
+     * reginfo structs as ARM_CP_IO.)
+     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+     * possible for it to be taken immediately, because VIRQ and
+     * VFIQ are masked unless running at EL0 or EL1, and HCR
+     * can only be written at EL2.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+    arm_cpu_update_virq(cpu);
+    arm_cpu_update_vfiq(cpu);
 }
 
 static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
 
 static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+      .type = ARM_CP_IO,
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
       .writefn = hcr_write },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
       .writefn = hcr_writelow },
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
 
 static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
     { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
       .access = PL2_RW,
       .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

Model these off the aa64 read/write_vec_element functions.
Use it within translate-neon.c.inc. The new functions do
not allocate or free temps, so this rearranges the calling
code a bit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c          |  26 ++++
 target/arm/translate-neon.c.inc | 256 ++++++++++++++++++++------------
 2 files changed, 183 insertions(+), 99 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
     tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
 }
 
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
+{
+    long off = neon_element_offset(reg, ele, size);
+
+    switch (size) {
+    case MO_32:
+        tcg_gen_ld_i32(dest, cpu_env, off);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
+{
+    long off = neon_element_offset(reg, ele, size);
+
+    switch (size) {
+    case MO_32:
+        tcg_gen_st_i32(src, cpu_env, off);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
 {
     TCGv_ptr ret = tcg_temp_new_ptr();
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
      * early. Since Q is 0 there are always just two passes, so instead
      * of a complicated loop over each pass we just unroll.
      */
-    tmp = neon_load_reg(a->vn, 0);
-    tmp2 = neon_load_reg(a->vn, 1);
+    tmp = tcg_temp_new_i32();
+    tmp2 = tcg_temp_new_i32();
+    tmp3 = tcg_temp_new_i32();
+
+    read_neon_element32(tmp, a->vn, 0, MO_32);
+    read_neon_element32(tmp2, a->vn, 1, MO_32);
     fn(tmp, tmp, tmp2);
-    tcg_temp_free_i32(tmp2);
 
-    tmp3 = neon_load_reg(a->vm, 0);
-    tmp2 = neon_load_reg(a->vm, 1);
+    read_neon_element32(tmp3, a->vm, 0, MO_32);
+    read_neon_element32(tmp2, a->vm, 1, MO_32);
     fn(tmp3, tmp3, tmp2);
-    tcg_temp_free_i32(tmp2);
 
-    neon_store_reg(a->vd, 0, tmp);
-    neon_store_reg(a->vd, 1, tmp3);
+    write_neon_element32(tmp, a->vd, 0, MO_32);
+    write_neon_element32(tmp3, a->vd, 1, MO_32);
+
+    tcg_temp_free_i32(tmp);
+    tcg_temp_free_i32(tmp2);
+    tcg_temp_free_i32(tmp3);
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
      * 2-reg-and-shift operations, size < 3 case, where the
      * helper needs to be passed cpu_env.
      */
-    TCGv_i32 constimm;
+    TCGv_i32 constimm, tmp;
     int pass;
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
      * by immediate using the variable shift operations.
      */
     constimm = tcg_const_i32(dup_const(a->size, a->shift));
+    tmp = tcg_temp_new_i32();
 
     for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
-        TCGv_i32 tmp = neon_load_reg(a->vm, pass);
+        read_neon_element32(tmp, a->vm, pass, MO_32);
         fn(tmp, cpu_env, tmp, constimm);
-        neon_store_reg(a->vd, pass, tmp);
+        write_neon_element32(tmp, a->vd, pass, MO_32);
     }
+    tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(constimm);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
     constimm = tcg_const_i64(-a->shift);
     rm1 = tcg_temp_new_i64();
     rm2 = tcg_temp_new_i64();
+    rd = tcg_temp_new_i32();
 
     /* Load both inputs first to avoid potential overwrite if rm == rd */
     neon_load_reg64(rm1, a->vm);
     neon_load_reg64(rm2, a->vm + 1);
 
     shiftfn(rm1, rm1, constimm);
-    rd = tcg_temp_new_i32();
     narrowfn(rd, cpu_env, rm1);
-    neon_store_reg(a->vd, 0, rd);
+    write_neon_element32(rd, a->vd, 0, MO_32);
 
     shiftfn(rm2, rm2, constimm);
-    rd = tcg_temp_new_i32();
     narrowfn(rd, cpu_env, rm2);
-    neon_store_reg(a->vd, 1, rd);
+    write_neon_element32(rd, a->vd, 1, MO_32);
 
+    tcg_temp_free_i32(rd);
     tcg_temp_free_i64(rm1);
     tcg_temp_free_i64(rm2);
     tcg_temp_free_i64(constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
     constimm = tcg_const_i32(imm);
 
     /* Load all inputs first to avoid potential overwrite */
-    rm1 = neon_load_reg(a->vm, 0);
-    rm2 = neon_load_reg(a->vm, 1);
-    rm3 = neon_load_reg(a->vm + 1, 0);
-    rm4 = neon_load_reg(a->vm + 1, 1);
+    rm1 = tcg_temp_new_i32();
+    rm2 = tcg_temp_new_i32();
+    rm3 = tcg_temp_new_i32();
+    rm4 = tcg_temp_new_i32();
+    read_neon_element32(rm1, a->vm, 0, MO_32);
+    read_neon_element32(rm2, a->vm, 1, MO_32);
+    read_neon_element32(rm3, a->vm, 2, MO_32);
+    read_neon_element32(rm4, a->vm, 3, MO_32);
     rtmp = tcg_temp_new_i64();
 
     shiftfn(rm1, rm1, constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
     tcg_temp_free_i32(rm2);
 
     narrowfn(rm1, cpu_env, rtmp);
-    neon_store_reg(a->vd, 0, rm1);
+    write_neon_element32(rm1, a->vd, 0, MO_32);
+    tcg_temp_free_i32(rm1);
 
     shiftfn(rm3, rm3, constimm);
     shiftfn(rm4, rm4, constimm);
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
 
     narrowfn(rm3, cpu_env, rtmp);
     tcg_temp_free_i64(rtmp);
-    neon_store_reg(a->vd, 1, rm3);
+    write_neon_element32(rm3, a->vd, 1, MO_32);
+    tcg_temp_free_i32(rm3);
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
         widen_mask = dup_const(a->size + 1, widen_mask);
     }
 
-    rm0 = neon_load_reg(a->vm, 0);
-    rm1 = neon_load_reg(a->vm, 1);
+    rm0 = tcg_temp_new_i32();
+    rm1 = tcg_temp_new_i32();
+    read_neon_element32(rm0, a->vm, 0, MO_32);
+    read_neon_element32(rm1, a->vm, 1, MO_32);
     tmp = tcg_temp_new_i64();
 
     widenfn(tmp, rm0);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
         if (src1_wide) {
             neon_load_reg64(rn0_64, a->vn);
         } else {
-            TCGv_i32 tmp = neon_load_reg(a->vn, 0);
+            TCGv_i32 tmp = tcg_temp_new_i32();
+            read_neon_element32(tmp, a->vn, 0, MO_32);
             widenfn(rn0_64, tmp);
             tcg_temp_free_i32(tmp);
         }
-        rm = neon_load_reg(a->vm, 0);
+        rm = tcg_temp_new_i32();
+        read_neon_element32(rm, a->vm, 0, MO_32);
 
         widenfn(rm_64, rm);
         tcg_temp_free_i32(rm);
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
         if (src1_wide) {
             neon_load_reg64(rn1_64, a->vn + 1);
         } else {
-            TCGv_i32 tmp = neon_load_reg(a->vn, 1);
+            TCGv_i32 tmp = tcg_temp_new_i32();
+            read_neon_element32(tmp, a->vn, 1, MO_32);
             widenfn(rn1_64, tmp);
             tcg_temp_free_i32(tmp);
         }
-        rm = neon_load_reg(a->vm, 1);
+        rm = tcg_temp_new_i32();
+        read_neon_element32(rm, a->vm, 1, MO_32);
 
         neon_store_reg64(rn0_64, a->vd);
 
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
 
     narrowfn(rd1, rn_64);
 
-    neon_store_reg(a->vd, 0, rd0);
-    neon_store_reg(a->vd, 1, rd1);
+    write_neon_element32(rd0, a->vd, 0, MO_32);
+    write_neon_element32(rd1, a->vd, 1, MO_32);
 
+    tcg_temp_free_i32(rd0);
+    tcg_temp_free_i32(rd1);
     tcg_temp_free_i64(rn_64);
     tcg_temp_free_i64(rm_64);
 
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
     rd0 = tcg_temp_new_i64();
     rd1 = tcg_temp_new_i64();
 
-    rn = neon_load_reg(a->vn, 0);
-    rm = neon_load_reg(a->vm, 0);
+    rn = tcg_temp_new_i32();
+    rm = tcg_temp_new_i32();
+    read_neon_element32(rn, a->vn, 0, MO_32);
+    read_neon_element32(rm, a->vm, 0, MO_32);
     opfn(rd0, rn, rm);
-    tcg_temp_free_i32(rn);
-    tcg_temp_free_i32(rm);
 
-    rn = neon_load_reg(a->vn, 1);
-    rm = neon_load_reg(a->vm, 1);
+    read_neon_element32(rn, a->vn, 1, MO_32);
+    read_neon_element32(rm, a->vm, 1, MO_32);
     opfn(rd1, rn, rm);
     tcg_temp_free_i32(rn);
     tcg_temp_free_i32(rm);
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
 
 static inline TCGv_i32 neon_get_scalar(int size, int reg)
 {
-    TCGv_i32 tmp;
-    if (size == 1) {
-        tmp = neon_load_reg(reg & 7, reg >> 4);
+    TCGv_i32 tmp = tcg_temp_new_i32();
+    if (size == MO_16) {
+        read_neon_element32(tmp, reg & 7, reg >> 4, MO_32);
         if (reg & 8) {
             gen_neon_dup_high16(tmp);
         } else {
             gen_neon_dup_low16(tmp);
         }
     } else {
-        tmp = neon_load_reg(reg & 15, reg >> 4);
+        read_neon_element32(tmp, reg & 15, reg >> 4, MO_32);
     }
     return tmp;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
      * perform an accumulation operation of that result into the
      * destination.
      */
-    TCGv_i32 scalar;
+    TCGv_i32 scalar, tmp;
     int pass;
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
     }
 
     scalar = neon_get_scalar(a->size, a->vm);
+    tmp = tcg_temp_new_i32();
 
     for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
-        TCGv_i32 tmp = neon_load_reg(a->vn, pass);
+        read_neon_element32(tmp, a->vn, pass, MO_32);
         opfn(tmp, tmp, scalar);
         if (accfn) {
-            TCGv_i32 rd = neon_load_reg(a->vd, pass);
+            TCGv_i32 rd = tcg_temp_new_i32();
+            read_neon_element32(rd, a->vd, pass, MO_32);
             accfn(tmp, rd, tmp);
             tcg_temp_free_i32(rd);
         }
-        neon_store_reg(a->vd, pass, tmp);
+        write_neon_element32(tmp, a->vd, pass, MO_32);
     }
+    tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(scalar);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
      * performs a kind of fused op-then-accumulate using a helper
      * function that takes all of rd, rn and the scalar at once.
      */
-    TCGv_i32 scalar;
+    TCGv_i32 scalar, rn, rd;
     int pass;
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
     }
 
     scalar = neon_get_scalar(a->size, a->vm);
+    rn = tcg_temp_new_i32();
+    rd = tcg_temp_new_i32();
 
     for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
-        TCGv_i32 rn = neon_load_reg(a->vn, pass);
-        TCGv_i32 rd = neon_load_reg(a->vd, pass);
+        read_neon_element32(rn, a->vn, pass, MO_32);
+        read_neon_element32(rd, a->vd, pass, MO_32);
         opfn(rd, cpu_env, rn, scalar, rd);
-        tcg_temp_free_i32(rn);
-        neon_store_reg(a->vd, pass, rd);
+        write_neon_element32(rd, a->vd, pass, MO_32);
     }
+    tcg_temp_free_i32(rn);
+    tcg_temp_free_i32(rd);
     tcg_temp_free_i32(scalar);
 
     return true;
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
     scalar = neon_get_scalar(a->size, a->vm);
 
     /* Load all inputs before writing any outputs, in case of overlap */
-    rn = neon_load_reg(a->vn, 0);
+    rn = tcg_temp_new_i32();
+    read_neon_element32(rn, a->vn, 0, MO_32);
     rn0_64 = tcg_temp_new_i64();
     opfn(rn0_64, rn, scalar);
-    tcg_temp_free_i32(rn);
 
-    rn = neon_load_reg(a->vn, 1);
+    read_neon_element32(rn, a->vn, 1, MO_32);
     rn1_64 = tcg_temp_new_i64();
     opfn(rn1_64, rn, scalar);
     tcg_temp_free_i32(rn);
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
         return false;
     }
     n <<= 3;
+    tmp = tcg_temp_new_i32();
     if (a->op) {
-        tmp = neon_load_reg(a->vd, 0);
+        read_neon_element32(tmp, a->vd, 0, MO_32);
     } else {
-        tmp = tcg_temp_new_i32();
         tcg_gen_movi_i32(tmp, 0);
     }
-    tmp2 = neon_load_reg(a->vm, 0);
+    tmp2 = tcg_temp_new_i32();
+    read_neon_element32(tmp2, a->vm, 0, MO_32);
     ptr1 = vfp_reg_ptr(true, a->vn);
     tmp4 = tcg_const_i32(n);
     gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
-    tcg_temp_free_i32(tmp);
+
     if (a->op) {
-        tmp = neon_load_reg(a->vd, 1);
+        read_neon_element32(tmp, a->vd, 1, MO_32);
     } else {
-        tmp = tcg_temp_new_i32();
         tcg_gen_movi_i32(tmp, 0);
     }
-    tmp3 = neon_load_reg(a->vm, 1);
+    tmp3 = tcg_temp_new_i32();
+    read_neon_element32(tmp3, a->vm, 1, MO_32);
     gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
+    tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(tmp4);
     tcg_temp_free_ptr(ptr1);
-    neon_store_reg(a->vd, 0, tmp2);
-    neon_store_reg(a->vd, 1, tmp3);
-    tcg_temp_free_i32(tmp);
+
+    write_neon_element32(tmp2, a->vd, 0, MO_32);
+    write_neon_element32(tmp3, a->vd, 1, MO_32);
+    tcg_temp_free_i32(tmp2);
+    tcg_temp_free_i32(tmp3);
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
 static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
 {
     int pass, half;
+    TCGv_i32 tmp[2];
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
         return true;
     }
 
-    for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
-        TCGv_i32 tmp[2];
+    tmp[0] = tcg_temp_new_i32();
+    tmp[1] = tcg_temp_new_i32();
 
+    for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
         for (half = 0; half < 2; half++) {
-            tmp[half] = neon_load_reg(a->vm, pass * 2 + half);
+            read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
             switch (a->size) {
             case 0:
                 tcg_gen_bswap32_i32(tmp[half], tmp[half]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
                 g_assert_not_reached();
             }
         }
-        neon_store_reg(a->vd, pass * 2, tmp[1]);
-        neon_store_reg(a->vd, pass * 2 + 1, tmp[0]);
+        write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
+        write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
     }
+
+    tcg_temp_free_i32(tmp[0]);
+    tcg_temp_free_i32(tmp[1]);
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
         rm0_64 = tcg_temp_new_i64();
         rm1_64 = tcg_temp_new_i64();
         rd_64 = tcg_temp_new_i64();
-        tmp = neon_load_reg(a->vm, pass * 2);
+
+        tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, a->vm, pass * 2, MO_32);
         widenfn(rm0_64, tmp);
-        tcg_temp_free_i32(tmp);
-        tmp = neon_load_reg(a->vm, pass * 2 + 1);
+        read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
         widenfn(rm1_64, tmp);
         tcg_temp_free_i32(tmp);
+
         opfn(rd_64, rm0_64, rm1_64);
         tcg_temp_free_i64(rm0_64);
         tcg_temp_free_i64(rm1_64);
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
     narrowfn(rd0, cpu_env, rm);
     neon_load_reg64(rm, a->vm + 1);
     narrowfn(rd1, cpu_env, rm);
-    neon_store_reg(a->vd, 0, rd0);
-    neon_store_reg(a->vd, 1, rd1);
+    write_neon_element32(rd0, a->vd, 0, MO_32);
+    write_neon_element32(rd1, a->vd, 1, MO_32);
+    tcg_temp_free_i32(rd0);
+    tcg_temp_free_i32(rd1);
     tcg_temp_free_i64(rm);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
     }
 
     rd = tcg_temp_new_i64();
+    rm0 = tcg_temp_new_i32();
+    rm1 = tcg_temp_new_i32();
 
-    rm0 = neon_load_reg(a->vm, 0);
-    rm1 = neon_load_reg(a->vm, 1);
+    read_neon_element32(rm0, a->vm, 0, MO_32);
+    read_neon_element32(rm1, a->vm, 1, MO_32);
 
     widenfn(rd, rm0);
     tcg_gen_shli_i64(rd, rd, 8 << a->size);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
 
     fpst = fpstatus_ptr(FPST_STD);
     ahp = get_ahp_flag();
-    tmp = neon_load_reg(a->vm, 0);
+    tmp = tcg_temp_new_i32();
+    read_neon_element32(tmp, a->vm, 0, MO_32);
     gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
-    tmp2 = neon_load_reg(a->vm, 1);
+    tmp2 = tcg_temp_new_i32();
+    read_neon_element32(tmp2, a->vm, 1, MO_32);
     gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
     tcg_gen_shli_i32(tmp2, tmp2, 16);
     tcg_gen_or_i32(tmp2, tmp2, tmp);
-    tcg_temp_free_i32(tmp);
-    tmp = neon_load_reg(a->vm, 2);
+    read_neon_element32(tmp, a->vm, 2, MO_32);
     gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
-    tmp3 = neon_load_reg(a->vm, 3);
-    neon_store_reg(a->vd, 0, tmp2);
+    tmp3 = tcg_temp_new_i32();
+    read_neon_element32(tmp3, a->vm, 3, MO_32);
+    write_neon_element32(tmp2, a->vd, 0, MO_32);
+    tcg_temp_free_i32(tmp2);
     gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
     tcg_gen_shli_i32(tmp3, tmp3, 16);
     tcg_gen_or_i32(tmp3, tmp3, tmp);
-    neon_store_reg(a->vd, 1, tmp3);
+    write_neon_element32(tmp3, a->vd, 1, MO_32);
+    tcg_temp_free_i32(tmp3);
     tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(ahp);
     tcg_temp_free_ptr(fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
     fpst = fpstatus_ptr(FPST_STD);
     ahp = get_ahp_flag();
     tmp3 = tcg_temp_new_i32();
-    tmp = neon_load_reg(a->vm, 0);
-    tmp2 = neon_load_reg(a->vm, 1);
+    tmp2 = tcg_temp_new_i32();
+    tmp = tcg_temp_new_i32();
+    read_neon_element32(tmp, a->vm, 0, MO_32);
+    read_neon_element32(tmp2, a->vm, 1, MO_32);
     tcg_gen_ext16u_i32(tmp3, tmp);
     gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
-    neon_store_reg(a->vd, 0, tmp3);
+    write_neon_element32(tmp3, a->vd, 0, MO_32);
     tcg_gen_shri_i32(tmp, tmp, 16);
     gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
-    neon_store_reg(a->vd, 1, tmp);
-    tmp3 = tcg_temp_new_i32();
+    write_neon_element32(tmp, a->vd, 1, MO_32);
+    tcg_temp_free_i32(tmp);
     tcg_gen_ext16u_i32(tmp3, tmp2);
     gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
-    neon_store_reg(a->vd, 2, tmp3);
+    write_neon_element32(tmp3, a->vd, 2, MO_32);
+    tcg_temp_free_i32(tmp3);
     tcg_gen_shri_i32(tmp2, tmp2, 16);
     gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
-    neon_store_reg(a->vd, 3, tmp2);
+    write_neon_element32(tmp2, a->vd, 3, MO_32);
+    tcg_temp_free_i32(tmp2);
     tcg_temp_free_i32(ahp);
     tcg_temp_free_ptr(fpst);
 
@@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2)
 
 static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
 {
+    TCGv_i32 tmp;
     int pass;
 
     /* Handle a 2-reg-misc operation by iterating 32 bits at a time */
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
         return true;
     }
 
+    tmp = tcg_temp_new_i32();
     for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
-        TCGv_i32 tmp = neon_load_reg(a->vm, pass);
+        read_neon_element32(tmp, a->vm, pass, MO_32);
         fn(tmp, tmp);
-        neon_store_reg(a->vd, pass, tmp);
+        write_neon_element32(tmp, a->vd, pass, MO_32);
     }
+    tcg_temp_free_i32(tmp);
 
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a)
         return true;
     }
 
-    if (a->size == 2) {
+    tmp = tcg_temp_new_i32();
+    tmp2 = tcg_temp_new_i32();
+    if (a->size == MO_32) {
         for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) {
-            tmp = neon_load_reg(a->vm, pass);
-            tmp2 = neon_load_reg(a->vd, pass + 1);
-            neon_store_reg(a->vm, pass, tmp2);
-            neon_store_reg(a->vd, pass + 1, tmp);
+            read_neon_element32(tmp, a->vm, pass, MO_32);
+            read_neon_element32(tmp2, a->vd, pass + 1, MO_32);
+            write_neon_element32(tmp2, a->vm, pass, MO_32);
+            write_neon_element32(tmp, a->vd, pass + 1, MO_32);
         }
     } else {
         for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
-            tmp = neon_load_reg(a->vm, pass);
-            tmp2 = neon_load_reg(a->vd, pass);
-            if (a->size == 0) {
+            read_neon_element32(tmp, a->vm, pass, MO_32);
+            read_neon_element32(tmp2, a->vd, pass, MO_32);
+            if (a->size == MO_8) {
                 gen_neon_trn_u8(tmp, tmp2);
             } else {
                 gen_neon_trn_u16(tmp, tmp2);
             }
-            neon_store_reg(a->vm, pass, tmp2);
-            neon_store_reg(a->vd, pass, tmp);
+            write_neon_element32(tmp2, a->vm, pass, MO_32);
+            write_neon_element32(tmp, a->vd, pass, MO_32);
         }
     }
+    tcg_temp_free_i32(tmp);
+    tcg_temp_free_i32(tmp2);
     return true;
 }
--
2.20.1
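[Editor's note: the read_neon_element32()/write_neon_element32() conversions above replace whole-register neon_load_reg()/neon_store_reg() with per-element accessors, whose offsets can be corrected for host byte order. Below is a minimal standalone C sketch of that offset rule; the function name and the 8-byte register-word assumption are illustrative, not QEMU's exact code:]

    #include <stdint.h>

    /* Offset of element 'ele' of size 2**size_log2 bytes within a
     * register file stored as host-order 64-bit words. */
    static long element_offset_sketch(int ele, int size_log2)
    {
        int element_size = 1 << size_log2;
        long ofs = ele * element_size;
    #ifdef HOST_WORDS_BIGENDIAN
        /* On big-endian hosts, flip the sub-word index so the same
         * element number names the same architectural bytes. */
        if (element_size < 8) {
            ofs ^= 8 - element_size;
        }
    #endif
        return ofs;
    }
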
From: Richard Henderson <richard.henderson@linaro.org>

We can then use this to improve VMOV (scalar to gp) and
VMOV (gp to scalar) so that we simply perform the memory
operation that we wanted, rather than inserting or
extracting from a 32-bit quantity.

These were the last uses of neon_load/store_reg, so remove them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 50 +++++++++++++-----------
target/arm/translate-vfp.c.inc | 71 +++++-----------------------------
2 files changed, 37 insertions(+), 84 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
24
* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
25
* where 0 is the least significant end of the register.
26
*/
27
-static long neon_element_offset(int reg, int element, MemOp size)
28
+static long neon_element_offset(int reg, int element, MemOp memop)
29
{
30
- int element_size = 1 << size;
31
+ int element_size = 1 << (memop & MO_SIZE);
32
int ofs = element * element_size;
33
#ifdef HOST_WORDS_BIGENDIAN
34
/*
35
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
36
}
37
}
38
39
-static TCGv_i32 neon_load_reg(int reg, int pass)
40
-{
41
- TCGv_i32 tmp = tcg_temp_new_i32();
42
- tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
43
- return tmp;
44
-}
45
-
46
-static void neon_store_reg(int reg, int pass, TCGv_i32 var)
47
-{
48
- tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
49
- tcg_temp_free_i32(var);
50
-}
51
-
52
static inline void neon_load_reg64(TCGv_i64 var, int reg)
53
{
54
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
55
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
56
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
57
}
58
59
-static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
60
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
61
{
62
- long off = neon_element_offset(reg, ele, size);
63
+ long off = neon_element_offset(reg, ele, memop);
64
65
- switch (size) {
66
- case MO_32:
67
+ switch (memop) {
68
+ case MO_SB:
69
+ tcg_gen_ld8s_i32(dest, cpu_env, off);
70
+ break;
71
+ case MO_UB:
72
+ tcg_gen_ld8u_i32(dest, cpu_env, off);
73
+ break;
74
+ case MO_SW:
75
+ tcg_gen_ld16s_i32(dest, cpu_env, off);
76
+ break;
77
+ case MO_UW:
78
+ tcg_gen_ld16u_i32(dest, cpu_env, off);
79
+ break;
80
+ case MO_UL:
81
+ case MO_SL:
82
tcg_gen_ld_i32(dest, cpu_env, off);
83
break;
84
default:
85
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
86
}
87
}
88
89
-static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
90
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
91
{
92
- long off = neon_element_offset(reg, ele, size);
93
+ long off = neon_element_offset(reg, ele, memop);
94
95
- switch (size) {
96
+ switch (memop) {
97
+ case MO_8:
98
+ tcg_gen_st8_i32(src, cpu_env, off);
99
+ break;
100
+ case MO_16:
101
+ tcg_gen_st16_i32(src, cpu_env, off);
102
+ break;
103
case MO_32:
104
tcg_gen_st_i32(src, cpu_env, off);
105
break;
106
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-vfp.c.inc
109
+++ b/target/arm/translate-vfp.c.inc
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
111
{
112
/* VMOV scalar to general purpose register */
113
TCGv_i32 tmp;
114
- int pass;
115
- uint32_t offset;
116
117
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
118
- if (a->size == 2
119
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
120
+ if (a->size == MO_32
121
? !dc_isar_feature(aa32_fpsp_v2, s)
122
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
123
return false;
124
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
125
return false;
126
}
127
128
- offset = a->index << a->size;
129
- pass = extract32(offset, 2, 1);
130
- offset = extract32(offset, 0, 2) * 8;
131
-
132
if (!vfp_access_check(s)) {
133
return true;
134
}
135
136
- tmp = neon_load_reg(a->vn, pass);
137
- switch (a->size) {
138
- case 0:
139
- if (offset) {
140
- tcg_gen_shri_i32(tmp, tmp, offset);
141
- }
142
- if (a->u) {
143
- gen_uxtb(tmp);
144
- } else {
145
- gen_sxtb(tmp);
146
- }
147
- break;
148
- case 1:
149
- if (a->u) {
150
- if (offset) {
151
- tcg_gen_shri_i32(tmp, tmp, 16);
152
- } else {
153
- gen_uxth(tmp);
154
- }
155
- } else {
156
- if (offset) {
157
- tcg_gen_sari_i32(tmp, tmp, 16);
158
- } else {
159
- gen_sxth(tmp);
160
- }
161
- }
162
- break;
163
- case 2:
164
- break;
165
- }
166
+ tmp = tcg_temp_new_i32();
167
+ read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
168
store_reg(s, a->rt, tmp);
169
170
return true;
171
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
172
static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
173
{
174
/* VMOV general purpose register to scalar */
175
- TCGv_i32 tmp, tmp2;
176
- int pass;
177
- uint32_t offset;
178
+ TCGv_i32 tmp;
179
180
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
181
- if (a->size == 2
182
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
183
+ if (a->size == MO_32
184
? !dc_isar_feature(aa32_fpsp_v2, s)
185
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
186
return false;
187
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
188
return false;
189
}
190
191
- offset = a->index << a->size;
192
- pass = extract32(offset, 2, 1);
193
- offset = extract32(offset, 0, 2) * 8;
194
-
195
if (!vfp_access_check(s)) {
196
return true;
197
}
198
199
tmp = load_reg(s, a->rt);
200
- switch (a->size) {
201
- case 0:
202
- tmp2 = neon_load_reg(a->vn, pass);
203
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8);
204
- tcg_temp_free_i32(tmp2);
205
- break;
206
- case 1:
207
- tmp2 = neon_load_reg(a->vn, pass);
208
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16);
209
- tcg_temp_free_i32(tmp2);
210
- break;
211
- case 2:
212
- break;
213
- }
214
- neon_store_reg(a->vn, pass, tmp);
215
+ write_neon_element32(tmp, a->vn, a->index, a->size);
216
+ tcg_temp_free_i32(tmp);
217
218
return true;
219
}
220
--
2.20.1
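[Editor's note: the key trick in the patch above is folding signedness into the memory operation, so VMOV (scalar to gp) becomes one sized load instead of a 32-bit load plus shift/extend. A plain-C analogy of MO_UB versus MO_SB (MO_8 | MO_SIGN), with illustrative names only:]

    #include <stdint.h>

    /* Read one byte-sized element either zero- or sign-extended to
     * 32 bits, depending on the instruction's U bit. */
    static int32_t read_elem8_sketch(const uint8_t *regfile, long off,
                                     int is_signed)
    {
        uint8_t b = regfile[off];
        return is_signed ? (int32_t)(int8_t)b : (int32_t)b;
    }
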
From: Richard Henderson <richard.henderson@linaro.org>

The only uses of this function are for loading VFP
single-precision values, and nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 4 +-
target/arm/translate-vfp.c.inc | 184 ++++++++++++++++-----------------
2 files changed, 94 insertions(+), 94 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg)
tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
}

-static inline void neon_load_reg32(TCGv_i32 var, int reg)
+static inline void vfp_load_reg32(TCGv_i32 var, int reg)
{
tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
}

-static inline void neon_store_reg32(TCGv_i32 var, int reg)
+static inline void vfp_store_reg32(TCGv_i32 var, int reg)
{
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
}
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/translate-vfp.c.inc
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
39
frn = tcg_temp_new_i32();
40
frm = tcg_temp_new_i32();
41
dest = tcg_temp_new_i32();
42
- neon_load_reg32(frn, rn);
43
- neon_load_reg32(frm, rm);
44
+ vfp_load_reg32(frn, rn);
45
+ vfp_load_reg32(frm, rm);
46
switch (a->cc) {
47
case 0: /* eq: Z */
48
tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero,
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
50
if (sz == 1) {
51
tcg_gen_andi_i32(dest, dest, 0xffff);
52
}
53
- neon_store_reg32(dest, rd);
54
+ vfp_store_reg32(dest, rd);
55
tcg_temp_free_i32(frn);
56
tcg_temp_free_i32(frm);
57
tcg_temp_free_i32(dest);
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
59
TCGv_i32 tcg_res;
60
tcg_op = tcg_temp_new_i32();
61
tcg_res = tcg_temp_new_i32();
62
- neon_load_reg32(tcg_op, rm);
63
+ vfp_load_reg32(tcg_op, rm);
64
if (sz == 1) {
65
gen_helper_rinth(tcg_res, tcg_op, fpst);
66
} else {
67
gen_helper_rints(tcg_res, tcg_op, fpst);
68
}
69
- neon_store_reg32(tcg_res, rd);
70
+ vfp_store_reg32(tcg_res, rd);
71
tcg_temp_free_i32(tcg_op);
72
tcg_temp_free_i32(tcg_res);
73
}
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst);
76
}
77
tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res);
78
- neon_store_reg32(tcg_tmp, rd);
79
+ vfp_store_reg32(tcg_tmp, rd);
80
tcg_temp_free_i32(tcg_tmp);
81
tcg_temp_free_i64(tcg_res);
82
tcg_temp_free_i64(tcg_double);
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
84
TCGv_i32 tcg_single, tcg_res;
85
tcg_single = tcg_temp_new_i32();
86
tcg_res = tcg_temp_new_i32();
87
- neon_load_reg32(tcg_single, rm);
88
+ vfp_load_reg32(tcg_single, rm);
89
if (sz == 1) {
90
if (is_signed) {
91
gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst);
92
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
93
gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst);
94
}
95
}
96
- neon_store_reg32(tcg_res, rd);
97
+ vfp_store_reg32(tcg_res, rd);
98
tcg_temp_free_i32(tcg_res);
99
tcg_temp_free_i32(tcg_single);
100
}
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
102
if (a->l) {
103
/* VFP to general purpose register */
104
tmp = tcg_temp_new_i32();
105
- neon_load_reg32(tmp, a->vn);
106
+ vfp_load_reg32(tmp, a->vn);
107
tcg_gen_andi_i32(tmp, tmp, 0xffff);
108
store_reg(s, a->rt, tmp);
109
} else {
110
/* general purpose register to VFP */
111
tmp = load_reg(s, a->rt);
112
tcg_gen_andi_i32(tmp, tmp, 0xffff);
113
- neon_store_reg32(tmp, a->vn);
114
+ vfp_store_reg32(tmp, a->vn);
115
tcg_temp_free_i32(tmp);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
119
if (a->l) {
120
/* VFP to general purpose register */
121
tmp = tcg_temp_new_i32();
122
- neon_load_reg32(tmp, a->vn);
123
+ vfp_load_reg32(tmp, a->vn);
124
if (a->rt == 15) {
125
/* Set the 4 flag bits in the CPSR. */
126
gen_set_nzcv(tmp);
127
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
128
} else {
129
/* general purpose register to VFP */
130
tmp = load_reg(s, a->rt);
131
- neon_store_reg32(tmp, a->vn);
132
+ vfp_store_reg32(tmp, a->vn);
133
tcg_temp_free_i32(tmp);
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
137
if (a->op) {
138
/* fpreg to gpreg */
139
tmp = tcg_temp_new_i32();
140
- neon_load_reg32(tmp, a->vm);
141
+ vfp_load_reg32(tmp, a->vm);
142
store_reg(s, a->rt, tmp);
143
tmp = tcg_temp_new_i32();
144
- neon_load_reg32(tmp, a->vm + 1);
145
+ vfp_load_reg32(tmp, a->vm + 1);
146
store_reg(s, a->rt2, tmp);
147
} else {
148
/* gpreg to fpreg */
149
tmp = load_reg(s, a->rt);
150
- neon_store_reg32(tmp, a->vm);
151
+ vfp_store_reg32(tmp, a->vm);
152
tcg_temp_free_i32(tmp);
153
tmp = load_reg(s, a->rt2);
154
- neon_store_reg32(tmp, a->vm + 1);
155
+ vfp_store_reg32(tmp, a->vm + 1);
156
tcg_temp_free_i32(tmp);
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
160
if (a->op) {
161
/* fpreg to gpreg */
162
tmp = tcg_temp_new_i32();
163
- neon_load_reg32(tmp, a->vm * 2);
164
+ vfp_load_reg32(tmp, a->vm * 2);
165
store_reg(s, a->rt, tmp);
166
tmp = tcg_temp_new_i32();
167
- neon_load_reg32(tmp, a->vm * 2 + 1);
168
+ vfp_load_reg32(tmp, a->vm * 2 + 1);
169
store_reg(s, a->rt2, tmp);
170
} else {
171
/* gpreg to fpreg */
172
tmp = load_reg(s, a->rt);
173
- neon_store_reg32(tmp, a->vm * 2);
174
+ vfp_store_reg32(tmp, a->vm * 2);
175
tcg_temp_free_i32(tmp);
176
tmp = load_reg(s, a->rt2);
177
- neon_store_reg32(tmp, a->vm * 2 + 1);
178
+ vfp_store_reg32(tmp, a->vm * 2 + 1);
179
tcg_temp_free_i32(tmp);
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
183
tmp = tcg_temp_new_i32();
184
if (a->l) {
185
gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
186
- neon_store_reg32(tmp, a->vd);
187
+ vfp_store_reg32(tmp, a->vd);
188
} else {
189
- neon_load_reg32(tmp, a->vd);
190
+ vfp_load_reg32(tmp, a->vd);
191
gen_aa32_st16(s, tmp, addr, get_mem_index(s));
192
}
193
tcg_temp_free_i32(tmp);
194
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
195
tmp = tcg_temp_new_i32();
196
if (a->l) {
197
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
198
- neon_store_reg32(tmp, a->vd);
199
+ vfp_store_reg32(tmp, a->vd);
200
} else {
201
- neon_load_reg32(tmp, a->vd);
202
+ vfp_load_reg32(tmp, a->vd);
203
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
204
}
205
tcg_temp_free_i32(tmp);
206
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
207
if (a->l) {
208
/* load */
209
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
210
- neon_store_reg32(tmp, a->vd + i);
211
+ vfp_store_reg32(tmp, a->vd + i);
212
} else {
213
/* store */
214
- neon_load_reg32(tmp, a->vd + i);
215
+ vfp_load_reg32(tmp, a->vd + i);
216
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
217
}
218
tcg_gen_addi_i32(addr, addr, offset);
219
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
220
fd = tcg_temp_new_i32();
221
fpst = fpstatus_ptr(FPST_FPCR);
222
223
- neon_load_reg32(f0, vn);
224
- neon_load_reg32(f1, vm);
225
+ vfp_load_reg32(f0, vn);
226
+ vfp_load_reg32(f1, vm);
227
228
for (;;) {
229
if (reads_vd) {
230
- neon_load_reg32(fd, vd);
231
+ vfp_load_reg32(fd, vd);
232
}
233
fn(fd, f0, f1, fpst);
234
- neon_store_reg32(fd, vd);
235
+ vfp_store_reg32(fd, vd);
236
237
if (veclen == 0) {
238
break;
239
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
240
veclen--;
241
vd = vfp_advance_sreg(vd, delta_d);
242
vn = vfp_advance_sreg(vn, delta_d);
243
- neon_load_reg32(f0, vn);
244
+ vfp_load_reg32(f0, vn);
245
if (delta_m) {
246
vm = vfp_advance_sreg(vm, delta_m);
247
- neon_load_reg32(f1, vm);
248
+ vfp_load_reg32(f1, vm);
249
}
250
}
251
252
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
253
fd = tcg_temp_new_i32();
254
fpst = fpstatus_ptr(FPST_FPCR_F16);
255
256
- neon_load_reg32(f0, vn);
257
- neon_load_reg32(f1, vm);
258
+ vfp_load_reg32(f0, vn);
259
+ vfp_load_reg32(f1, vm);
260
261
if (reads_vd) {
262
- neon_load_reg32(fd, vd);
263
+ vfp_load_reg32(fd, vd);
264
}
265
fn(fd, f0, f1, fpst);
266
- neon_store_reg32(fd, vd);
267
+ vfp_store_reg32(fd, vd);
268
269
tcg_temp_free_i32(f0);
270
tcg_temp_free_i32(f1);
271
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
272
f0 = tcg_temp_new_i32();
273
fd = tcg_temp_new_i32();
274
275
- neon_load_reg32(f0, vm);
276
+ vfp_load_reg32(f0, vm);
277
278
for (;;) {
279
fn(fd, f0);
280
- neon_store_reg32(fd, vd);
281
+ vfp_store_reg32(fd, vd);
282
283
if (veclen == 0) {
284
break;
285
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
286
/* single source one-many */
287
while (veclen--) {
288
vd = vfp_advance_sreg(vd, delta_d);
289
- neon_store_reg32(fd, vd);
290
+ vfp_store_reg32(fd, vd);
291
}
292
break;
293
}
294
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
295
veclen--;
296
vd = vfp_advance_sreg(vd, delta_d);
297
vm = vfp_advance_sreg(vm, delta_m);
298
- neon_load_reg32(f0, vm);
299
+ vfp_load_reg32(f0, vm);
300
}
301
302
tcg_temp_free_i32(f0);
303
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
304
}
305
306
f0 = tcg_temp_new_i32();
307
- neon_load_reg32(f0, vm);
308
+ vfp_load_reg32(f0, vm);
309
fn(f0, f0);
310
- neon_store_reg32(f0, vd);
311
+ vfp_store_reg32(f0, vd);
312
tcg_temp_free_i32(f0);
313
314
return true;
315
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
316
vm = tcg_temp_new_i32();
317
vd = tcg_temp_new_i32();
318
319
- neon_load_reg32(vn, a->vn);
320
- neon_load_reg32(vm, a->vm);
321
+ vfp_load_reg32(vn, a->vn);
322
+ vfp_load_reg32(vm, a->vm);
323
if (neg_n) {
324
/* VFNMS, VFMS */
325
gen_helper_vfp_negh(vn, vn);
326
}
327
- neon_load_reg32(vd, a->vd);
328
+ vfp_load_reg32(vd, a->vd);
329
if (neg_d) {
330
/* VFNMA, VFNMS */
331
gen_helper_vfp_negh(vd, vd);
332
}
333
fpst = fpstatus_ptr(FPST_FPCR_F16);
334
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
335
- neon_store_reg32(vd, a->vd);
336
+ vfp_store_reg32(vd, a->vd);
337
338
tcg_temp_free_ptr(fpst);
339
tcg_temp_free_i32(vn);
340
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
341
vm = tcg_temp_new_i32();
342
vd = tcg_temp_new_i32();
343
344
- neon_load_reg32(vn, a->vn);
345
- neon_load_reg32(vm, a->vm);
346
+ vfp_load_reg32(vn, a->vn);
347
+ vfp_load_reg32(vm, a->vm);
348
if (neg_n) {
349
/* VFNMS, VFMS */
350
gen_helper_vfp_negs(vn, vn);
351
}
352
- neon_load_reg32(vd, a->vd);
353
+ vfp_load_reg32(vd, a->vd);
354
if (neg_d) {
355
/* VFNMA, VFNMS */
356
gen_helper_vfp_negs(vd, vd);
357
}
358
fpst = fpstatus_ptr(FPST_FPCR);
359
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
360
- neon_store_reg32(vd, a->vd);
361
+ vfp_store_reg32(vd, a->vd);
362
363
tcg_temp_free_ptr(fpst);
364
tcg_temp_free_i32(vn);
365
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a)
366
}
367
368
fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm));
369
- neon_store_reg32(fd, a->vd);
370
+ vfp_store_reg32(fd, a->vd);
371
tcg_temp_free_i32(fd);
372
return true;
373
}
374
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
375
fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
376
377
for (;;) {
378
- neon_store_reg32(fd, vd);
379
+ vfp_store_reg32(fd, vd);
380
381
if (veclen == 0) {
382
break;
383
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
384
vd = tcg_temp_new_i32();
385
vm = tcg_temp_new_i32();
386
387
- neon_load_reg32(vd, a->vd);
388
+ vfp_load_reg32(vd, a->vd);
389
if (a->z) {
390
tcg_gen_movi_i32(vm, 0);
391
} else {
392
- neon_load_reg32(vm, a->vm);
393
+ vfp_load_reg32(vm, a->vm);
394
}
395
396
if (a->e) {
397
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
398
vd = tcg_temp_new_i32();
399
vm = tcg_temp_new_i32();
400
401
- neon_load_reg32(vd, a->vd);
402
+ vfp_load_reg32(vd, a->vd);
403
if (a->z) {
404
tcg_gen_movi_i32(vm, 0);
405
} else {
406
- neon_load_reg32(vm, a->vm);
407
+ vfp_load_reg32(vm, a->vm);
408
}
409
410
if (a->e) {
411
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
412
/* The T bit tells us if we want the low or high 16 bits of Vm */
413
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
414
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode);
415
- neon_store_reg32(tmp, a->vd);
416
+ vfp_store_reg32(tmp, a->vd);
417
tcg_temp_free_i32(ahp_mode);
418
tcg_temp_free_ptr(fpst);
419
tcg_temp_free_i32(tmp);
420
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
421
ahp_mode = get_ahp_flag();
422
tmp = tcg_temp_new_i32();
423
424
- neon_load_reg32(tmp, a->vm);
425
+ vfp_load_reg32(tmp, a->vm);
426
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode);
427
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
428
tcg_temp_free_i32(ahp_mode);
429
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
430
}
431
432
tmp = tcg_temp_new_i32();
433
- neon_load_reg32(tmp, a->vm);
434
+ vfp_load_reg32(tmp, a->vm);
435
fpst = fpstatus_ptr(FPST_FPCR_F16);
436
gen_helper_rinth(tmp, tmp, fpst);
437
- neon_store_reg32(tmp, a->vd);
438
+ vfp_store_reg32(tmp, a->vd);
439
tcg_temp_free_ptr(fpst);
440
tcg_temp_free_i32(tmp);
441
return true;
442
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
443
}
444
445
tmp = tcg_temp_new_i32();
446
- neon_load_reg32(tmp, a->vm);
447
+ vfp_load_reg32(tmp, a->vm);
448
fpst = fpstatus_ptr(FPST_FPCR);
449
gen_helper_rints(tmp, tmp, fpst);
450
- neon_store_reg32(tmp, a->vd);
451
+ vfp_store_reg32(tmp, a->vd);
452
tcg_temp_free_ptr(fpst);
453
tcg_temp_free_i32(tmp);
454
return true;
455
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
456
}
457
458
tmp = tcg_temp_new_i32();
459
- neon_load_reg32(tmp, a->vm);
460
+ vfp_load_reg32(tmp, a->vm);
461
fpst = fpstatus_ptr(FPST_FPCR_F16);
462
tcg_rmode = tcg_const_i32(float_round_to_zero);
463
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
464
gen_helper_rinth(tmp, tmp, fpst);
465
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
466
- neon_store_reg32(tmp, a->vd);
467
+ vfp_store_reg32(tmp, a->vd);
468
tcg_temp_free_ptr(fpst);
469
tcg_temp_free_i32(tcg_rmode);
470
tcg_temp_free_i32(tmp);
471
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
472
}
473
474
tmp = tcg_temp_new_i32();
475
- neon_load_reg32(tmp, a->vm);
476
+ vfp_load_reg32(tmp, a->vm);
477
fpst = fpstatus_ptr(FPST_FPCR);
478
tcg_rmode = tcg_const_i32(float_round_to_zero);
479
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
480
gen_helper_rints(tmp, tmp, fpst);
481
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
482
- neon_store_reg32(tmp, a->vd);
483
+ vfp_store_reg32(tmp, a->vd);
484
tcg_temp_free_ptr(fpst);
485
tcg_temp_free_i32(tcg_rmode);
486
tcg_temp_free_i32(tmp);
487
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
488
}
489
490
tmp = tcg_temp_new_i32();
491
- neon_load_reg32(tmp, a->vm);
492
+ vfp_load_reg32(tmp, a->vm);
493
fpst = fpstatus_ptr(FPST_FPCR_F16);
494
gen_helper_rinth_exact(tmp, tmp, fpst);
495
- neon_store_reg32(tmp, a->vd);
496
+ vfp_store_reg32(tmp, a->vd);
497
tcg_temp_free_ptr(fpst);
498
tcg_temp_free_i32(tmp);
499
return true;
500
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
501
}
502
503
tmp = tcg_temp_new_i32();
504
- neon_load_reg32(tmp, a->vm);
505
+ vfp_load_reg32(tmp, a->vm);
506
fpst = fpstatus_ptr(FPST_FPCR);
507
gen_helper_rints_exact(tmp, tmp, fpst);
508
- neon_store_reg32(tmp, a->vd);
509
+ vfp_store_reg32(tmp, a->vd);
510
tcg_temp_free_ptr(fpst);
511
tcg_temp_free_i32(tmp);
512
return true;
513
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
514
515
vm = tcg_temp_new_i32();
516
vd = tcg_temp_new_i64();
517
- neon_load_reg32(vm, a->vm);
518
+ vfp_load_reg32(vm, a->vm);
519
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
520
neon_store_reg64(vd, a->vd);
521
tcg_temp_free_i32(vm);
522
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
523
vm = tcg_temp_new_i64();
524
neon_load_reg64(vm, a->vm);
525
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
526
- neon_store_reg32(vd, a->vd);
527
+ vfp_store_reg32(vd, a->vd);
528
tcg_temp_free_i32(vd);
529
tcg_temp_free_i64(vm);
530
return true;
531
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
532
}
533
534
vm = tcg_temp_new_i32();
535
- neon_load_reg32(vm, a->vm);
536
+ vfp_load_reg32(vm, a->vm);
537
fpst = fpstatus_ptr(FPST_FPCR_F16);
538
if (a->s) {
539
/* i32 -> f16 */
540
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
541
/* u32 -> f16 */
542
gen_helper_vfp_uitoh(vm, vm, fpst);
543
}
544
- neon_store_reg32(vm, a->vd);
545
+ vfp_store_reg32(vm, a->vd);
546
tcg_temp_free_i32(vm);
547
tcg_temp_free_ptr(fpst);
548
return true;
549
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
550
}
551
552
vm = tcg_temp_new_i32();
553
- neon_load_reg32(vm, a->vm);
554
+ vfp_load_reg32(vm, a->vm);
555
fpst = fpstatus_ptr(FPST_FPCR);
556
if (a->s) {
557
/* i32 -> f32 */
558
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
559
/* u32 -> f32 */
560
gen_helper_vfp_uitos(vm, vm, fpst);
561
}
562
- neon_store_reg32(vm, a->vd);
563
+ vfp_store_reg32(vm, a->vd);
564
tcg_temp_free_i32(vm);
565
tcg_temp_free_ptr(fpst);
566
return true;
567
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
568
569
vm = tcg_temp_new_i32();
570
vd = tcg_temp_new_i64();
571
- neon_load_reg32(vm, a->vm);
572
+ vfp_load_reg32(vm, a->vm);
573
fpst = fpstatus_ptr(FPST_FPCR);
574
if (a->s) {
575
/* i32 -> f64 */
576
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
577
vd = tcg_temp_new_i32();
578
neon_load_reg64(vm, a->vm);
579
gen_helper_vjcvt(vd, vm, cpu_env);
580
- neon_store_reg32(vd, a->vd);
581
+ vfp_store_reg32(vd, a->vd);
582
tcg_temp_free_i64(vm);
583
tcg_temp_free_i32(vd);
584
return true;
585
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
586
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
587
588
vd = tcg_temp_new_i32();
589
- neon_load_reg32(vd, a->vd);
590
+ vfp_load_reg32(vd, a->vd);
591
592
fpst = fpstatus_ptr(FPST_FPCR_F16);
593
shift = tcg_const_i32(frac_bits);
594
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
595
g_assert_not_reached();
596
}
597
598
- neon_store_reg32(vd, a->vd);
599
+ vfp_store_reg32(vd, a->vd);
600
tcg_temp_free_i32(vd);
601
tcg_temp_free_i32(shift);
602
tcg_temp_free_ptr(fpst);
603
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
604
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
605
606
vd = tcg_temp_new_i32();
607
- neon_load_reg32(vd, a->vd);
608
+ vfp_load_reg32(vd, a->vd);
609
610
fpst = fpstatus_ptr(FPST_FPCR);
611
shift = tcg_const_i32(frac_bits);
612
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
613
g_assert_not_reached();
614
}
615
616
- neon_store_reg32(vd, a->vd);
617
+ vfp_store_reg32(vd, a->vd);
618
tcg_temp_free_i32(vd);
619
tcg_temp_free_i32(shift);
620
tcg_temp_free_ptr(fpst);
621
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
622
623
fpst = fpstatus_ptr(FPST_FPCR_F16);
624
vm = tcg_temp_new_i32();
625
- neon_load_reg32(vm, a->vm);
626
+ vfp_load_reg32(vm, a->vm);
627
628
if (a->s) {
629
if (a->rz) {
630
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
631
gen_helper_vfp_touih(vm, vm, fpst);
632
}
633
}
634
- neon_store_reg32(vm, a->vd);
635
+ vfp_store_reg32(vm, a->vd);
636
tcg_temp_free_i32(vm);
637
tcg_temp_free_ptr(fpst);
638
return true;
639
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
640
641
fpst = fpstatus_ptr(FPST_FPCR);
642
vm = tcg_temp_new_i32();
643
- neon_load_reg32(vm, a->vm);
644
+ vfp_load_reg32(vm, a->vm);
645
646
if (a->s) {
647
if (a->rz) {
648
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
649
gen_helper_vfp_touis(vm, vm, fpst);
650
}
651
}
652
- neon_store_reg32(vm, a->vd);
653
+ vfp_store_reg32(vm, a->vd);
654
tcg_temp_free_i32(vm);
655
tcg_temp_free_ptr(fpst);
656
return true;
657
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
658
gen_helper_vfp_touid(vd, vm, fpst);
659
}
660
}
661
- neon_store_reg32(vd, a->vd);
662
+ vfp_store_reg32(vd, a->vd);
663
tcg_temp_free_i32(vd);
664
tcg_temp_free_i64(vm);
665
tcg_temp_free_ptr(fpst);
666
@@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
667
/* Insert low half of Vm into high half of Vd */
668
rm = tcg_temp_new_i32();
669
rd = tcg_temp_new_i32();
670
- neon_load_reg32(rm, a->vm);
671
- neon_load_reg32(rd, a->vd);
672
+ vfp_load_reg32(rm, a->vm);
673
+ vfp_load_reg32(rd, a->vd);
674
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
675
- neon_store_reg32(rd, a->vd);
676
+ vfp_store_reg32(rd, a->vd);
677
tcg_temp_free_i32(rm);
678
tcg_temp_free_i32(rd);
679
return true;
680
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a)
681
682
/* Set Vd to high half of Vm */
683
rm = tcg_temp_new_i32();
684
- neon_load_reg32(rm, a->vm);
685
+ vfp_load_reg32(rm, a->vm);
686
tcg_gen_shri_i32(rm, rm, 16);
687
- neon_store_reg32(rm, a->vd);
688
+ vfp_store_reg32(rm, a->vd);
689
tcg_temp_free_i32(rm);
690
return true;
}
--
2.20.1
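[Editor's note: the rename in the patch above is mechanical, but it matters because single-precision Sn registers alias halves of the double-precision Dn file, which is VFP layout rather than NEON element indexing. A rough sketch of that aliasing with illustrative offsets; QEMU's actual vfp_reg_offset() also applies a big-endian fixup:]

    #include <stddef.h>
    #include <stdint.h>

    /* S<n> occupies one 32-bit half of D<n/2>. */
    static size_t sreg_offset_sketch(unsigned sreg)
    {
        return (sreg >> 1) * sizeof(uint64_t)
               + (sreg & 1) * sizeof(uint32_t);
    }
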
From: Richard Henderson <richard.henderson@linaro.org>

Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 26 +++++++++
target/arm/translate-neon.c.inc | 94 ++++++++++++++++-----------------
2 files changed, 73 insertions(+), 47 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
19
}
20
}
21
22
+static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
23
+{
24
+ long off = neon_element_offset(reg, ele, memop);
25
+
26
+ switch (memop) {
27
+ case MO_Q:
28
+ tcg_gen_ld_i64(dest, cpu_env, off);
29
+ break;
30
+ default:
31
+ g_assert_not_reached();
32
+ }
33
+}
34
+
35
static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
36
{
37
long off = neon_element_offset(reg, ele, memop);
38
@@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
39
}
40
}
41
42
+static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
43
+{
44
+ long off = neon_element_offset(reg, ele, memop);
45
+
46
+ switch (memop) {
47
+ case MO_64:
48
+ tcg_gen_st_i64(src, cpu_env, off);
49
+ break;
50
+ default:
51
+ g_assert_not_reached();
52
+ }
53
+}
54
+
55
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
56
{
57
TCGv_ptr ret = tcg_temp_new_ptr();
58
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.c.inc
61
+++ b/target/arm/translate-neon.c.inc
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
63
for (pass = 0; pass < a->q + 1; pass++) {
64
TCGv_i64 tmp = tcg_temp_new_i64();
65
66
- neon_load_reg64(tmp, a->vm + pass);
67
+ read_neon_element64(tmp, a->vm, pass, MO_64);
68
fn(tmp, cpu_env, tmp, constimm);
69
- neon_store_reg64(tmp, a->vd + pass);
70
+ write_neon_element64(tmp, a->vd, pass, MO_64);
71
tcg_temp_free_i64(tmp);
72
}
73
tcg_temp_free_i64(constimm);
74
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
75
rd = tcg_temp_new_i32();
76
77
/* Load both inputs first to avoid potential overwrite if rm == rd */
78
- neon_load_reg64(rm1, a->vm);
79
- neon_load_reg64(rm2, a->vm + 1);
80
+ read_neon_element64(rm1, a->vm, 0, MO_64);
81
+ read_neon_element64(rm2, a->vm, 1, MO_64);
82
83
shiftfn(rm1, rm1, constimm);
84
narrowfn(rd, cpu_env, rm1);
85
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
86
tcg_gen_shli_i64(tmp, tmp, a->shift);
87
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
88
}
89
- neon_store_reg64(tmp, a->vd);
90
+ write_neon_element64(tmp, a->vd, 0, MO_64);
91
92
widenfn(tmp, rm1);
93
tcg_temp_free_i32(rm1);
94
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
95
tcg_gen_shli_i64(tmp, tmp, a->shift);
96
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
97
}
98
- neon_store_reg64(tmp, a->vd + 1);
99
+ write_neon_element64(tmp, a->vd, 1, MO_64);
100
tcg_temp_free_i64(tmp);
101
return true;
102
}
103
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
104
rm_64 = tcg_temp_new_i64();
105
106
if (src1_wide) {
107
- neon_load_reg64(rn0_64, a->vn);
108
+ read_neon_element64(rn0_64, a->vn, 0, MO_64);
109
} else {
110
TCGv_i32 tmp = tcg_temp_new_i32();
111
read_neon_element32(tmp, a->vn, 0, MO_32);
112
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
113
* avoid incorrect results if a narrow input overlaps with the result.
114
*/
115
if (src1_wide) {
116
- neon_load_reg64(rn1_64, a->vn + 1);
117
+ read_neon_element64(rn1_64, a->vn, 1, MO_64);
118
} else {
119
TCGv_i32 tmp = tcg_temp_new_i32();
120
read_neon_element32(tmp, a->vn, 1, MO_32);
121
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
122
rm = tcg_temp_new_i32();
123
read_neon_element32(rm, a->vm, 1, MO_32);
124
125
- neon_store_reg64(rn0_64, a->vd);
126
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
127
128
widenfn(rm_64, rm);
129
tcg_temp_free_i32(rm);
130
opfn(rn1_64, rn1_64, rm_64);
131
- neon_store_reg64(rn1_64, a->vd + 1);
132
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
133
134
tcg_temp_free_i64(rn0_64);
135
tcg_temp_free_i64(rn1_64);
136
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
137
rd0 = tcg_temp_new_i32();
138
rd1 = tcg_temp_new_i32();
139
140
- neon_load_reg64(rn_64, a->vn);
141
- neon_load_reg64(rm_64, a->vm);
142
+ read_neon_element64(rn_64, a->vn, 0, MO_64);
143
+ read_neon_element64(rm_64, a->vm, 0, MO_64);
144
145
opfn(rn_64, rn_64, rm_64);
146
147
narrowfn(rd0, rn_64);
148
149
- neon_load_reg64(rn_64, a->vn + 1);
150
- neon_load_reg64(rm_64, a->vm + 1);
151
+ read_neon_element64(rn_64, a->vn, 1, MO_64);
152
+ read_neon_element64(rm_64, a->vm, 1, MO_64);
153
154
opfn(rn_64, rn_64, rm_64);
155
156
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
157
/* Don't store results until after all loads: they might overlap */
158
if (accfn) {
159
tmp = tcg_temp_new_i64();
160
- neon_load_reg64(tmp, a->vd);
161
+ read_neon_element64(tmp, a->vd, 0, MO_64);
162
accfn(tmp, tmp, rd0);
163
- neon_store_reg64(tmp, a->vd);
164
- neon_load_reg64(tmp, a->vd + 1);
165
+ write_neon_element64(tmp, a->vd, 0, MO_64);
166
+ read_neon_element64(tmp, a->vd, 1, MO_64);
167
accfn(tmp, tmp, rd1);
168
- neon_store_reg64(tmp, a->vd + 1);
169
+ write_neon_element64(tmp, a->vd, 1, MO_64);
170
tcg_temp_free_i64(tmp);
171
} else {
172
- neon_store_reg64(rd0, a->vd);
173
- neon_store_reg64(rd1, a->vd + 1);
174
+ write_neon_element64(rd0, a->vd, 0, MO_64);
175
+ write_neon_element64(rd1, a->vd, 1, MO_64);
176
}
177
178
tcg_temp_free_i64(rd0);
179
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
180
181
if (accfn) {
182
TCGv_i64 t64 = tcg_temp_new_i64();
183
- neon_load_reg64(t64, a->vd);
184
+ read_neon_element64(t64, a->vd, 0, MO_64);
185
accfn(t64, t64, rn0_64);
186
- neon_store_reg64(t64, a->vd);
187
- neon_load_reg64(t64, a->vd + 1);
188
+ write_neon_element64(t64, a->vd, 0, MO_64);
189
+ read_neon_element64(t64, a->vd, 1, MO_64);
190
accfn(t64, t64, rn1_64);
191
- neon_store_reg64(t64, a->vd + 1);
192
+ write_neon_element64(t64, a->vd, 1, MO_64);
193
tcg_temp_free_i64(t64);
194
} else {
195
- neon_store_reg64(rn0_64, a->vd);
196
- neon_store_reg64(rn1_64, a->vd + 1);
197
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
198
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
199
}
200
tcg_temp_free_i64(rn0_64);
201
tcg_temp_free_i64(rn1_64);
202
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
203
right = tcg_temp_new_i64();
204
dest = tcg_temp_new_i64();
205
206
- neon_load_reg64(right, a->vn);
207
- neon_load_reg64(left, a->vm);
208
+ read_neon_element64(right, a->vn, 0, MO_64);
209
+ read_neon_element64(left, a->vm, 0, MO_64);
210
tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
211
- neon_store_reg64(dest, a->vd);
212
+ write_neon_element64(dest, a->vd, 0, MO_64);
213
214
tcg_temp_free_i64(left);
215
tcg_temp_free_i64(right);
216
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
217
destright = tcg_temp_new_i64();
218
219
if (a->imm < 8) {
220
- neon_load_reg64(right, a->vn);
221
- neon_load_reg64(middle, a->vn + 1);
222
+ read_neon_element64(right, a->vn, 0, MO_64);
223
+ read_neon_element64(middle, a->vn, 1, MO_64);
224
tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
225
- neon_load_reg64(left, a->vm);
226
+ read_neon_element64(left, a->vm, 0, MO_64);
227
tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
228
} else {
229
- neon_load_reg64(right, a->vn + 1);
230
- neon_load_reg64(middle, a->vm);
231
+ read_neon_element64(right, a->vn, 1, MO_64);
232
+ read_neon_element64(middle, a->vm, 0, MO_64);
233
tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
234
- neon_load_reg64(left, a->vm + 1);
235
+ read_neon_element64(left, a->vm, 1, MO_64);
236
tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
237
}
238
239
- neon_store_reg64(destright, a->vd);
240
- neon_store_reg64(destleft, a->vd + 1);
241
+ write_neon_element64(destright, a->vd, 0, MO_64);
242
+ write_neon_element64(destleft, a->vd, 1, MO_64);
243
244
tcg_temp_free_i64(destright);
245
tcg_temp_free_i64(destleft);
246
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
247
248
if (accfn) {
249
TCGv_i64 tmp64 = tcg_temp_new_i64();
250
- neon_load_reg64(tmp64, a->vd + pass);
251
+ read_neon_element64(tmp64, a->vd, pass, MO_64);
252
accfn(rd_64, tmp64, rd_64);
253
tcg_temp_free_i64(tmp64);
254
}
255
- neon_store_reg64(rd_64, a->vd + pass);
256
+ write_neon_element64(rd_64, a->vd, pass, MO_64);
257
tcg_temp_free_i64(rd_64);
258
}
259
return true;
260
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
261
rd0 = tcg_temp_new_i32();
262
rd1 = tcg_temp_new_i32();
263
264
- neon_load_reg64(rm, a->vm);
265
+ read_neon_element64(rm, a->vm, 0, MO_64);
266
narrowfn(rd0, cpu_env, rm);
267
- neon_load_reg64(rm, a->vm + 1);
268
+ read_neon_element64(rm, a->vm, 1, MO_64);
269
narrowfn(rd1, cpu_env, rm);
270
write_neon_element32(rd0, a->vd, 0, MO_32);
271
write_neon_element32(rd1, a->vd, 1, MO_32);
272
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
273
274
widenfn(rd, rm0);
275
tcg_gen_shli_i64(rd, rd, 8 << a->size);
276
- neon_store_reg64(rd, a->vd);
277
+ write_neon_element64(rd, a->vd, 0, MO_64);
278
widenfn(rd, rm1);
279
tcg_gen_shli_i64(rd, rd, 8 << a->size);
280
- neon_store_reg64(rd, a->vd + 1);
281
+ write_neon_element64(rd, a->vd, 1, MO_64);
282
283
tcg_temp_free_i64(rd);
284
tcg_temp_free_i32(rm0);
285
@@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a)
286
rm = tcg_temp_new_i64();
287
rd = tcg_temp_new_i64();
288
for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
289
- neon_load_reg64(rm, a->vm + pass);
290
- neon_load_reg64(rd, a->vd + pass);
291
- neon_store_reg64(rm, a->vd + pass);
292
- neon_store_reg64(rd, a->vm + pass);
293
+ read_neon_element64(rm, a->vm, pass, MO_64);
294
+ read_neon_element64(rd, a->vd, pass, MO_64);
295
+ write_neon_element64(rm, a->vd, pass, MO_64);
296
+ write_neon_element64(rd, a->vm, pass, MO_64);
297
}
298
tcg_temp_free_i64(rm);
299
tcg_temp_free_i64(rd);
300
--
2.20.1
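[Editor's note: the 64-bit accessors added in the patch above follow the same shape as the 32-bit ones: a switch over the MemOp with a hard failure on sizes the helper does not handle, so a mis-sized call is caught immediately rather than silently truncating. A self-contained sketch with illustrative names; plain assert() stands in for glib's g_assert_not_reached():]

    #include <assert.h>
    #include <stdint.h>

    enum { SK_MO_64 = 3 };  /* stand-in for QEMU's MemOp MO_64 */

    static uint64_t read_elem64_sketch(const uint64_t *regs, long idx,
                                       int memop)
    {
        switch (memop) {
        case SK_MO_64:
            return regs[idx];
        default:
            assert(!"unhandled MemOp size");
            return 0;
        }
    }
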
From: Richard Henderson <richard.henderson@linaro.org>

The only uses of this function are for loading VFP
double-precision values, and nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 8 ++--
target/arm/translate-vfp.c.inc | 84 +++++++++++++++++-----------------
2 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
18
+++ b/target/arm/translate.c
19
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
20
}
21
}
22
23
-static inline void neon_load_reg64(TCGv_i64 var, int reg)
24
+static inline void vfp_load_reg64(TCGv_i64 var, int reg)
25
{
26
- tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
27
+ tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(true, reg));
28
}
29
30
-static inline void neon_store_reg64(TCGv_i64 var, int reg)
31
+static inline void vfp_store_reg64(TCGv_i64 var, int reg)
32
{
33
- tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
34
+ tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(true, reg));
35
}
36
37
static inline void vfp_load_reg32(TCGv_i32 var, int reg)
38
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-vfp.c.inc
41
+++ b/target/arm/translate-vfp.c.inc
42
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
43
tcg_gen_ext_i32_i64(nf, cpu_NF);
44
tcg_gen_ext_i32_i64(vf, cpu_VF);
45
46
- neon_load_reg64(frn, rn);
47
- neon_load_reg64(frm, rm);
48
+ vfp_load_reg64(frn, rn);
49
+ vfp_load_reg64(frm, rm);
50
switch (a->cc) {
51
case 0: /* eq: Z */
52
tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero,
53
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
54
tcg_temp_free_i64(tmp);
55
break;
56
}
57
- neon_store_reg64(dest, rd);
58
+ vfp_store_reg64(dest, rd);
59
tcg_temp_free_i64(frn);
60
tcg_temp_free_i64(frm);
61
tcg_temp_free_i64(dest);
62
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
63
TCGv_i64 tcg_res;
64
tcg_op = tcg_temp_new_i64();
65
tcg_res = tcg_temp_new_i64();
66
- neon_load_reg64(tcg_op, rm);
67
+ vfp_load_reg64(tcg_op, rm);
68
gen_helper_rintd(tcg_res, tcg_op, fpst);
69
- neon_store_reg64(tcg_res, rd);
70
+ vfp_store_reg64(tcg_res, rd);
71
tcg_temp_free_i64(tcg_op);
72
tcg_temp_free_i64(tcg_res);
73
} else {
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
tcg_double = tcg_temp_new_i64();
76
tcg_res = tcg_temp_new_i64();
77
tcg_tmp = tcg_temp_new_i32();
78
- neon_load_reg64(tcg_double, rm);
79
+ vfp_load_reg64(tcg_double, rm);
80
if (is_signed) {
81
gen_helper_vfp_tosld(tcg_res, tcg_double, tcg_shift, fpst);
82
} else {
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
84
tmp = tcg_temp_new_i64();
85
if (a->l) {
86
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
87
- neon_store_reg64(tmp, a->vd);
88
+ vfp_store_reg64(tmp, a->vd);
89
} else {
90
- neon_load_reg64(tmp, a->vd);
91
+ vfp_load_reg64(tmp, a->vd);
92
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
93
}
94
tcg_temp_free_i64(tmp);
95
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
96
if (a->l) {
97
/* load */
98
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
99
- neon_store_reg64(tmp, a->vd + i);
100
+ vfp_store_reg64(tmp, a->vd + i);
101
} else {
102
/* store */
103
- neon_load_reg64(tmp, a->vd + i);
104
+ vfp_load_reg64(tmp, a->vd + i);
105
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
106
}
107
tcg_gen_addi_i32(addr, addr, offset);
108
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
109
fd = tcg_temp_new_i64();
110
fpst = fpstatus_ptr(FPST_FPCR);
111
112
- neon_load_reg64(f0, vn);
113
- neon_load_reg64(f1, vm);
114
+ vfp_load_reg64(f0, vn);
115
+ vfp_load_reg64(f1, vm);
116
117
for (;;) {
118
if (reads_vd) {
119
- neon_load_reg64(fd, vd);
120
+ vfp_load_reg64(fd, vd);
121
}
122
fn(fd, f0, f1, fpst);
123
- neon_store_reg64(fd, vd);
124
+ vfp_store_reg64(fd, vd);
125
126
if (veclen == 0) {
127
break;
128
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
129
veclen--;
130
vd = vfp_advance_dreg(vd, delta_d);
131
vn = vfp_advance_dreg(vn, delta_d);
132
- neon_load_reg64(f0, vn);
133
+ vfp_load_reg64(f0, vn);
134
if (delta_m) {
135
vm = vfp_advance_dreg(vm, delta_m);
136
- neon_load_reg64(f1, vm);
137
+ vfp_load_reg64(f1, vm);
138
}
139
}
140
141
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
142
f0 = tcg_temp_new_i64();
143
fd = tcg_temp_new_i64();
144
145
- neon_load_reg64(f0, vm);
146
+ vfp_load_reg64(f0, vm);
147
148
for (;;) {
149
fn(fd, f0);
150
- neon_store_reg64(fd, vd);
151
+ vfp_store_reg64(fd, vd);
152
153
if (veclen == 0) {
154
break;
155
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
156
/* single source one-many */
157
while (veclen--) {
158
vd = vfp_advance_dreg(vd, delta_d);
159
- neon_store_reg64(fd, vd);
160
+ vfp_store_reg64(fd, vd);
161
}
162
break;
163
}
164
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
165
veclen--;
166
vd = vfp_advance_dreg(vd, delta_d);
167
vd = vfp_advance_dreg(vm, delta_m);
168
- neon_load_reg64(f0, vm);
169
+ vfp_load_reg64(f0, vm);
170
}
171
172
tcg_temp_free_i64(f0);
173
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
174
vm = tcg_temp_new_i64();
175
vd = tcg_temp_new_i64();
176
177
- neon_load_reg64(vn, a->vn);
178
- neon_load_reg64(vm, a->vm);
179
+ vfp_load_reg64(vn, a->vn);
180
+ vfp_load_reg64(vm, a->vm);
181
if (neg_n) {
182
/* VFNMS, VFMS */
183
gen_helper_vfp_negd(vn, vn);
184
}
185
- neon_load_reg64(vd, a->vd);
186
+ vfp_load_reg64(vd, a->vd);
187
if (neg_d) {
188
/* VFNMA, VFNMS */
189
gen_helper_vfp_negd(vd, vd);
190
}
191
fpst = fpstatus_ptr(FPST_FPCR);
192
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
193
- neon_store_reg64(vd, a->vd);
194
+ vfp_store_reg64(vd, a->vd);
195
196
tcg_temp_free_ptr(fpst);
197
tcg_temp_free_i64(vn);
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
199
fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
200
201
for (;;) {
202
- neon_store_reg64(fd, vd);
203
+ vfp_store_reg64(fd, vd);
204
205
if (veclen == 0) {
206
break;
207
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
208
vd = tcg_temp_new_i64();
209
vm = tcg_temp_new_i64();
210
211
- neon_load_reg64(vd, a->vd);
212
+ vfp_load_reg64(vd, a->vd);
213
if (a->z) {
214
tcg_gen_movi_i64(vm, 0);
215
} else {
216
- neon_load_reg64(vm, a->vm);
217
+ vfp_load_reg64(vm, a->vm);
218
}
219
220
if (a->e) {
221
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
222
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
223
vd = tcg_temp_new_i64();
224
gen_helper_vfp_fcvt_f16_to_f64(vd, tmp, fpst, ahp_mode);
225
- neon_store_reg64(vd, a->vd);
226
+ vfp_store_reg64(vd, a->vd);
227
tcg_temp_free_i32(ahp_mode);
228
tcg_temp_free_ptr(fpst);
229
tcg_temp_free_i32(tmp);
230
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
231
tmp = tcg_temp_new_i32();
232
vm = tcg_temp_new_i64();
233
234
- neon_load_reg64(vm, a->vm);
235
+ vfp_load_reg64(vm, a->vm);
236
gen_helper_vfp_fcvt_f64_to_f16(tmp, vm, fpst, ahp_mode);
237
tcg_temp_free_i64(vm);
238
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
239
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
240
}
241
242
tmp = tcg_temp_new_i64();
243
- neon_load_reg64(tmp, a->vm);
244
+ vfp_load_reg64(tmp, a->vm);
245
fpst = fpstatus_ptr(FPST_FPCR);
246
gen_helper_rintd(tmp, tmp, fpst);
247
- neon_store_reg64(tmp, a->vd);
248
+ vfp_store_reg64(tmp, a->vd);
249
tcg_temp_free_ptr(fpst);
250
tcg_temp_free_i64(tmp);
251
return true;
252
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
253
}
254
255
tmp = tcg_temp_new_i64();
256
- neon_load_reg64(tmp, a->vm);
257
+ vfp_load_reg64(tmp, a->vm);
258
fpst = fpstatus_ptr(FPST_FPCR);
259
tcg_rmode = tcg_const_i32(float_round_to_zero);
260
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
261
gen_helper_rintd(tmp, tmp, fpst);
262
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
263
- neon_store_reg64(tmp, a->vd);
264
+ vfp_store_reg64(tmp, a->vd);
265
tcg_temp_free_ptr(fpst);
266
tcg_temp_free_i64(tmp);
267
tcg_temp_free_i32(tcg_rmode);
268
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
269
}
270
271
tmp = tcg_temp_new_i64();
272
- neon_load_reg64(tmp, a->vm);
273
+ vfp_load_reg64(tmp, a->vm);
274
fpst = fpstatus_ptr(FPST_FPCR);
275
gen_helper_rintd_exact(tmp, tmp, fpst);
276
- neon_store_reg64(tmp, a->vd);
277
+ vfp_store_reg64(tmp, a->vd);
278
tcg_temp_free_ptr(fpst);
279
tcg_temp_free_i64(tmp);
280
return true;
281
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
282
vd = tcg_temp_new_i64();
283
vfp_load_reg32(vm, a->vm);
284
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
285
- neon_store_reg64(vd, a->vd);
286
+ vfp_store_reg64(vd, a->vd);
287
tcg_temp_free_i32(vm);
288
tcg_temp_free_i64(vd);
289
return true;
290
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
291
292
vd = tcg_temp_new_i32();
293
vm = tcg_temp_new_i64();
294
- neon_load_reg64(vm, a->vm);
295
+ vfp_load_reg64(vm, a->vm);
296
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
297
vfp_store_reg32(vd, a->vd);
298
tcg_temp_free_i32(vd);
299
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
300
/* u32 -> f64 */
301
gen_helper_vfp_uitod(vd, vm, fpst);
302
}
303
- neon_store_reg64(vd, a->vd);
304
+ vfp_store_reg64(vd, a->vd);
305
tcg_temp_free_i32(vm);
306
tcg_temp_free_i64(vd);
307
tcg_temp_free_ptr(fpst);
308
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
309
310
vm = tcg_temp_new_i64();
311
vd = tcg_temp_new_i32();
312
- neon_load_reg64(vm, a->vm);
313
+ vfp_load_reg64(vm, a->vm);
314
gen_helper_vjcvt(vd, vm, cpu_env);
315
vfp_store_reg32(vd, a->vd);
316
tcg_temp_free_i64(vm);
317
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
318
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
319
320
vd = tcg_temp_new_i64();
321
- neon_load_reg64(vd, a->vd);
322
+ vfp_load_reg64(vd, a->vd);
323
324
fpst = fpstatus_ptr(FPST_FPCR);
325
shift = tcg_const_i32(frac_bits);
326
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
327
g_assert_not_reached();
328
}
329
330
- neon_store_reg64(vd, a->vd);
331
+ vfp_store_reg64(vd, a->vd);
332
tcg_temp_free_i64(vd);
333
tcg_temp_free_i32(shift);
334
tcg_temp_free_ptr(fpst);
335
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
336
fpst = fpstatus_ptr(FPST_FPCR);
337
vm = tcg_temp_new_i64();
338
vd = tcg_temp_new_i32();
339
- neon_load_reg64(vm, a->vm);
340
+ vfp_load_reg64(vm, a->vm);
341
342
if (a->s) {
343
if (a->rz) {
344
--
345
2.20.1
346
347
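
[Editor's note: after the double-precision rename in the patch above, a typical VFP double-precision sequence reads as below. This is just the usage pattern already visible in the diff, repeated here for clarity:]

    vfp_load_reg64(tmp, a->vm);       /* fetch D[vm] */
    gen_helper_rintd(tmp, tmp, fpst); /* operate on the temp */
    vfp_store_reg64(tmp, a->vd);      /* write back to D[vd] */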

From: Richard Henderson <richard.henderson@linaro.org>

In both cases, we can sink the write-back and perform
the accumulate into the normal destination temps.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-neon.c.inc | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
if (accfn) {
tmp = tcg_temp_new_i64();
read_neon_element64(tmp, a->vd, 0, MO_64);
- accfn(tmp, tmp, rd0);
- write_neon_element64(tmp, a->vd, 0, MO_64);
+ accfn(rd0, tmp, rd0);
read_neon_element64(tmp, a->vd, 1, MO_64);
- accfn(tmp, tmp, rd1);
- write_neon_element64(tmp, a->vd, 1, MO_64);
+ accfn(rd1, tmp, rd1);
tcg_temp_free_i64(tmp);
- } else {
- write_neon_element64(rd0, a->vd, 0, MO_64);
- write_neon_element64(rd1, a->vd, 1, MO_64);
}

+ write_neon_element64(rd0, a->vd, 0, MO_64);
+ write_neon_element64(rd1, a->vd, 1, MO_64);
tcg_temp_free_i64(rd0);
tcg_temp_free_i64(rd1);

@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
if (accfn) {
TCGv_i64 t64 = tcg_temp_new_i64();
read_neon_element64(t64, a->vd, 0, MO_64);
- accfn(t64, t64, rn0_64);
- write_neon_element64(t64, a->vd, 0, MO_64);
+ accfn(rn0_64, t64, rn0_64);
read_neon_element64(t64, a->vd, 1, MO_64);
- accfn(t64, t64, rn1_64);
- write_neon_element64(t64, a->vd, 1, MO_64);
+ accfn(rn1_64, t64, rn1_64);
tcg_temp_free_i64(t64);
- } else {
- write_neon_element64(rn0_64, a->vd, 0, MO_64);
- write_neon_element64(rn1_64, a->vd, 1, MO_64);
}
+
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
tcg_temp_free_i64(rn0_64);
tcg_temp_free_i64(rn1_64);
return true;
--
2.20.1
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Commit af7d64ede0b9 (hw/arm/sysbus-fdt: Allow device matching with DT
3
We can use proper widening loads to extend 32-bit inputs,
4
compatible value) introduced a match_fn callback which gets called
4
and skip the "widenfn" step.
5
for each registered combo to check whether a sysbus device can be
6
dynamically instantiated. However the callback gets called even if
7
the device type does not match the binding combo typename field.
8
This causes an assert when passing "-device ramfb" to the qemu
9
command line as vfio_platform_match() gets called on a non
10
vfio-platform device.
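For the widening-loads change: the reason the separate "widenfn" step can
be dropped is that the MemOp-driven loads already perform the extension.
A sketch of the semantics (constants from include/exec/memop.h; a gloss,
not a verbatim copy of the source):

    /* MO_UL (= MO_32):           32-bit load, zero-extended to 64 bits
     * MO_SL (= MO_32 | MO_SIGN): 32-bit load, sign-extended to 64 bits
     * MO_Q  (= MO_64):           plain 64-bit load
     *
     * So read_neon_element64(tmp, reg, ele, MO_SL) subsumes the old
     * "32-bit read + signed widenfn" pair in a single widening load. */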
11
5
12
To fix this regression, let's change the add_fdt_node() logic so
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
that we first check the type and, if the match_fn callback is defined,
7
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
14
then we also call it.
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
16
Binding combos only requesting a type check do not define the
17
match_fn callback.
18
19
Fixes: af7d64ede0b9 (hw/arm/sysbus-fdt: Allow device matching with
20
DT compatible value)
21
22
Signed-off-by: Eric Auger <eric.auger@redhat.com>
23
Reported-by: Thomas Huth <thuth@redhat.com>
24
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
25
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
26
Message-id: 20181106184212.29377-1-eric.auger@redhat.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
10
---
29
hw/arm/sysbus-fdt.c | 12 +++++++-----
11
target/arm/translate.c | 6 +++
30
1 file changed, 7 insertions(+), 5 deletions(-)
12
target/arm/translate-neon.c.inc | 66 ++++++++++++++++++---------------
13
2 files changed, 43 insertions(+), 29 deletions(-)
31
14
32
diff --git a/hw/arm/sysbus-fdt.c b/hw/arm/sysbus-fdt.c
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
33
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/sysbus-fdt.c
17
--- a/target/arm/translate.c
35
+++ b/hw/arm/sysbus-fdt.c
18
+++ b/target/arm/translate.c
36
@@ -XXX,XX +XXX,XX @@ static bool type_match(SysBusDevice *sbdev, const BindingEntry *entry)
19
@@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
37
return !strcmp(object_get_typename(OBJECT(sbdev)), entry->typename);
20
long off = neon_element_offset(reg, ele, memop);
21
22
switch (memop) {
23
+ case MO_SL:
24
+ tcg_gen_ld32s_i64(dest, cpu_env, off);
25
+ break;
26
+ case MO_UL:
27
+ tcg_gen_ld32u_i64(dest, cpu_env, off);
28
+ break;
29
case MO_Q:
30
tcg_gen_ld_i64(dest, cpu_env, off);
31
break;
32
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-neon.c.inc
35
+++ b/target/arm/translate-neon.c.inc
36
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
37
static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
38
NeonGenWidenFn *widenfn,
39
NeonGenTwo64OpFn *opfn,
40
- bool src1_wide)
41
+ int src1_mop, int src2_mop)
42
{
43
/* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
44
TCGv_i64 rn0_64, rn1_64, rm_64;
45
- TCGv_i32 rm;
46
47
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
48
return false;
49
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
50
return false;
51
}
52
53
- if (!widenfn || !opfn) {
54
+ if (!opfn) {
55
/* size == 3 case, which is an entirely different insn group */
56
return false;
57
}
58
59
- if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
60
+ if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
61
return false;
62
}
63
64
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
65
rn1_64 = tcg_temp_new_i64();
66
rm_64 = tcg_temp_new_i64();
67
68
- if (src1_wide) {
69
- read_neon_element64(rn0_64, a->vn, 0, MO_64);
70
+ if (src1_mop >= 0) {
71
+ read_neon_element64(rn0_64, a->vn, 0, src1_mop);
72
} else {
73
TCGv_i32 tmp = tcg_temp_new_i32();
74
read_neon_element32(tmp, a->vn, 0, MO_32);
75
widenfn(rn0_64, tmp);
76
tcg_temp_free_i32(tmp);
77
}
78
- rm = tcg_temp_new_i32();
79
- read_neon_element32(rm, a->vm, 0, MO_32);
80
+ if (src2_mop >= 0) {
81
+ read_neon_element64(rm_64, a->vm, 0, src2_mop);
82
+ } else {
83
+ TCGv_i32 tmp = tcg_temp_new_i32();
84
+ read_neon_element32(tmp, a->vm, 0, MO_32);
85
+ widenfn(rm_64, tmp);
86
+ tcg_temp_free_i32(tmp);
87
+ }
88
89
- widenfn(rm_64, rm);
90
- tcg_temp_free_i32(rm);
91
opfn(rn0_64, rn0_64, rm_64);
92
93
/*
94
* Load second pass inputs before storing the first pass result, to
95
* avoid incorrect results if a narrow input overlaps with the result.
96
*/
97
- if (src1_wide) {
98
- read_neon_element64(rn1_64, a->vn, 1, MO_64);
99
+ if (src1_mop >= 0) {
100
+ read_neon_element64(rn1_64, a->vn, 1, src1_mop);
101
} else {
102
TCGv_i32 tmp = tcg_temp_new_i32();
103
read_neon_element32(tmp, a->vn, 1, MO_32);
104
widenfn(rn1_64, tmp);
105
tcg_temp_free_i32(tmp);
106
}
107
- rm = tcg_temp_new_i32();
108
- read_neon_element32(rm, a->vm, 1, MO_32);
109
+ if (src2_mop >= 0) {
110
+ read_neon_element64(rm_64, a->vm, 1, src2_mop);
111
+ } else {
112
+ TCGv_i32 tmp = tcg_temp_new_i32();
113
+ read_neon_element32(tmp, a->vm, 1, MO_32);
114
+ widenfn(rm_64, tmp);
115
+ tcg_temp_free_i32(tmp);
116
+ }
117
118
write_neon_element64(rn0_64, a->vd, 0, MO_64);
119
120
- widenfn(rm_64, rm);
121
- tcg_temp_free_i32(rm);
122
opfn(rn1_64, rn1_64, rm_64);
123
write_neon_element64(rn1_64, a->vd, 1, MO_64);
124
125
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
126
return true;
38
}
127
}
39
128
40
-#define TYPE_BINDING(type, add_fn) {(type), NULL, (add_fn), type_match}
129
-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
41
+#define TYPE_BINDING(type, add_fn) {(type), NULL, (add_fn), NULL}
130
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
42
131
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
43
/* list of supported dynamic sysbus bindings */
132
{ \
44
static const BindingEntry bindings[] = {
133
static NeonGenWidenFn * const widenfn[] = { \
45
@@ -XXX,XX +XXX,XX @@ static void add_fdt_node(SysBusDevice *sbdev, void *opaque)
134
gen_helper_neon_widen_##S##8, \
46
for (i = 0; i < ARRAY_SIZE(bindings); i++) {
135
gen_helper_neon_widen_##S##16, \
47
const BindingEntry *iter = &bindings[i];
136
- tcg_gen_##EXT##_i32_i64, \
48
137
- NULL, \
49
- if (iter->match_fn(sbdev, iter)) {
138
+ NULL, NULL, \
50
- ret = iter->add_fn(sbdev, opaque);
139
}; \
51
- assert(!ret);
140
static NeonGenTwo64OpFn * const addfn[] = { \
52
- return;
141
gen_helper_neon_##OP##l_u16, \
53
+ if (type_match(sbdev, iter)) {
142
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
54
+ if (!iter->match_fn || iter->match_fn(sbdev, iter)) {
143
tcg_gen_##OP##_i64, \
55
+ ret = iter->add_fn(sbdev, opaque);
144
NULL, \
56
+ assert(!ret);
145
}; \
57
+ return;
146
- return do_prewiden_3d(s, a, widenfn[a->size], \
58
+ }
147
- addfn[a->size], SRC1WIDE); \
59
}
148
+ int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
149
+ return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
150
+ SRC1WIDE ? MO_Q : narrow_mop, \
151
+ narrow_mop); \
60
}
152
}
61
error_report("Device %s can not be dynamically instantiated",
153
154
-DO_PREWIDEN(VADDL_S, s, ext, add, false)
155
-DO_PREWIDEN(VADDL_U, u, extu, add, false)
156
-DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
157
-DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
158
-DO_PREWIDEN(VADDW_S, s, ext, add, true)
159
-DO_PREWIDEN(VADDW_U, u, extu, add, true)
160
-DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
161
-DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
162
+DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN)
163
+DO_PREWIDEN(VADDL_U, u, add, false, 0)
164
+DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN)
165
+DO_PREWIDEN(VSUBL_U, u, sub, false, 0)
166
+DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN)
167
+DO_PREWIDEN(VADDW_U, u, add, true, 0)
168
+DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN)
169
+DO_PREWIDEN(VSUBW_U, u, sub, true, 0)
170
171
static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
172
NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
62
--
173
--
63
2.19.1
174
2.20.1
64
175
65
176
1
Remove a TODO comment about implementing the vectored interrupt
1
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
2
controller. We have had an implementation of that for a decade;
2
meant we were using the H4() address swizzler macro rather than the
3
it's in hw/intc/pl190.c.
3
H2() which is required for 2-byte data. This had no effect on
4
little-endian hosts but meant we put the result data into the
5
destination Dreg in the wrong order on big-endian hosts.
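For reference, the swizzler macros are defined along these lines in
QEMU's target/arm helpers (a sketch of the idea, not a verbatim copy):

    /* Host-endian element-index adjustment: on a big-endian host the
     * elements within each 64-bit chunk are stored in reversed order,
     * so indexes are XOR-adjusted according to the element size. */
    #ifdef HOST_WORDS_BIGENDIAN
    #define H1(x)  ((x) ^ 7)   /* 1-byte elements */
    #define H2(x)  ((x) ^ 3)   /* 2-byte elements */
    #define H4(x)  ((x) ^ 1)   /* 4-byte elements */
    #else
    #define H1(x)  (x)
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif

Indexing uint16_t data with H4() therefore applies the wrong permutation
on big-endian hosts, which is how the results ended up reordered.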
4
6
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181106164118.16184-1-peter.maydell@linaro.org
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
10
---
11
---
11
target/arm/helper.c | 1 -
12
target/arm/vec_helper.c | 8 ++++----
12
1 file changed, 1 deletion(-)
13
1 file changed, 4 insertions(+), 4 deletions(-)
13
14
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
--- a/target/arm/vec_helper.c
17
+++ b/target/arm/helper.c
18
+++ b/target/arm/vec_helper.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
19
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
19
return;
20
r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
21
r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
22
\
23
- d[H4(0)] = r0; \
24
- d[H4(1)] = r1; \
25
- d[H4(2)] = r2; \
26
- d[H4(3)] = r3; \
27
+ d[H2(0)] = r0; \
28
+ d[H2(1)] = r1; \
29
+ d[H2(2)] = r2; \
30
+ d[H2(3)] = r3; \
20
}
31
}
21
32
22
- /* TODO: Vectored interrupt controller. */
33
DO_NEON_PAIRWISE(neon_padd, add)
23
switch (cs->exception_index) {
24
case EXCP_UDEF:
25
new_mode = ARM_CPU_MODE_UND;
26
--
34
--
27
2.19.1
35
2.20.1
28
36
29
37
New patch
1
The helper functions for performing the udot/sdot operations against
2
a scalar were not using an address-swizzling macro when converting
3
the index of the scalar element into a pointer into the vm array.
4
This had no effect on little-endian hosts but meant we generated
5
incorrect results on big-endian hosts.
1
6
7
For these insns, the index is indexing over groups of 4 8-bit values,
8
so 32 bits per indexed entity, and H4() is therefore what we want.
9
(For Neon the only possible input indexes are 0 and 1.)
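A quick worked example of the fix (illustrative values only):

    /* Each indexed entity is a 32-bit group of four bytes, so on a
     * big-endian host H4(x) = x ^ 1 flips the 32-bit lane:
     *   index 0 -> byte offset H4(0) * 4 = 4
     *   index 1 -> byte offset H4(1) * 4 = 0
     * which is where those four bytes actually sit in host memory.
     * On a little-endian host H4() is the identity. */
    int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;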
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
15
---
16
target/arm/vec_helper.c | 4 ++--
17
1 file changed, 2 insertions(+), 2 deletions(-)
18
19
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/vec_helper.c
22
+++ b/target/arm/vec_helper.c
23
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
24
intptr_t index = simd_data(desc);
25
uint32_t *d = vd;
26
int8_t *n = vn;
27
- int8_t *m_indexed = (int8_t *)vm + index * 4;
28
+ int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
29
30
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
31
* Otherwise opr_sz is a multiple of 16.
32
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
33
intptr_t index = simd_data(desc);
34
uint32_t *d = vd;
35
uint8_t *n = vn;
36
- uint8_t *m_indexed = (uint8_t *)vm + index * 4;
37
+ uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
38
39
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
40
* Otherwise opr_sz is a multiple of 16.
41
--
42
2.20.1
43
44
1
Before we supported direct execution from MMIO regions, we
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
implemented workarounds in commit 720424359917887c926a33d2
3
which let us avoid doing so, even if the SAU or MPU region
4
was less than page-sized.
5
2
6
Once we implemented execute-from-MMIO, we removed part
3
HCR should be applied when NS is set, not when it is cleared.
7
of those workarounds in commit d4b6275df320cee76; but
8
we forgot the one in get_phys_addr_pmsav8() which
9
suppressed use of small SAU regions in executable regions.
10
Remove that workaround now.
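On the HCR.FB fix: it works because arm_hcr_el2_eff() returns the
effective value of HCR_EL2, with bits reading as zero when EL2 is not
enabled for the current security state, so the old explicit
arm_is_secure_below_el3() test is folded in. A sketch of the resulting
check (matching the hunk below):

    /* Non-IS TLB maintenance ops are upgraded to the broadcast (IS)
     * form when EL1 runs under a hypervisor that has set HCR_EL2.FB. */
    static bool tlb_force_broadcast(CPUARMState *env)
    {
        return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
    }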
11
4
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20181106163801.14474-1-peter.maydell@linaro.org
16
---
8
---
17
target/arm/helper.c | 12 ------------
9
target/arm/helper.c | 5 ++---
18
1 file changed, 12 deletions(-)
10
1 file changed, 2 insertions(+), 3 deletions(-)
19
11
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
14
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
15
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
16
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
25
17
26
ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
18
/*
27
txattrs, prot, &mpu_is_subpage, fi, NULL);
19
* Non-IS variants of TLB operations are upgraded to
28
- /*
20
- * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
29
- * TODO: this is a temporary hack to ignore the fact that the SAU region
21
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
30
- * is smaller than a page if this is an executable region. We never
22
* force broadcast of these operations.
31
- * supported small MPU regions, but we did (accidentally) allow small
23
*/
32
- * SAU regions, and if we now made small SAU regions not be executable
24
static bool tlb_force_broadcast(CPUARMState *env)
33
- * then this would break previously working guest code. We can't
25
{
34
- * remove this until/unless we implement support for execution from
26
- return (env->cp15.hcr_el2 & HCR_FB) &&
35
- * small regions.
27
- arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
36
- */
28
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
37
- if (*prot & PAGE_EXEC) {
38
- sattrs.subpage = false;
39
- }
40
*page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
41
return ret;
42
}
29
}
30
31
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
43
--
32
--
44
2.19.1
33
2.20.1
45
34
46
35
1
This reverts commit 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f.
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
2
3
The implementation of HCR.VI and VF in that commit is not
3
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
4
correct -- they do not track the overall "is there a pending
4
future HCR_EL2.TLOR when S-EL2 is enabled.
5
VIRQ or VFIQ" status, but whether there is a pending interrupt
6
due to "this mechanism", ie the hypervisor having set the VI/VF
7
bits. The overall pending state for VIRQ and VFIQ is effectively
8
the logical OR of the inbound lines from the GIC with the
9
VI and VF bits. Commit 8a0fc3a29fc231 would result in pending
10
VIRQ/VFIQ possibly being lost when the hypervisor wrote to HCR.
11
5
12
As a preliminary to implementing the HCR.VI/VF feature properly,
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
13
revert the broken one entirely.
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
18
Message-id: 20181109134731.11605-2-peter.maydell@linaro.org
19
---
9
---
20
target/arm/helper.c | 47 ++++-----------------------------------------
10
target/arm/helper.c | 19 +++++--------------
21
1 file changed, 4 insertions(+), 43 deletions(-)
11
1 file changed, 5 insertions(+), 14 deletions(-)
22
12
23
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/helper.c
15
--- a/target/arm/helper.c
26
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
17
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
28
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
18
#endif
19
20
/* Shared logic between LORID and the rest of the LOR* registers.
21
- * Secure state has already been delt with.
22
+ * Secure state exclusion has already been dealt with.
23
*/
24
-static CPAccessResult access_lor_ns(CPUARMState *env)
25
+static CPAccessResult access_lor_ns(CPUARMState *env,
26
+ const ARMCPRegInfo *ri, bool isread)
29
{
27
{
30
ARMCPU *cpu = arm_env_get_cpu(env);
28
int el = arm_current_el(env);
31
- CPUState *cs = ENV_GET_CPU(env);
29
32
uint64_t valid_mask = HCR_MASK;
30
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env)
33
31
return CP_ACCESS_OK;
34
if (arm_feature(env, ARM_FEATURE_EL3)) {
32
}
35
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
33
36
/* Clear RES0 bits. */
34
-static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri,
37
value &= valid_mask;
35
- bool isread)
38
36
-{
39
- /*
37
- if (arm_is_secure_below_el3(env)) {
40
- * VI and VF are kept in cs->interrupt_request. Modifying that
38
- /* Access ok in secure mode. */
41
- * requires that we have the iothread lock, which is done by
39
- return CP_ACCESS_OK;
42
- * marking the reginfo structs as ARM_CP_IO.
43
- * Note that if a write to HCR pends a VIRQ or VFIQ it is never
44
- * possible for it to be taken immediately, because VIRQ and
45
- * VFIQ are masked unless running at EL0 or EL1, and HCR
46
- * can only be written at EL2.
47
- */
48
- g_assert(qemu_mutex_iothread_locked());
49
- if (value & HCR_VI) {
50
- cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
51
- } else {
52
- cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
53
- }
40
- }
54
- if (value & HCR_VF) {
41
- return access_lor_ns(env);
55
- cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
56
- } else {
57
- cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
58
- }
59
- value &= ~(HCR_VI | HCR_VF);
60
-
61
/* These bits change the MMU setup:
62
* HCR_VM enables stage 2 translation
63
* HCR_PTW forbids certain page-table setups
64
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
65
hcr_write(env, NULL, value);
66
}
67
68
-static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
69
-{
70
- /* The VI and VF bits live in cs->interrupt_request */
71
- uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
72
- CPUState *cs = ENV_GET_CPU(env);
73
-
74
- if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
75
- ret |= HCR_VI;
76
- }
77
- if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
78
- ret |= HCR_VF;
79
- }
80
- return ret;
81
-}
42
-}
82
-
43
-
83
static const ARMCPRegInfo el2_cp_reginfo[] = {
44
static CPAccessResult access_lor_other(CPUARMState *env,
84
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
45
const ARMCPRegInfo *ri, bool isread)
85
- .type = ARM_CP_IO,
46
{
86
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
87
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
48
/* Access denied in secure mode. */
88
- .writefn = hcr_write, .readfn = hcr_read },
49
return CP_ACCESS_TRAP;
89
+ .writefn = hcr_write },
50
}
90
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
51
- return access_lor_ns(env);
91
- .type = ARM_CP_ALIAS | ARM_CP_IO,
52
+ return access_lor_ns(env, ri, isread);
92
+ .type = ARM_CP_ALIAS,
53
}
93
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
54
94
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
55
/*
95
- .writefn = hcr_writelow, .readfn = hcr_read },
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
96
+ .writefn = hcr_writelow },
57
.type = ARM_CP_CONST, .resetvalue = 0 },
97
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
58
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
98
.type = ARM_CP_ALIAS,
59
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
99
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
60
- .access = PL1_R, .accessfn = access_lorid,
100
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
61
+ .access = PL1_R, .accessfn = access_lor_ns,
101
62
.type = ARM_CP_CONST, .resetvalue = 0 },
102
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
63
REGINFO_SENTINEL
103
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
64
};
104
- .type = ARM_CP_ALIAS | ARM_CP_IO,
105
+ .type = ARM_CP_ALIAS,
106
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
107
.access = PL2_RW,
108
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
109
--
65
--
110
2.19.1
66
2.20.1
111
67
112
68
New patch
1
If we're using the capstone disassembler, disassembly of a run of
2
instructions more than 32 bytes long disassembles the wrong data for
3
instructions beyond the 32 byte mark:
1
4
5
(qemu) xp /16x 0x100
6
0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
7
0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
8
0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
9
0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
10
(qemu) xp /16i 0x100
11
0x00000100: 00000005 andeq r0, r0, r5
12
0x00000104: 54410001 strbpl r0, [r1], #-1
13
0x00000108: 00000001 andeq r0, r0, r1
14
0x0000010c: 00001000 andeq r1, r0, r0
15
0x00000110: 00000000 andeq r0, r0, r0
16
0x00000114: 00000004 andeq r0, r0, r4
17
0x00000118: 54410002 strbpl r0, [r1], #-2
18
0x0000011c: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
19
0x00000120: 54410001 strbpl r0, [r1], #-1
20
0x00000124: 00000001 andeq r0, r0, r1
21
0x00000128: 00001000 andeq r1, r0, r0
22
0x0000012c: 00000000 andeq r0, r0, r0
23
0x00000130: 00000004 andeq r0, r0, r4
24
0x00000134: 54410002 strbpl r0, [r1], #-2
25
0x00000138: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
26
0x0000013c: 00000000 andeq r0, r0, r0
27
28
Here the disassembly of 0x120..0x13f is using the data that is in
29
0x104..0x123.
30
31
This is caused by passing the wrong value to the read_memory_func().
32
The intention is that at this point in the loop the 'cap_buf' buffer
33
already contains 'csize' bytes of data for the instruction at guest
34
addr 'pc', and we want to read in an extra 'tsize' bytes. Those
35
extra bytes are therefore at 'pc + csize', not 'pc'. On the first
36
time through the loop 'csize' happens to be zero, so the initial read
37
of 32 bytes into cap_buf is correct and as long as the disassembly
38
never needs to read more data we return the correct information.
39
40
Use the correct guest address in the call to read_memory_func().
41
42
Cc: qemu-stable@nongnu.org
43
Fixes: https://bugs.launchpad.net/qemu/+bug/1900779
44
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
46
Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
47
---
48
disas/capstone.c | 2 +-
49
1 file changed, 1 insertion(+), 1 deletion(-)
50
51
diff --git a/disas/capstone.c b/disas/capstone.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/disas/capstone.c
54
+++ b/disas/capstone.c
55
@@ -XXX,XX +XXX,XX @@ bool cap_disas_monitor(disassemble_info *info, uint64_t pc, int count)
56
57
/* Make certain that we can make progress. */
58
assert(tsize != 0);
59
- info->read_memory_func(pc, cap_buf + csize, tsize, info);
60
+ info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
61
csize += tsize;
62
63
if (cs_disasm_iter(handle, &cbuf, &csize, &pc, insn)) {
64
--
65
2.20.1
66
67
1
From: Alex Bennée <alex.bennee@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
We already have this symbol defined, so let's use it.
3
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
4
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):
4
5
5
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
6
CID 1432363 (#1 of 1): Unintentional integer overflow:
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
7
Message-id: 20181109152119.9242-7-alex.bennee@linaro.org
8
overflow_before_widen:
9
Potentially overflowing expression 1 << scale with type int
10
(32 bits, signed) is evaluated using 32-bit arithmetic, and
11
then used in a context that expects an expression of type
12
hwaddr (64 bits, unsigned).
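A sketch of the arithmetic involved (BIT_ULL() is defined in
include/qemu/bitops.h essentially as shown):

    #define BIT_ULL(nr)  (1ULL << (nr))

    /* '1' is a signed 32-bit int, so for scale >= 31 the shift misbehaves
     * before the product is ever widened to the 64-bit hwaddr: */
    num_pages = (num + 1) * (1 << scale);    /* 32-bit shift: can overflow */
    num_pages = (num + 1) * BIT_ULL(scale);  /* 64-bit arithmetic throughout */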
13
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Acked-by: Eric Auger <eric.auger@redhat.com>
16
Message-id: 20201030144617.1535064-1-philmd@redhat.com
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
19
---
10
target/arm/cpu.h | 2 +-
20
hw/arm/smmuv3.c | 3 ++-
11
1 file changed, 1 insertion(+), 1 deletion(-)
21
1 file changed, 2 insertions(+), 1 deletion(-)
12
22
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
14
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
25
--- a/hw/arm/smmuv3.c
16
+++ b/target/arm/cpu.h
26
+++ b/hw/arm/smmuv3.c
17
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)
27
@@ -XXX,XX +XXX,XX @@
18
28
*/
19
if (arm_feature(env, ARM_FEATURE_EL2) && !secure) {
29
20
route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||
30
#include "qemu/osdep.h"
21
- env->cp15.mdcr_el2 & (1 << 8);
31
+#include "qemu/bitops.h"
22
+ env->cp15.mdcr_el2 & MDCR_TDE;
32
#include "hw/irq.h"
33
#include "hw/sysbus.h"
34
#include "migration/vmstate.h"
35
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
36
scale = CMD_SCALE(cmd);
37
num = CMD_NUM(cmd);
38
ttl = CMD_TTL(cmd);
39
- num_pages = (num + 1) * (1 << (scale));
40
+ num_pages = (num + 1) * BIT_ULL(scale);
23
}
41
}
24
42
25
if (route_to_el2) {
43
if (type == SMMU_CMD_TLBI_NH_VA) {
26
--
44
--
27
2.19.1
45
2.20.1
28
46
29
47
1
From: Alex Bennée <alex.bennee@linaro.org>
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
2
3
This only fails with some (broken) versions of gdb but we should
3
When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so
4
treat the top bits of DBGBVR as RESS. Properly sign extend QEMU's
4
that SVE will not trap to EL3.
5
reference copy of dbgbvr and also update the register descriptions in
6
the comment.
7
5
8
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20181109152119.9242-2-alex.bennee@linaro.org
8
Message-id: 20201030151541.11976-1-remi@remlab.net
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
target/arm/kvm64.c | 17 +++++++++++++++--
11
hw/arm/boot.c | 3 +++
14
1 file changed, 15 insertions(+), 2 deletions(-)
12
1 file changed, 3 insertions(+)
15
13
16
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm64.c
16
--- a/hw/arm/boot.c
19
+++ b/target/arm/kvm64.c
17
+++ b/hw/arm/boot.c
20
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
18
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
21
* capable of fancier matching but that will require exposing that
19
if (cpu_isar_feature(aa64_mte, cpu)) {
22
* fanciness to GDB's interface
20
env->cp15.scr_el3 |= SCR_ATA;
23
*
21
}
24
- * D7.3.2 DBGBCR<n>_EL1, Debug Breakpoint Control Registers
22
+ if (cpu_isar_feature(aa64_sve, cpu)) {
25
+ * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
23
+ env->cp15.cptr_el[3] |= CPTR_EZ;
26
*
24
+ }
27
* 31 24 23 20 19 16 15 14 13 12 9 8 5 4 3 2 1 0
25
/* AArch64 kernels never boot in secure mode */
28
* +------+------+-------+-----+----+------+-----+------+-----+---+
26
assert(!info->secure_boot);
29
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
27
/* This hook is only supported for AArch32 currently:
30
* SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
31
* BAS: Byte Address Select (RES1 for AArch64)
32
* E: Enable bit
33
+ *
34
+ * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
35
+ *
36
+ * 63 53 52 49 48 2 1 0
37
+ * +------+-----------+----------+-----+
38
+ * | RESS | VA[52:49] | VA[48:2] | 0 0 |
39
+ * +------+-----------+----------+-----+
40
+ *
41
+ * Depending on the addressing mode bits the top bits of the register
42
+ * are a sign extension of the highest applicable VA bit. Some
43
+ * versions of GDB don't do it correctly so we ensure they are correct
44
+ * here so future PC comparisons will work properly.
45
*/
46
+
47
static int insert_hw_breakpoint(target_ulong addr)
48
{
49
HWBreakpoint brk = {
50
.bcr = 0x1, /* BCR E=1, enable */
51
- .bvr = addr
52
+ .bvr = sextract64(addr, 0, 53)
53
};
54
55
if (cur_hw_bps >= max_hw_bps) {
56
--
28
--
57
2.19.1
29
2.20.1
58
30
59
31
1
From: Thomas Huth <thuth@redhat.com>
1
From: AlexChen <alex.chen@huawei.com>
2
2
3
There is no active maintainer, but since Peter is picking up
3
In omap_update_display(), the pointer omap_lcd is dereferenced before
4
patches via qemu-arm@nongnu.org, I think we could at least use
4
being checked for validity, which may lead to a NULL pointer dereference.
5
"Odd Fixes" as status here.
5
So move the assignment to surface after checking that omap_lcd is valid,
6
and move surface_bits_per_pixel(surface) to after the surface assignment.
6
7
7
Signed-off-by: Thomas Huth <thuth@redhat.com>
8
Reported-by: Euler Robot <euler.robot@huawei.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Signed-off-by: AlexChen <alex.chen@huawei.com>
9
Message-id: 1541528230-31817-1-git-send-email-thuth@redhat.com
10
Message-id: 5F9CDB8A.9000001@huawei.com
10
[PMM: Also add myself as an M: contact]
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
MAINTAINERS | 7 +++++++
14
hw/display/omap_lcdc.c | 10 +++++++---
14
1 file changed, 7 insertions(+)
15
1 file changed, 7 insertions(+), 3 deletions(-)
15
16
16
diff --git a/MAINTAINERS b/MAINTAINERS
17
diff --git a/hw/display/omap_lcdc.c b/hw/display/omap_lcdc.c
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/MAINTAINERS
19
--- a/hw/display/omap_lcdc.c
19
+++ b/MAINTAINERS
20
+++ b/hw/display/omap_lcdc.c
20
@@ -XXX,XX +XXX,XX @@ F: hw/*/pxa2xx*
21
@@ -XXX,XX +XXX,XX @@ static void omap_lcd_interrupts(struct omap_lcd_panel_s *s)
21
F: hw/misc/mst_fpga.c
22
static void omap_update_display(void *opaque)
22
F: include/hw/arm/pxa.h
23
{
23
24
struct omap_lcd_panel_s *omap_lcd = (struct omap_lcd_panel_s *) opaque;
24
+Sharp SL-5500 (Collie) PDA
25
- DisplaySurface *surface = qemu_console_surface(omap_lcd->con);
25
+M: Peter Maydell <peter.maydell@linaro.org>
26
+ DisplaySurface *surface;
26
+L: qemu-arm@nongnu.org
27
draw_line_func draw_line;
27
+S: Odd Fixes
28
int size, height, first, last;
28
+F: hw/arm/collie.c
29
int width, linesize, step, bpp, frame_offset;
29
+F: hw/arm/strongarm*
30
hwaddr frame_base;
31
32
- if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable ||
33
- !surface_bits_per_pixel(surface)) {
34
+ if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable) {
35
+ return;
36
+ }
30
+
37
+
31
Stellaris
38
+ surface = qemu_console_surface(omap_lcd->con);
32
M: Peter Maydell <peter.maydell@linaro.org>
39
+ if (!surface_bits_per_pixel(surface)) {
33
L: qemu-arm@nongnu.org
40
return;
41
}
42
34
--
43
--
35
2.19.1
44
2.20.1
36
45
37
46
1
From: Alex Bennée <alex.bennee@linaro.org>
1
From: AlexChen <alex.chen@huawei.com>
2
2
3
You should declare you are using a global version of a variable before
3
In exynos4210_fimd_update(), the pointer s is dereferenced before
4
you attempt to modify it in a function.
4
being checked for validity, which may lead to a NULL pointer dereference.
5
So move the assignment to global_width after checking that s is valid.
5
6
6
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
7
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20181109152119.9242-5-alex.bennee@linaro.org
10
Message-id: 5F9F8D88.9030102@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
tests/guest-debug/test-gdbstub.py | 1 +
13
hw/display/exynos4210_fimd.c | 4 +++-
13
1 file changed, 1 insertion(+)
14
1 file changed, 3 insertions(+), 1 deletion(-)
14
15
15
diff --git a/tests/guest-debug/test-gdbstub.py b/tests/guest-debug/test-gdbstub.py
16
diff --git a/hw/display/exynos4210_fimd.c b/hw/display/exynos4210_fimd.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/tests/guest-debug/test-gdbstub.py
18
--- a/hw/display/exynos4210_fimd.c
18
+++ b/tests/guest-debug/test-gdbstub.py
19
+++ b/hw/display/exynos4210_fimd.c
19
@@ -XXX,XX +XXX,XX @@ def report(cond, msg):
20
@@ -XXX,XX +XXX,XX @@ static void exynos4210_fimd_update(void *opaque)
20
print ("PASS: %s" % (msg))
21
bool blend = false;
21
else:
22
uint8_t *host_fb_addr;
22
print ("FAIL: %s" % (msg))
23
bool is_dirty = false;
23
+ global failcount
24
- const int global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
24
failcount += 1
25
+ int global_width;
25
26
27
if (!s || !s->console || !s->enabled ||
28
surface_bits_per_pixel(qemu_console_surface(s->console)) == 0) {
29
return;
30
}
31
+
32
+ global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
33
exynos4210_update_resolution(s);
34
surface = qemu_console_surface(s->console);
26
35
27
--
36
--
28
2.19.1
37
2.20.1
29
38
30
39
New patch
1
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
2
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
3
This is incorrect when the security state being queried is not the
4
current one, because arm_current_el() uses the current security state
5
to determine which of the banked CONTROL.nPRIV bits to look at.
6
The effect was that if (for instance) Secure state was in privileged
7
mode but Non-Secure was not then we would return the wrong MMU index.
1
8
9
The only places where we are using this function in a way that could
10
trigger this bug are for the stack loads during a v8M function-return
11
and for the instruction fetch of a v8M SG insn.
12
13
Fix the bug by expanding out the M-profile version of the
14
arm_current_el() logic inline so it can use the passed-in secstate
15
rather than env->v7m.secure.
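A concrete instance of the failure mode (illustrative register values):

    /* Say CONTROL_S.nPRIV = 0 (Secure thread mode privileged) and
     * CONTROL_NS.nPRIV = 1 (Non-secure thread mode unprivileged), with
     * the CPU currently in Secure thread mode. arm_current_el() consults
     * the *current* (Secure) bank and returns 1, so a query for the
     * Non-secure state wrongly produced a privileged MMU index.
     * Checking the queried bank directly gives the right answer: */
    bool priv = arm_v7m_is_handler_mode(env) ||
                !(env->v7m.control[secstate] & 1);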
16
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
20
---
21
target/arm/m_helper.c | 3 ++-
22
1 file changed, 2 insertions(+), 1 deletion(-)
23
24
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/m_helper.c
27
+++ b/target/arm/m_helper.c
28
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
29
/* Return the MMU index for a v7M CPU in the specified security state */
30
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
31
{
32
- bool priv = arm_current_el(env) != 0;
33
+ bool priv = arm_v7m_is_handler_mode(env) ||
34
+ !(env->v7m.control[secstate] & 1);
35
36
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
37
}
38
--
39
2.20.1
40
41
New patch
1
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of
2
libraries for gio-2.0 which don't actually work when compiling
3
statically. (Specifically, the returned library string includes
4
-lmount, but not -lblkid which -lmount depends upon, so linking
5
fails due to missing symbols.)
1
6
7
Check that the libraries work, and don't enable gio if they don't,
8
in the same way we do for gnutls.
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
14
---
15
configure | 10 +++++++++-
16
1 file changed, 9 insertions(+), 1 deletion(-)
17
18
diff --git a/configure b/configure
19
index XXXXXXX..XXXXXXX 100755
20
--- a/configure
21
+++ b/configure
22
@@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then
23
fi
24
25
if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then
26
- gio=yes
27
gio_cflags=$($pkg_config --cflags gio-2.0)
28
gio_libs=$($pkg_config --libs gio-2.0)
29
gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0)
30
if [ ! -x "$gdbus_codegen" ]; then
31
gdbus_codegen=
32
fi
33
+ # Check that the libraries actually work -- Ubuntu 18.04 ships
34
+ # with pkg-config --static --libs data for gio-2.0 that is missing
35
+ # -lblkid and will give a link error.
36
+ write_c_skeleton
37
+ if compile_prog "" "gio_libs" ; then
38
+ gio=yes
39
+ else
40
+ gio=no
41
+ fi
42
else
43
gio=no
44
fi
45
--
46
2.20.1
47
48
New patch
1
In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt
2
into the GICv3CPUState struct's maintenance_irq field. This will
3
only work if the board happens to have already wired up the CPU
4
maintenance IRQ before the GIC was realized. Unfortunately this is
5
not the case for the 'virt' board, and so the value that gets copied
6
is NULL (since a qemu_irq is really a pointer to an IRQState struct
7
under the hood). The effect is that the CPU interface code never
8
actually raises the maintenance interrupt line.
1
9
10
Instead, since the GICv3CPUState has a pointer to the CPUState, make
11
the dereference at the point where we want to raise the interrupt, to
12
avoid an implicit requirement on board code to wire things up in a
13
particular order.
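For background, a qemu_irq is only a pointer (see hw/irq.h), so the early
copy simply snapshots NULL; a sketch of the distinction:

    typedef struct IRQState *qemu_irq;   /* hw/irq.h */

    /* Copying at realize time captures whatever the board had wired so
     * far (nothing, for 'virt'); dereferencing via cs->cpu at raise time
     * always sees the current wiring: */
    qemu_set_irq(ARM_CPU(cs->cpu)->gicv3_maintenance_interrupt, maintlevel);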
14
15
Reported-by: Jose Martins <josemartins90@gmail.com>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20201009153904.28529-1-peter.maydell@linaro.org
18
Reviewed-by: Luc Michel <luc@lmichel.fr>
19
---
20
include/hw/intc/arm_gicv3_common.h | 1 -
21
hw/intc/arm_gicv3_cpuif.c | 5 ++---
22
2 files changed, 2 insertions(+), 4 deletions(-)
23
24
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/hw/intc/arm_gicv3_common.h
27
+++ b/include/hw/intc/arm_gicv3_common.h
28
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
29
qemu_irq parent_fiq;
30
qemu_irq parent_virq;
31
qemu_irq parent_vfiq;
32
- qemu_irq maintenance_irq;
33
34
/* Redistributor */
35
uint32_t level; /* Current IRQ level */
36
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/intc/arm_gicv3_cpuif.c
39
+++ b/hw/intc/arm_gicv3_cpuif.c
40
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
41
int irqlevel = 0;
42
int fiqlevel = 0;
43
int maintlevel = 0;
44
+ ARMCPU *cpu = ARM_CPU(cs->cpu);
45
46
idx = hppvi_index(cs);
47
trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx);
48
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
49
50
qemu_set_irq(cs->parent_vfiq, fiqlevel);
51
qemu_set_irq(cs->parent_virq, irqlevel);
52
- qemu_set_irq(cs->maintenance_irq, maintlevel);
53
+ qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel);
54
}
55
56
static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
58
&& cpu->gic_num_lrs) {
59
int j;
60
61
- cs->maintenance_irq = cpu->gicv3_maintenance_interrupt;
62
-
63
cs->num_list_regs = cpu->gic_num_lrs;
64
cs->vpribits = cpu->gic_vpribits;
65
cs->vprebits = cpu->gic_vprebits;
66
--
67
2.20.1
68
69
New patch
1
The kerneldoc script currently emits Sphinx markup for a macro with
2
arguments that uses the c:function directive. This is correct for
3
Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow
4
documentation of macros with arguments and c:function is not picky
5
about the syntax of what it is passed. However, in Sphinx 3 the
6
c:macro directive was enhanced to support macros with arguments,
7
and c:function was made more picky about what syntax it accepted.
1
8
9
When kerneldoc is told that it needs to produce output for Sphinx
10
3 or later, make it emit c:function only for functions and c:macro
11
for macros with arguments. We assume that anything with a return
12
type is a function and anything without is a macro.
13
14
This fixes the Sphinx error:
15
16
/home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator
17
If declarator-id with parameters (e.g., 'void f(int arg)'):
18
Invalid C declaration: Expected identifier in nested name. [error at 25]
19
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
20
-------------------------^
21
If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'):
22
Error in declarator or parameters
23
Invalid C declaration: Expecting "(" in parameters. [error at 39]
24
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
25
---------------------------------------^
26
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
29
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
30
Message-id: 20201030174700.7204-2-peter.maydell@linaro.org
31
---
32
scripts/kernel-doc | 18 +++++++++++++++++-
33
1 file changed, 17 insertions(+), 1 deletion(-)
34
35
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
36
index XXXXXXX..XXXXXXX 100755
37
--- a/scripts/kernel-doc
38
+++ b/scripts/kernel-doc
39
@@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) {
40
    output_highlight_rst($args{'purpose'});
41
    $start = "\n\n**Syntax**\n\n ``";
42
} else {
43
-    print ".. c:function:: ";
44
+ if ((split(/\./, $sphinx_version))[0] >= 3) {
45
+ # Sphinx 3 and later distinguish macros and functions and
46
+ # complain if you use c:function with something that's not
47
+ # syntactically valid as a function declaration.
48
+ # We assume that anything with a return type is a function
49
+ # and anything without is a macro.
50
+ if ($args{'functiontype'} ne "") {
51
+ print ".. c:function:: ";
52
+ } else {
53
+ print ".. c:macro:: ";
54
+ }
55
+ } else {
56
+ # Older Sphinx don't support documenting macros that take
57
+ # arguments with c:macro, and don't complain about the use
58
+ # of c:function for this.
59
+ print ".. c:function:: ";
60
+ }
61
}
62
if ($args{'functiontype'} ne "") {
63
    $start .= $args{'functiontype'} . " " . $args{'function'} . " (";
64
--
65
2.20.1
66
67
1
The Cortex-A15 and Cortex-A7 both have EL2; now we've implemented
1
Sphinx 3.2 is pickier than earlier versions about the option:: markup,
2
it properly we can enable the feature bit.
2
and complains about our usage in qemu-option-trace.rst:
3
4
../../docs/qemu-option-trace.rst.inc:4:Malformed option description
5
'[enable=]PATTERN', should look like "opt", "-opt args", "--opt args",
6
"/opt args" or "+opt args"
7
8
In this file, we're really trying to document the different parts of
9
the top-level --trace option, which qemu-nbd.rst and qemu-img.rst
10
have already introduced with an option:: markup. So it's not right
11
to use option:: here anyway. Switch to a different markup
12
(definition lists) which gives about the same formatted output.
13
14
(Unlike option::, this markup doesn't produce index entries; but
15
at the moment we don't do anything much with indexes anyway, and
16
in any case I think it doesn't make much sense to have individual
17
index entries for the sub-parts of the --trace option.)
3
18
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
20
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
21
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Message-id: 20181109173553.22341-3-peter.maydell@linaro.org
22
Message-id: 20201030174700.7204-3-peter.maydell@linaro.org
8
---
23
---
9
target/arm/cpu.c | 2 ++
24
docs/qemu-option-trace.rst.inc | 6 +++---
10
1 file changed, 2 insertions(+)
25
1 file changed, 3 insertions(+), 3 deletions(-)
11
26
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
27
diff --git a/docs/qemu-option-trace.rst.inc b/docs/qemu-option-trace.rst.inc
13
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
29
--- a/docs/qemu-option-trace.rst.inc
15
+++ b/target/arm/cpu.c
30
+++ b/docs/qemu-option-trace.rst.inc
16
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
31
@@ -XXX,XX +XXX,XX @@
17
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
32
18
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
33
Specify tracing options.
19
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
34
20
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
35
-.. option:: [enable=]PATTERN
21
set_feature(&cpu->env, ARM_FEATURE_EL3);
36
+``[enable=]PATTERN``
22
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
37
23
cpu->midr = 0x410fc075;
38
Immediately enable events matching *PATTERN*
24
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
39
(either event name or a globbing pattern). This option is only
25
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
40
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
26
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
41
27
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
42
Use :option:`-trace help` to print a list of names of trace points.
28
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
43
29
set_feature(&cpu->env, ARM_FEATURE_EL3);
44
-.. option:: events=FILE
30
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
45
+``events=FILE``
31
cpu->midr = 0x412fc0f1;
46
47
Immediately enable events listed in *FILE*.
48
The file must contain one event name (as listed in the ``trace-events-all``
49
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
50
available if QEMU has been compiled with the ``simple``, ``log`` or
51
``ftrace`` tracing backend.
52
53
-.. option:: file=FILE
54
+``file=FILE``
55
56
Log output traces to *FILE*.
57
This option is only available if QEMU has been compiled with
32
--
58
--
33
2.19.1
59
2.20.1
34
60
35
61
1
Currently we track the state of the four irq lines from the GIC
1
The randomness tests in the NPCM7xx RNG test fail intermittently
2
only via the cs->interrupt_request or KVM irq state. That means
2
but fairly frequently. On my machine running the test in a loop:
3
that we assume that an interrupt is asserted if and only if the
3
while QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/npcm7xx_rng-test; do true; done
4
external line is set. This assumption is incorrect for VIRQ
5
and VFIQ, because the HCR_EL2.{VI,VF} bits allow assertion
6
of VIRQ and VFIQ separately from the state of the external line.
7
4
8
To handle this, start tracking the state of the external lines
5
will fail in less than a minute with an error like:
9
explicitly in a CPU state struct field, as is common practice
6
ERROR:../../tests/qtest/npcm7xx_rng-test.c:256:test_first_byte_runs:
10
for devices.
7
assertion failed (calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE) > 0.01): (0.00286205989 > 0.01)
11
8
12
The complicated part of this is dealing with inbound migration
9
(Failures have been observed on all 4 of the randomness tests,
13
from an older QEMU which didn't have this state. We assume in
10
not just first_byte_runs.)
14
that case that the older QEMU did not implement the HCR_EL2.{VI,VF}
11
15
bits as generating interrupts, and so the line state matches
12
It's not clear why these tests are failing like this, but intermittent
16
the current state in cs->interrupt_request. (This is not quite
13
failures make CI and merge testing awkward, so disable running them
17
true between commit 8a0fc3a29fc2315325400c7 and its revert, but
14
unless a developer specifically sets QEMU_TEST_FLAKY_RNG_TESTS when
18
that commit is broken and never made it into any released QEMU
15
running the test suite, until we work out the cause.
19
version.)
20
16
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
23
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
19
Message-id: 20201102152454.8287-1-peter.maydell@linaro.org
24
Message-id: 20181109134731.11605-3-peter.maydell@linaro.org
20
Reviewed-by: Havard Skinnemoen <hskinnemoen@google.com>
25
---
21
---
26
target/arm/cpu.h | 3 +++
22
tests/qtest/npcm7xx_rng-test.c | 14 ++++++++++----
27
target/arm/cpu.c | 16 ++++++++++++++
23
1 file changed, 10 insertions(+), 4 deletions(-)
28
target/arm/machine.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
29
3 files changed, 70 insertions(+)
30
24
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
32
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
27
--- a/tests/qtest/npcm7xx_rng-test.c
34
+++ b/target/arm/cpu.h
28
+++ b/tests/qtest/npcm7xx_rng-test.c
35
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
29
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
36
uint64_t esr;
30
37
} serror;
31
qtest_add_func("npcm7xx_rng/enable_disable", test_enable_disable);
38
32
qtest_add_func("npcm7xx_rng/rosel", test_rosel);
39
+ /* State of our input IRQ/FIQ/VIRQ/VFIQ lines */
33
- qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
40
+ uint32_t irq_line_state;
34
- qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
41
+
35
- qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
42
/* Thumb-2 EE state. */
36
- qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
43
uint32_t teecr;
37
+ /*
44
uint32_t teehbr;
38
+ * These tests fail intermittently; only run them on explicit
45
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
39
+ * request until we figure out why.
46
index XXXXXXX..XXXXXXX 100644
40
+ */
47
--- a/target/arm/cpu.c
41
+ if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
48
+++ b/target/arm/cpu.c
42
+ qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
49
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
43
+ qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
50
[ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
44
+ qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
51
};
45
+ qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
52
53
+ if (level) {
54
+ env->irq_line_state |= mask[irq];
55
+ } else {
56
+ env->irq_line_state &= ~mask[irq];
57
+ }
46
+ }
58
+
47
59
switch (irq) {
48
qtest_start("-machine npcm750-evb");
60
case ARM_CPU_VIRQ:
49
ret = g_test_run();
61
case ARM_CPU_VFIQ:
62
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
63
ARMCPU *cpu = opaque;
64
CPUState *cs = CPU(cpu);
65
int kvm_irq = KVM_ARM_IRQ_TYPE_CPU << KVM_ARM_IRQ_TYPE_SHIFT;
66
+ uint32_t linestate_bit;
67
68
switch (irq) {
69
case ARM_CPU_IRQ:
70
kvm_irq |= KVM_ARM_IRQ_CPU_IRQ;
71
+ linestate_bit = CPU_INTERRUPT_HARD;
72
break;
73
case ARM_CPU_FIQ:
74
kvm_irq |= KVM_ARM_IRQ_CPU_FIQ;
75
+ linestate_bit = CPU_INTERRUPT_FIQ;
76
break;
77
default:
78
g_assert_not_reached();
79
}
80
+
81
+ if (level) {
82
+ env->irq_line_state |= linestate_bit;
83
+ } else {
84
+ env->irq_line_state &= ~linestate_bit;
85
+ }
86
+
87
kvm_irq |= cs->cpu_index << KVM_ARM_IRQ_VCPU_SHIFT;
88
kvm_set_irq(kvm_state, kvm_irq, level ? 1 : 0);
89
#endif
90
diff --git a/target/arm/machine.c b/target/arm/machine.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/target/arm/machine.c
93
+++ b/target/arm/machine.c
94
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_serror = {
95
}
96
};
97
98
+static bool irq_line_state_needed(void *opaque)
99
+{
100
+ return true;
101
+}
102
+
103
+static const VMStateDescription vmstate_irq_line_state = {
104
+ .name = "cpu/irq-line-state",
105
+ .version_id = 1,
106
+ .minimum_version_id = 1,
107
+ .needed = irq_line_state_needed,
108
+ .fields = (VMStateField[]) {
109
+ VMSTATE_UINT32(env.irq_line_state, ARMCPU),
110
+ VMSTATE_END_OF_LIST()
111
+ }
112
+};
113
+
114
static bool m_needed(void *opaque)
115
{
116
ARMCPU *cpu = opaque;
117
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
118
return 0;
119
}
120
121
+static int cpu_pre_load(void *opaque)
122
+{
123
+ ARMCPU *cpu = opaque;
124
+ CPUARMState *env = &cpu->env;
125
+
126
+ /*
127
+ * Pre-initialize irq_line_state to a value that's never valid as
128
+ * real data, so cpu_post_load() can tell whether we've seen the
129
+ * irq-line-state subsection in the incoming migration state.
130
+ */
131
+ env->irq_line_state = UINT32_MAX;
132
+
133
+ return 0;
134
+}
135
+
136
static int cpu_post_load(void *opaque, int version_id)
137
{
138
ARMCPU *cpu = opaque;
139
+ CPUARMState *env = &cpu->env;
140
int i, v;
141
142
+ /*
143
+ * Handle migration compatibility from old QEMU which didn't
144
+ * send the irq-line-state subsection. A QEMU without it did not
145
+ * implement the HCR_EL2.{VI,VF} bits as generating interrupts,
146
+ * so for TCG the line state matches the bits set in cs->interrupt_request.
147
+ * For KVM the line state is not stored in cs->interrupt_request
148
+ * and so this will leave irq_line_state as 0, but this is OK because
149
+ * we only need to care about it for TCG.
150
+ */
151
+ if (env->irq_line_state == UINT32_MAX) {
152
+ CPUState *cs = CPU(cpu);
153
+
154
+ env->irq_line_state = cs->interrupt_request &
155
+ (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
156
+ CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
157
+ }
158
+
159
/* Update the values list from the incoming migration data.
160
* Anything in the incoming data which we don't know about is
161
* a migration failure; anything we know about but the incoming
162
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
163
.version_id = 22,
164
.minimum_version_id = 22,
165
.pre_save = cpu_pre_save,
166
+ .pre_load = cpu_pre_load,
167
.post_load = cpu_post_load,
168
.fields = (VMStateField[]) {
169
VMSTATE_UINT32_ARRAY(env.regs, ARMCPU, 16),
170
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
171
&vmstate_sve,
172
#endif
173
&vmstate_serror,
174
+ &vmstate_irq_line_state,
175
NULL
176
}
177
};
178
--
50
--
179
2.19.1
51
2.20.1
180
52
181
53