First set of arm patches for 6.2. I have a lot more in my
to-review queue still...

thanks
-- PMM

The following changes since commit d42685765653ec155fdf60910662f8830bdb2cef:

  Open 6.2 development tree (2021-08-25 10:25:12 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210825

for you to fetch changes up to 24b1a6aa43615be22c7ee66bd68ec5675f6a6a9a:

  docs: Document how to use gdb with unix sockets (2021-08-25 10:48:51 +0100)

----------------------------------------------------------------
target-arm queue:
 * More MVE emulation work
 * Implement M-profile trapping on division by zero
 * kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()
 * hw/char/pl011: add support for sending break
 * fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
 * hw/dma/pl330: Add memory region to replace default
 * sbsa-ref: Rename SBSA_GWDT enum value
 * fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices
 * docs: Document how to use gdb with unix sockets

----------------------------------------------------------------
Eduardo Habkost (1):
      sbsa-ref: Rename SBSA_GWDT enum value

Guenter Roeck (2):
      fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
      fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices

Hamza Mahfooz (1):
      target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()

Jan Luebbe (1):
      hw/char/pl011: add support for sending break

Peter Maydell (37):
      target/arm: Note that we handle VMOVL as a special case of VSHLL
      target/arm: Print MVE VPR in CPU dumps
      target/arm: Fix MVE VSLI by 0 and VSRI by <dt>
      target/arm: Fix signed VADDV
      target/arm: Fix mask handling for MVE narrowing operations
      target/arm: Fix 48-bit saturating shifts
      target/arm: Fix MVE 48-bit SQRSHRL for small right shifts
      target/arm: Fix calculation of LTP mask when LR is 0
      target/arm: Factor out mve_eci_mask()
      target/arm: Fix VPT advance when ECI is non-zero
      target/arm: Fix VLDRB/H/W for predicated elements
      target/arm: Implement MVE VMULL (polynomial)
      target/arm: Implement MVE incrementing/decrementing dup insns
      target/arm: Factor out gen_vpst()
      target/arm: Implement MVE integer vector comparisons
      target/arm: Implement MVE integer vector-vs-scalar comparisons
      target/arm: Implement MVE VPSEL
      target/arm: Implement MVE VMLAS
      target/arm: Implement MVE shift-by-scalar
      target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats
      target/arm: Implement MVE integer min/max across vector
      target/arm: Implement MVE VABAV
      target/arm: Implement MVE narrowing moves
      target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn
      target/arm: Implement MVE VMLADAV and VMLSLDAV
      target/arm: Implement MVE VMLA
      target/arm: Implement MVE saturating doubling multiply accumulates
      target/arm: Implement MVE VQABS, VQNEG
      target/arm: Implement MVE VMAXA, VMINA
      target/arm: Implement MVE VMOV to/from 2 general-purpose registers
      target/arm: Implement MVE VPNOT
      target/arm: Implement MVE VCTP
      target/arm: Implement MVE scatter-gather insns
      target/arm: Implement MVE scatter-gather immediate forms
      target/arm: Implement MVE interleaving loads/stores
      target/arm: Re-indent sdiv and udiv helpers
      target/arm: Implement M-profile trapping on division by zero

Sebastian Meyer (1):
      docs: Document how to use gdb with unix sockets

Wen, Jianxian (1):
      hw/dma/pl330: Add memory region to replace default

 docs/system/gdb.rst        |   26 +-
 include/hw/arm/fsl-imx7.h  |    5 +
 target/arm/cpu.h           |    1 +
 target/arm/helper-mve.h    |  283 ++++++++++
 target/arm/helper.h        |    4 +-
 target/arm/translate-a32.h |    2 +
 target/arm/vec_internal.h  |   11 +
 target/arm/mve.decode      |  226 +++++++-
 target/arm/t32.decode      |    1 +
 hw/arm/exynos4210.c        |    3 +
 hw/arm/fsl-imx6ul.c        |   12 +
 hw/arm/fsl-imx7.c          |    7 +
 hw/arm/sbsa-ref.c          |    6 +-
 hw/arm/xilinx_zynq.c       |    3 +
 hw/char/pl011.c            |    6 +
 hw/dma/pl330.c             |   26 +-
 target/arm/cpu.c           |    3 +
 target/arm/helper.c        |   34 +-
 target/arm/kvm.c           |   17 +-
 target/arm/m_helper.c      |    4 +
 target/arm/mve_helper.c    | 1254 ++++++++++++++++++++++++++++++++++++++++++--
 target/arm/translate-mve.c |  877 ++++++++++++++++++++++++++++++-
 target/arm/translate-vfp.c |    2 +-
 target/arm/translate.c     |   37 +-
 target/arm/vec_helper.c    |   14 +-
 25 files changed, 2746 insertions(+), 118 deletions(-)

Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w

 # VSHLL T1 encoding; the T2 VSHLL encoding is elsewhere in this file
+# Note that VMOVL is encoded as "VSHLL with a zero shift count"; we
+# implement it that way rather than special-casing it in the decode.
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h

--
2.20.1

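The equivalence the patch above documents can be seen in a tiny standalone
C sketch (one lane only; the helper name here is invented, not QEMU code):

    #include <assert.h>
    #include <stdint.h>

    /* One int16 -> int32 lane of VSHLL: widen, then shift left */
    static int32_t vshll_s16_lane(int16_t m, unsigned shift)
    {
        return (int32_t)m << shift;
    }

    int main(void)
    {
        int16_t lane = -1234;
        /* VMOVL is exactly the shift == 0 case of VSHLL */
        assert(vshll_s16_lane(lane, 0) == (int32_t)lane);
        return 0;
    }
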
Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
                          i, v);
         }
         qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env));
+        if (cpu_isar_feature(aa32_mve, cpu)) {
+            qemu_fprintf(f, "VPR: %08x\n", env->v7m.vpr);
+        }
 }
--
2.20.1

In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.

Since the generic logic gives the right answer for a shift
by 0, just use that.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
         uint16_t mask; \
         uint64_t shiftmask; \
         unsigned e; \
-        if (shift == 0 || shift == ESIZE * 8) { \
+        if (shift == ESIZE * 8) { \
             /* \
-             * Only VSLI can shift by 0; only VSRI can shift by <dt>. \
-             * The generic logic would give the right answer for 0 but \
-             * fails for <dt>. \
+             * Only VSRI can shift by <dt>; it should mean "don't \
+             * update the destination". The generic logic can't handle \
+             * this because it would try to shift by an out-of-range \
+             * amount, so special case it here. \
             */ \
             goto done; \
         } \
--
2.20.1

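To illustrate why the generic path is correct for a VSLI shift of 0, here
is a minimal one-lane C sketch of shift-and-insert (invented function name,
not the QEMU macro):

    #include <assert.h>
    #include <stdint.h>

    /* One 8-bit lane of shift-and-insert: only the bits covered by the
     * shifted mask are written to the destination. */
    static uint8_t vsli_lane(uint8_t dest, uint8_t val, unsigned shift)
    {
        uint8_t shiftmask = 0xff << shift;
        return (dest & ~shiftmask) | ((val << shift) & shiftmask);
    }

    int main(void)
    {
        /* Ordinary case: the low nibble of dest is preserved */
        assert(vsli_lane(0xab, 0x0f, 4) == 0xfb);
        /* shift == 0: the mask covers every bit, so the destination is
         * fully replaced by the input, which is the VSLI-by-0 semantics. */
        assert(vsli_lane(0xab, 0x5a, 0) == 0x5a);
        return 0;
    }
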
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true)
         return ra; \
     } \

-DO_VADDV(vaddvsb, 1, uint8_t)
-DO_VADDV(vaddvsh, 2, uint16_t)
-DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvsb, 1, int8_t)
+DO_VADDV(vaddvsh, 2, int16_t)
+DO_VADDV(vaddvsw, 4, int32_t)
 DO_VADDV(vaddvub, 1, uint8_t)
 DO_VADDV(vaddvuh, 2, uint16_t)
 DO_VADDV(vaddvuw, 4, uint32_t)
--
2.20.1

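A minimal standalone sketch of why the element type matters when
accumulating (plain C, not the QEMU macro):

    #include <assert.h>
    #include <stdint.h>

    /* Accumulate 8-bit lanes into a 32-bit total; whether each lane is
     * read as int8_t or uint8_t is exactly what the fixed macro
     * instantiations control. */
    static uint32_t vaddv_b(const uint8_t *lanes, int n, int is_signed)
    {
        uint32_t ra = 0;
        for (int i = 0; i < n; i++) {
            ra += is_signed ? (int8_t)lanes[i] : lanes[i];
        }
        return ra;
    }

    int main(void)
    {
        uint8_t q[4] = { 0xff, 0xff, 0xff, 0xff }; /* -1 per signed lane */
        assert(vaddv_b(q, 4, 1) == (uint32_t)-4); /* signed VADDV */
        assert(vaddv_b(q, 4, 0) == 1020);         /* unsigned VADDV */
        return 0;
    }
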
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.

Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHLL_ALL(vshllt, true)
         TYPE *d = vd; \
         uint16_t mask = mve_element_mask(env); \
         unsigned le; \
+        mask >>= ESIZE * TOP; \
         for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
             TYPE r = FN(m[H##LESIZE(le)], shift); \
             mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
         uint16_t mask = mve_element_mask(env); \
         bool qc = false; \
         unsigned le; \
+        mask >>= ESIZE * TOP; \
         for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
             bool sat = false; \
             TYPE r = FN(m[H##LESIZE(le)], shift, &sat); \
             mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
-            qc |= sat && (mask & 1 << (TOP * ESIZE)); \
+            qc |= sat & mask & 1; \
         } \
         if (qc) { \
             env->vfp.qc[0] = qc; \
--
2.20.1

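A tiny standalone sketch of the mask realignment for the byte-narrowing
case (plain C, illustrative only):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* The predicate mask has one bit per byte. A "top" byte-narrowing
         * op writes byte 1 of each 2-byte input element, so its predicate
         * bits sit one byte (ESIZE) higher than the loop expects; shifting
         * the whole mask right by ESIZE * TOP realigns them with bit 0. */
        const unsigned ESIZE = 1, TOP = 1;
        uint16_t mask = 0x0002; /* only the top half of element 0 is active */
        mask >>= ESIZE * TOP;
        assert(mask & 1); /* the loop now tests the correct lane bit */
        return 0;
    }
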
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:

(1) In do_sqrshl48_d() we used the same code that do_shrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case. This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.

(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value). This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) right by
20". Instead check for saturation by doing the shift and signextend
and then testing whether shifting back left again gives the original
value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
         }
         return src >> -shift;
     } else if (shift < 48) {
-        int64_t val = src << shift;
-        int64_t extval = sextract64(val, 0, 48);
-        if (!sat || val == extval) {
+        int64_t extval = sextract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
             return extval;
         }
     } else if (!sat || src == 0) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
     }

     *sat = 1;
-    return (1ULL << 47) - (src >= 0);
+    return src >= 0 ? MAKE_64BIT_MASK(0, 47) : MAKE_64BIT_MASK(47, 17);
 }

 /* Operate on 64-bit values, but saturate at 48 bits */
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift,
             return extval;
         }
     } else if (shift < 48) {
-        uint64_t val = src << shift;
-        uint64_t extval = extract64(val, 0, 48);
-        if (!sat || val == extval) {
+        uint64_t extval = extract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
             return extval;
         }
     } else if (!sat || src == 0) {
--
2.20.1

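A standalone C sketch of the two failure modes described above (sext48()
here is a stand-in for sextract64(x, 0, 48), written to avoid shift
overflow; it is not the QEMU code):

    #include <assert.h>
    #include <stdint.h>

    static int64_t sext48(uint64_t x)
    {
        x &= (1ULL << 48) - 1;
        return (x & (1ULL << 47)) ? (int64_t)x - (1LL << 48) : (int64_t)x;
    }

    int main(void)
    {
        /* Bug (1): the most-negative 48-bit value must be sign-extended */
        assert(sext48(1ULL << 47) == -(1LL << 47));

        /* Bug (2): (1 << 44) shifted by 20 overflows 48 bits, so shifting
         * the extended result back does not round-trip; the helper has to
         * saturate instead of returning the truncated value. */
        int64_t src = 1LL << 44;
        int shift = 20;
        int64_t extval = sext48((uint64_t)src << shift);
        assert(src != (extval >> shift));
        return 0;
    }
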
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, although it always makes the result
smaller than the input value it might not be within the 48-bit range
the result is supposed to be if the input had some bits in [63..48]
set and the shift didn't bring all of those within the [47..0] range.

Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift)
 static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
                                     bool round, uint32_t *sat)
 {
+    int64_t val, extval;
+
     if (shift <= -48) {
         /* Rounding the sign bit always produces 0. */
         if (round) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
     } else if (shift < 0) {
         if (round) {
             src >>= -shift - 1;
-            return (src >> 1) + (src & 1);
+            val = (src >> 1) + (src & 1);
+        } else {
+            val = src >> -shift;
+        }
+        extval = sextract64(val, 0, 48);
+        if (!sat || val == extval) {
+            return extval;
         }
-        return src >> -shift;
     } else if (shift < 48) {
         int64_t extval = sextract64(src << shift, 0, 48);
         if (!sat || src == (extval >> shift)) {
--
2.20.1

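A small standalone illustration of the edge case (plain C):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* A small right shift always shrinks the value, but if bits
         * [63..48] were set it can still lie outside the 48-bit range:
         * 2^50 >> 1 == 2^49, which a 48-bit result cannot represent,
         * so SQRSHRL has to saturate here too. */
        int64_t src = 1LL << 50;
        int64_t val = src >> 1;
        assert(val > ((1LL << 47) - 1)); /* exceeds the signed 48-bit max */
        return 0;
    }
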
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR. However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length. Special case this to give the all-zeroes mask we
require.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
          */
         int masklen = env->regs[14] << env->v7m.ltpsize;
         assert(masklen <= 16);
-        mask &= MAKE_64BIT_MASK(0, masklen);
+        uint16_t ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
+        mask &= ltpmask;
     }

     if ((env->condexec_bits & 0xf) == 0) {
--
2.20.1

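A standalone sketch of the guarded fix; the macro below is a simplified
rendering of what QEMU's MAKE_64BIT_MASK() expands to, shown only to make
the undefined shift visible:

    #include <assert.h>
    #include <stdint.h>

    /* For length == 0 the inner shift count becomes 64, which is
     * undefined behaviour for a 64-bit operand in C. */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        int masklen = 0; /* the LR == 0 case */
        /* The fix: special-case zero rather than evaluating the macro */
        uint16_t ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
        assert(ltpmask == 0);
        return 0;
    }
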
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask. Factor this mask calculation out of mve_element_mask() into
its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 58 ++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 24 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/exec-all.h"
 #include "tcg/tcg.h"

+static uint16_t mve_eci_mask(CPUARMState *env)
+{
+    /*
+     * Return the mask of which elements in the MVE vector correspond
+     * to beats being executed. The mask has 1 bits for executed lanes
+     * and 0 bits where ECI says this beat was already executed.
+     */
+    int eci;
+
+    if ((env->condexec_bits & 0xf) != 0) {
+        return 0xffff;
+    }
+
+    eci = env->condexec_bits >> 4;
+    switch (eci) {
+    case ECI_NONE:
+        return 0xffff;
+    case ECI_A0:
+        return 0xfff0;
+    case ECI_A0A1:
+        return 0xff00;
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return 0xf000;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static uint16_t mve_element_mask(CPUARMState *env)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
         mask &= ltpmask;
     }

-    if ((env->condexec_bits & 0xf) == 0) {
-        /*
-         * ECI bits indicate which beats are already executed;
-         * we handle this by effectively predicating them out.
-         */
-        int eci = env->condexec_bits >> 4;
-        switch (eci) {
-        case ECI_NONE:
-            break;
-        case ECI_A0:
-            mask &= 0xfff0;
-            break;
-        case ECI_A0A1:
-            mask &= 0xff00;
-            break;
-        case ECI_A0A1A2:
-        case ECI_A0A1A2B0:
-            mask &= 0xf000;
-            break;
-        default:
-            g_assert_not_reached();
-        }
-    }
-
+    /*
+     * ECI bits indicate which beats are already executed;
+     * we handle this by effectively predicating them out.
+     */
+    mask &= mve_eci_mask(env);
     return mask;
 }
--
2.20.1

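A standalone sketch of the beat-to-byte mapping that mve_eci_mask()
encodes (plain C, using the ECI_A0 value from the patch):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 16-byte MVE vector is processed as 4 beats of 4 bytes, so
         * "beat A0 already executed" (ECI_A0) clears mask bits [3:0]. */
        uint16_t eci_mask = 0xfff0; /* the ECI_A0 case */
        for (int byte = 0; byte < 16; byte++) {
            int beat = byte / 4;
            int executed = (eci_mask >> byte) & 1;
            assert(executed == (beat != 0));
        }
        return 0;
    }
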
We were not paying attention to the ECI state when advancing the VPT
state. Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     /* Advance the VPT and ECI state if necessary */
     uint32_t vpr = env->v7m.vpr;
     unsigned mask01, mask23;
+    uint16_t inv_mask;
+    uint16_t eci_mask = mve_eci_mask(env);

     if ((env->condexec_bits & 0xf) == 0) {
         env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
         return;
     }

+    /* Invert P0 bits if needed, but only for beats we actually executed */
     mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
     mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
-    if (mask01 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff;
+    /* Start by assuming we invert all bits corresponding to executed beats */
+    inv_mask = eci_mask;
+    if (mask01 <= 8) {
+        /* MASK01 says don't invert low half of P0 */
+        inv_mask &= ~0xff;
     }
-    if (mask23 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff00;
+    if (mask23 <= 8) {
+        /* MASK23 says don't invert high half of P0 */
+        inv_mask &= ~0xff00;
     }
-    vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    vpr ^= inv_mask;
+    /* Only update MASK01 if beat 1 executed */
+    if (eci_mask & 0xf0) {
+        vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    }
+    /* Beat 3 always executes, so update MASK23 */
     vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
     env->v7m.vpr = vpr;
 }
--
2.20.1

For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     env->v7m.vpr = vpr;
 }

-
+/* For loads, predicated lanes are zeroed instead of keeping their old values */
 #define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
     { \
         TYPE *d = vd; \
         uint16_t mask = mve_element_mask(env); \
+        uint16_t eci_mask = mve_eci_mask(env); \
         unsigned b, e; \
         /* \
          * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
          * then take an exception. \
          */ \
         for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
-            if (mask & (1 << b)) { \
-                d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
+            if (eci_mask & (1 << b)) { \
+                d[H##ESIZE(e)] = (mask & (1 << b)) ? \
+                    cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
             } \
             addr += MSIZE; \
         } \
--
2.20.1

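The three cases can be condensed into a one-lane standalone C sketch
(invented function name, not the QEMU macro):

    #include <assert.h>
    #include <stdint.h>

    /* The three cases per byte lane of a vector load */
    static uint8_t vldr_lane(uint8_t old, uint8_t mem, int eci_bit,
                             int pred_bit)
    {
        if (!eci_bit) {
            return old;            /* beat already executed: leave as-is */
        }
        return pred_bit ? mem : 0; /* predicated-out lanes load zero */
    }

    int main(void)
    {
        assert(vldr_lane(0xaa, 0x55, 0, 0) == 0xaa); /* skipped by ECI */
        assert(vldr_lane(0xaa, 0x55, 1, 0) == 0x00); /* predicated out */
        assert(vldr_lane(0xaa, 0x55, 1, 1) == 0x55); /* normal load */
        return 0;
    }
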
Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes
in two flavours: 8x8->16 and a 16x16->32. Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.

The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names where it then
matches up with the existing pmull_h() which does an 8x8->16
operation and a new pmull_w() which does the 16x16->32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++++
 target/arm/vec_internal.h  | 11 +++++++++++
 target/arm/mve.decode      | 14 ++++++++++----
 target/arm/mve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-mve.c | 28 ++++++++++++++++++++++++++++
 target/arm/vec_helper.c    | 14 +++++++++++++-
 6 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vmullpbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullptw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -XXX,XX +XXX,XX @@ int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *);
 int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *);
 int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool);

+/*
+ * 8 x 8 -> 16 vector polynomial multiply where the inputs are
+ * in the low 8 bits of each 16-bit element
+ */
+uint64_t pmull_h(uint64_t op1, uint64_t op2);
+/*
+ * 16 x 16 -> 32 vector polynomial multiply where the inputs are
+ * in the low 16 bits of each 32-bit element
+ */
+uint64_t pmull_w(uint64_t op1, uint64_t op2);
+
 #endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
 VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op

-VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
-VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+{
+  VMULLP_B 111 . 1110 0 . 11 ... 1 ... 0 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+  VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+}
+{
+  VMULLP_T 111 . 1110 0 . 11 ... 1 ... 1 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+  VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+}

 VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
 DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
 DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)

+/*
+ * Polynomial multiply. We can always do this generating 64 bits
+ * of the result at a time, so we don't need to use DO_2OP_L.
+ */
+#define VMULLPH_MASK 0x00ff00ff00ff00ffULL
+#define VMULLPW_MASK 0x0000ffff0000ffffULL
+#define DO_VMULLPBH(N, M) pmull_h((N) & VMULLPH_MASK, (M) & VMULLPH_MASK)
+#define DO_VMULLPTH(N, M) DO_VMULLPBH((N) >> 8, (M) >> 8)
+#define DO_VMULLPBW(N, M) pmull_w((N) & VMULLPW_MASK, (M) & VMULLPW_MASK)
+#define DO_VMULLPTW(N, M) DO_VMULLPBW((N) >> 16, (M) >> 16)
+
+DO_2OP(vmullpbh, 8, uint64_t, DO_VMULLPBH)
+DO_2OP(vmullpth, 8, uint64_t, DO_VMULLPTH)
+DO_2OP(vmullpbw, 8, uint64_t, DO_VMULLPBW)
+DO_2OP(vmullptw, 8, uint64_t, DO_VMULLPTW)
+
 /*
  * Because the computation type is at least twice as large as required,
  * these work for both signed and unsigned source types.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
     return do_2op(s, a, fns[a->size]);
 }

+static bool trans_VMULLP_B(DisasContext *s, arg_2op *a)
+{
+    /*
+     * Note that a->size indicates the output size, ie VMULL.P8
+     * is the 8x8->16 operation and a->size is MO_16; VMULL.P16
+     * is the 16x16->32 operation and a->size is MO_32.
+     */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpbh,
+        gen_helper_mve_vmullpbw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
+static bool trans_VMULLP_T(DisasContext *s, arg_2op *a)
+{
+    /* a->size is as for trans_VMULLP_B */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpth,
+        gen_helper_mve_vmullptw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
 /*
  * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
  * of the 32-bit elements in each lane of the input vectors, where the
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t expand_byte_to_half(uint64_t x)
          | ((x & 0xff000000) << 24);
 }

-static uint64_t pmull_h(uint64_t op1, uint64_t op2)
+uint64_t pmull_w(uint64_t op1, uint64_t op2)
 {
     uint64_t result = 0;
     int i;
+    for (i = 0; i < 16; ++i) {
+        uint64_t mask = (op1 & 0x0000000100000001ull) * 0xffffffff;
+        result ^= op2 & mask;
+        op1 >>= 1;
+        op2 <<= 1;
+    }
+    return result;
+}

+uint64_t pmull_h(uint64_t op1, uint64_t op2)
+{
+    uint64_t result = 0;
+    int i;
     for (i = 0; i < 8; ++i) {
         uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff;
         result ^= op2 & mask;
--
2.20.1

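For reference, a standalone scalar C sketch of the carry-less multiply
that pmull_h() performs four lanes at a time (invented function name):

    #include <assert.h>
    #include <stdint.h>

    /* Scalar 8x8->16 carry-less multiply: XOR together shifted copies
     * of b, one copy per set bit of a. */
    static uint16_t clmul_8x8(uint8_t a, uint8_t b)
    {
        uint16_t r = 0;
        for (int i = 0; i < 8; i++) {
            if (a & (1u << i)) {
                r ^= (uint16_t)((uint16_t)b << i);
            }
        }
        return r;
    }

    int main(void)
    {
        /* Over GF(2): (x + 1) * (x + 1) = x^2 + 1, i.e. 3 * 3 = 5 */
        assert(clmul_8x8(3, 3) == 5);
        return 0;
    }
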
1
Convert VCMLA (scalar) in the 2reg-scalar-ext group to decodetree.
1
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP,
2
VIWDUP and VDWDUP. These fill the elements of a vector with
3
successively incrementing values, starting at the offset specified in
4
a general purpose register. The final value of the offset is written
5
back to this register. The wrapping variants take a second general
6
purpose register which specifies the point where the count should
7
wrap back to 0.
2
8
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  12 ++++
 target/arm/mve.decode      |  25 ++++++++
 target/arm/mve_helper.c    |  63 +++++++++++++++++++
 target/arm/translate-mve.c | 120 +++++++++++++++++++++++++++++++++++++
 4 files changed, 220 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_viduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_vidupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_viwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_vdwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
 DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &2scalar qd qn rm size
 &1imm qd imm cmode op
 &2shift qd qm shift size
+&vidup qd rn size imm
+&viwdup qd rn rm size imm

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2

+# Incrementing and decrementing dup
+
+# VIDUP, VDDUP format immediate: 1 << (immh:imml)
+%imm_vidup 7:1 0:1 !function=vidup_imm
+
+# VIDUP, VDDUP registers: Rm bits [3:1] from insn, bit 0 is 1;
+# Rn bits [3:1] from insn, bit 0 is 0
+%vidup_rm 1:3 !function=times_2_plus_1
+%vidup_rn 17:3 !function=times_2
+
+@vidup .... .... . . size:2 .... .... .... .... .... \
+       qd=%qd imm=%imm_vidup rn=%vidup_rn &vidup
+@viwdup .... .... . . size:2 .... .... .... .... .... \
+       qd=%qd imm=%imm_vidup rm=%vidup_rm rn=%vidup_rn &viwdup
+{
+  VIDUP  1110 1110 0 . .. ... 1 ... 0 1111 . 110 111 . @vidup
+  VIWDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 ... . @viwdup
+}
+{
+  VDDUP  1110 1110 0 . .. ... 1 ... 1 1111 . 110 111 . @vidup
+  VDWDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 ... . @viwdup
+}
+
 # multiply-add long dual accumulate
 # rdahi: bits [3:1] from insn, bit 0 is 1
 # rdalo: bits [3:1] from insn, bit 0 is 0
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift)
 {
     return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF);
 }
+
+#define DO_VIDUP(OP, ESIZE, TYPE, FN)                           \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t imm)    \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, imm);                           \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIWDUP(OP, ESIZE, TYPE, FN)                          \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t wrap,   \
+                              uint32_t imm)                     \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, wrap, imm);                     \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIDUP_ALL(OP, FN)                    \
+    DO_VIDUP(OP##b, 1, int8_t, FN)              \
+    DO_VIDUP(OP##h, 2, int16_t, FN)             \
+    DO_VIDUP(OP##w, 4, int32_t, FN)
+
+#define DO_VIWDUP_ALL(OP, FN)                   \
+    DO_VIWDUP(OP##b, 1, int8_t, FN)             \
+    DO_VIWDUP(OP##h, 2, int16_t, FN)            \
+    DO_VIWDUP(OP##w, 4, int32_t, FN)
+
+static uint32_t do_add_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    offset += imm;
+    if (offset == wrap) {
+        offset = 0;
+    }
+    return offset;
+}
+
+static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    if (offset == 0) {
+        offset = wrap;
+    }
+    offset -= imm;
+    return offset;
+}
+
+DO_VIDUP_ALL(vidup, DO_ADD)
+DO_VIWDUP_ALL(viwdup, do_add_wrap)
+DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate.h"
 #include "translate-a32.h"

+static inline int vidup_imm(DisasContext *s, int x)
+{
+    return 1 << x;
+}
+
 /* Include the generated decoder */
 #include "decode-mve.c.inc"

@@ -XXX,XX +XXX,XX @@ typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
+typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a)
     mve_update_eci(s);
     return true;
 }
+
+static bool do_vidup(DisasContext *s, arg_vidup *a, MVEGenVIDUPFn *fn)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rn;
+
+    /*
+     * Vector increment/decrement with wrap and duplicate (VIDUP, VDDUP).
+     * This fills the vector with elements of successively increasing
+     * or decreasing values, starting from Rn.
+     */
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (a->size == MO_64) {
+        /* size 0b11 is another encoding */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rn = load_reg(s, a->rn);
+    fn(rn, cpu_env, qd, rn, tcg_constant_i32(a->imm));
+    store_reg(s, a->rn, rn);
+    tcg_temp_free_ptr(qd);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool do_viwdup(DisasContext *s, arg_viwdup *a, MVEGenVIWDUPFn *fn)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rn, rm;
+
+    /*
+     * Vector increment/decrement with wrap and duplicate (VIWDUP, VDWDUP).
+     * This fills the vector with elements of successively increasing
+     * or decreasing values, starting from Rn. Rm specifies a point where
+     * the count wraps back around to 0. The updated offset is written back
+     * to Rn.
+     */
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (!fn || a->rm == 13 || a->rm == 15) {
+        /*
+         * size 0b11 is another encoding; Rm == 13 is UNPREDICTABLE;
+         * Rm == 15 is VIDUP, VDDUP.
+         */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rn = load_reg(s, a->rn);
+    rm = load_reg(s, a->rm);
+    fn(rn, cpu_env, qd, rn, rm, tcg_constant_i32(a->imm));
+    store_reg(s, a->rn, rn);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_i32(rm);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool trans_VIDUP(DisasContext *s, arg_vidup *a)
+{
+    static MVEGenVIDUPFn * const fns[] = {
+        gen_helper_mve_vidupb,
+        gen_helper_mve_viduph,
+        gen_helper_mve_vidupw,
+        NULL,
+    };
+    return do_vidup(s, a, fns[a->size]);
+}
+
+static bool trans_VDDUP(DisasContext *s, arg_vidup *a)
+{
+    static MVEGenVIDUPFn * const fns[] = {
+        gen_helper_mve_vidupb,
+        gen_helper_mve_viduph,
+        gen_helper_mve_vidupw,
+        NULL,
+    };
+    /* VDDUP is just like VIDUP but with a negative immediate */
+    a->imm = -a->imm;
+    return do_vidup(s, a, fns[a->size]);
+}
+
+static bool trans_VIWDUP(DisasContext *s, arg_viwdup *a)
+{
+    static MVEGenVIWDUPFn * const fns[] = {
+        gen_helper_mve_viwdupb,
+        gen_helper_mve_viwduph,
+        gen_helper_mve_viwdupw,
+        NULL,
+    };
+    return do_viwdup(s, a, fns[a->size]);
+}
+
+static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
+{
+    static MVEGenVIWDUPFn * const fns[] = {
+        gen_helper_mve_vdwdupb,
+        gen_helper_mve_vdwduph,
+        gen_helper_mve_vdwdupw,
+        NULL,
+    };
+    return do_viwdup(s, a, fns[a->size]);
+}
--
2.20.1

Factor out the "generate code to update VPR.MASK01/MASK23" part of
trans_VPST(); we are going to want to reuse it for the VPT insns.

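What the factored-out code computes can be modelled by this standalone
sketch (not part of the patch), assuming the architected VPR layout
with P0 in bits [15:0], MASK01 in bits [19:16] and MASK23 in bits
[23:20]; this is the ECI_NONE case, where both mask fields are
deposited in one go:

    uint32_t set_vpr_masks(uint32_t vpr, uint32_t mask)
    {
        /* place the same 4-bit mask value into MASK01 and MASK23 */
        vpr &= ~0x00ff0000u;
        vpr |= (mask | (mask << 4)) << 16;
        return vpr;
    }
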
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-mve.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
     return do_long_dual_acc(s, a, fns[a->x]);
 }

-static bool trans_VPST(DisasContext *s, arg_VPST *a)
+static void gen_vpst(DisasContext *s, uint32_t mask)
 {
-    TCGv_i32 vpr;
-
-    /* mask == 0 is a "related encoding" */
-    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
-        return false;
-    }
-    if (!mve_eci_check(s) || !vfp_access_check(s)) {
-        return true;
-    }
     /*
      * Set the VPR mask fields. We take advantage of MASK01 and MASK23
      * being adjacent fields in the register.
      *
-     * This insn is not predicated, but it is subject to beat-wise
+     * Updating the masks is not predicated, but it is subject to beat-wise
      * execution, and the mask is updated on the odd-numbered beats.
      * So if PSR.ECI says we should skip beat 1, we mustn't update the
      * 01 mask field.
      */
-    vpr = load_cpu_field(v7m.vpr);
+    TCGv_i32 vpr = load_cpu_field(v7m.vpr);
     switch (s->eci) {
     case ECI_NONE:
     case ECI_A0:
         /* Update both 01 and 23 fields */
         tcg_gen_deposit_i32(vpr, vpr,
-                            tcg_constant_i32(a->mask | (a->mask << 4)),
+                            tcg_constant_i32(mask | (mask << 4)),
                             R_V7M_VPR_MASK01_SHIFT,
                             R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
         break;
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
     case ECI_A0A1A2B0:
         /* Update only the 23 mask field */
         tcg_gen_deposit_i32(vpr, vpr,
-                            tcg_constant_i32(a->mask),
+                            tcg_constant_i32(mask),
                             R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
         break;
     default:
         g_assert_not_reached();
     }
     store_cpu_field(vpr, v7m.vpr);
+}
+
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
+{
+    /* mask == 0 is a "related encoding" */
+    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+    gen_vpst(s, a->mask);
     mve_update_and_store_eci(s);
     return true;
 }
--
2.20.1

Implement the MVE integer vector comparison instructions.  These are
"VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings
T1, T2 and T3.

These insns compare corresponding elements in each vector, and update
the VPR.P0 predicate bits with the results of the comparison.  VPT
also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively
"VCMP then VPST".

10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
---
12
---
12
include/hw/arm/xlnx-versal.h | 8 ++++++++
13
target/arm/helper-mve.h | 32 ++++++++++++++++++++++
13
hw/arm/xlnx-versal.c | 21 +++++++++++++++++++++
14
target/arm/mve.decode | 18 +++++++++++-
14
2 files changed, 29 insertions(+)
15
target/arm/mve_helper.c | 56 ++++++++++++++++++++++++++++++++++++++
15
16
target/arm/translate-mve.c | 47 ++++++++++++++++++++++++++++++++
16
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
17
4 files changed, 152 insertions(+), 1 deletion(-)
17
index XXXXXXX..XXXXXXX 100644
18
18
--- a/include/hw/arm/xlnx-versal.h
19
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
19
+++ b/include/hw/arm/xlnx-versal.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper-mve.h
22
+++ b/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
24
DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
25
DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
26
DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32)
27
+
28
+DEF_HELPER_FLAGS_3(mve_vcmpeqb, TCG_CALL_NO_WG, void, env, ptr, ptr)
29
+DEF_HELPER_FLAGS_3(mve_vcmpeqh, TCG_CALL_NO_WG, void, env, ptr, ptr)
30
+DEF_HELPER_FLAGS_3(mve_vcmpeqw, TCG_CALL_NO_WG, void, env, ptr, ptr)
31
+
32
+DEF_HELPER_FLAGS_3(mve_vcmpneb, TCG_CALL_NO_WG, void, env, ptr, ptr)
33
+DEF_HELPER_FLAGS_3(mve_vcmpneh, TCG_CALL_NO_WG, void, env, ptr, ptr)
34
+DEF_HELPER_FLAGS_3(mve_vcmpnew, TCG_CALL_NO_WG, void, env, ptr, ptr)
35
+
36
+DEF_HELPER_FLAGS_3(mve_vcmpcsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
37
+DEF_HELPER_FLAGS_3(mve_vcmpcsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
38
+DEF_HELPER_FLAGS_3(mve_vcmpcsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
39
+
40
+DEF_HELPER_FLAGS_3(mve_vcmphib, TCG_CALL_NO_WG, void, env, ptr, ptr)
41
+DEF_HELPER_FLAGS_3(mve_vcmphih, TCG_CALL_NO_WG, void, env, ptr, ptr)
42
+DEF_HELPER_FLAGS_3(mve_vcmphiw, TCG_CALL_NO_WG, void, env, ptr, ptr)
43
+
44
+DEF_HELPER_FLAGS_3(mve_vcmpgeb, TCG_CALL_NO_WG, void, env, ptr, ptr)
45
+DEF_HELPER_FLAGS_3(mve_vcmpgeh, TCG_CALL_NO_WG, void, env, ptr, ptr)
46
+DEF_HELPER_FLAGS_3(mve_vcmpgew, TCG_CALL_NO_WG, void, env, ptr, ptr)
47
+
48
+DEF_HELPER_FLAGS_3(mve_vcmpltb, TCG_CALL_NO_WG, void, env, ptr, ptr)
49
+DEF_HELPER_FLAGS_3(mve_vcmplth, TCG_CALL_NO_WG, void, env, ptr, ptr)
50
+DEF_HELPER_FLAGS_3(mve_vcmpltw, TCG_CALL_NO_WG, void, env, ptr, ptr)
51
+
52
+DEF_HELPER_FLAGS_3(mve_vcmpgtb, TCG_CALL_NO_WG, void, env, ptr, ptr)
53
+DEF_HELPER_FLAGS_3(mve_vcmpgth, TCG_CALL_NO_WG, void, env, ptr, ptr)
54
+DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
55
+
56
+DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
57
+DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
58
+DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
59
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/mve.decode
62
+++ b/target/arm/mve.decode
20
@@ -XXX,XX +XXX,XX @@
63
@@ -XXX,XX +XXX,XX @@
21
#include "hw/char/pl011.h"
64
&2shift qd qm shift size
22
#include "hw/dma/xlnx-zdma.h"
65
&vidup qd rn size imm
23
#include "hw/net/cadence_gem.h"
66
&viwdup qd rn rm size imm
24
+#include "hw/rtc/xlnx-zynqmp-rtc.h"
67
+&vcmp qm qn size mask
25
68
26
#define TYPE_XLNX_VERSAL "xlnx-versal"
69
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
27
#define XLNX_VERSAL(obj) OBJECT_CHECK(Versal, (obj), TYPE_XLNX_VERSAL)
70
# Note that both Rn and Qd are 3 bits only (no D bit)
28
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
71
@@ -XXX,XX +XXX,XX @@
29
struct {
72
@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
30
SDHCIState sd[XLNX_VERSAL_NR_SDS];
73
size=2 shift=%rshift_i5
31
} iou;
74
32
+
75
+# Vector comparison; 4-bit Qm but 3-bit Qn
33
+ XlnxZynqMPRTC rtc;
76
+%mask_22_13 22:1 13:3
34
} pmc;
77
+@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
35
78
+
36
struct {
79
# Vector loads and stores
37
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
80
38
#define VERSAL_GEM1_IRQ_0 58
81
# Widening loads and narrowing stores:
39
#define VERSAL_GEM1_WAKE_IRQ_0 59
82
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
40
#define VERSAL_ADMA_IRQ_0 60
41
+#define VERSAL_RTC_APB_ERR_IRQ 121
42
#define VERSAL_SD0_IRQ_0 126
43
+#define VERSAL_RTC_ALARM_IRQ 142
44
+#define VERSAL_RTC_SECONDS_IRQ 143
45
46
/* Architecturally reserved IRQs suitable for virtualization. */
47
#define VERSAL_RSVD_IRQ_FIRST 111
48
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
49
#define MM_PMC_SD0_SIZE 0x10000
50
#define MM_PMC_CRP 0xf1260000U
51
#define MM_PMC_CRP_SIZE 0x10000
52
+#define MM_PMC_RTC 0xf12a0000
53
+#define MM_PMC_RTC_SIZE 0x10000
54
#endif
55
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/xlnx-versal.c
58
+++ b/hw/arm/xlnx-versal.c
59
@@ -XXX,XX +XXX,XX @@ static void versal_create_sds(Versal *s, qemu_irq *pic)
60
}
61
}
83
}
62
84
63
+static void versal_create_rtc(Versal *s, qemu_irq *pic)
85
# Predicate operations
86
-%mask_22_13 22:1 13:3
87
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
88
89
# Logical immediate operations (1 reg and modified-immediate)
90
@@ -XXX,XX +XXX,XX @@ VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
91
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
92
93
VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
94
+
95
+# Comparisons. We expand out the conditions which are split across
96
+# encodings T1, T2, T3 and the fc bits. These include VPT, which is
97
+# effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
98
+VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
99
+VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
100
+VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
101
+VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
102
+VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
103
+VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
104
+VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
105
+VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
106
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/mve_helper.c
109
+++ b/target/arm/mve_helper.c
110
@@ -XXX,XX +XXX,XX @@ static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
111
DO_VIDUP_ALL(vidup, DO_ADD)
112
DO_VIWDUP_ALL(viwdup, do_add_wrap)
113
DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
114
+
115
+/*
116
+ * Vector comparison.
117
+ * P0 bits for non-executed beats (where eci_mask is 0) are unchanged.
118
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
119
+ * P0 bits otherwise are updated with the results of the comparisons.
120
+ * We must also keep unchanged the MASK fields at the top of v7m.vpr.
121
+ */
122
+#define DO_VCMP(OP, ESIZE, TYPE, FN) \
123
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, void *vm) \
124
+ { \
125
+ TYPE *n = vn, *m = vm; \
126
+ uint16_t mask = mve_element_mask(env); \
127
+ uint16_t eci_mask = mve_eci_mask(env); \
128
+ uint16_t beatpred = 0; \
129
+ uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \
130
+ unsigned e; \
131
+ for (e = 0; e < 16 / ESIZE; e++) { \
132
+ bool r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)]); \
133
+ /* Comparison sets 0/1 bits for each byte in the element */ \
134
+ beatpred |= r * emask; \
135
+ emask <<= ESIZE; \
136
+ } \
137
+ beatpred &= mask; \
138
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \
139
+ (beatpred & eci_mask); \
140
+ mve_advance_vpt(env); \
141
+ }
142
+
143
+#define DO_VCMP_S(OP, FN) \
144
+ DO_VCMP(OP##b, 1, int8_t, FN) \
145
+ DO_VCMP(OP##h, 2, int16_t, FN) \
146
+ DO_VCMP(OP##w, 4, int32_t, FN)
147
+
148
+#define DO_VCMP_U(OP, FN) \
149
+ DO_VCMP(OP##b, 1, uint8_t, FN) \
150
+ DO_VCMP(OP##h, 2, uint16_t, FN) \
151
+ DO_VCMP(OP##w, 4, uint32_t, FN)
152
+
153
+#define DO_EQ(N, M) ((N) == (M))
154
+#define DO_NE(N, M) ((N) != (M))
155
+#define DO_EQ(N, M) ((N) == (M))
156
+#define DO_EQ(N, M) ((N) == (M))
157
+#define DO_GE(N, M) ((N) >= (M))
158
+#define DO_LT(N, M) ((N) < (M))
159
+#define DO_GT(N, M) ((N) > (M))
160
+#define DO_LE(N, M) ((N) <= (M))
161
+
162
+DO_VCMP_U(vcmpeq, DO_EQ)
163
+DO_VCMP_U(vcmpne, DO_NE)
164
+DO_VCMP_U(vcmpcs, DO_GE)
165
+DO_VCMP_U(vcmphi, DO_GT)
166
+DO_VCMP_S(vcmpge, DO_GE)
167
+DO_VCMP_S(vcmplt, DO_LT)
168
+DO_VCMP_S(vcmpgt, DO_GT)
169
+DO_VCMP_S(vcmple, DO_LE)
170
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/target/arm/translate-mve.c
173
+++ b/target/arm/translate-mve.c
174
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
175
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
176
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
177
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
178
+typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
179
180
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
181
static inline long mve_qreg_offset(unsigned reg)
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
183
};
184
return do_viwdup(s, a, fns[a->size]);
185
}
186
+
187
+static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
64
+{
188
+{
65
+ SysBusDevice *sbd;
189
+ TCGv_ptr qn, qm;
66
+ MemoryRegion *mr;
190
+
67
+
191
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
68
+ sysbus_init_child_obj(OBJECT(s), "rtc", &s->pmc.rtc, sizeof(s->pmc.rtc),
192
+ !fn) {
69
+ TYPE_XLNX_ZYNQMP_RTC);
193
+ return false;
70
+ sbd = SYS_BUS_DEVICE(&s->pmc.rtc);
194
+ }
71
+ qdev_init_nofail(DEVICE(sbd));
195
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
72
+
196
+ return true;
73
+ mr = sysbus_mmio_get_region(sbd, 0);
197
+ }
74
+ memory_region_add_subregion(&s->mr_ps, MM_PMC_RTC, mr);
198
+
75
+
199
+ qn = mve_qreg_ptr(a->qn);
76
+ /*
200
+ qm = mve_qreg_ptr(a->qm);
77
+ * TODO: Connect the ALARM and SECONDS interrupts once our RTC model
201
+ fn(cpu_env, qn, qm);
78
+ * supports them.
202
+ tcg_temp_free_ptr(qn);
79
+ */
203
+ tcg_temp_free_ptr(qm);
80
+ sysbus_connect_irq(sbd, 1, pic[VERSAL_RTC_APB_ERR_IRQ]);
204
+ if (a->mask) {
205
+ /* VPT */
206
+ gen_vpst(s, a->mask);
207
+ }
208
+ mve_update_eci(s);
209
+ return true;
81
+}
210
+}
82
+
211
+
83
/* This takes the board allocated linear DDR memory and creates aliases
212
+#define DO_VCMP(INSN, FN) \
84
* for each split DDR range/aperture on the Versal address map.
213
+ static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \
85
*/
214
+ { \
86
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
215
+ static MVEGenCmpFn * const fns[] = { \
87
versal_create_gems(s, pic);
216
+ gen_helper_mve_##FN##b, \
88
versal_create_admas(s, pic);
217
+ gen_helper_mve_##FN##h, \
89
versal_create_sds(s, pic);
218
+ gen_helper_mve_##FN##w, \
90
+ versal_create_rtc(s, pic);
219
+ NULL, \
91
versal_map_ddr(s);
220
+ }; \
92
versal_unimp(s);
221
+ return do_vcmp(s, a, fns[a->size]); \
93
222
+ }
223
+
224
+DO_VCMP(VCMPEQ, vcmpeq)
225
+DO_VCMP(VCMPNE, vcmpne)
226
+DO_VCMP(VCMPCS, vcmpcs)
227
+DO_VCMP(VCMPHI, vcmphi)
228
+DO_VCMP(VCMPGE, vcmpge)
229
+DO_VCMP(VCMPLT, vcmplt)
230
+DO_VCMP(VCMPGT, vcmpgt)
231
+DO_VCMP(VCMPLE, vcmple)
94
--
232
--
95
2.20.1
233
2.20.1
96
234
97
235
Implement the MVE integer vector comparison instructions that compare
each element against a scalar from a general purpose register.  These
are "VCMP (vector)" encodings T4, T5 and T6, and "VPT (vector)"
encodings T4, T5 and T6.

We have to move the decodetree pattern for VPST, because it
overlaps with VCMP T4 with size = 0b11.

11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200430181003.21682-11-peter.maydell@linaro.org
13
---
11
---
14
target/arm/neon-shared.decode | 7 +++
12
target/arm/helper-mve.h | 32 +++++++++++++++++++++++++++
15
target/arm/translate-neon.inc.c | 32 ++++++++++
13
target/arm/mve.decode | 18 +++++++++++++---
16
target/arm/translate.c | 107 +-------------------------------
14
target/arm/mve_helper.c | 44 +++++++++++++++++++++++++++++++-------
17
3 files changed, 40 insertions(+), 106 deletions(-)
15
target/arm/translate-mve.c | 43 +++++++++++++++++++++++++++++++++++++
18
16
4 files changed, 126 insertions(+), 11 deletions(-)
19
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
17
20
index XXXXXXX..XXXXXXX 100644
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
21
--- a/target/arm/neon-shared.decode
19
index XXXXXXX..XXXXXXX 100644
22
+++ b/target/arm/neon-shared.decode
20
--- a/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
21
+++ b/target/arm/helper-mve.h
24
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
VDOT_scalar 1111 1110 0 . 10 .... .... 1101 . q:1 index:1 u:1 rm:4 \
23
DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
vm=%vm_dp vn=%vn_dp vd=%vd_dp
24
DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
+
25
DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
28
+%vfml_scalar_q0_rm 0:3 5:1
26
+
29
+%vfml_scalar_q1_index 5:1 3:1
27
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
30
+VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 0 . 1 index:1 ... \
28
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
31
+ rm=%vfml_scalar_q0_rm vn=%vn_sp vd=%vd_dp q=0
29
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
32
+VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 1 . 1 . rm:3 \
30
+
33
+ index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp q=1
31
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
34
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
32
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
35
index XXXXXXX..XXXXXXX 100644
33
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
36
--- a/target/arm/translate-neon.inc.c
34
+
37
+++ b/target/arm/translate-neon.inc.c
35
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
36
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
39
tcg_temp_free_ptr(fpst);
37
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
40
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
41
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
42
+
43
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
44
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
45
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
46
+
47
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
48
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
49
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
50
+
51
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
52
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
53
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
54
+
55
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
56
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
57
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
58
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mve.decode
61
+++ b/target/arm/mve.decode
62
@@ -XXX,XX +XXX,XX @@
63
&vidup qd rn size imm
64
&viwdup qd rn rm size imm
65
&vcmp qm qn size mask
66
+&vcmp_scalar qn rm size mask
67
68
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
69
# Note that both Rn and Qd are 3 bits only (no D bit)
70
@@ -XXX,XX +XXX,XX @@
71
# Vector comparison; 4-bit Qm but 3-bit Qn
72
%mask_22_13 22:1 13:3
73
@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
74
+@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
75
+ mask=%mask_22_13
76
77
# Vector loads and stores
78
79
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
80
rdahi=%rdahi rdalo=%rdalo
81
}
82
83
-# Predicate operations
84
-VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
85
-
86
# Logical immediate operations (1 reg and modified-immediate)
87
88
# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
89
@@ -XXX,XX +XXX,XX @@ VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
90
VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
91
VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
92
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
93
+
94
+{
95
+ VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
96
+ VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
97
+}
98
+VCMPNE_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 0 0 .... @vcmp_scalar
99
+VCMPCS_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 1 0 .... @vcmp_scalar
100
+VCMPHI_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 1 0 .... @vcmp_scalar
101
+VCMPGE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 0 0 .... @vcmp_scalar
102
+VCMPLT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 0 0 .... @vcmp_scalar
103
+VCMPGT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 1 0 .... @vcmp_scalar
104
+VCMPLE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 1 0 .... @vcmp_scalar
105
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/mve_helper.c
108
+++ b/target/arm/mve_helper.c
109
@@ -XXX,XX +XXX,XX @@ DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
110
mve_advance_vpt(env); \
111
}
112
113
-#define DO_VCMP_S(OP, FN) \
114
- DO_VCMP(OP##b, 1, int8_t, FN) \
115
- DO_VCMP(OP##h, 2, int16_t, FN) \
116
- DO_VCMP(OP##w, 4, int32_t, FN)
117
+#define DO_VCMP_SCALAR(OP, ESIZE, TYPE, FN) \
118
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
119
+ uint32_t rm) \
120
+ { \
121
+ TYPE *n = vn; \
122
+ uint16_t mask = mve_element_mask(env); \
123
+ uint16_t eci_mask = mve_eci_mask(env); \
124
+ uint16_t beatpred = 0; \
125
+ uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \
126
+ unsigned e; \
127
+ for (e = 0; e < 16 / ESIZE; e++) { \
128
+ bool r = FN(n[H##ESIZE(e)], (TYPE)rm); \
129
+ /* Comparison sets 0/1 bits for each byte in the element */ \
130
+ beatpred |= r * emask; \
131
+ emask <<= ESIZE; \
132
+ } \
133
+ beatpred &= mask; \
134
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \
135
+ (beatpred & eci_mask); \
136
+ mve_advance_vpt(env); \
137
+ }
138
139
-#define DO_VCMP_U(OP, FN) \
140
- DO_VCMP(OP##b, 1, uint8_t, FN) \
141
- DO_VCMP(OP##h, 2, uint16_t, FN) \
142
- DO_VCMP(OP##w, 4, uint32_t, FN)
143
+#define DO_VCMP_S(OP, FN) \
144
+ DO_VCMP(OP##b, 1, int8_t, FN) \
145
+ DO_VCMP(OP##h, 2, int16_t, FN) \
146
+ DO_VCMP(OP##w, 4, int32_t, FN) \
147
+ DO_VCMP_SCALAR(OP##_scalarb, 1, int8_t, FN) \
148
+ DO_VCMP_SCALAR(OP##_scalarh, 2, int16_t, FN) \
149
+ DO_VCMP_SCALAR(OP##_scalarw, 4, int32_t, FN)
150
+
151
+#define DO_VCMP_U(OP, FN) \
152
+ DO_VCMP(OP##b, 1, uint8_t, FN) \
153
+ DO_VCMP(OP##h, 2, uint16_t, FN) \
154
+ DO_VCMP(OP##w, 4, uint32_t, FN) \
155
+ DO_VCMP_SCALAR(OP##_scalarb, 1, uint8_t, FN) \
156
+ DO_VCMP_SCALAR(OP##_scalarh, 2, uint16_t, FN) \
157
+ DO_VCMP_SCALAR(OP##_scalarw, 4, uint32_t, FN)
158
159
#define DO_EQ(N, M) ((N) == (M))
160
#define DO_NE(N, M) ((N) != (M))
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
166
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
167
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
168
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
169
+typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
170
171
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
172
static inline long mve_qreg_offset(unsigned reg)
173
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
40
return true;
174
return true;
41
}
175
}
42
+
176
43
+static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
177
+static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
178
+ MVEGenScalarCmpFn *fn)
44
+{
179
+{
45
+ int opr_sz;
180
+ TCGv_ptr qn;
46
+
181
+ TCGv_i32 rm;
47
+ if (!dc_isar_feature(aa32_fhm, s)) {
182
+
183
+ if (!dc_isar_feature(aa32_mve, s) || !fn || a->rm == 13) {
48
+ return false;
184
+ return false;
49
+ }
185
+ }
50
+
186
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
51
+ /* UNDEF accesses to D16-D31 if they don't exist. */
52
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
53
+ ((a->vd & 0x10) || (a->q && (a->vn & 0x10)))) {
54
+ return false;
55
+ }
56
+
57
+ if (a->vd & a->q) {
58
+ return false;
59
+ }
60
+
61
+ if (!vfp_access_check(s)) {
62
+ return true;
187
+ return true;
63
+ }
188
+ }
64
+
189
+
65
+ opr_sz = (1 + a->q) * 8;
190
+ qn = mve_qreg_ptr(a->qn);
66
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
191
+ if (a->rm == 15) {
67
+ vfp_reg_offset(a->q, a->vn),
192
+ /* Encoding Rm=0b1111 means "constant zero" */
68
+ vfp_reg_offset(a->q, a->rm),
193
+ rm = tcg_constant_i32(0);
69
+ cpu_env, opr_sz, opr_sz,
194
+ } else {
70
+ (a->index << 2) | a->s, /* is_2 == 0 */
195
+ rm = load_reg(s, a->rm);
71
+ gen_helper_gvec_fmlal_idx_a32);
196
+ }
197
+ fn(cpu_env, qn, rm);
198
+ tcg_temp_free_ptr(qn);
199
+ tcg_temp_free_i32(rm);
200
+ if (a->mask) {
201
+ /* VPT */
202
+ gen_vpst(s, a->mask);
203
+ }
204
+ mve_update_eci(s);
72
+ return true;
205
+ return true;
73
+}
206
+}
74
diff --git a/target/arm/translate.c b/target/arm/translate.c
207
+
75
index XXXXXXX..XXXXXXX 100644
208
#define DO_VCMP(INSN, FN) \
76
--- a/target/arm/translate.c
209
static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \
77
+++ b/target/arm/translate.c
210
{ \
78
@@ -XXX,XX +XXX,XX @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
211
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
79
}
212
NULL, \
80
213
}; \
81
#define VFP_REG_SHR(x, n) (((n) > 0) ? (x) >> (n) : (x) << -(n))
214
return do_vcmp(s, a, fns[a->size]); \
82
-#define VFP_SREG(insn, bigbit, smallbit) \
215
+ } \
83
- ((VFP_REG_SHR(insn, bigbit - 1) & 0x1e) | (((insn) >> (smallbit)) & 1))
216
+ static bool trans_##INSN##_scalar(DisasContext *s, \
84
#define VFP_DREG(reg, insn, bigbit, smallbit) do { \
217
+ arg_vcmp_scalar *a) \
85
if (dc_isar_feature(aa32_simd_r32, s)) { \
218
+ { \
86
reg = (((insn) >> (bigbit)) & 0x0f) \
219
+ static MVEGenScalarCmpFn * const fns[] = { \
87
@@ -XXX,XX +XXX,XX @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
220
+ gen_helper_mve_##FN##_scalarb, \
88
reg = ((insn) >> (bigbit)) & 0x0f; \
221
+ gen_helper_mve_##FN##_scalarh, \
89
}} while (0)
222
+ gen_helper_mve_##FN##_scalarw, \
90
223
+ NULL, \
91
-#define VFP_SREG_D(insn) VFP_SREG(insn, 12, 22)
224
+ }; \
92
#define VFP_DREG_D(reg, insn) VFP_DREG(reg, insn, 12, 22)
225
+ return do_vcmp_scalar(s, a, fns[a->size]); \
93
-#define VFP_SREG_N(insn) VFP_SREG(insn, 16, 7)
94
#define VFP_DREG_N(reg, insn) VFP_DREG(reg, insn, 16, 7)
95
-#define VFP_SREG_M(insn) VFP_SREG(insn, 0, 5)
96
#define VFP_DREG_M(reg, insn) VFP_DREG(reg, insn, 0, 5)
97
98
static void gen_neon_dup_low16(TCGv_i32 var)
99
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
100
return 0;
101
}
102
103
-/* Advanced SIMD two registers and a scalar extension.
104
- * 31 24 23 22 20 16 12 11 10 9 8 3 0
105
- * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
106
- * | 1 1 1 1 1 1 1 0 | o1 | D | o2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
107
- * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
108
- *
109
- */
110
-
111
-static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
112
-{
113
- gen_helper_gvec_3 *fn_gvec = NULL;
114
- gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
115
- int rd, rn, rm, opr_sz, data;
116
- int off_rn, off_rm;
117
- bool is_long = false, q = extract32(insn, 6, 1);
118
- bool ptr_is_env = false;
119
-
120
- if ((insn & 0xffa00f10) == 0xfe000810) {
121
- /* VFM[AS]L -- 1111 1110 0.0S .... .... 1000 .Q.1 .... */
122
- int is_s = extract32(insn, 20, 1);
123
- int vm20 = extract32(insn, 0, 3);
124
- int vm3 = extract32(insn, 3, 1);
125
- int m = extract32(insn, 5, 1);
126
- int index;
127
-
128
- if (!dc_isar_feature(aa32_fhm, s)) {
129
- return 1;
130
- }
131
- if (q) {
132
- rm = vm20;
133
- index = m * 2 + vm3;
134
- } else {
135
- rm = vm20 * 2 + m;
136
- index = vm3;
137
- }
138
- is_long = true;
139
- data = (index << 2) | is_s; /* is_2 == 0 */
140
- fn_gvec_ptr = gen_helper_gvec_fmlal_idx_a32;
141
- ptr_is_env = true;
142
- } else {
143
- return 1;
144
- }
145
-
146
- VFP_DREG_D(rd, insn);
147
- if (rd & q) {
148
- return 1;
149
- }
150
- if (q || !is_long) {
151
- VFP_DREG_N(rn, insn);
152
- if (rn & q & !is_long) {
153
- return 1;
154
- }
155
- off_rn = vfp_reg_offset(1, rn);
156
- off_rm = vfp_reg_offset(1, rm);
157
- } else {
158
- rn = VFP_SREG_N(insn);
159
- off_rn = vfp_reg_offset(0, rn);
160
- off_rm = vfp_reg_offset(0, rm);
161
- }
162
- if (s->fp_excp_el) {
163
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
164
- syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
165
- return 0;
166
- }
167
- if (!s->vfp_enabled) {
168
- return 1;
169
- }
170
-
171
- opr_sz = (1 + q) * 8;
172
- if (fn_gvec_ptr) {
173
- TCGv_ptr ptr;
174
- if (ptr_is_env) {
175
- ptr = cpu_env;
176
- } else {
177
- ptr = get_fpstatus_ptr(1);
178
- }
179
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd), off_rn, off_rm, ptr,
180
- opr_sz, opr_sz, data, fn_gvec_ptr);
181
- if (!ptr_is_env) {
182
- tcg_temp_free_ptr(ptr);
183
- }
184
- } else {
185
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd), off_rn, off_rm,
186
- opr_sz, opr_sz, data, fn_gvec);
187
- }
188
- return 0;
189
-}
190
-
191
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
192
{
193
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
194
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
195
}
196
}
197
}
198
- } else if ((insn & 0x0f000a00) == 0x0e000800
199
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
200
- if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
201
- goto illegal_op;
202
- }
203
- return;
204
}
205
goto illegal_op;
206
}
226
}
207
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
227
208
}
228
DO_VCMP(VCMPEQ, vcmpeq)
209
break;
210
}
211
- if ((insn & 0xff000a00) == 0xfe000800
212
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
213
- /* The Thumb2 and ARM encodings are identical. */
214
- if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
215
- goto illegal_op;
216
- }
217
- } else if (((insn >> 24) & 3) == 3) {
218
+ if (((insn >> 24) & 3) == 3) {
219
/* Translate into the equivalent ARM encoding. */
220
insn = (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
221
if (disas_neon_data_insn(s, insn)) {
222
--
229
--
223
2.20.1
230
2.20.1
224
231
225
232
Implement the MVE VPSEL insn, which sets each byte of the destination
vector Qd to the byte from either Qn or Qm depending on the value of
the corresponding bit in VPR.P0.

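Ignoring the usual predication of writes to Qd, the operation reduces
to this standalone byte-wise select (illustration only, not part of
the patch):

    #include <stdint.h>

    void vpsel(uint8_t *qd, const uint8_t *qn, const uint8_t *qm,
               uint16_t p0)
    {
        for (int e = 0; e < 16; e++) {
            qd[e] = (p0 & (1u << e)) ? qn[e] : qm[e];
        }
    }
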
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/mve.decode      |  7 +++++--
 target/arm/mve_helper.c    | 19 +++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
 # effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
 VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
 VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
-VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
-VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
+{
+  VPSEL  1111 1110 0 . 11 ... 1 ... 0 1111 . 0 . 0 ... 1 @2op_nosz
+  VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
+  VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
+}
 VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
 VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
 VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VCMP_S(vcmpge, DO_GE)
 DO_VCMP_S(vcmplt, DO_LT)
 DO_VCMP_S(vcmpgt, DO_GT)
 DO_VCMP_S(vcmple, DO_LE)
+
+void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    /*
+     * Qd[n] = VPR.P0[n] ? Qn[n] : Qm[n]
+     * but note that whether bytes are written to Qd is still subject
+     * to (all forms of) predication in the usual way.
+     */
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint16_t mask = mve_element_mask(env);
+    uint16_t p0 = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
+    unsigned e;
+    for (e = 0; e < 16 / 8; e++, mask >>= 8, p0 >>= 8) {
+        uint64_t r = m[H8(e)];
+        mergemask(&r, n[H8(e)], p0);
+        mergemask(&d[H8(e)], r, mask);
+    }
+    mve_advance_vpt(env);
+}
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
 DO_LOGIC(VORN, gen_helper_mve_vorn)
 DO_LOGIC(VEOR, gen_helper_mve_veor)

+DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
+
 #define DO_2OP(INSN, FN) \
     static bool trans_##INSN(DisasContext *s, arg_2op *a) \
     { \
--
2.20.1

Implement the MVE VMLAS insn, which multiplies a vector by a vector
and adds a scalar.

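Per element the operation is Qda[e] = Qda[e] * Qn[e] + Rm. As a
standalone illustration (not part of the patch; predication omitted),
for 32-bit elements:

    #include <stdint.h>

    void vmlas_w(uint32_t *qda, const uint32_t *qn, uint32_t rm)
    {
        for (int e = 0; e < 4; e++) {   /* four 32-bit elements per Q reg */
            qda[e] = qda[e] * qn[e] + rm;
        }
    }
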
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  4 ++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c |  1 +
 4 files changed, 34 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

+# The U bit (28) is don't-care because it does not affect the result
+VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
+
 # Vector add across vector
 {
   VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
         mve_advance_vpt(env);                                           \
     }

+/* "accumulating" version where FN takes d as well as n and m */
+#define DO_2OP_ACC_SCALAR(OP, ESIZE, TYPE, FN)                          \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
+                                uint32_t rm)                            \
+    {                                                                   \
+        TYPE *d = vd, *n = vn;                                          \
+        TYPE m = rm;                                                    \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            mergemask(&d[H##ESIZE(e)],                                  \
+                      FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m), mask);     \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
 /* provide unsigned 2-op scalar helpers for all sizes */
 #define DO_2OP_SCALAR_U(OP, FN)                 \
     DO_2OP_SCALAR(OP##b, 1, uint8_t, FN)        \
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
     DO_2OP_SCALAR(OP##h, 2, int16_t, FN)        \
     DO_2OP_SCALAR(OP##w, 4, int32_t, FN)

+#define DO_2OP_ACC_SCALAR_U(OP, FN)             \
+    DO_2OP_ACC_SCALAR(OP##b, 1, uint8_t, FN)    \
+    DO_2OP_ACC_SCALAR(OP##h, 2, uint16_t, FN)   \
+    DO_2OP_ACC_SCALAR(OP##w, 4, uint32_t, FN)
+
 DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
 DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
 DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+/* Vector by vector plus scalar */
+#define DO_VMLAS(D, N, M) ((N) * (D) + (M))
87
+
88
+DO_2OP_ACC_SCALAR_U(vmlas, DO_VMLAS)
89
+
90
/*
91
* Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
92
* input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
93
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/translate-mve.c
96
+++ b/target/arm/translate-mve.c
97
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
98
DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
99
DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
100
DO_2OP_SCALAR(VBRSR, vbrsr)
101
+DO_2OP_SCALAR(VMLAS, vmlas)
102
103
static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
67
{
104
{
68
--
105
--
69
2.20.1
106
2.20.1
70
107
71
108
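The per-element semantics of the VMLAS patch above can be summarised in a
few lines of plain C. This is a hedged sketch only: array-based, with
predication (the VPT mask handled by mergemask() above) deliberately
omitted, and illustrative names rather than QEMU API:

#include <stdint.h>

/* Reference model for MVE VMLAS on 32-bit elements: each element of
 * Qda is multiplied by the corresponding element of Qn, then the
 * general-purpose scalar Rm is added:  Qda[e] = Qn[e] * Qda[e] + Rm.
 * A 128-bit Q register holds four 32-bit elements.
 */
static void vmlas_w_ref(uint32_t *qda, const uint32_t *qn,
                        uint32_t rm, int nelems)
{
    for (int e = 0; e < nelems; e++) {
        qda[e] = qn[e] * qda[e] + rm;
    }
}

Note how this differs from a conventional multiply-accumulate: the
destination register supplies one multiplicand, and the scalar is the
addend, which is exactly why the helper macro passes d as well as n and m.
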
Convert the Neon "load/store multiple structures" insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-12-peter.maydell@linaro.org
---
 target/arm/neon-ls.decode       |   7 ++
 target/arm/translate-neon.inc.c | 124 ++++++++++++++++++++++++++++++++
 target/arm/translate.c          |  91 +----------------------
 3 files changed, 133 insertions(+), 89 deletions(-)

diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-ls.decode
+++ b/target/arm/neon-ls.decode
@@ -XXX,XX +XXX,XX @@
 # 0b1111_1001_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
 # This file works on the A32 encoding only; calling code for T32 has to
 # transform the insn into the A32 version first.
+
+%vd_dp  22:1 12:4
+
+# Neon load/store multiple structures
+
+VLDST_multiple 1111 0100 0 . l:1 0 rn:4 .... itype:4 size:2 align:2 rm:4 \
+               vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
                        gen_helper_gvec_fmlal_idx_a32);
     return true;
 }
+
+static struct {
+    int nregs;
+    int interleave;
+    int spacing;
+} const neon_ls_element_type[11] = {
+    {1, 4, 1},
+    {1, 4, 2},
+    {4, 1, 1},
+    {2, 2, 2},
+    {1, 3, 1},
+    {1, 3, 2},
+    {3, 1, 1},
+    {1, 1, 1},
+    {1, 2, 1},
+    {1, 2, 2},
+    {2, 1, 1}
+};
+
+static void gen_neon_ldst_base_update(DisasContext *s, int rm, int rn,
+                                      int stride)
+{
+    if (rm != 15) {
+        TCGv_i32 base;
+
+        base = load_reg(s, rn);
+        if (rm == 13) {
+            tcg_gen_addi_i32(base, base, stride);
+        } else {
+            TCGv_i32 index;
+            index = load_reg(s, rm);
+            tcg_gen_add_i32(base, base, index);
+            tcg_temp_free_i32(index);
+        }
+        store_reg(s, rn, base);
+    }
+}
+
+static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
+{
+    /* Neon load/store multiple structures */
+    int nregs, interleave, spacing, reg, n;
+    MemOp endian = s->be_data;
+    int mmu_idx = get_mem_index(s);
+    int size = a->size;
+    TCGv_i64 tmp64;
+    TCGv_i32 addr, tmp;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
+        return false;
+    }
+    if (a->itype > 10) {
+        return false;
+    }
+    /* Catch UNDEF cases for bad values of align field */
+    switch (a->itype & 0xc) {
+    case 4:
+        if (a->align >= 2) {
+            return false;
+        }
+        break;
+    case 8:
+        if (a->align == 3) {
+            return false;
+        }
+        break;
+    default:
+        break;
+    }
+    nregs = neon_ls_element_type[a->itype].nregs;
+    interleave = neon_ls_element_type[a->itype].interleave;
+    spacing = neon_ls_element_type[a->itype].spacing;
+    if (size == 3 && (interleave | spacing) != 1) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    /* For our purposes, bytes are always little-endian. */
+    if (size == 0) {
+        endian = MO_LE;
+    }
+    /*
+     * Consecutive little-endian elements from a single register
+     * can be promoted to a larger little-endian operation.
+     */
+    if (interleave == 1 && endian == MO_LE) {
+        size = 3;
+    }
+    tmp64 = tcg_temp_new_i64();
+    addr = tcg_temp_new_i32();
+    tmp = tcg_const_i32(1 << size);
+    load_reg_var(s, addr, a->rn);
+    for (reg = 0; reg < nregs; reg++) {
+        for (n = 0; n < 8 >> size; n++) {
+            int xs;
+            for (xs = 0; xs < interleave; xs++) {
+                int tt = a->vd + reg + spacing * xs;
+
+                if (a->l) {
+                    gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
+                    neon_store_element64(tt, n, size, tmp64);
+                } else {
+                    neon_load_element64(tmp64, tt, n, size);
+                    gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
+                }
+                tcg_gen_add_i32(addr, addr, tmp);
+            }
+        }
+    }
+    tcg_temp_free_i32(addr);
+    tcg_temp_free_i32(tmp);
+    tcg_temp_free_i64(tmp64);
+
+    gen_neon_ldst_base_update(s, a->rm, a->rn, nregs * interleave * 8);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_neon_trn_u16(TCGv_i32 t0, TCGv_i32 t1)
 }
 
 
-static struct {
-    int nregs;
-    int interleave;
-    int spacing;
-} const neon_ls_element_type[11] = {
-    {1, 4, 1},
-    {1, 4, 2},
-    {4, 1, 1},
-    {2, 2, 2},
-    {1, 3, 1},
-    {1, 3, 2},
-    {3, 1, 1},
-    {1, 1, 1},
-    {1, 2, 1},
-    {1, 2, 2},
-    {2, 1, 1}
-};
-
 /* Translate a NEON load/store element instruction.  Return nonzero if the
    instruction is invalid.  */
 static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 {
     int rd, rn, rm;
-    int op;
     int nregs;
-    int interleave;
-    int spacing;
     int stride;
     int size;
     int reg;
     int load;
-    int n;
     int vec_size;
-    int mmu_idx;
-    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
-    TCGv_i32 tmp2;
-    TCGv_i64 tmp64;
 
     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     rn = (insn >> 16) & 0xf;
     rm = insn & 0xf;
     load = (insn & (1 << 21)) != 0;
-    endian = s->be_data;
-    mmu_idx = get_mem_index(s);
     if ((insn & (1 << 23)) == 0) {
-        /* Load store all elements.  */
-        op = (insn >> 8) & 0xf;
-        size = (insn >> 6) & 3;
-        if (op > 10)
-            return 1;
-        /* Catch UNDEF cases for bad values of align field */
-        switch (op & 0xc) {
-        case 4:
-            if (((insn >> 5) & 1) == 1) {
-                return 1;
-            }
-            break;
-        case 8:
-            if (((insn >> 4) & 3) == 3) {
-                return 1;
-            }
-            break;
-        default:
-            break;
-        }
-        nregs = neon_ls_element_type[op].nregs;
-        interleave = neon_ls_element_type[op].interleave;
-        spacing = neon_ls_element_type[op].spacing;
-        if (size == 3 && (interleave | spacing) != 1) {
-            return 1;
-        }
-        /* For our purposes, bytes are always little-endian.  */
-        if (size == 0) {
-            endian = MO_LE;
-        }
-        /* Consecutive little-endian elements from a single register
-         * can be promoted to a larger little-endian operation.
-         */
-        if (interleave == 1 && endian == MO_LE) {
-            size = 3;
-        }
-        tmp64 = tcg_temp_new_i64();
-        addr = tcg_temp_new_i32();
-        tmp2 = tcg_const_i32(1 << size);
-        load_reg_var(s, addr, rn);
-        for (reg = 0; reg < nregs; reg++) {
-            for (n = 0; n < 8 >> size; n++) {
-                int xs;
-                for (xs = 0; xs < interleave; xs++) {
-                    int tt = rd + reg + spacing * xs;
-
-                    if (load) {
-                        gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
-                        neon_store_element64(tt, n, size, tmp64);
-                    } else {
-                        neon_load_element64(tmp64, tt, n, size);
-                        gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
-                    }
-                    tcg_gen_add_i32(addr, addr, tmp2);
-                }
-            }
-        }
-        tcg_temp_free_i32(addr);
-        tcg_temp_free_i32(tmp2);
-        tcg_temp_free_i64(tmp64);
-        stride = nregs * interleave * 8;
+        /* Load store all elements -- handled already by decodetree */
+        return 1;
     } else {
         size = (insn >> 10) & 3;
         if (size == 3) {
-- 
2.20.1


Implement the MVE instructions which perform shifts by a scalar.
These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2.  They take the
shift amount in a general purpose register and shift every element in
the vector by that amount.

Mostly we can reuse the helper functions for shift-by-immediate; we
do need two new helpers for VQRSHL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  8 +++++++
 target/arm/mve.decode      | 23 ++++++++++++++++---
 target/arm/mve_helper.c    |  2 ++
 target/arm/translate-mve.c | 46 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vqrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vshllbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vshllbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vshllbub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &viwdup qd rn rm size imm
 &vcmp qm qn size mask
 &vcmp_scalar qn rm size mask
+&shl_scalar qda rm size
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 @2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
          size=2 shift=%rshift_i5
 
+@shl_scalar .... .... .... size:2 .. .... .... .... rm:4 &shl_scalar qda=%qd
+
 # Vector comparison; 4-bit Qm but 3-bit Qn
 %mask_22_13 22:1 13:3
 @vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
 
 VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
 VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
-VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+
+{
+  VSHL_S_scalar   1110 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+  VRSHL_S_scalar  1110 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+  VQSHL_S_scalar  1110 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+  VQRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+  VMUL_scalar     1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
+{
+  VSHL_U_scalar   1111 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+  VRSHL_U_scalar  1111 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+  VQSHL_U_scalar  1111 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+  VQRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+  VBRSR           1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
 VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
@@ -XXX,XX +XXX,XX @@ VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
          size=%size_28
 }
 
-VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
-
 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
 DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
 DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
 DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
+DO_2SHIFT_SAT_U(vqrshli_u, DO_UQRSHL_OP)
+DO_2SHIFT_SAT_S(vqrshli_s, DO_SQRSHL_OP)
 
 /* Shift-and-insert; we always work with 64 bits at a time */
 #define DO_2SHIFT_INSERT(OP, ESIZE, SHIFTFN, MASKFN)                    \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VRSHRI_U, vrshli_u, true)
 DO_2SHIFT(VSRI, vsri, false)
 DO_2SHIFT(VSLI, vsli, false)
 
+static bool do_2shift_scalar(DisasContext *s, arg_shl_scalar *a,
+                             MVEGenTwoOpShiftFn *fn)
+{
+    TCGv_ptr qda;
+    TCGv_i32 rm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qda) ||
+        a->rm == 13 || a->rm == 15 || !fn) {
+        /* Rm cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qda = mve_qreg_ptr(a->qda);
+    rm = load_reg(s, a->rm);
+    fn(cpu_env, qda, qda, rm);
+    tcg_temp_free_ptr(qda);
+    tcg_temp_free_i32(rm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_2SHIFT_SCALAR(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_shl_scalar *a)        \
+    {                                                                   \
+        static MVEGenTwoOpShiftFn * const fns[] = {                     \
+            gen_helper_mve_##FN##b,                                     \
+            gen_helper_mve_##FN##h,                                     \
+            gen_helper_mve_##FN##w,                                     \
+            NULL,                                                       \
+        };                                                              \
+        return do_2shift_scalar(s, a, fns[a->size]);                    \
+    }
+
+DO_2SHIFT_SCALAR(VSHL_S_scalar, vshli_s)
+DO_2SHIFT_SCALAR(VSHL_U_scalar, vshli_u)
+DO_2SHIFT_SCALAR(VRSHL_S_scalar, vrshli_s)
+DO_2SHIFT_SCALAR(VRSHL_U_scalar, vrshli_u)
+DO_2SHIFT_SCALAR(VQSHL_S_scalar, vqshli_s)
+DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
+DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
+DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
+
 #define DO_VSHLL(INSN, FN)                                      \
     static bool trans_##INSN(DisasContext *s, arg_2shift *a)    \
     {                                                           \
-- 
2.20.1

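As a reference for the shift-by-scalar patch above, the core behaviour can
be modelled in plain C. This is a hedged sketch under the usual Neon/MVE
register-shift convention (positive shift amounts shift left, negative
ones shift right); predication, the rounding and saturating variants, and
the per-ESIZE expansion are all omitted, and the names are illustrative:

#include <stdint.h>

/* Plain model of MVE VSHL (register) on 32-bit signed elements:
 * the bottom byte of Rm gives a signed shift count applied to
 * every element of Qda.
 */
static void vshl_s_scalar_ref(int32_t *qda, int8_t shift, int nelems)
{
    for (int e = 0; e < nelems; e++) {
        if (shift >= 0) {
            /* left shift; results wider than 32 bits are lost */
            qda[e] = shift >= 32 ? 0 : (int32_t)((uint32_t)qda[e] << shift);
        } else {
            /* negative count means arithmetic right shift */
            int s = -shift;
            qda[e] = qda[e] >> (s >= 32 ? 31 : s);
        }
    }
}

The saturating (VQSHL/VQRSHL) forms replace the plain shift with the
saturating helpers reused from the shift-by-immediate patches, which is
why only the two VQRSHL helper expansions are new here.
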
Convert the Neon comparison ops in the 3-reg-same grouping
to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-18-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       |  8 ++++++++
 target/arm/translate-neon.inc.c | 22 ++++++++++++++++++++++
 target/arm/translate.c          | 23 +++--------------------
 3 files changed, 33 insertions(+), 20 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VBSL_3s 1111 001 1 0 . 01 .... .... 0001 ... 1 .... @3same_logic
 VBIT_3s 1111 001 1 0 . 10 .... .... 0001 ... 1 .... @3same_logic
 VBIF_3s 1111 001 1 0 . 11 .... .... 0001 ... 1 .... @3same_logic
 
+VCGT_S_3s 1111 001 0 0 . .. .... .... 0011 . . . 0 .... @3same
+VCGT_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 0 .... @3same
+VCGE_S_3s 1111 001 0 0 . .. .... .... 0011 . . . 1 .... @3same
+VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same
+
 VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
 VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
 VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
@@ -XXX,XX +XXX,XX @@ VMIN_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 1 .... @3same
 
 VADD_3s 1111 001 0 0 . .. .... .... 1000 . . . 0 .... @3same
 VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same
+
+VTST_3s 1111 001 0 0 . .. .... .... 1000 . . . 1 .... @3same
+VCEQ_3s 1111 001 1 0 . .. .... .... 1000 . . . 1 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VMAX_S, tcg_gen_gvec_smax)
 DO_3SAME_NO_SZ_3(VMAX_U, tcg_gen_gvec_umax)
 DO_3SAME_NO_SZ_3(VMIN_S, tcg_gen_gvec_smin)
 DO_3SAME_NO_SZ_3(VMIN_U, tcg_gen_gvec_umin)
+
+#define DO_3SAME_CMP(INSN, COND)                                        \
+    static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs,         \
+                                uint32_t rn_ofs, uint32_t rm_ofs,       \
+                                uint32_t oprsz, uint32_t maxsz)         \
+    {                                                                   \
+        tcg_gen_gvec_cmp(COND, vece, rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz); \
+    }                                                                   \
+    DO_3SAME_NO_SZ_3(INSN, gen_##INSN##_3s)
+
+DO_3SAME_CMP(VCGT_S, TCG_COND_GT)
+DO_3SAME_CMP(VCGT_U, TCG_COND_GTU)
+DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
+DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
+DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
+
+static void gen_VTST_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+                        uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &cmtst_op[vece]);
+}
+DO_3SAME_NO_SZ_3(VTST, gen_VTST_3s)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                        u ? &mls_op[size] : &mla_op[size]);
         return 0;
 
-    case NEON_3R_VTST_VCEQ:
-        if (u) { /* VCEQ */
-            tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
-                             vec_size, vec_size);
-        } else { /* VTST */
-            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
-                           vec_size, vec_size, &cmtst_op[size]);
-        }
-        return 0;
-
-    case NEON_3R_VCGT:
-        tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
-                         rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
-        return 0;
-
-    case NEON_3R_VCGE:
-        tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
-                         rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
-        return 0;
-
     case NEON_3R_VSHL:
         /* Note the operation is vshl vd,vm,vn */
         tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     case NEON_3R_LOGIC:
     case NEON_3R_VMAX:
     case NEON_3R_VMIN:
+    case NEON_3R_VTST_VCEQ:
+    case NEON_3R_VCGT:
+    case NEON_3R_VCGE:
         /* Already handled by decodetree */
         return 1;
     }
-- 
2.20.1


All the users of the vmlaldav formats have an 'x' bit in bit 12 and an
'a' bit in bit 5; move these to the format rather than specifying them
in each insn pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
 
 &vmlaldav rdahi rdalo size qn qm x a
 
-@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
+@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
           qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
-@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \
+@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
           qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
-VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
-VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
+VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
 
-VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
+VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
 
-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
 
-VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
+VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
 
 # Scalar operations
 
-- 
2.20.1

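For the Neon comparison conversion above, the lane semantics that
tcg_gen_gvec_cmp() and the cmtst expander implement can be shown in a
couple of lines of C. This is a hedged per-lane sketch (32-bit lanes,
illustrative names, not QEMU API): a true condition yields an all-ones
lane, false yields all-zeroes, and VTST is a "test bits" compare.

#include <stdint.h>

/* One 32-bit lane of VCGT.S32: all-ones if n > m (signed), else 0 */
static uint32_t vcgt_s_lane(int32_t n, int32_t m)
{
    return n > m ? UINT32_MAX : 0;
}

/* One 32-bit lane of VTST.32: all-ones if n and m share any set bit */
static uint32_t vtst_lane(uint32_t n, uint32_t m)
{
    return (n & m) ? UINT32_MAX : 0;
}

This all-ones/all-zeroes convention is what lets the generic gvec compare
expander serve every element size with a single TCG condition code.
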
Convert the V[US]DOT (scalar) insns in the 2reg-scalar-ext group
to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-10-peter.maydell@linaro.org
---
 target/arm/neon-shared.decode   |  3 +++
 target/arm/translate-neon.inc.c | 35 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 13 +-----------
 3 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -XXX,XX +XXX,XX @@ VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
              vn=%vn_dp vd=%vd_dp size=0
 VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
              vm=%vm_dp vn=%vn_dp vd=%vd_dp size=1 index=0
+
+VDOT_scalar  1111 1110 0 . 10 .... .... 1101 . q:1 index:1 u:1 rm:4 \
+             vm=%vm_dp vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
     tcg_temp_free_ptr(fpst);
     return true;
 }
+
+static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
+{
+    gen_helper_gvec_3 *fn_gvec;
+    int opr_sz;
+    TCGv_ptr fpst;
+
+    if (!dc_isar_feature(aa32_dp, s)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vd | a->vn) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    fn_gvec = a->u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
+    opr_sz = (1 + a->q) * 8;
+    fpst = get_fpstatus_ptr(1);
+    tcg_gen_gvec_3_ool(vfp_reg_offset(1, a->vd),
+                       vfp_reg_offset(1, a->vn),
+                       vfp_reg_offset(1, a->rm),
+                       opr_sz, opr_sz, a->index, fn_gvec);
+    tcg_temp_free_ptr(fpst);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
     bool is_long = false, q = extract32(insn, 6, 1);
     bool ptr_is_env = false;
 
-    if ((insn & 0xffb00f00) == 0xfe200d00) {
-        /* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
-        int u = extract32(insn, 4, 1);
-
-        if (!dc_isar_feature(aa32_dp, s)) {
-            return 1;
-        }
-        fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
-        /* rm is just Vm, and index is M. */
-        data = extract32(insn, 5, 1); /* index */
-        rm = extract32(insn, 0, 4);
-    } else if ((insn & 0xffa00f10) == 0xfe000810) {
+    if ((insn & 0xffa00f10) == 0xfe000810) {
         /* VFM[AS]L -- 1111 1110 0.0S .... .... 1000 .Q.1 .... */
         int is_s = extract32(insn, 20, 1);
         int vm20 = extract32(insn, 0, 3);
-- 
2.20.1


Implement the MVE integer min/max across vector insns
VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum
or minimum of the vector elements together with a general
purpose register, and store the result back into the
general purpose register.

These insns overlap with VRMLALDAVH (they use what would
be RdaHi=0b110).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 20 ++++++++++++
 target/arm/mve.decode      | 18 +++++++++--
 target/arm/mve_helper.c    | 66 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 48 +++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
 
+DEF_HELPER_FLAGS_3(mve_vmaxvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vminvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
 DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)
 
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vcmp qm qn size mask
 &vcmp_scalar qn rm size mask
 &shl_scalar qda rm size
+&vmaxv qm rda size
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 @vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
              mask=%mask_22_13
 
+@vmaxv .... .... .... size:2 .. rda:4 .... .... .... &vmaxv qm=%qm
+
 # Vector loads and stores
 
 # Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
 
 VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
 
-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+{
+  VMAXV_S      1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINV_S      1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMAXAV       1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINAV       1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}
+
+{
+  VMAXV_U      1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINV_U      1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}
 
 VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvub, 1, uint8_t)
 DO_VADDV(vaddvuh, 2, uint16_t)
 DO_VADDV(vaddvuw, 4, uint32_t)
 
+/*
+ * Vector max/min across vector. Unlike VADDV, we must
+ * read ra as the element size, not its full width.
+ * We work with int64_t internally for simplicity.
+ */
+#define DO_VMAXMINV(OP, ESIZE, TYPE, RATYPE, FN)                        \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm,         \
+                                    uint32_t ra_in)                     \
+    {                                                                   \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        TYPE *m = vm;                                                   \
+        int64_t ra = (RATYPE)ra_in;                                     \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            if (mask & 1) {                                             \
+                ra = FN(ra, m[H##ESIZE(e)]);                            \
+            }                                                           \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+        return ra;                                                      \
+    }                                                                   \
+
+#define DO_VMAXMINV_U(INSN, FN)                         \
+    DO_VMAXMINV(INSN##b, 1, uint8_t, uint8_t, FN)       \
+    DO_VMAXMINV(INSN##h, 2, uint16_t, uint16_t, FN)     \
+    DO_VMAXMINV(INSN##w, 4, uint32_t, uint32_t, FN)
+#define DO_VMAXMINV_S(INSN, FN)                         \
+    DO_VMAXMINV(INSN##b, 1, int8_t, int8_t, FN)         \
+    DO_VMAXMINV(INSN##h, 2, int16_t, int16_t, FN)       \
+    DO_VMAXMINV(INSN##w, 4, int32_t, int32_t, FN)
+
+/*
+ * Helpers for max and min of absolute values across vector:
+ * note that we only take the absolute value of 'm', not 'n'
+ */
+static int64_t do_maxa(int64_t n, int64_t m)
+{
+    if (m < 0) {
+        m = -m;
+    }
+    return MAX(n, m);
+}
+
+static int64_t do_mina(int64_t n, int64_t m)
+{
+    if (m < 0) {
+        m = -m;
+    }
+    return MIN(n, m);
+}
+
+DO_VMAXMINV_S(vmaxvs, DO_MAX)
+DO_VMAXMINV_U(vmaxvu, DO_MAX)
+DO_VMAXMINV_S(vminvs, DO_MIN)
+DO_VMAXMINV_U(vminvu, DO_MIN)
+/*
+ * VMAXAV, VMINAV treat the general purpose input as unsigned
+ * and the vector elements as signed.
+ */
+DO_VMAXMINV(vmaxavb, 1, int8_t, uint8_t, do_maxa)
+DO_VMAXMINV(vmaxavh, 2, int16_t, uint16_t, do_maxa)
+DO_VMAXMINV(vmaxavw, 4, int32_t, uint32_t, do_maxa)
+DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
+DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
+DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)
+
 #define DO_VADDLV(OP, TYPE, LTYPE)                              \
     uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
                                     uint64_t ra)                \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_VCMP(VCMPGE, vcmpge)
 DO_VCMP(VCMPLT, vcmplt)
 DO_VCMP(VCMPGT, vcmpgt)
 DO_VCMP(VCMPLE, vcmple)
+
+static bool do_vmaxv(DisasContext *s, arg_vmaxv *a, MVEGenVADDVFn fn)
+{
+    /*
+     * MIN/MAX operations across a vector: compute the min or
+     * max of the initial value in a general purpose register
+     * and all the elements in the vector, and store it back
+     * into the general purpose register.
+     */
+    TCGv_ptr qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
+        !fn || a->rda == 13 || a->rda == 15) {
+        /* Rda cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    rda = load_reg(s, a->rda);
+    fn(rda, cpu_env, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_VMAXV(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_vmaxv *a)     \
+    {                                                           \
+        static MVEGenVADDVFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_vmaxv(s, a, fns[a->size]);                    \
+    }
+
+DO_VMAXV(VMAXV_S, vmaxvs)
+DO_VMAXV(VMAXV_U, vmaxvu)
+DO_VMAXV(VMAXAV, vmaxav)
+DO_VMAXV(VMINV_S, vminvs)
+DO_VMAXV(VMINV_U, vminvu)
+DO_VMAXV(VMINAV, vminav)
-- 
2.20.1

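The subtlety the VMAXAV/VMINAV helpers above encode (signed elements,
unsigned accumulator, absolute value taken of the element only) is easier
to see unrolled. A minimal C sketch for the 32-bit VMAXAV case, assuming
plain arrays and ignoring predication (illustrative names, not QEMU API):

#include <stdint.h>

/* Reference model for MVE VMAXAV.S32: fold max(ra, |element|) into a
 * general-purpose register; mirrors do_maxa() in the patch above.
 * Using int64_t for the negation keeps -INT32_MIN well-defined.
 */
static uint32_t vmaxav_w_ref(uint32_t ra, const int32_t *qm, int nelems)
{
    for (int e = 0; e < nelems; e++) {
        int64_t m = qm[e];
        uint32_t a = (uint32_t)(m < 0 ? -m : m);
        if (a > ra) {
            ra = a;
        }
    }
    return ra;
}

VMAXV/VMINV are the same fold without the absolute value, with the
accumulator read at the element's own signedness and width.
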
Convert the Neon 3-reg-same VADD and VSUB insns to decodetree.

Note that we don't need the neon_3r_sizes[op] check here because all
size values are OK for VADD and VSUB; we'll add this when we convert
the first insn that has size restrictions.

For this we need one of the GVecGen*Fn typedefs currently in
translate-a64.h; move them all to translate.h as a block so they
are visible to the 32-bit decoder.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-15-peter.maydell@linaro.org
---
 target/arm/translate-a64.h      |  9 --------
 target/arm/translate.h          |  9 ++++++++
 target/arm/neon-dp.decode       | 17 +++++++++++++++
 target/arm/translate-neon.inc.c | 38 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 14 ++++--------
 5 files changed, 68 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
 
 bool disas_sve(DisasContext *, uint32_t);
 
-/* Note that the gvec expanders operate on offsets + sizes. */
-typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
-typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
-                         uint32_t, uint32_t);
-typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
-                        uint32_t, uint32_t, uint32_t);
-typedef void GVecGen4Fn(unsigned, uint32_t, uint32_t, uint32_t,
-                        uint32_t, uint32_t, uint32_t);
-
 #endif /* TARGET_ARM_TRANSLATE_A64_H */
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
 #define dc_isar_feature(name, ctx) \
     ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
 
+/* Note that the gvec expanders operate on offsets + sizes. */
+typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
+typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
+                         uint32_t, uint32_t);
+typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
+                        uint32_t, uint32_t, uint32_t);
+typedef void GVecGen4Fn(unsigned, uint32_t, uint32_t, uint32_t,
+                        uint32_t, uint32_t, uint32_t);
+
 #endif /* TARGET_ARM_TRANSLATE_H */
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@
 #
 # This file is processed by scripts/decodetree.py
 #
+# VFP/Neon register fields; same as vfp.decode
+%vm_dp  5:1 0:4
+%vn_dp  7:1 16:4
+%vd_dp  22:1 12:4
 
 # Encodings for Neon data processing instructions where the T32 encoding
 # is a simple transformation of the A32 encoding.
@@ -XXX,XX +XXX,XX @@
 # 0b111p_1111_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
 # This file works on the A32 encoding only; calling code for T32 has to
 # transform the insn into the A32 version first.
+
+######################################################################
+# 3-reg-same grouping:
+# 1111 001 U 0 D sz:2 Vn:4 Vd:4 opc:4 N Q M op Vm:4
+######################################################################
+
+&3same vm vn vd q size
+
+@3same .... ... . . . size:2 .... .... .... . q:1 . . .... \
+       &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
+
+VADD_3s 1111 001 0 0 . .. .... .... 1000 . . . 0 .... @3same
+VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
 
     return true;
 }
+
+static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
+{
+    int vec_size = a->q ? 16 : 8;
+    int rd_ofs = neon_reg_offset(a->vd, 0);
+    int rn_ofs = neon_reg_offset(a->vn, 0);
+    int rm_ofs = neon_reg_offset(a->vm, 0);
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vn | a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    fn(a->size, rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
+    return true;
+}
+
+#define DO_3SAME(INSN, FUNC)                                            \
+    static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a)        \
+    {                                                                   \
+        return do_3same(s, a, FUNC);                                    \
+    }
+
+DO_3SAME(VADD, tcg_gen_gvec_add)
+DO_3SAME(VSUB, tcg_gen_gvec_sub)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         }
         return 0;
 
-    case NEON_3R_VADD_VSUB:
-        if (u) {
-            tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
-                             vec_size, vec_size);
-        } else {
-            tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
-                             vec_size, vec_size);
-        }
-        return 0;
-
     case NEON_3R_VQADD:
         tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
                        rn_ofs, rm_ofs, vec_size, vec_size,
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
                        u ? &ushl_op[size] : &sshl_op[size]);
         return 0;
+
+    case NEON_3R_VADD_VSUB:
+        /* Already handled by decodetree */
+        return 1;
     }
 
     if (size == 3) {
-- 
2.20.1


Implement the MVE VABAV insn, which computes absolute differences
between elements of two vectors and accumulates the result into
a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  6 ++++++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++
 target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
 DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)
 
+DEF_HELPER_FLAGS_4(mve_vabavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
 DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
 DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vcmp_scalar qn rm size mask
 &shl_scalar qda rm size
 &vmaxv qm rda size
+&vabav qn qm rda size
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
   rdahi=%rdahi rdalo=%rdalo
 }
 
+@vabav .... .... .. size:2 .... rda:4 .... .... .... &vabav qn=%qn qm=%qm
+
+VABAV_S 111 0 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+VABAV_U 111 1 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+
 # Logical immediate operations (1 reg and modified-immediate)
 
 # The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
 DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
 DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)
 
+#define DO_VABAV(OP, ESIZE, TYPE)                                       \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn,         \
+                                    void *vm, uint32_t ra)              \
+    {                                                                   \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        TYPE *m = vm, *n = vn;                                          \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            if (mask & 1) {                                             \
+                int64_t n0 = n[H##ESIZE(e)];                            \
+                int64_t m0 = m[H##ESIZE(e)];                            \
+                uint32_t r = n0 >= m0 ? (n0 - m0) : (m0 - n0);          \
+                ra += r;                                                \
+            }                                                           \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+        return ra;                                                      \
+    }
+
+DO_VABAV(vabavsb, 1, int8_t)
+DO_VABAV(vabavsh, 2, int16_t)
+DO_VABAV(vabavsw, 4, int32_t)
+DO_VABAV(vabavub, 1, uint8_t)
+DO_VABAV(vabavuh, 2, uint16_t)
+DO_VABAV(vabavuw, 4, uint32_t)
+
 #define DO_VADDLV(OP, TYPE, LTYPE)                              \
     uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
                                     uint64_t ra)                \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
 typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ DO_VMAXV(VMAXAV, vmaxav)
 DO_VMAXV(VMINV_S, vminvs)
 DO_VMAXV(VMINV_U, vminvu)
 DO_VMAXV(VMINAV, vminav)
+
+static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)
+{
+    /* Absolute difference accumulated across vector */
+    TCGv_ptr qn, qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qm | a->qn) ||
+        !fn || a->rda == 13 || a->rda == 15) {
+        /* Rda cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    qn = mve_qreg_ptr(a->qn);
+    rda = load_reg(s, a->rda);
+    fn(rda, cpu_env, qn, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+    tcg_temp_free_ptr(qn);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_VABAV(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_vabav *a)     \
+    {                                                           \
+        static MVEGenVABAVFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_vabav(s, a, fns[a->size]);                    \
+    }
+
+DO_VABAV(VABAV_S, vabavs)
+DO_VABAV(VABAV_U, vabavu)
-- 
2.20.1

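The VABAV operation above reduces two whole vectors into one scalar, which
a short C model makes obvious. A hedged sketch for the signed 32-bit case,
following the DO_VABAV helper but with predication omitted and
illustrative names rather than QEMU API:

#include <stdint.h>

/* Reference model for MVE VABAV.S32: accumulate |Qn[e] - Qm[e]| for
 * every element pair into a general-purpose register. Differences are
 * formed in int64_t so that e.g. INT32_MAX - INT32_MIN cannot overflow
 * before the truncating accumulation.
 */
static uint32_t vabav_s_w_ref(uint32_t ra, const int32_t *qn,
                              const int32_t *qm, int nelems)
{
    for (int e = 0; e < nelems; e++) {
        int64_t n0 = qn[e], m0 = qm[e];
        ra += (uint32_t)(n0 >= m0 ? n0 - m0 : m0 - n0);
    }
    return ra;
}

The widen-then-subtract ordering is the reason the helper also goes via
int64_t rather than subtracting in the element type.
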
1
We were accidentally permitting decode of Thumb Neon insns even if
1
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
2
the CPU didn't have the FEATURE_NEON bit set, because the feature
2
These take a double-width input, narrow it (possibly saturating) and
3
check was being done before the call to disas_neon_data_insn() and
3
store the result to either the top or bottom half of the output
4
disas_neon_ls_insn() in the Arm decoder but was omitted from the
4
element.
5
Thumb decoder. Push the feature bit check down into the called
6
functions so it is done for both Arm and Thumb encodings.
7
5
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200430181003.21682-3-peter.maydell@linaro.org
12
---
8
---
13
target/arm/translate.c | 16 ++++++++--------
9
target/arm/helper-mve.h | 20 ++++++++++
14
1 file changed, 8 insertions(+), 8 deletions(-)
10
target/arm/mve.decode | 12 ++++++
15
11
target/arm/mve_helper.c | 78 ++++++++++++++++++++++++++++++++++++++
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
target/arm/translate-mve.c | 22 +++++++++++
17
index XXXXXXX..XXXXXXX 100644
13
4 files changed, 132 insertions(+)
18
--- a/target/arm/translate.c
14
19
+++ b/target/arm/translate.c
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
16
index XXXXXXX..XXXXXXX 100644
21
TCGv_i32 tmp2;
17
--- a/target/arm/helper-mve.h
22
TCGv_i64 tmp64;
18
+++ b/target/arm/helper-mve.h
23
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
20
DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+ return 1;
21
DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
22
23
+DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
+DEF_HELPER_FLAGS_3(mve_vmovnth, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
+
28
+DEF_HELPER_FLAGS_3(mve_vqmovunbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
29
+DEF_HELPER_FLAGS_3(mve_vqmovunbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
30
+DEF_HELPER_FLAGS_3(mve_vqmovuntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
31
+DEF_HELPER_FLAGS_3(mve_vqmovunth, TCG_CALL_NO_WG, void, env, ptr, ptr)
32
+
33
+DEF_HELPER_FLAGS_3(mve_vqmovnbsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
34
+DEF_HELPER_FLAGS_3(mve_vqmovnbsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
35
+DEF_HELPER_FLAGS_3(mve_vqmovntsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
36
+DEF_HELPER_FLAGS_3(mve_vqmovntsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
37
+
38
+DEF_HELPER_FLAGS_3(mve_vqmovnbub, TCG_CALL_NO_WG, void, env, ptr, ptr)
39
+DEF_HELPER_FLAGS_3(mve_vqmovnbuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
40
+DEF_HELPER_FLAGS_3(mve_vqmovntub, TCG_CALL_NO_WG, void, env, ptr, ptr)
41
+DEF_HELPER_FLAGS_3(mve_vqmovntuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
42
+
43
DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
44
DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
46
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/mve.decode
49
+++ b/target/arm/mve.decode
50
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
51
VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
52
VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h
53
54
+ VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
55
+ VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
56
+
57
VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
58
}
59
60
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
   VSHLL_BU     111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
   VSHLL_BU     111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h
 
+  VMOVNB       111 1 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_BU    111 1 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
+
   VMULH_U      111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
 }
 
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
   VSHLL_TS     111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
   VSHLL_TS     111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h
 
+  VQMOVUNT     111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_TS    111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
   VRMULH_S     111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
 }
 
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
   VSHLL_TU     111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
   VSHLL_TU     111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h
 
+  VMOVNT       111 1 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_TU    111 1 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
   VRMULH_U     111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
 }
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
 DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
 DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)
 
+#define DO_VMOVN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE)           \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    {                                                           \
+        LTYPE *m = vm;                                          \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            mergemask(&d[H##ESIZE(le * 2 + TOP)],               \
+                      m[H##LESIZE(le)], mask);                  \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+    }
+
+DO_VMOVN(vmovnbb, false, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnbh, false, 2, uint16_t, 4, uint32_t)
+DO_VMOVN(vmovntb, true, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnth, true, 2, uint16_t, 4, uint32_t)
+
+#define DO_VMOVN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN)   \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    {                                                           \
+        LTYPE *m = vm;                                          \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        bool qc = false;                                        \
+        unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            bool sat = false;                                   \
+            TYPE r = FN(m[H##LESIZE(le)], &sat);                \
+            mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask);     \
+            qc |= sat & mask & 1;                               \
+        }                                                       \
+        if (qc) {                                               \
+            env->vfp.qc[0] = qc;                                \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+    }
+
+#define DO_VMOVN_SAT_UB(BOP, TOP, FN)                           \
+    DO_VMOVN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN)       \
+    DO_VMOVN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN)
+
+#define DO_VMOVN_SAT_UH(BOP, TOP, FN)                           \
+    DO_VMOVN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN)      \
+    DO_VMOVN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN)
+
+#define DO_VMOVN_SAT_SB(BOP, TOP, FN)                           \
+    DO_VMOVN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN)         \
+    DO_VMOVN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN)
+
+#define DO_VMOVN_SAT_SH(BOP, TOP, FN)                           \
+    DO_VMOVN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN)        \
+    DO_VMOVN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN)
+
+#define DO_VQMOVN_SB(N, SATP)                                   \
+    do_sat_bhs((int64_t)(N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQMOVN_UB(N, SATP)                                   \
+    do_sat_bhs((uint64_t)(N), 0, UINT8_MAX, SATP)
+#define DO_VQMOVUN_B(N, SATP)                                   \
+    do_sat_bhs((int64_t)(N), 0, UINT8_MAX, SATP)
+
+#define DO_VQMOVN_SH(N, SATP)                                   \
+    do_sat_bhs((int64_t)(N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQMOVN_UH(N, SATP)                                   \
+    do_sat_bhs((uint64_t)(N), 0, UINT16_MAX, SATP)
+#define DO_VQMOVUN_H(N, SATP)                                   \
+    do_sat_bhs((int64_t)(N), 0, UINT16_MAX, SATP)
+
+DO_VMOVN_SAT_SB(vqmovnbsb, vqmovntsb, DO_VQMOVN_SB)
+DO_VMOVN_SAT_SH(vqmovnbsh, vqmovntsh, DO_VQMOVN_SH)
+DO_VMOVN_SAT_UB(vqmovnbub, vqmovntub, DO_VQMOVN_UB)
+DO_VMOVN_SAT_UH(vqmovnbuh, vqmovntuh, DO_VQMOVN_UH)
+DO_VMOVN_SAT_SB(vqmovunbb, vqmovuntb, DO_VQMOVUN_B)
+DO_VMOVN_SAT_SH(vqmovunbh, vqmovunth, DO_VQMOVUN_H)
+
 uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
                            uint32_t shift)
 {
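
[Not part of the patch: an illustrative aside.  The TOP parameter in
the helpers above selects whether each narrowed result lands in the
bottom or the top half of a pair of destination elements.  A minimal
standalone C sketch of the VMOVNB/VMOVNT semantics for the 16->8 bit
case, ignoring predication and the H##ESIZE host-order adjustment
(function name invented for the sketch):

    #include <stdint.h>
    #include <stdio.h>

    /* Narrow 16-bit elements to 8 bits; 'top' = 0 is VMOVNB, 1 is VMOVNT */
    static void vmovn_ref(uint8_t d[16], const uint16_t m[8], int top)
    {
        for (int le = 0; le < 8; le++) {
            /* truncating write; the other byte of each pair is untouched */
            d[le * 2 + top] = (uint8_t)m[le];
        }
    }

    int main(void)
    {
        uint16_t m[8] = { 0x1122, 0x3344, 0x5566, 0x7788,
                          0x99aa, 0xbbcc, 0xddee, 0xff00 };
        uint8_t d[16] = { 0 };
        vmovn_ref(d, m, 0);   /* bottom halves: d[0] = 0x22, d[2] = 0x44, ... */
        vmovn_ref(d, m, 1);   /* top halves:    d[1] = 0x22, d[3] = 0x44, ... */
        printf("%02x %02x %02x %02x\n", d[0], d[1], d[2], d[3]);
        return 0;
    }
]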
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLS, vcls)
 DO_1OP(VABS, vabs)
 DO_1OP(VNEG, vneg)
 
+/* Narrowing moves: only size 0 and 1 are valid */
+#define DO_VMOVN(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_1op *a)       \
+    {                                                           \
+        static MVEGenOneOpFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            NULL,                                               \
+            NULL,                                               \
+        };                                                      \
+        return do_1op(s, a, fns[a->size]);                      \
+    }
+
+DO_VMOVN(VMOVNB, vmovnb)
+DO_VMOVN(VMOVNT, vmovnt)
+DO_VMOVN(VQMOVUNB, vqmovunb)
+DO_VMOVN(VQMOVUNT, vqmovunt)
+DO_VMOVN(VQMOVN_BS, vqmovnbs)
+DO_VMOVN(VQMOVN_TS, vqmovnts)
+DO_VMOVN(VQMOVN_BU, vqmovnbu)
+DO_VMOVN(VQMOVN_TU, vqmovntu)
+
 static bool trans_VREV16(DisasContext *s, arg_1op *a)
 {
     static MVEGenOneOpFn * const fns[] = {
-- 
2.20.1

The MVEGenDualAccOpFn is a bit misnamed, since it is used for
the "long dual accumulate" operations that use a 64-bit
accumulator.  Rename it to MVEGenLongDualAccOpFn so we can
use the former name for the 32-bit accumulator insns.

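[Not part of the patch: for illustration, after this rename the two
function-pointer typedefs differ only in the accumulator width.  A
sketch of how the pair looks once the 32-bit variant is added by a
later patch in this series:

    /* 64-bit accumulator: the "long" insns, VMLALDAV and friends */
    typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr,
                                       TCGv_ptr, TCGv_i64);
    /* 32-bit accumulator: VMLADAV and friends */
    typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                   TCGv_ptr, TCGv_i32);
]
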
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-mve.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
-typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
 }
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
-                             MVEGenDualAccOpFn *fn)
+                             MVEGenLongDualAccOpFn *fn)
 {
     TCGv_ptr qn, qm;
     TCGv_i64 rda;
@@ -XXX,XX +XXX,XX @@ static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 
 static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
         { NULL, NULL },
         { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
         { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
 
 static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
         { NULL, NULL },
         { gen_helper_mve_vmlaldavuh, NULL },
         { gen_helper_mve_vmlaldavuw, NULL },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
 
 static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
         { NULL, NULL },
         { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
         { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
 
 static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
         gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
     };
     return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
 
 static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
         gen_helper_mve_vrmlaldavhuw, NULL,
     };
     return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
 
 static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
 {
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
         gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
     };
     return do_long_dual_acc(s, a, fns[a->x]);
-- 
2.20.1

Implement the MVE VMLADAV and VMLSDAV insns.  Like the VMLALDAV and
VMLSLDAV insns already implemented, these accumulate multiplied
vector elements; but they accumulate a 32-bit result rather than a
64-bit one.

Note that these encodings overlap with what would be RdaHi=0b111 for
VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

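[Not part of the patch: an illustrative aside.  Ignoring predication
and beat-wise execution, each of these insns is a multiply-accumulate
reduction into a 32-bit scalar.  A plain-C reference for the byte
cases (function names invented for the sketch):

    #include <stdint.h>

    /* VMLADAV.S8 rda, qn, qm: all sixteen products are added */
    static int32_t vmladav_s8_ref(const int8_t n[16], const int8_t m[16],
                                  int32_t a)
    {
        for (int e = 0; e < 16; e++) {
            a += (int32_t)n[e] * m[e];
        }
        return a;
    }

    /* VMLSDAV.S8: even-numbered products add, odd-numbered ones subtract */
    static int32_t vmlsdav_s8_ref(const int8_t n[16], const int8_t m[16],
                                  int32_t a)
    {
        for (int e = 0; e < 16; e++) {
            a += (e & 1) ? -((int32_t)n[e] * m[e]) : (int32_t)n[e] * m[e];
        }
        return a;
    }
]
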
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 17 ++++++++++
 target/arm/mve.decode      | 33 +++++++++++++++++---
 target/arm/mve_helper.c    | 41 ++++++++++++++++++++++++
 target/arm/translate-mve.c | 64 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 
+DEF_HELPER_FLAGS_4(mve_vmladavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vmladavsxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
 %size_16 16:1 !function=plus_1
 
 &vmlaldav rdahi rdalo size qn qm x a
+&vmladav rda size qn qm x a
 
 @vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
           qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
 @vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
                qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
-VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
-VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+@vmladav .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+         qn=%qn rda=%rdalo size=%size_16 &vmladav
+@vmladav_nosz .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+              qn=%qn rda=%rdalo size=0 &vmladav
 
-VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+{
+  VMLADAV_S    1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+  VMLALDAV_S   1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+{
+  VMLADAV_U    1111 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+  VMLALDAV_U   1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+
+{
+  VMLSDAV      1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 1 @vmladav
+  VMLSLDAV     1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+}
+
+{
+  VMLSDAV      1111 1110 1111 ... 0 ... . 1110 . 0 . 0 ... 1 @vmladav_nosz
+  VRMLSLDAVH   1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
+}
+
+VMLADAV_S      1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz
+VMLADAV_U      1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz
 
 {
   VMAXV_S      1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
   VMINV_S      1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
   VMAXAV       1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
   VMINAV       1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMLADAV_S    1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
   VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
 }
 
 {
   VMAXV_U      1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
   VMINV_U      1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMLADAV_U    1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
   VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
 }
 
-VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
-
 # Scalar operations
 
 VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
 DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
 DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
 
+/*
+ * Multiply add dual accumulate ops
+ */
+#define DO_DAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC)          \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint32_t a)       \
+    {                                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        TYPE *n = vn, *m = vm;                                  \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            if (mask & 1) {                                     \
+                if (e & 1) {                                    \
+                    a ODDACC                                    \
+                        n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
+                } else {                                        \
+                    a EVENACC                                   \
+                        n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
+                }                                               \
+            }                                                   \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return a;                                               \
+    }
+
+#define DO_DAV_S(INSN, XCHG, EVENACC, ODDACC)           \
+    DO_DAV(INSN##b, 1, int8_t, XCHG, EVENACC, ODDACC)   \
+    DO_DAV(INSN##h, 2, int16_t, XCHG, EVENACC, ODDACC)  \
+    DO_DAV(INSN##w, 4, int32_t, XCHG, EVENACC, ODDACC)
+
+#define DO_DAV_U(INSN, XCHG, EVENACC, ODDACC)           \
+    DO_DAV(INSN##b, 1, uint8_t, XCHG, EVENACC, ODDACC)  \
+    DO_DAV(INSN##h, 2, uint16_t, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##w, 4, uint32_t, XCHG, EVENACC, ODDACC)
+
+DO_DAV_S(vmladavs, false, +=, +=)
+DO_DAV_U(vmladavu, false, +=, +=)
+DO_DAV_S(vmlsdav, false, +=, -=)
+DO_DAV_S(vmladavsx, true, +=, +=)
+DO_DAV_S(vmlsdavx, true, +=, -=)
+
 /*
  * Rounding multiply add long dual accumulate high. In the pseudocode
  * this is implemented with a 72-bit internal accumulator value of which
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TC
 typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
     return do_long_dual_acc(s, a, fns[a->x]);
 }
 
+static bool do_dual_acc(DisasContext *s, arg_vmladav *a, MVEGenDualAccOpFn *fn)
+{
+    TCGv_ptr qn, qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qn) ||
+        !fn) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current rda value, not 0.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        rda = load_reg(s, a->rda);
+    } else {
+        rda = tcg_const_i32(0);
+    }
+
+    fn(rda, cpu_env, qn, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_DUAL_ACC(INSN, FN)                                           \
+    static bool trans_##INSN(DisasContext *s, arg_vmladav *a)           \
+    {                                                                   \
+        static MVEGenDualAccOpFn * const fns[4][2] = {                  \
+            { gen_helper_mve_##FN##b, gen_helper_mve_##FN##xb },        \
+            { gen_helper_mve_##FN##h, gen_helper_mve_##FN##xh },        \
+            { gen_helper_mve_##FN##w, gen_helper_mve_##FN##xw },        \
+            { NULL, NULL },                                             \
+        };                                                              \
+        return do_dual_acc(s, a, fns[a->size][a->x]);                   \
+    }
+
+DO_DUAL_ACC(VMLADAV_S, vmladavs)
+DO_DUAL_ACC(VMLSDAV, vmlsdav)
+
+static bool trans_VMLADAV_U(DisasContext *s, arg_vmladav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { gen_helper_mve_vmladavub, NULL },
+        { gen_helper_mve_vmladavuh, NULL },
+        { gen_helper_mve_vmladavuw, NULL },
+        { NULL, NULL },
+    };
+    return do_dual_acc(s, a, fns[a->size][a->x]);
+}
+
 static void gen_vpst(DisasContext *s, uint32_t mask)
 {
     /*
-- 
2.20.1

Implement the MVE VMLA insn, which multiplies a vector by a scalar
and accumulates into another vector.

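[Not part of the patch: in plain C terms, ignoring predication, VMLA is
the elementwise operation d[i] = n[i] * scalar + d[i], matching the
DO_VMLA macro below.  A minimal reference for the 32-bit element size
(function name invented for the sketch):

    #include <stdint.h>

    static void vmla_u32_ref(uint32_t d[4], const uint32_t n[4], uint32_t m)
    {
        for (int i = 0; i < 4; i++) {
            d[i] = n[i] * m + d[i];    /* wraps modulo 2^32, no saturation */
        }
    }
]
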
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 4 ++++
 target/arm/mve.decode      | 1 +
 target/arm/mve_helper.c    | 5 +++++
 target/arm/translate-mve.c | 1 +
 4 files changed, 11 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i3
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vmlab, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlah, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlaw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 
 # The U bit (28) is don't-care because it does not affect the result
+VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
 
 # Vector add across vector
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
 
+/* Vector by scalar plus vector */
+#define DO_VMLA(D, N, M) ((N) * (M) + (D))
+
+DO_2OP_ACC_SCALAR_U(vmla, DO_VMLA)
+
 /* Vector by vector plus scalar */
 #define DO_VMLAS(D, N, M) ((N) * (D) + (M))
 
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
 DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
 DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
+DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)
 
 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
-- 
2.20.1

Implement the MVE saturating doubling multiply accumulate insns
VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH.  These perform a multiply,
double, add the accumulator shifted by the element size, possibly
round, saturate to twice the element size, then take the high half of
the result.  The *MLAH insns do vector * scalar + vector, and the
*MLASH insns do vector * vector + scalar.

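[Not part of the patch: a standalone model of one 16-bit VQRDMLAH lane,
following the same formula as the do_vqdmlah_h helper below: widen,
multiply and double, add the accumulator shifted up by the element size
plus a rounding constant, saturate to 32 bits, then take the high half.
The function name is invented for the sketch:

    #include <stdint.h>
    #include <stdbool.h>

    static int16_t vqrdmlah_h_ref(int16_t n, int16_t m, int16_t d, bool *sat)
    {
        int64_t r = (int64_t)n * m * 2 + ((int64_t)d << 16) + (1 << 15);
        if (r > INT32_MAX) {
            *sat = true;            /* will set FPSCR.QC */
            r = INT32_MAX;
        } else if (r < INT32_MIN) {
            *sat = true;
            r = INT32_MIN;
        }
        return (int16_t)(r >> 16);
    }
]
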
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 16 +++++++
 target/arm/mve.decode      |  5 ++
 target/arm/mve_helper.c    | 95 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++
 4 files changed, 120 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vqdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
 
+VQRDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 100 .... @2scalar
+VQRDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 100 .... @2scalar
+VQDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 110 .... @2scalar
+VQDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 110 .... @2scalar
+
 # Vector add across vector
 {
   VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
         mve_advance_vpt(env);                                   \
     }
 
+#define DO_2OP_SAT_ACC_SCALAR(OP, ESIZE, TYPE, FN)                     \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,  \
+                                uint32_t rm)                           \
+    {                                                                  \
+        TYPE *d = vd, *n = vn;                                         \
+        TYPE m = rm;                                                   \
+        uint16_t mask = mve_element_mask(env);                         \
+        unsigned e;                                                    \
+        bool qc = false;                                               \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {             \
+            bool sat = false;                                          \
+            mergemask(&d[H##ESIZE(e)],                                 \
+                      FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m, &sat),     \
+                      mask);                                           \
+            qc |= sat & mask & 1;                                      \
+        }                                                              \
+        if (qc) {                                                      \
+            env->vfp.qc[0] = qc;                                       \
+        }                                                              \
+        mve_advance_vpt(env);                                          \
+    }
+
 /* provide unsigned 2-op scalar helpers for all sizes */
 #define DO_2OP_SCALAR_U(OP, FN)                 \
     DO_2OP_SCALAR(OP##b, 1, uint8_t, FN)        \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
 
+static int8_t do_vqdmlah_b(int8_t a, int8_t b, int8_t c, int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 8) + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmlah_h(int16_t a, int16_t b, int16_t c,
+                            int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 16) + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmlah_w(int32_t a, int32_t b, int32_t c,
+                            int round, bool *sat)
+{
+    /*
+     * Architecturally we should do the entire add, double, round
+     * and then check for saturation. We do three saturating adds,
+     * but we need to be careful about the order. If the first
+     * m1 + m2 saturates then it's impossible for the *2+rc to
+     * bring it back into the non-saturated range. However, if
+     * m1 + m2 is negative then it's possible that doing the doubling
+     * would take the intermediate result below INT64_MAX and the
+     * addition of the rounding constant then brings it back in range.
+     * So we add half the rounding constant and half the "c << esize"
+     * before doubling rather than adding the rounding constant after
+     * the doubling.
+     */
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c << 31;
+    int64_t r;
+    if (sadd64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
+/*
+ * The *MLAH insns are vector * scalar + vector;
+ * the *MLASH insns are vector * vector + scalar
+ */
+#define DO_VQDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 0, S)
+#define DO_VQDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 0, S)
+#define DO_VQDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 0, S)
+#define DO_VQRDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 1, S)
+#define DO_VQRDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 1, S)
+#define DO_VQRDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 1, S)
+
+#define DO_VQDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 0, S)
+#define DO_VQDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 0, S)
+#define DO_VQDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 0, S)
+#define DO_VQRDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 1, S)
+#define DO_VQRDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 1, S)
+#define DO_VQRDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 1, S)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlahb, 1, int8_t, DO_VQDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahh, 2, int16_t, DO_VQDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahw, 4, int32_t, DO_VQDMLAH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahb, 1, int8_t, DO_VQRDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahh, 2, int16_t, DO_VQRDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahw, 4, int32_t, DO_VQRDMLAH_W)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlashb, 1, int8_t, DO_VQDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashh, 2, int16_t, DO_VQDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashw, 4, int32_t, DO_VQDMLASH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashb, 1, int8_t, DO_VQRDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashh, 2, int16_t, DO_VQRDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashw, 4, int32_t, DO_VQRDMLASH_W)
+
 /* Vector by scalar plus vector */
 #define DO_VMLA(D, N, M) ((N) * (M) + (D))
 
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
 DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)
+DO_2OP_SCALAR(VQDMLAH, vqdmlah)
+DO_2OP_SCALAR(VQRDMLAH, vqrdmlah)
+DO_2OP_SCALAR(VQDMLASH, vqdmlash)
+DO_2OP_SCALAR(VQRDMLASH, vqrdmlash)
 
 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
 {
-- 
2.20.1

Implement the MVE 1-operand saturating operations VQABS and VQNEG.

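[Not part of the patch: the only input that can saturate here is the
most negative value, whose absolute value or negation does not fit in
the element.  A byte-sized reference mirroring the DO_VQABS_B macro
below (function name invented for the sketch):

    #include <stdint.h>
    #include <stdbool.h>

    static int8_t vqabs_b_ref(int8_t n, bool *sat)
    {
        int64_t r = n < 0 ? -(int64_t)n : (int64_t)n;  /* widen: -(-128) = 128 */
        if (r > INT8_MAX) {
            *sat = true;       /* -128 saturates to 127 and sets FPSCR.QC */
            r = INT8_MAX;
        }
        return (int8_t)r;
    }
]
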
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 50 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
 
+DEF_HELPER_FLAGS_3(mve_vqabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
 DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
 VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
 VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
 
+VQABS 1111 1111 1 . 11 .. 00 ... 0 0111 01 . 0 ... 0 @1op
+VQNEG 1111 1111 1 . 11 .. 00 ... 0 0111 11 . 0 ... 0 @1op
+
 &vdup qd rt size
 # Qd is in the fields usually named Qn
 @vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
     }
     mve_advance_vpt(env);
 }
+
+#define DO_1OP_SAT(OP, ESIZE, TYPE, FN)                                 \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm)         \
+    {                                                                   \
+        TYPE *d = vd, *m = vm;                                          \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        bool qc = false;                                                \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            bool sat = false;                                           \
+            mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)], &sat), mask); \
+            qc |= sat & mask & 1;                                       \
+        }                                                               \
+        if (qc) {                                                       \
+            env->vfp.qc[0] = qc;                                        \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_VQABS_B(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQABS_H(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQABS_W(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT32_MIN, INT32_MAX, SATP)
+
+#define DO_VQNEG_B(N, SATP) do_sat_bhs(-(int64_t)N, INT8_MIN, INT8_MAX, SATP)
+#define DO_VQNEG_H(N, SATP) do_sat_bhs(-(int64_t)N, INT16_MIN, INT16_MAX, SATP)
+#define DO_VQNEG_W(N, SATP) do_sat_bhs(-(int64_t)N, INT32_MIN, INT32_MAX, SATP)
+
+DO_1OP_SAT(vqabsb, 1, int8_t, DO_VQABS_B)
+DO_1OP_SAT(vqabsh, 2, int16_t, DO_VQABS_H)
+DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
+
+DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
+DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
+DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
 DO_1OP(VABS, vabs)
 DO_1OP(VNEG, vneg)
+DO_1OP(VQABS, vqabs)
+DO_1OP(VQNEG, vqneg)
 
 /* Narrowing moves: only size 0 and 1 are valid */
 #define DO_VMOVN(INSN, FN) \
-- 
2.20.1

Implement the MVE VMAXA and VMINA insns, which take the absolute
value of the signed elements in the input vector and then accumulate
the unsigned max or min into the destination vector.

7
with Neon will be moving gradually out to translate-neon.vfp.inc,
8
which we #include into translate.c.
9
10
In order to share the decode files between A32 and T32, we
11
split Neon into 3 parts:
12
* data-processing
13
* load-store
14
* 'shared' encodings
15
16
The first two groups of instructions have similar but not identical
17
A32 and T32 encodings, so we need to manually transform the T32
18
encoding into the A32 one before calling the decoder; the third group
19
covers the Neon instructions which are identical in A32 and T32.
20
4
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Message-id: 20200430181003.21682-4-peter.maydell@linaro.org
24
---
7
---
25
target/arm/neon-dp.decode | 29 ++++++++++++++++++++++++++
8
target/arm/helper-mve.h | 8 ++++++++
26
target/arm/neon-ls.decode | 29 ++++++++++++++++++++++++++
9
target/arm/mve.decode | 4 ++++
27
target/arm/neon-shared.decode | 27 +++++++++++++++++++++++++
10
target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++
28
target/arm/translate-neon.inc.c | 32 +++++++++++++++++++++++++++++
11
target/arm/translate-mve.c | 2 ++
29
target/arm/translate.c | 36 +++++++++++++++++++++++++++++++--
12
4 files changed, 40 insertions(+)
30
target/arm/Makefile.objs | 18 +++++++++++++++++
31
6 files changed, 169 insertions(+), 2 deletions(-)
32
create mode 100644 target/arm/neon-dp.decode
33
create mode 100644 target/arm/neon-ls.decode
34
create mode 100644 target/arm/neon-shared.decode
35
create mode 100644 target/arm/translate-neon.inc.c
36
13
37
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
38
new file mode 100644
15
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX
16
--- a/target/arm/helper-mve.h
40
--- /dev/null
17
+++ b/target/arm/helper-mve.h
41
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
42
@@ -XXX,XX +XXX,XX @@
19
DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
43
+# AArch32 Neon data-processing instruction descriptions
20
DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
44
+#
21
45
+# Copyright (c) 2020 Linaro, Ltd
22
+DEF_HELPER_FLAGS_3(mve_vmaxab, TCG_CALL_NO_WG, void, env, ptr, ptr)
46
+#
23
+DEF_HELPER_FLAGS_3(mve_vmaxah, TCG_CALL_NO_WG, void, env, ptr, ptr)
47
+# This library is free software; you can redistribute it and/or
24
+DEF_HELPER_FLAGS_3(mve_vmaxaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
48
+# modify it under the terms of the GNU Lesser General Public
49
+# License as published by the Free Software Foundation; either
50
+# version 2 of the License, or (at your option) any later version.
51
+#
52
+# This library is distributed in the hope that it will be useful,
53
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
54
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
55
+# Lesser General Public License for more details.
56
+#
57
+# You should have received a copy of the GNU Lesser General Public
58
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
59
+
25
+
60
+#
26
+DEF_HELPER_FLAGS_3(mve_vminab, TCG_CALL_NO_WG, void, env, ptr, ptr)
61
+# This file is processed by scripts/decodetree.py
27
+DEF_HELPER_FLAGS_3(mve_vminah, TCG_CALL_NO_WG, void, env, ptr, ptr)
62
+#
28
+DEF_HELPER_FLAGS_3(mve_vminaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
63
+
29
+
64
+# Encodings for Neon data processing instructions where the T32 encoding
30
DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
65
+# is a simple transformation of the A32 encoding.
31
DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
66
+# More specifically, this file covers instructions where the A32 encoding is
32
DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
67
+# 0b1111_001p_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
68
+# and the T32 encoding is
34
index XXXXXXX..XXXXXXX 100644
69
+# 0b111p_1111_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
35
--- a/target/arm/mve.decode
70
+# This file works on the A32 encoding only; calling code for T32 has to
36
+++ b/target/arm/mve.decode
71
+# transform the insn into the A32 version first.
37
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
72
diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
38
VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
73
new file mode 100644
39
VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
74
index XXXXXXX..XXXXXXX
40
75
--- /dev/null
41
+ VMAXA 111 0 1110 0 . 11 .. 11 ... 0 1110 1 0 . 0 ... 1 @1op
76
+++ b/target/arm/neon-ls.decode
77
@@ -XXX,XX +XXX,XX @@
78
+# AArch32 Neon load/store instruction descriptions
79
+#
80
+# Copyright (c) 2020 Linaro, Ltd
81
+#
82
+# This library is free software; you can redistribute it and/or
83
+# modify it under the terms of the GNU Lesser General Public
84
+# License as published by the Free Software Foundation; either
85
+# version 2 of the License, or (at your option) any later version.
86
+#
87
+# This library is distributed in the hope that it will be useful,
88
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
89
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
90
+# Lesser General Public License for more details.
91
+#
92
+# You should have received a copy of the GNU Lesser General Public
93
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
94
+
42
+
95
+#
43
VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
96
+# This file is processed by scripts/decodetree.py
44
}
97
+#
45
46
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
47
VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
48
VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
49
50
+ VMINA 111 0 1110 0 . 11 .. 11 ... 1 1110 1 0 . 0 ... 1 @1op
98
+
51
+
99
+# Encodings for Neon load/store instructions where the T32 encoding
52
VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
100
+# is a simple transformation of the A32 encoding.
53
}
101
+# More specifically, this file covers instructions where the A32 encoding is
54
102
+# 0b1111_0100_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
55
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
103
+# and the T32 encoding is
56
index XXXXXXX..XXXXXXX 100644
104
+# 0b1111_1001_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
57
--- a/target/arm/mve_helper.c
105
+# This file works on the A32 encoding only; calling code for T32 has to
58
+++ b/target/arm/mve_helper.c
106
+# transform the insn into the A32 version first.
59
@@ -XXX,XX +XXX,XX @@ DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
107
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
60
DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
108
new file mode 100644
61
DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
109
index XXXXXXX..XXXXXXX
62
DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
110
--- /dev/null
111
+++ b/target/arm/neon-shared.decode
112
@@ -XXX,XX +XXX,XX @@
113
+# AArch32 Neon instruction descriptions
114
+#
115
+# Copyright (c) 2020 Linaro, Ltd
116
+#
117
+# This library is free software; you can redistribute it and/or
118
+# modify it under the terms of the GNU Lesser General Public
119
+# License as published by the Free Software Foundation; either
120
+# version 2 of the License, or (at your option) any later version.
121
+#
122
+# This library is distributed in the hope that it will be useful,
123
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
124
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
125
+# Lesser General Public License for more details.
126
+#
127
+# You should have received a copy of the GNU Lesser General Public
128
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
129
+
130
+#
131
+# This file is processed by scripts/decodetree.py
132
+#
133
+
134
+# Encodings for Neon instructions whose encoding is the same for
135
+# both A32 and T32.
136
+
137
+# More specifically, this covers:
138
+# 2reg scalar ext: 0b1111_1110_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
139
+# 3same ext: 0b1111_110x_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
140
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
141
new file mode 100644
142
index XXXXXXX..XXXXXXX
143
--- /dev/null
144
+++ b/target/arm/translate-neon.inc.c
145
@@ -XXX,XX +XXX,XX @@
146
+/*
147
+ * ARM translation: AArch32 Neon instructions
148
+ *
149
+ * Copyright (c) 2003 Fabrice Bellard
150
+ * Copyright (c) 2005-2007 CodeSourcery
151
+ * Copyright (c) 2007 OpenedHand, Ltd.
152
+ * Copyright (c) 2020 Linaro, Ltd.
153
+ *
154
+ * This library is free software; you can redistribute it and/or
155
+ * modify it under the terms of the GNU Lesser General Public
156
+ * License as published by the Free Software Foundation; either
157
+ * version 2 of the License, or (at your option) any later version.
158
+ *
159
+ * This library is distributed in the hope that it will be useful,
160
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
161
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
162
+ * Lesser General Public License for more details.
163
+ *
164
+ * You should have received a copy of the GNU Lesser General Public
165
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
166
+ */
167
+
63
+
168
+/*
64
+/*
169
+ * This file is intended to be included from translate.c; it uses
65
+ * VMAXA, VMINA: vd is unsigned; vm is signed, and we take its
170
+ * some macros and definitions provided by that file.
66
+ * absolute value; we then do an unsigned comparison.
171
+ * It might be possible to convert it to a standalone .c file eventually.
172
+ */
67
+ */
173
+
68
+#define DO_VMAXMINA(OP, ESIZE, STYPE, UTYPE, FN) \
174
+/* Include the generated Neon decoder */
69
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
175
+#include "decode-neon-dp.inc.c"
70
+ { \
176
+#include "decode-neon-ls.inc.c"
71
+ UTYPE *d = vd; \
177
+#include "decode-neon-shared.inc.c"
72
+ STYPE *m = vm; \
178
diff --git a/target/arm/translate.c b/target/arm/translate.c
73
+ uint16_t mask = mve_element_mask(env); \
179
index XXXXXXX..XXXXXXX 100644
74
+ unsigned e; \
180
--- a/target/arm/translate.c
75
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
181
+++ b/target/arm/translate.c
76
+ UTYPE r = DO_ABS(m[H##ESIZE(e)]); \
182
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
77
+ r = FN(d[H##ESIZE(e)], r); \
183
78
+ mergemask(&d[H##ESIZE(e)], r, mask); \
184
#define ARM_CP_RW_BIT (1 << 20)
79
+ } \
185
80
+ mve_advance_vpt(env); \
186
-/* Include the VFP decoder */
187
+/* Include the VFP and Neon decoders */
188
#include "translate-vfp.inc.c"
189
+#include "translate-neon.inc.c"
190
191
static inline void iwmmxt_load_reg(TCGv_i64 var, int reg)
192
{
193
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
194
/* Unconditional instructions. */
195
/* TODO: Perhaps merge these into one decodetree output file. */
196
if (disas_a32_uncond(s, insn) ||
197
- disas_vfp_uncond(s, insn)) {
198
+ disas_vfp_uncond(s, insn) ||
199
+ disas_neon_dp(s, insn) ||
200
+ disas_neon_ls(s, insn) ||
201
+ disas_neon_shared(s, insn)) {
202
return;
203
}
204
/* fall back to legacy decoder */
205
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
206
ARCH(6T2);
207
}
208
209
+ if ((insn & 0xef000000) == 0xef000000) {
210
+ /*
211
+ * T32 encodings 0b111p_1111_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
212
+ * transform into
213
+ * A32 encodings 0b1111_001p_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
214
+ */
215
+ uint32_t a32_insn = (insn & 0xe2ffffff) |
216
+ ((insn & (1 << 28)) >> 4) | (1 << 28);
217
+
218
+ if (disas_neon_dp(s, a32_insn)) {
219
+ return;
220
+ }
221
+ }
81
+ }
222
+
82
+
223
+ if ((insn & 0xff100000) == 0xf9000000) {
83
+DO_VMAXMINA(vmaxab, 1, int8_t, uint8_t, DO_MAX)
224
+ /*
84
+DO_VMAXMINA(vmaxah, 2, int16_t, uint16_t, DO_MAX)
225
+ * T32 encodings 0b1111_1001_ppp0_qqqq_qqqq_qqqq_qqqq_qqqq
85
+DO_VMAXMINA(vmaxaw, 4, int32_t, uint32_t, DO_MAX)
226
+ * transform into
86
+DO_VMAXMINA(vminab, 1, int8_t, uint8_t, DO_MIN)
227
+ * A32 encodings 0b1111_0100_ppp0_qqqq_qqqq_qqqq_qqqq_qqqq
87
+DO_VMAXMINA(vminah, 2, int16_t, uint16_t, DO_MIN)
228
+ */
88
+DO_VMAXMINA(vminaw, 4, int32_t, uint32_t, DO_MIN)
229
+ uint32_t a32_insn = (insn & 0x00ffffff) | 0xf4000000;
89
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
230
+
231
+ if (disas_neon_ls(s, a32_insn)) {
232
+ return;
233
+ }
234
+ }
235
+
236
/*
237
* TODO: Perhaps merge these into one decodetree output file.
238
* Note disas_vfp is written for a32 with cond field in the
239
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
240
*/
241
if (disas_t32(s, insn) ||
242
disas_vfp_uncond(s, insn) ||
243
+ disas_neon_shared(s, insn) ||
244
((insn >> 28) == 0xe && disas_vfp(s, insn))) {
245
return;
246
}
247
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
248
index XXXXXXX..XXXXXXX 100644
90
index XXXXXXX..XXXXXXX 100644
249
--- a/target/arm/Makefile.objs
91
--- a/target/arm/translate-mve.c
250
+++ b/target/arm/Makefile.objs
92
+++ b/target/arm/translate-mve.c
251
@@ -XXX,XX +XXX,XX @@ target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
93
@@ -XXX,XX +XXX,XX @@ DO_1OP(VABS, vabs)
252
     $(PYTHON) $(DECODETREE) --decode disas_sve -o $@ $<,\
94
DO_1OP(VNEG, vneg)
253
     "GEN", $(TARGET_DIR)$@)
95
DO_1OP(VQABS, vqabs)
254
96
DO_1OP(VQNEG, vqneg)
255
+target/arm/decode-neon-shared.inc.c: $(SRC_PATH)/target/arm/neon-shared.decode $(DECODETREE)
97
+DO_1OP(VMAXA, vmaxa)
256
+    $(call quiet-command,\
98
+DO_1OP(VMINA, vmina)
257
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_shared -o $@ $<,\
99
258
+     "GEN", $(TARGET_DIR)$@)
100
/* Narrowing moves: only size 0 and 1 are valid */
259
+
101
#define DO_VMOVN(INSN, FN) \
260
+target/arm/decode-neon-dp.inc.c: $(SRC_PATH)/target/arm/neon-dp.decode $(DECODETREE)
261
+    $(call quiet-command,\
262
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_dp -o $@ $<,\
263
+     "GEN", $(TARGET_DIR)$@)
264
+
265
+target/arm/decode-neon-ls.inc.c: $(SRC_PATH)/target/arm/neon-ls.decode $(DECODETREE)
266
+    $(call quiet-command,\
267
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_ls -o $@ $<,\
268
+     "GEN", $(TARGET_DIR)$@)
269
+
270
target/arm/decode-vfp.inc.c: $(SRC_PATH)/target/arm/vfp.decode $(DECODETREE)
271
    $(call quiet-command,\
272
     $(PYTHON) $(DECODETREE) --static-decode disas_vfp -o $@ $<,\
273
@@ -XXX,XX +XXX,XX @@ target/arm/decode-t16.inc.c: $(SRC_PATH)/target/arm/t16.decode $(DECODETREE)
274
     "GEN", $(TARGET_DIR)$@)
275
276
target/arm/translate-sve.o: target/arm/decode-sve.inc.c
277
+target/arm/translate.o: target/arm/decode-neon-shared.inc.c
278
+target/arm/translate.o: target/arm/decode-neon-dp.inc.c
279
+target/arm/translate.o: target/arm/decode-neon-ls.inc.c
280
target/arm/translate.o: target/arm/decode-vfp.inc.c
281
target/arm/translate.o: target/arm/decode-vfp-uncond.inc.c
282
target/arm/translate.o: target/arm/decode-a32.inc.c
283
--
102
--
284
2.20.1
103
2.20.1
285
104
286
105
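For reference, the two T32-to-A32 transforms used in the translate.c hunks above reduce to simple bit manipulation; a minimal standalone C sketch (illustrative only; the function names are invented, not part of the patch):

    #include <stdint.h>

    /*
     * Neon data-processing: T32 0b111p_1111_qqqq_... maps to
     * A32 0b1111_001p_qqqq_...: the mask clears bits 28:26 and 24,
     * the T32 'p' bit (bit 28) moves down to bit 24, and bit 28 is
     * forced to 1.
     */
    static uint32_t t32_neon_dp_to_a32(uint32_t insn)
    {
        return (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
    }

    /* Neon load/store: T32 0b1111_1001_... maps to A32 0b1111_0100_... */
    static uint32_t t32_neon_ls_to_a32(uint32_t insn)
    {
        return (insn & 0x00ffffff) | 0xf4000000;
    }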
1
Convert the VFM[AS]L (vector) insns to decodetree. This is the last
1
Implement the MVE VMOV forms that move data between 2 general-purpose
2
insn in the legacy decoder for the 3same_ext group, so we can
2
registers and 2 32-bit lanes in a vector register.
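The two 32-bit lanes involved are lane 'idx' of each of the two Dregs that make up Qd; a rough standalone C sketch of the to-GP direction (predication and ECI handling omitted, names hypothetical):

    #include <stdint.h>

    /*
     * Qd viewed as four 32-bit lanes (little-endian): q[0..1] are the
     * lanes of D(2*qd), q[2..3] those of D(2*qd+1). idx is 0 or 1.
     */
    static void vmov_to_2gp(uint32_t *rt, uint32_t *rt2,
                            const uint32_t q[4], int idx)
    {
        *rt  = q[idx];      /* lane idx of the low Dreg  */
        *rt2 = q[idx + 2];  /* lane idx of the high Dreg */
    }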
3
delete the legacy decoder function for the group entirely.
4
5
Note that in disas_thumb2_insn() the parts of this encoding space
6
where the decodetree decoder returns false will correctly be directed
7
to illegal_op by the "(insn & (1 << 28))" check so they won't fall
8
into disas_coproc_insn() by mistake.
9
3
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200430181003.21682-8-peter.maydell@linaro.org
13
---
6
---
14
target/arm/neon-shared.decode | 6 +++
7
target/arm/translate-a32.h | 1 +
15
target/arm/translate-neon.inc.c | 31 +++++++++++
8
target/arm/mve.decode | 4 ++
16
target/arm/translate.c | 92 +--------------------------------
9
target/arm/translate-mve.c | 85 ++++++++++++++++++++++++++++++++++++++
17
3 files changed, 38 insertions(+), 91 deletions(-)
10
target/arm/translate-vfp.c | 2 +-
11
4 files changed, 91 insertions(+), 1 deletion(-)
18
12
19
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
13
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
20
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/neon-shared.decode
15
--- a/target/arm/translate-a32.h
22
+++ b/target/arm/neon-shared.decode
16
+++ b/target/arm/translate-a32.h
23
@@ -XXX,XX +XXX,XX @@ VCADD 1111 110 rot:1 1 . 0 size:1 .... .... 1000 . q:1 . 0 .... \
17
@@ -XXX,XX +XXX,XX @@ void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
24
# VUDOT and VSDOT
18
void clear_eci_state(DisasContext *s);
25
VDOT 1111 110 00 . 10 .... .... 1101 . q:1 . u:1 .... \
19
bool mve_eci_check(DisasContext *s);
26
vm=%vm_dp vn=%vn_dp vd=%vd_dp
20
void mve_update_and_store_eci(DisasContext *s);
21
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);
22
23
static inline TCGv_i32 load_cpu_offset(int offset)
24
{
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/mve.decode
28
+++ b/target/arm/mve.decode
29
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
30
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
31
size=2 p=1
32
33
+# Moves between 2 32-bit vector lanes and 2 general-purpose registers
34
+VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
35
+VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
27
+
36
+
28
+# VFM[AS]L
37
# Vector 2-op
29
+VFML 1111 110 0 s:1 . 10 .... .... 1000 . 0 . 1 .... \
38
VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
30
+ vm=%vm_sp vn=%vn_sp vd=%vd_dp q=0
39
VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
31
+VFML 1111 110 0 s:1 . 10 .... .... 1000 . 1 . 1 .... \
40
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
32
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp q=1
33
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
34
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-neon.inc.c
42
--- a/target/arm/translate-mve.c
36
+++ b/target/arm/translate-neon.inc.c
43
+++ b/target/arm/translate-mve.c
37
@@ -XXX,XX +XXX,XX @@ static bool trans_VDOT(DisasContext *s, arg_VDOT *a)
44
@@ -XXX,XX +XXX,XX @@ static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)
38
opr_sz, opr_sz, 0, fn_gvec);
45
39
return true;
46
DO_VABAV(VABAV_S, vabavs)
40
}
47
DO_VABAV(VABAV_U, vabavu)
41
+
48
+
42
+static bool trans_VFML(DisasContext *s, arg_VFML *a)
49
+static bool trans_VMOV_to_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
43
+{
50
+{
44
+ int opr_sz;
51
+ /*
52
+ * VMOV two 32-bit vector lanes to two general-purpose registers.
53
+ * This insn is not predicated but it is subject to beat-wise
54
+ * execution if it is not in an IT block. For us this means
55
+ * only that if PSR.ECI says we should not be executing the beat
56
+ * corresponding to the lane of the vector register being accessed
57
+ * then we should skip performing the move, and that we need to do
58
+ * the usual check for bad ECI state and advance of ECI state.
59
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
60
+ */
61
+ TCGv_i32 tmp;
62
+ int vd;
45
+
63
+
46
+ if (!dc_isar_feature(aa32_fhm, s)) {
64
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
65
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15 ||
66
+ a->rt == a->rt2) {
67
+ /* Rt/Rt2 cases are UNPREDICTABLE */
47
+ return false;
68
+ return false;
48
+ }
69
+ }
49
+
70
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
50
+ /* UNDEF accesses to D16-D31 if they don't exist. */
51
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
52
+ (a->vd & 0x10)) {
53
+ return false;
54
+ }
55
+
56
+ if (a->vd & a->q) {
57
+ return false;
58
+ }
59
+
60
+ if (!vfp_access_check(s)) {
61
+ return true;
71
+ return true;
62
+ }
72
+ }
63
+
73
+
64
+ opr_sz = (1 + a->q) * 8;
74
+ /* Convert Qreg index to Dreg for read_neon_element32() etc */
65
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
75
+ vd = a->qd * 2;
66
+ vfp_reg_offset(a->q, a->vn),
76
+
67
+ vfp_reg_offset(a->q, a->vm),
77
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
68
+ cpu_env, opr_sz, opr_sz, a->s, /* is_2 == 0 */
78
+ tmp = tcg_temp_new_i32();
69
+ gen_helper_gvec_fmlal_a32);
79
+ read_neon_element32(tmp, vd, a->idx, MO_32);
80
+ store_reg(s, a->rt, tmp);
81
+ }
82
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
83
+ tmp = tcg_temp_new_i32();
84
+ read_neon_element32(tmp, vd + 1, a->idx, MO_32);
85
+ store_reg(s, a->rt2, tmp);
86
+ }
87
+
88
+ mve_update_and_store_eci(s);
70
+ return true;
89
+ return true;
71
+}
90
+}
72
diff --git a/target/arm/translate.c b/target/arm/translate.c
91
+
92
+static bool trans_VMOV_from_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
93
+{
94
+ /*
95
+ * VMOV two general-purpose registers to two 32-bit vector lanes.
96
+ * This insn is not predicated but it is subject to beat-wise
97
+ * execution if it is not in an IT block. For us this means
98
+ * only that if PSR.ECI says we should not be executing the beat
99
+ * corresponding to the lane of the vector register being accessed
100
+ * then we should skip performing the move, and that we need to do
101
+ * the usual check for bad ECI state and advance of ECI state.
102
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
103
+ */
104
+ TCGv_i32 tmp;
105
+ int vd;
106
+
107
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
108
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15) {
109
+ /* Rt/Rt2 cases are UNPREDICTABLE */
110
+ return false;
111
+ }
112
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
113
+ return true;
114
+ }
115
+
116
+ /* Convert Qreg idx to Dreg for read_neon_element32() etc */
117
+ vd = a->qd * 2;
118
+
119
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
120
+ tmp = load_reg(s, a->rt);
121
+ write_neon_element32(tmp, vd, a->idx, MO_32);
122
+ tcg_temp_free_i32(tmp);
123
+ }
124
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
125
+ tmp = load_reg(s, a->rt2);
126
+ write_neon_element32(tmp, vd + 1, a->idx, MO_32);
127
+ tcg_temp_free_i32(tmp);
128
+ }
129
+
130
+ mve_update_and_store_eci(s);
131
+ return true;
132
+}
133
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
73
index XXXXXXX..XXXXXXX 100644
134
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/translate.c
135
--- a/target/arm/translate-vfp.c
75
+++ b/target/arm/translate.c
136
+++ b/target/arm/translate-vfp.c
76
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
137
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
77
return 0;
138
return true;
78
}
139
}
79
140
80
-/* Advanced SIMD three registers of the same length extension.
141
-static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
81
- * 31 25 23 22 20 16 12 11 10 9 8 3 0
142
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
82
- * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
143
{
83
- * | 1 1 1 1 1 1 0 | op1 | D | op2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
144
/*
84
- * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
145
* In a CPU with MVE, the VMOV (vector lane to general-purpose register)
85
- */
86
-static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
87
-{
88
- gen_helper_gvec_3 *fn_gvec = NULL;
89
- gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
90
- int rd, rn, rm, opr_sz;
91
- int data = 0;
92
- int off_rn, off_rm;
93
- bool is_long = false, q = extract32(insn, 6, 1);
94
- bool ptr_is_env = false;
95
-
96
- if ((insn & 0xff300f10) == 0xfc200810) {
97
- /* VFM[AS]L -- 1111 1100 S.10 .... .... 1000 .Q.1 .... */
98
- int is_s = extract32(insn, 23, 1);
99
- if (!dc_isar_feature(aa32_fhm, s)) {
100
- return 1;
101
- }
102
- is_long = true;
103
- data = is_s; /* is_2 == 0 */
104
- fn_gvec_ptr = gen_helper_gvec_fmlal_a32;
105
- ptr_is_env = true;
106
- } else {
107
- return 1;
108
- }
109
-
110
- VFP_DREG_D(rd, insn);
111
- if (rd & q) {
112
- return 1;
113
- }
114
- if (q || !is_long) {
115
- VFP_DREG_N(rn, insn);
116
- VFP_DREG_M(rm, insn);
117
- if ((rn | rm) & q & !is_long) {
118
- return 1;
119
- }
120
- off_rn = vfp_reg_offset(1, rn);
121
- off_rm = vfp_reg_offset(1, rm);
122
- } else {
123
- rn = VFP_SREG_N(insn);
124
- rm = VFP_SREG_M(insn);
125
- off_rn = vfp_reg_offset(0, rn);
126
- off_rm = vfp_reg_offset(0, rm);
127
- }
128
-
129
- if (s->fp_excp_el) {
130
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
131
- syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
132
- return 0;
133
- }
134
- if (!s->vfp_enabled) {
135
- return 1;
136
- }
137
-
138
- opr_sz = (1 + q) * 8;
139
- if (fn_gvec_ptr) {
140
- TCGv_ptr ptr;
141
- if (ptr_is_env) {
142
- ptr = cpu_env;
143
- } else {
144
- ptr = get_fpstatus_ptr(1);
145
- }
146
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd), off_rn, off_rm, ptr,
147
- opr_sz, opr_sz, data, fn_gvec_ptr);
148
- if (!ptr_is_env) {
149
- tcg_temp_free_ptr(ptr);
150
- }
151
- } else {
152
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd), off_rn, off_rm,
153
- opr_sz, opr_sz, data, fn_gvec);
154
- }
155
- return 0;
156
-}
157
-
158
/* Advanced SIMD two registers and a scalar extension.
159
* 31 24 23 22 20 16 12 11 10 9 8 3 0
160
* +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
161
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
162
}
163
}
164
}
165
- } else if ((insn & 0x0e000a00) == 0x0c000800
166
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
167
- if (disas_neon_insn_3same_ext(s, insn)) {
168
- goto illegal_op;
169
- }
170
- return;
171
} else if ((insn & 0x0f000a00) == 0x0e000800
172
&& arm_dc_feature(s, ARM_FEATURE_V8)) {
173
if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
174
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
175
}
176
break;
177
}
178
- if ((insn & 0xfe000a00) == 0xfc000800
179
+ if ((insn & 0xff000a00) == 0xfe000800
180
&& arm_dc_feature(s, ARM_FEATURE_V8)) {
181
/* The Thumb2 and ARM encodings are identical. */
182
- if (disas_neon_insn_3same_ext(s, insn)) {
183
- goto illegal_op;
184
- }
185
- } else if ((insn & 0xff000a00) == 0xfe000800
186
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
187
- /* The Thumb2 and ARM encodings are identical. */
188
if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
189
goto illegal_op;
190
}
191
--
146
--
192
2.20.1
147
2.20.1
193
148
194
149
1
Convert the Neon "load/store single structure to one lane" insns to
1
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
2
decodetree.
2
(subject to both predication and to beatwise execution).
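The helper's P0 update reduces to one expression; a minimal C sketch under the same masking conventions as the helper in the diff below:

    #include <stdint.h>

    /*
     * eci_mask bit clear: beat not executed, leave the P0 bit unchanged;
     * mask bit clear: lane predicated, its P0 bit becomes 0;
     * otherwise the P0 bit is inverted.
     */
    static uint16_t vpnot_p0(uint16_t p0, uint16_t mask, uint16_t eci_mask)
    {
        uint16_t beatpred = ~p0 & mask;
        return (p0 & ~eci_mask) | (beatpred & eci_mask);
    }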
3
4
As this is the last set of insns in the neon load/store group,
5
we can remove the whole disas_neon_ls_insn() function.
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200430181003.21682-14-peter.maydell@linaro.org
10
---
6
---
11
target/arm/neon-ls.decode | 11 +++
7
target/arm/helper-mve.h | 1 +
12
target/arm/translate-neon.inc.c | 89 +++++++++++++++++++
8
target/arm/mve.decode | 1 +
13
target/arm/translate.c | 147 --------------------------------
9
target/arm/mve_helper.c | 17 +++++++++++++++++
14
3 files changed, 100 insertions(+), 147 deletions(-)
10
target/arm/translate-mve.c | 19 +++++++++++++++++++
11
4 files changed, 38 insertions(+)
15
12
16
diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/neon-ls.decode
15
--- a/target/arm/helper-mve.h
19
+++ b/target/arm/neon-ls.decode
16
+++ b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@ VLDST_multiple 1111 0100 0 . l:1 0 rn:4 .... itype:4 size:2 align:2 rm:4 \
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
18
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
VLD_all_lanes 1111 0100 1 . 1 0 rn:4 .... 11 n:2 size:2 t:1 a:1 rm:4 \
19
23
vd=%vd_dp
20
DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+
21
+DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)
25
+# Neon load/store single structure to one lane
22
26
+%imm1_5_p1 5:1 !function=plus1
23
DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+%imm1_6_p1 6:1 !function=plus1
24
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
29
+VLDST_single 1111 0100 1 . l:1 0 rn:4 .... 00 n:2 reg_idx:3 align:1 rm:4 \
30
+ vd=%vd_dp size=0 stride=1
31
+VLDST_single 1111 0100 1 . l:1 0 rn:4 .... 01 n:2 reg_idx:2 align:2 rm:4 \
32
+ vd=%vd_dp size=1 stride=%imm1_5_p1
33
+VLDST_single 1111 0100 1 . l:1 0 rn:4 .... 10 n:2 reg_idx:1 align:3 rm:4 \
34
+ vd=%vd_dp size=2 stride=%imm1_6_p1
35
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
36
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/translate-neon.inc.c
27
--- a/target/arm/mve.decode
38
+++ b/target/arm/translate-neon.inc.c
28
+++ b/target/arm/mve.decode
39
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@ VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
40
* It might be possible to convert it to a standalone .c file eventually.
30
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
41
*/
31
42
32
{
43
+static inline int plus1(DisasContext *s, int x)
33
+ VPNOT 1111 1110 0 0 11 000 1 000 0 1111 0100 1101
34
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
35
VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
36
}
37
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/mve_helper.c
40
+++ b/target/arm/mve_helper.c
41
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
42
mve_advance_vpt(env);
43
}
44
45
+void HELPER(mve_vpnot)(CPUARMState *env)
44
+{
46
+{
45
+ return x + 1;
47
+ /*
48
+ * P0 bits for unexecuted beats (where eci_mask is 0) are unchanged.
49
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
50
+ * P0 bits otherwise are inverted.
51
+ * (This is the same logic as VCMP.)
52
+ * This insn is itself subject to predication and to beat-wise execution,
53
+ * and after it executes VPT state advances in the usual way.
54
+ */
55
+ uint16_t mask = mve_element_mask(env);
56
+ uint16_t eci_mask = mve_eci_mask(env);
57
+ uint16_t beatpred = ~env->v7m.vpr & mask;
58
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (beatpred & eci_mask);
59
+ mve_advance_vpt(env);
46
+}
60
+}
47
+
61
+
48
/* Include the generated Neon decoder */
62
#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
49
#include "decode-neon-dp.inc.c"
63
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
50
#include "decode-neon-ls.inc.c"
64
{ \
51
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
65
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
52
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate-mve.c
68
+++ b/target/arm/translate-mve.c
69
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
53
return true;
70
return true;
54
}
71
}
55
+
72
56
+static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
73
+static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
57
+{
74
+{
58
+ /* Neon load/store single structure to one lane */
75
+ /*
59
+ int reg;
76
+ * Invert the predicate in VPR.P0. We have to call out to
60
+ int nregs = a->n + 1;
77
+ * a helper because this insn itself is beatwise and can
61
+ int vd = a->vd;
78
+ * be predicated.
62
+ TCGv_i32 addr, tmp;
79
+ */
63
+
80
+ if (!dc_isar_feature(aa32_mve, s)) {
64
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
65
+ return false;
81
+ return false;
66
+ }
82
+ }
67
+
83
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
68
+ /* UNDEF accesses to D16-D31 if they don't exist */
69
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
70
+ return false;
71
+ }
72
+
73
+ /* Catch the UNDEF cases. This is unavoidably a bit messy. */
74
+ switch (nregs) {
75
+ case 1:
76
+ if (((a->align & (1 << a->size)) != 0) ||
77
+ (a->size == 2 && ((a->align & 3) == 1 || (a->align & 3) == 2))) {
78
+ return false;
79
+ }
80
+ break;
81
+ case 3:
82
+ if ((a->align & 1) != 0) {
83
+ return false;
84
+ }
85
+ /* fall through */
86
+ case 2:
87
+ if (a->size == 2 && (a->align & 2) != 0) {
88
+ return false;
89
+ }
90
+ break;
91
+ case 4:
92
+ if ((a->size == 2) && ((a->align & 3) == 3)) {
93
+ return false;
94
+ }
95
+ break;
96
+ default:
97
+ abort();
98
+ }
99
+ if ((vd + a->stride * (nregs - 1)) > 31) {
100
+ /*
101
+ * Attempts to write off the end of the register file are
102
+ * UNPREDICTABLE; we choose to UNDEF because otherwise we would
103
+ * access off the end of the array that holds the register data.
104
+ */
105
+ return false;
106
+ }
107
+
108
+ if (!vfp_access_check(s)) {
109
+ return true;
84
+ return true;
110
+ }
85
+ }
111
+
86
+
112
+ tmp = tcg_temp_new_i32();
87
+ gen_helper_mve_vpnot(cpu_env);
113
+ addr = tcg_temp_new_i32();
88
+ mve_update_eci(s);
114
+ load_reg_var(s, addr, a->rn);
115
+ /*
116
+ * TODO: if we implemented alignment exceptions, we should check
117
+ * addr against the alignment encoded in a->align here.
118
+ */
119
+ for (reg = 0; reg < nregs; reg++) {
120
+ if (a->l) {
121
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
122
+ s->be_data | a->size);
123
+ neon_store_element(vd, a->reg_idx, a->size, tmp);
124
+ } else { /* Store */
125
+ neon_load_element(tmp, vd, a->reg_idx, a->size);
126
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
127
+ s->be_data | a->size);
128
+ }
129
+ vd += a->stride;
130
+ tcg_gen_addi_i32(addr, addr, 1 << a->size);
131
+ }
132
+ tcg_temp_free_i32(addr);
133
+ tcg_temp_free_i32(tmp);
134
+
135
+ gen_neon_ldst_base_update(s, a->rm, a->rn, (1 << a->size) * nregs);
136
+
137
+ return true;
89
+ return true;
138
+}
90
+}
139
diff --git a/target/arm/translate.c b/target/arm/translate.c
91
+
140
index XXXXXXX..XXXXXXX 100644
92
static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
141
--- a/target/arm/translate.c
142
+++ b/target/arm/translate.c
143
@@ -XXX,XX +XXX,XX @@ static void gen_neon_trn_u16(TCGv_i32 t0, TCGv_i32 t1)
144
tcg_temp_free_i32(rd);
145
}
146
147
-
148
-/* Translate a NEON load/store element instruction. Return nonzero if the
149
- instruction is invalid. */
150
-static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
151
-{
152
- int rd, rn, rm;
153
- int nregs;
154
- int stride;
155
- int size;
156
- int reg;
157
- int load;
158
- TCGv_i32 addr;
159
- TCGv_i32 tmp;
160
-
161
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
162
- return 1;
163
- }
164
-
165
- /* FIXME: this access check should not take precedence over UNDEF
166
- * for invalid encodings; we will generate incorrect syndrome information
167
- * for attempts to execute invalid vfp/neon encodings with FP disabled.
168
- */
169
- if (s->fp_excp_el) {
170
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
171
- syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
172
- return 0;
173
- }
174
-
175
- if (!s->vfp_enabled)
176
- return 1;
177
- VFP_DREG_D(rd, insn);
178
- rn = (insn >> 16) & 0xf;
179
- rm = insn & 0xf;
180
- load = (insn & (1 << 21)) != 0;
181
- if ((insn & (1 << 23)) == 0) {
182
- /* Load store all elements -- handled already by decodetree */
183
- return 1;
184
- } else {
185
- size = (insn >> 10) & 3;
186
- if (size == 3) {
187
- /* Load single element to all lanes -- handled by decodetree */
188
- return 1;
189
- } else {
190
- /* Single element. */
191
- int idx = (insn >> 4) & 0xf;
192
- int reg_idx;
193
- switch (size) {
194
- case 0:
195
- reg_idx = (insn >> 5) & 7;
196
- stride = 1;
197
- break;
198
- case 1:
199
- reg_idx = (insn >> 6) & 3;
200
- stride = (insn & (1 << 5)) ? 2 : 1;
201
- break;
202
- case 2:
203
- reg_idx = (insn >> 7) & 1;
204
- stride = (insn & (1 << 6)) ? 2 : 1;
205
- break;
206
- default:
207
- abort();
208
- }
209
- nregs = ((insn >> 8) & 3) + 1;
210
- /* Catch the UNDEF cases. This is unavoidably a bit messy. */
211
- switch (nregs) {
212
- case 1:
213
- if (((idx & (1 << size)) != 0) ||
214
- (size == 2 && ((idx & 3) == 1 || (idx & 3) == 2))) {
215
- return 1;
216
- }
217
- break;
218
- case 3:
219
- if ((idx & 1) != 0) {
220
- return 1;
221
- }
222
- /* fall through */
223
- case 2:
224
- if (size == 2 && (idx & 2) != 0) {
225
- return 1;
226
- }
227
- break;
228
- case 4:
229
- if ((size == 2) && ((idx & 3) == 3)) {
230
- return 1;
231
- }
232
- break;
233
- default:
234
- abort();
235
- }
236
- if ((rd + stride * (nregs - 1)) > 31) {
237
- /* Attempts to write off the end of the register file
238
- * are UNPREDICTABLE; we choose to UNDEF because otherwise
239
- * the neon_load_reg() would write off the end of the array.
240
- */
241
- return 1;
242
- }
243
- tmp = tcg_temp_new_i32();
244
- addr = tcg_temp_new_i32();
245
- load_reg_var(s, addr, rn);
246
- for (reg = 0; reg < nregs; reg++) {
247
- if (load) {
248
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
249
- s->be_data | size);
250
- neon_store_element(rd, reg_idx, size, tmp);
251
- } else { /* Store */
252
- neon_load_element(tmp, rd, reg_idx, size);
253
- gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
254
- s->be_data | size);
255
- }
256
- rd += stride;
257
- tcg_gen_addi_i32(addr, addr, 1 << size);
258
- }
259
- tcg_temp_free_i32(addr);
260
- tcg_temp_free_i32(tmp);
261
- stride = nregs * (1 << size);
262
- }
263
- }
264
- if (rm != 15) {
265
- TCGv_i32 base;
266
-
267
- base = load_reg(s, rn);
268
- if (rm == 13) {
269
- tcg_gen_addi_i32(base, base, stride);
270
- } else {
271
- TCGv_i32 index;
272
- index = load_reg(s, rm);
273
- tcg_gen_add_i32(base, base, index);
274
- tcg_temp_free_i32(index);
275
- }
276
- store_reg(s, rn, base);
277
- }
278
- return 0;
279
-}
280
-
281
static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
282
{
93
{
283
switch (size) {
94
/* VADDV: vector add across vector */
284
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
285
}
286
return;
287
}
288
- if ((insn & 0x0f100000) == 0x04000000) {
289
- /* NEON load/store. */
290
- if (disas_neon_ls_insn(s, insn)) {
291
- goto illegal_op;
292
- }
293
- return;
294
- }
295
if ((insn & 0x0e000f00) == 0x0c000100) {
296
if (arm_dc_feature(s, ARM_FEATURE_IWMMXT)) {
297
/* iWMMXt register transfer. */
298
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
299
}
300
break;
301
case 12:
302
- if ((insn & 0x01100000) == 0x01000000) {
303
- if (disas_neon_ls_insn(s, insn)) {
304
- goto illegal_op;
305
- }
306
- break;
307
- }
308
goto illegal_op;
309
default:
310
illegal_op:
311
--
95
--
312
2.20.1
96
2.20.1
313
97
314
98
1
Convert the Neon "load single structure to all lanes" insns to
1
Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
2
decodetree.
2
that any element at index Rn or greater is predicated. As
3
with VPNOT, this insn itself is predicable and subject to beatwise
4
execution.
5
6
The calculation of the mask is the same as is used to determine
7
ltpmask in mve_element_mask(), but we precalculate masklen in
8
generated code to avoid needing four helpers specialized by size.
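Concretely, the precalculated value is Rn scaled by the element size, saturated at the 16-byte vector length; a one-function C sketch of what the generated code computes:

    #include <stdint.h>

    /*
     * masklen in bytes: rn elements of (1 << size) bytes each,
     * capped at 16 (the full vector) once rn exceeds 16 >> size.
     */
    static uint32_t vctp_masklen(uint32_t rn, unsigned size)
    {
        return (rn <= (16u >> size)) ? (rn << size) : 16;
    }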
9
10
We put the decode line in with the low-overhead-loop insns in
11
t32.decode because it's logically part of that collection of insn
12
patterns, even though it is an MVE-only insn.
3
13
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200430181003.21682-13-peter.maydell@linaro.org
7
---
16
---
8
target/arm/neon-ls.decode | 5 +++
17
target/arm/helper-mve.h | 2 ++
9
target/arm/translate-neon.inc.c | 73 +++++++++++++++++++++++++++++++++
18
target/arm/translate-a32.h | 1 +
10
target/arm/translate.c | 55 +------------------------
19
target/arm/t32.decode | 1 +
11
3 files changed, 80 insertions(+), 53 deletions(-)
20
target/arm/mve_helper.c | 20 ++++++++++++++++++++
21
target/arm/translate-mve.c | 2 +-
22
target/arm/translate.c | 33 +++++++++++++++++++++++++++++++++
23
6 files changed, 58 insertions(+), 1 deletion(-)
12
24
13
diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
25
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-ls.decode
27
--- a/target/arm/helper-mve.h
16
+++ b/target/arm/neon-ls.decode
28
+++ b/target/arm/helper-mve.h
17
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
18
30
DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
VLDST_multiple 1111 0100 0 . l:1 0 rn:4 .... itype:4 size:2 align:2 rm:4 \
31
DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)
20
vd=%vd_dp
32
33
+DEF_HELPER_FLAGS_2(mve_vctp, TCG_CALL_NO_WG, void, env, i32)
21
+
34
+
22
+# Neon load single element to all lanes
35
DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
36
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
37
DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
38
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-a32.h
41
+++ b/target/arm/translate-a32.h
42
@@ -XXX,XX +XXX,XX @@ long neon_element_offset(int reg, int element, MemOp memop);
43
void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
44
void clear_eci_state(DisasContext *s);
45
bool mve_eci_check(DisasContext *s);
46
+void mve_update_eci(DisasContext *s);
47
void mve_update_and_store_eci(DisasContext *s);
48
bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);
49
50
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/t32.decode
53
+++ b/target/arm/t32.decode
54
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
55
# This is DLSTP
56
DLS 1111 0 0000 0 size:2 rn:4 1110 0000 0000 0001
57
}
58
+ VCTP 1111 0 0000 0 size:2 rn:4 1110 1000 0000 0001
59
]
60
}
61
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/mve_helper.c
64
+++ b/target/arm/mve_helper.c
65
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpnot)(CPUARMState *env)
66
mve_advance_vpt(env);
67
}
68
69
+/*
70
+ * VCTP: P0 unexecuted bits unchanged, predicated bits zeroed,
71
+ * otherwise set according to value of Rn. The calculation of
72
+ * newmask here works in the same way as the calculation of the
73
+ * ltpmask in mve_element_mask(), but we have pre-calculated
74
+ * the masklen in the generated code.
75
+ */
76
+void HELPER(mve_vctp)(CPUARMState *env, uint32_t masklen)
77
+{
78
+ uint16_t mask = mve_element_mask(env);
79
+ uint16_t eci_mask = mve_eci_mask(env);
80
+ uint16_t newmask;
23
+
81
+
24
+VLD_all_lanes 1111 0100 1 . 1 0 rn:4 .... 11 n:2 size:2 t:1 a:1 rm:4 \
82
+ assert(masklen <= 16);
25
+ vd=%vd_dp
83
+ newmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
26
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
84
+ newmask &= mask;
85
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (newmask & eci_mask);
86
+ mve_advance_vpt(env);
87
+}
88
+
89
#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
90
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
91
{ \
92
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
27
index XXXXXXX..XXXXXXX 100644
93
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-neon.inc.c
94
--- a/target/arm/translate-mve.c
29
+++ b/target/arm/translate-neon.inc.c
95
+++ b/target/arm/translate-mve.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
96
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
31
gen_neon_ldst_base_update(s, a->rm, a->rn, nregs * interleave * 8);
97
}
98
}
99
100
-static void mve_update_eci(DisasContext *s)
101
+void mve_update_eci(DisasContext *s)
102
{
103
/*
104
* The helper function will always update the CPUState field,
105
diff --git a/target/arm/translate.c b/target/arm/translate.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/translate.c
108
+++ b/target/arm/translate.c
109
@@ -XXX,XX +XXX,XX @@ static bool trans_LCTP(DisasContext *s, arg_LCTP *a)
32
return true;
110
return true;
33
}
111
}
112
113
+static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
114
+{
115
+ /*
116
+ * M-profile Create Vector Tail Predicate. This insn is itself
117
+ * predicated and is subject to beatwise execution.
118
+ */
119
+ TCGv_i32 rn_shifted, masklen;
34
+
120
+
35
+static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
121
+ if (!dc_isar_feature(aa32_mve, s) || a->rn == 13 || a->rn == 15) {
36
+{
37
+ /* Neon load single structure to all lanes */
38
+ int reg, stride, vec_size;
39
+ int vd = a->vd;
40
+ int size = a->size;
41
+ int nregs = a->n + 1;
42
+ TCGv_i32 addr, tmp;
43
+
44
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
45
+ return false;
122
+ return false;
46
+ }
123
+ }
47
+
124
+
48
+ /* UNDEF accesses to D16-D31 if they don't exist */
125
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
49
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
50
+ return false;
51
+ }
52
+
53
+ if (size == 3) {
54
+ if (nregs != 4 || a->a == 0) {
55
+ return false;
56
+ }
57
+ /* For VLD4 size == 3 a == 1 means 32 bits at 16 byte alignment */
58
+ size = 2;
59
+ }
60
+ if (nregs == 1 && a->a == 1 && size == 0) {
61
+ return false;
62
+ }
63
+ if (nregs == 3 && a->a == 1) {
64
+ return false;
65
+ }
66
+
67
+ if (!vfp_access_check(s)) {
68
+ return true;
126
+ return true;
69
+ }
127
+ }
70
+
128
+
71
+ /*
129
+ /*
72
+ * VLD1 to all lanes: T bit indicates how many Dregs to write.
130
+ * We pre-calculate the mask length here to avoid having
73
+ * VLD2/3/4 to all lanes: T bit indicates register stride.
131
+ * to have multiple helpers specialized for size.
132
+ * We pass the helper "rn <= (1 << (4 - size)) ? (rn << size) : 16".
74
+ */
133
+ */
75
+ stride = a->t ? 2 : 1;
134
+ rn_shifted = tcg_temp_new_i32();
76
+ vec_size = nregs == 1 ? stride * 8 : 8;
135
+ masklen = load_reg(s, a->rn);
77
+
136
+ tcg_gen_shli_i32(rn_shifted, masklen, a->size);
78
+ tmp = tcg_temp_new_i32();
137
+ tcg_gen_movcond_i32(TCG_COND_LEU, masklen,
79
+ addr = tcg_temp_new_i32();
138
+ masklen, tcg_constant_i32(1 << (4 - a->size)),
80
+ load_reg_var(s, addr, a->rn);
139
+ rn_shifted, tcg_constant_i32(16));
81
+ for (reg = 0; reg < nregs; reg++) {
140
+ gen_helper_mve_vctp(cpu_env, masklen);
82
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
141
+ tcg_temp_free_i32(masklen);
83
+ s->be_data | size);
142
+ tcg_temp_free_i32(rn_shifted);
84
+ if ((vd & 1) && vec_size == 16) {
143
+ mve_update_eci(s);
85
+ /*
86
+ * We cannot write 16 bytes at once because the
87
+ * destination is unaligned.
88
+ */
89
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
90
+ 8, 8, tmp);
91
+ tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
92
+ neon_reg_offset(vd, 0), 8, 8);
93
+ } else {
94
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
95
+ vec_size, vec_size, tmp);
96
+ }
97
+ tcg_gen_addi_i32(addr, addr, 1 << size);
98
+ vd += stride;
99
+ }
100
+ tcg_temp_free_i32(tmp);
101
+ tcg_temp_free_i32(addr);
102
+
103
+ gen_neon_ldst_base_update(s, a->rm, a->rn, (1 << size) * nregs);
104
+
105
+ return true;
144
+ return true;
106
+}
145
+}
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
146
108
index XXXXXXX..XXXXXXX 100644
147
static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
109
--- a/target/arm/translate.c
148
{
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
112
int size;
113
int reg;
114
int load;
115
- int vec_size;
116
TCGv_i32 addr;
117
TCGv_i32 tmp;
118
119
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
120
} else {
121
size = (insn >> 10) & 3;
122
if (size == 3) {
123
- /* Load single element to all lanes. */
124
- int a = (insn >> 4) & 1;
125
- if (!load) {
126
- return 1;
127
- }
128
- size = (insn >> 6) & 3;
129
- nregs = ((insn >> 8) & 3) + 1;
130
-
131
- if (size == 3) {
132
- if (nregs != 4 || a == 0) {
133
- return 1;
134
- }
135
- /* For VLD4 size==3 a == 1 means 32 bits at 16 byte alignment */
136
- size = 2;
137
- }
138
- if (nregs == 1 && a == 1 && size == 0) {
139
- return 1;
140
- }
141
- if (nregs == 3 && a == 1) {
142
- return 1;
143
- }
144
- addr = tcg_temp_new_i32();
145
- load_reg_var(s, addr, rn);
146
-
147
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
148
- * VLD2/3/4 to all lanes: bit 5 indicates register stride.
149
- */
150
- stride = (insn & (1 << 5)) ? 2 : 1;
151
- vec_size = nregs == 1 ? stride * 8 : 8;
152
-
153
- tmp = tcg_temp_new_i32();
154
- for (reg = 0; reg < nregs; reg++) {
155
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
156
- s->be_data | size);
157
- if ((rd & 1) && vec_size == 16) {
158
- /* We cannot write 16 bytes at once because the
159
- * destination is unaligned.
160
- */
161
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
162
- 8, 8, tmp);
163
- tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
164
- neon_reg_offset(rd, 0), 8, 8);
165
- } else {
166
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
167
- vec_size, vec_size, tmp);
168
- }
169
- tcg_gen_addi_i32(addr, addr, 1 << size);
170
- rd += stride;
171
- }
172
- tcg_temp_free_i32(tmp);
173
- tcg_temp_free_i32(addr);
174
- stride = (1 << size) * nregs;
175
+ /* Load single element to all lanes -- handled by decodetree */
176
+ return 1;
177
} else {
178
/* Single element. */
179
int idx = (insn >> 4) & 0xf;
180
--
149
--
181
2.20.1
150
2.20.1
182
151
183
152
1
Convert the VCADD (vector) insns to decodetree.
1
Implement the MVE gather-loads and scatter-stores which
2
form the address by adding a base value from a scalar
3
register to an offset in each element of a vector.
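Per element the address is simply the scalar base plus that element's offset, with the _os_ forms scaling the offset by the element size first; a small C sketch (names invented for illustration):

    #include <stdint.h>

    /*
     * os = 1 selects the offset-scaled form; esize_log2 is log2 of the
     * element size in bytes (0 bytes, 1 halfwords, 2 words, 3 doublewords).
     */
    static uint32_t sg_element_addr(uint32_t base, uint32_t offset,
                                    int os, unsigned esize_log2)
    {
        return base + (os ? (offset << esize_log2) : offset);
    }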
2
4
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20200430181003.21682-6-peter.maydell@linaro.org
6
---
7
---
7
target/arm/neon-shared.decode | 3 +++
8
target/arm/helper-mve.h | 32 +++++++++
8
target/arm/translate-neon.inc.c | 37 +++++++++++++++++++++++++++++++++
9
target/arm/mve.decode | 12 ++++
9
target/arm/translate.c | 11 +---------
10
target/arm/mve_helper.c | 129 +++++++++++++++++++++++++++++++++++++
10
3 files changed, 41 insertions(+), 10 deletions(-)
11
target/arm/translate-mve.c | 97 ++++++++++++++++++++++++++++
12
4 files changed, 270 insertions(+)
11
13
12
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-shared.decode
16
--- a/target/arm/helper-mve.h
15
+++ b/target/arm/neon-shared.decode
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
19
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
20
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
53
+
54
DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
55
56
DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
57
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/mve.decode
60
+++ b/target/arm/mve.decode
16
@@ -XXX,XX +XXX,XX @@
61
@@ -XXX,XX +XXX,XX @@
17
62
&shl_scalar qda rm size
18
VCMLA 1111 110 rot:2 . 1 size:1 .... .... 1000 . q:1 . 0 .... \
63
&vmaxv qm rda size
19
vm=%vm_dp vn=%vn_dp vd=%vd_dp
64
&vabav qn qm rda size
20
+
65
+&vldst_sg qd qm rn size msize os
21
+VCADD 1111 110 rot:1 1 . 0 size:1 .... .... 1000 . q:1 . 0 .... \
66
+
22
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
67
+# scatter-gather memory size is in bits 6:4
23
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
68
+%sg_msize 6:1 4:1
69
70
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
71
# Note that both Rn and Qd are 3 bits only (no D bit)
72
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
73
74
+@vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
75
+ qd=%qd qm=%qm msize=%sg_msize
76
+
77
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
78
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
79
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
80
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
81
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
82
size=2 p=1
83
84
+# gather loads/scatter stores
85
+VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
86
+VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
87
+VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg
88
+
89
# Moves between 2 32-bit vector lanes and 2 general purpose registers
90
VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
91
VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
92
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
24
index XXXXXXX..XXXXXXX 100644
93
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/translate-neon.inc.c
94
--- a/target/arm/mve_helper.c
26
+++ b/target/arm/translate-neon.inc.c
95
+++ b/target/arm/mve_helper.c
27
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
96
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
28
tcg_temp_free_ptr(fpst);
97
#undef DO_VLDR
29
return true;
98
#undef DO_VSTR
30
}
99
31
+
100
+/*
32
+static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
101
+ * Gather loads/scatter stores. Here each element of Qm specifies
102
+ * an offset to use from the base register Rn. In the _os_ versions
103
+ * that offset is scaled by the element size.
104
+ * For loads, predicated lanes are zeroed instead of retaining
105
+ * their previous values.
106
+ */
107
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
108
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
109
+ uint32_t base) \
110
+ { \
111
+ TYPE *d = vd; \
112
+ OFFTYPE *m = vm; \
113
+ uint16_t mask = mve_element_mask(env); \
114
+ uint16_t eci_mask = mve_eci_mask(env); \
115
+ unsigned e; \
116
+ uint32_t addr; \
117
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
118
+ if (!(eci_mask & 1)) { \
119
+ continue; \
120
+ } \
121
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
122
+ d[H##ESIZE(e)] = (mask & 1) ? \
123
+ cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
124
+ } \
125
+ mve_advance_vpt(env); \
126
+ }
127
+
128
+/* We know here TYPE is unsigned so always the same as the offset type */
129
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
130
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
131
+ uint32_t base) \
132
+ { \
133
+ TYPE *d = vd; \
134
+ TYPE *m = vm; \
135
+ uint16_t mask = mve_element_mask(env); \
136
+ unsigned e; \
137
+ uint32_t addr; \
138
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
139
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
140
+ if (mask & 1) { \
141
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
142
+ } \
143
+ } \
144
+ mve_advance_vpt(env); \
145
+ }
146
+
147
+/*
148
+ * 64-bit accesses are slightly different: they are done as two 32-bit
149
+ * accesses, controlled by the predicate mask for the relevant beat,
150
+ * and with a single 32-bit offset in the first of the two Qm elements.
151
+ * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
152
+ */
153
+#define DO_VLDR64_SG(OP, ADDRFN) \
154
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
155
+ uint32_t base) \
156
+ { \
157
+ uint32_t *d = vd; \
158
+ uint32_t *m = vm; \
159
+ uint16_t mask = mve_element_mask(env); \
160
+ uint16_t eci_mask = mve_eci_mask(env); \
161
+ unsigned e; \
162
+ uint32_t addr; \
163
+ for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
164
+ if (!(eci_mask & 1)) { \
165
+ continue; \
166
+ } \
167
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
168
+ addr += 4 * (e & 1); \
169
+ d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
170
+ } \
171
+ mve_advance_vpt(env); \
172
+ }
173
+
174
+#define DO_VSTR64_SG(OP, ADDRFN) \
175
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
176
+ uint32_t base) \
177
+ { \
178
+ uint32_t *d = vd; \
179
+ uint32_t *m = vm; \
180
+ uint16_t mask = mve_element_mask(env); \
181
+ unsigned e; \
182
+ uint32_t addr; \
183
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
184
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
185
+ addr += 4 * (e & 1); \
186
+ if (mask & 1) { \
187
+ cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
188
+ } \
189
+ } \
190
+ mve_advance_vpt(env); \
191
+ }
192
+
193
+#define ADDR_ADD(BASE, OFFSET) ((BASE) + (OFFSET))
194
+#define ADDR_ADD_OSH(BASE, OFFSET) ((BASE) + ((OFFSET) << 1))
195
+#define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
196
+#define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))
197
+
198
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
199
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
200
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
201
+
202
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
203
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
204
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
205
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
206
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
207
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
208
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
209
+
210
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
211
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
212
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
213
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
214
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
215
+
216
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
217
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
218
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
219
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
220
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
221
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
222
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
223
+
224
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
225
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
226
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
227
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
228
+
229
/*
230
* The mergemask(D, R, M) macro performs the operation "*D = R" but
231
* storing only the bytes which correspond to 1 bits in M,
232
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/translate-mve.c
235
+++ b/target/arm/translate-mve.c
236
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)
237
#include "decode-mve.c.inc"
238
239
 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8)
 DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8)
 DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16)

+static bool do_ldst_sg(DisasContext *s, arg_vldst_sg *a, MVEGenLdStSGFn fn)
+{
+    TCGv_i32 addr;
+    TCGv_ptr qd, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qm) ||
+        !fn || a->rn == 15) {
+        /* Rn case is UNPREDICTABLE */
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    addr = load_reg(s, a->rn);
+
+    qd = mve_qreg_ptr(a->qd);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qm, addr);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qm);
+    tcg_temp_free_i32(addr);
+    mve_update_eci(s);
+    return true;
+}
+
+/*
+ * The naming scheme here is "vldrb_sg_sh == in-memory byte loads
+ * signextended to halfword elements in register". _os_ indicates that
+ * the offsets in Qm should be scaled by the element size.
+ */
+/* This macro is just to make the arrays more compact in these functions */
+#define F(N) gen_helper_mve_##N
+
+/* VLDRB/VSTRB (ie msize 1) with OS=1 is UNPREDICTABLE; we UNDEF */
+static bool trans_VLDR_S_sg(DisasContext *s, arg_vldst_sg *a)
+{
+    static MVEGenLdStSGFn * const fns[2][4][4] = { {
+        { NULL, F(vldrb_sg_sh), F(vldrb_sg_sw), NULL },
+        { NULL, NULL, F(vldrh_sg_sw), NULL },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL }
+    }, {
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, F(vldrh_sg_os_sw), NULL },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL }
+    }
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
+}
+
+static bool trans_VLDR_U_sg(DisasContext *s, arg_vldst_sg *a)
+{
+    static MVEGenLdStSGFn * const fns[2][4][4] = { {
+        { F(vldrb_sg_ub), F(vldrb_sg_uh), F(vldrb_sg_uw), NULL },
+        { NULL, F(vldrh_sg_uh), F(vldrh_sg_uw), NULL },
+        { NULL, NULL, F(vldrw_sg_uw), NULL },
+        { NULL, NULL, NULL, F(vldrd_sg_ud) }
+    }, {
+        { NULL, NULL, NULL, NULL },
+        { NULL, F(vldrh_sg_os_uh), F(vldrh_sg_os_uw), NULL },
+        { NULL, NULL, F(vldrw_sg_os_uw), NULL },
+        { NULL, NULL, NULL, F(vldrd_sg_os_ud) }
+    }
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
+}
+
+static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)
+{
+    static MVEGenLdStSGFn * const fns[2][4][4] = { {
+        { F(vstrb_sg_ub), F(vstrb_sg_uh), F(vstrb_sg_uw), NULL },
+        { NULL, F(vstrh_sg_uh), F(vstrh_sg_uw), NULL },
+        { NULL, NULL, F(vstrw_sg_uw), NULL },
+        { NULL, NULL, NULL, F(vstrd_sg_ud) }
+    }, {
+        { NULL, NULL, NULL, NULL },
+        { NULL, F(vstrh_sg_os_uh), F(vstrh_sg_os_uw), NULL },
+        { NULL, NULL, F(vstrw_sg_os_uw), NULL },
+        { NULL, NULL, NULL, F(vstrd_sg_os_ud) }
+    }
+    };
+    return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
+}
+
+#undef F
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1
305
+static bool trans_VLDR_U_sg(DisasContext *s, arg_vldst_sg *a)
306
+{
307
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
308
+ { F(vldrb_sg_ub), F(vldrb_sg_uh), F(vldrb_sg_uw), NULL },
309
+ { NULL, F(vldrh_sg_uh), F(vldrh_sg_uw), NULL },
310
+ { NULL, NULL, F(vldrw_sg_uw), NULL },
311
+ { NULL, NULL, NULL, F(vldrd_sg_ud) }
312
+ }, {
313
+ { NULL, NULL, NULL, NULL },
314
+ { NULL, F(vldrh_sg_os_uh), F(vldrh_sg_os_uw), NULL },
315
+ { NULL, NULL, F(vldrw_sg_os_uw), NULL },
316
+ { NULL, NULL, NULL, F(vldrd_sg_os_ud) }
317
+ }
318
+ };
319
+ if (a->qd == a->qm) {
320
+ return false; /* UNPREDICTABLE */
321
+ }
322
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
323
+}
324
+
325
+static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)
326
+{
327
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
328
+ { F(vstrb_sg_ub), F(vstrb_sg_uh), F(vstrb_sg_uw), NULL },
329
+ { NULL, F(vstrh_sg_uh), F(vstrh_sg_uw), NULL },
330
+ { NULL, NULL, F(vstrw_sg_uw), NULL },
331
+ { NULL, NULL, NULL, F(vstrd_sg_ud) }
332
+ }, {
333
+ { NULL, NULL, NULL, NULL },
334
+ { NULL, F(vstrh_sg_os_uh), F(vstrh_sg_os_uw), NULL },
335
+ { NULL, NULL, F(vstrw_sg_os_uw), NULL },
336
+ { NULL, NULL, NULL, F(vstrd_sg_os_ud) }
337
+ }
338
+ };
339
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
340
+}
341
+
342
+#undef F
343
+
344
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
345
{
346
TCGv_ptr qd;
90
--
347
--
91
2.20.1
348
2.20.1
92
349
93
350
Convert the VCMLA (vector) insns in the 3same extension group to
decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-5-peter.maydell@linaro.org
---
 target/arm/neon-shared.decode   | 11 ++++++++++
 target/arm/translate-neon.inc.c | 37 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 11 +---------
 3 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -XXX,XX +XXX,XX @@
 # More specifically, this covers:
 # 2reg scalar ext: 0b1111_1110_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
 # 3same ext:       0b1111_110x_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
+
+# VFP/Neon register fields; same as vfp.decode
+%vm_dp  5:1 0:4
+%vm_sp  0:4 5:1
+%vn_dp  7:1 16:4
+%vn_sp  16:4 7:1
+%vd_dp  22:1 12:4
+%vd_sp  12:4 22:1
+
+VCMLA 1111 110 rot:2 . 1 size:1 .... .... 1000 . q:1 . 0 .... \
+      vm=%vm_dp vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@
 #include "decode-neon-dp.inc.c"
 #include "decode-neon-ls.inc.c"
 #include "decode-neon-shared.inc.c"
+
+static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
+{
+    int opr_sz;
+    TCGv_ptr fpst;
+    gen_helper_gvec_3_ptr *fn_gvec_ptr;
+
+    if (!dc_isar_feature(aa32_vcma, s)
+        || (!a->size && !dc_isar_feature(aa32_fp16_arith, s))) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vn | a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    opr_sz = (1 + a->q) * 8;
+    fpst = get_fpstatus_ptr(1);
+    fn_gvec_ptr = a->size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
+    tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
+                       vfp_reg_offset(1, a->vn),
+                       vfp_reg_offset(1, a->vm),
+                       fpst, opr_sz, opr_sz, a->rot,
+                       fn_gvec_ptr);
+    tcg_temp_free_ptr(fpst);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
     bool is_long = false, q = extract32(insn, 6, 1);
     bool ptr_is_env = false;

-    if ((insn & 0xfe200f10) == 0xfc200800) {
-        /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
-        int size = extract32(insn, 20, 1);
-        data = extract32(insn, 23, 2); /* rot */
-        if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
-            return 1;
-        }
-        fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
-    } else if ((insn & 0xfea00f10) == 0xfc800800) {
+    if ((insn & 0xfea00f10) == 0xfc800800) {
         /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 24, 1); /* rot */
--
2.20.1

Implement the MVE VLDR/VSTR insns which do scatter-gather using base
addresses from Qm plus or minus an immediate offset (possibly with
writeback). Note that writeback is not predicated but it does have
to honour ECI state, so we have to add an eci_mask check to the
VSTR_SG macros (the VLDR_SG macros already needed this to be able
to distinguish "skip beat" from "set predicated element to 0").

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++
 target/arm/mve.decode      | 10 +++++
 target/arm/mve_helper.c    | 91 ++++++++++++++++++++++++--------------
 target/arm/translate-mve.c | 72 ++++++++++++++++++++++++++++++
 4 files changed, 146 insertions(+), 32 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vldrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vmaxv qm rda size
 &vabav qn qm rda size
 &vldst_sg qd qm rn size msize os
+&vldst_sg_imm qd qm a w imm

 # scatter-gather memory size is in bits 6:4
 %sg_msize 6:1 4:1
@@ -XXX,XX +XXX,XX @@
 @vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
           qd=%qd qm=%qm msize=%sg_msize

+# Qm is in the fields usually labeled Qn
+@vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
+              qd=%qd qm=%qn
+
 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
@@ -XXX,XX +XXX,XX @@ VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
 VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
 VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg

+VLDRW_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1110 .... .... @vldst_sg_imm
+VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
+VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
+VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm
+
 # Moves between 2 32-bit vector lanes and 2 general purpose registers
 VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
 VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
  * For loads, predicated lanes are zeroed instead of retaining
  * their previous values.
  */
-#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN, WB) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                       uint32_t base) \
 { \
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
         addr = ADDRFN(base, m[H##ESIZE(e)]); \
         d[H##ESIZE(e)] = (mask & 1) ? \
             cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
+        if (WB) { \
+            m[H##ESIZE(e)] = addr; \
+        } \
     } \
     mve_advance_vpt(env); \
 }

 /* We know here TYPE is unsigned so always the same as the offset type */
-#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN, WB) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                       uint32_t base) \
 { \
     TYPE *d = vd; \
     TYPE *m = vm; \
     uint16_t mask = mve_element_mask(env); \
+    uint16_t eci_mask = mve_eci_mask(env); \
     unsigned e; \
     uint32_t addr; \
-    for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+    for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
+        if (!(eci_mask & 1)) { \
+            continue; \
+        } \
         addr = ADDRFN(base, m[H##ESIZE(e)]); \
         if (mask & 1) { \
             cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
         } \
+        if (WB) { \
+            m[H##ESIZE(e)] = addr; \
+        } \
     } \
     mve_advance_vpt(env); \
 }
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
  * accesses, controlled by the predicate mask for the relevant beat,
  * and with a single 32-bit offset in the first of the two Qm elements.
  * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
+ * Address writeback happens on the odd beats and updates the address
+ * stored in the even-beat element.
  */
-#define DO_VLDR64_SG(OP, ADDRFN) \
+#define DO_VLDR64_SG(OP, ADDRFN, WB) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                       uint32_t base) \
 { \
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
         addr = ADDRFN(base, m[H4(e & ~1)]); \
         addr += 4 * (e & 1); \
         d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
+        if (WB && (e & 1)) { \
+            m[H4(e & ~1)] = addr - 4; \
+        } \
     } \
     mve_advance_vpt(env); \
 }

-#define DO_VSTR64_SG(OP, ADDRFN) \
+#define DO_VSTR64_SG(OP, ADDRFN, WB) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                       uint32_t base) \
 { \
     uint32_t *d = vd; \
     uint32_t *m = vm; \
     uint16_t mask = mve_element_mask(env); \
+    uint16_t eci_mask = mve_eci_mask(env); \
     unsigned e; \
     uint32_t addr; \
-    for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
+    for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
+        if (!(eci_mask & 1)) { \
+            continue; \
+        } \
         addr = ADDRFN(base, m[H4(e & ~1)]); \
         addr += 4 * (e & 1); \
         if (mask & 1) { \
             cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
         } \
+        if (WB && (e & 1)) { \
+            m[H4(e & ~1)] = addr - 4; \
+        } \
     } \
     mve_advance_vpt(env); \
 }
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
 #define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
 #define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))

-DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD, false)

-DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD, false)

-DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
-DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW, false)
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD, false)

-DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
-DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
-DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
-DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
-DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
-DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
-DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD, false)

-DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
-DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
-DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
-DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH, false)
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH, false)
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW, false)
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD, false)
+
+DO_VLDR_SG(vldrw_sg_wb_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, true)
+DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
+DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
+DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)

 /*
  * The mergemask(D, R, M) macro performs the operation "*D = R" but
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)

 #undef F

+static bool do_ldst_sg_imm(DisasContext *s, arg_vldst_sg_imm *a,
+                           MVEGenLdStSGFn *fn, unsigned msize)
+{
+    uint32_t offset;
+    TCGv_ptr qd, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qm) ||
+        !fn) {
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    offset = a->imm << msize;
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qm, tcg_constant_i32(offset));
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool trans_VLDRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vldrw_sg_uw,
+        gen_helper_mve_vldrw_sg_wb_uw,
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
+}
+
+static bool trans_VLDRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vldrd_sg_ud,
+        gen_helper_mve_vldrd_sg_wb_ud,
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
+}
+
+static bool trans_VSTRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vstrw_sg_uw,
+        gen_helper_mve_vstrw_sg_wb_uw,
+    };
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
+}
+
+static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vstrd_sg_ud,
+        gen_helper_mve_vstrd_sg_wb_ud,
+    };
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
+}
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1
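
A worked example of the offset handling in do_ldst_sg_imm() above
(illustration only, with field values assumed from the mve.decode
patterns): the 7-bit immediate is scaled by the element size, and the
'a' bit selects add versus subtract.

    /*
     * Illustration: VLDRW with imm = 6, a = 0, w = 0 computes
     * offset = 6 << MO_32, i.e. 6 * 4 = 24, then negates it because
     * a == 0; each active lane then loads from Qm[lane] - 24.  With
     * w = 1 the _wb_ helper additionally writes the computed address
     * back into the Qm element, honouring the ECI mask as described
     * in the commit message.
     */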
Convert the Neon VMUL, VMLA, VMLS and VSHL insns in the
3-reg-same grouping to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-20-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       |  9 +++++++
 target/arm/translate-neon.inc.c | 44 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 28 +++------------------
 3 files changed, 56 insertions(+), 25 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VCGT_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 0 .... @3same
 VCGE_S_3s 1111 001 0 0 . .. .... .... 0011 . . . 1 .... @3same
 VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same

+VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same
+VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same
+
 VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
 VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
 VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
@@ -XXX,XX +XXX,XX @@ VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same

 VTST_3s 1111 001 0 0 . .. .... .... 1000 . . . 1 .... @3same
 VCEQ_3s 1111 001 1 0 . .. .... .... 1000 . . . 1 .... @3same
+
+VMLA_3s 1111 001 0 0 . .. .... .... 1001 . . . 0 .... @3same
+VMLS_3s 1111 001 1 0 . .. .... .... 1001 . . . 0 .... @3same
+
+VMUL_3s 1111 001 0 0 . .. .... .... 1001 . . . 1 .... @3same
+VMUL_p_3s 1111 001 1 0 . .. .... .... 1001 . . . 1 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VMAX_S, tcg_gen_gvec_smax)
 DO_3SAME_NO_SZ_3(VMAX_U, tcg_gen_gvec_umax)
 DO_3SAME_NO_SZ_3(VMIN_S, tcg_gen_gvec_smin)
 DO_3SAME_NO_SZ_3(VMIN_U, tcg_gen_gvec_umin)
+DO_3SAME_NO_SZ_3(VMUL, tcg_gen_gvec_mul)

 #define DO_3SAME_CMP(INSN, COND) \
     static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -XXX,XX +XXX,XX @@ DO_3SAME_GVEC4(VQADD_S, sqadd_op)
 DO_3SAME_GVEC4(VQADD_U, uqadd_op)
 DO_3SAME_GVEC4(VQSUB_S, sqsub_op)
 DO_3SAME_GVEC4(VQSUB_U, uqsub_op)
+
+static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+                          uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
+                       0, gen_helper_gvec_pmul_b);
+}
+
+static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
+{
+    if (a->size != 0) {
+        return false;
+    }
+    return do_3same(s, a, gen_VMUL_p_3s);
+}
+
+#define DO_3SAME_GVEC3_NO_SZ_3(INSN, OPARRAY) \
+    static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
+                                uint32_t rn_ofs, uint32_t rm_ofs, \
+                                uint32_t oprsz, uint32_t maxsz) \
+    { \
+        tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, \
+                       oprsz, maxsz, &OPARRAY[vece]); \
+    } \
+    DO_3SAME_NO_SZ_3(INSN, gen_##INSN##_3s)
+
+
+DO_3SAME_GVEC3_NO_SZ_3(VMLA, mla_op)
+DO_3SAME_GVEC3_NO_SZ_3(VMLS, mls_op)
+
+#define DO_3SAME_GVEC3_SHIFT(INSN, OPARRAY) \
+    static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
+                                uint32_t rn_ofs, uint32_t rm_ofs, \
+                                uint32_t oprsz, uint32_t maxsz) \
+    { \
+        /* Note the operation is vshl vd,vm,vn */ \
+        tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, \
+                       oprsz, maxsz, &OPARRAY[vece]); \
+    } \
+    DO_3SAME(INSN, gen_##INSN##_3s)
+
+DO_3SAME_GVEC3_SHIFT(VSHL_S, sshl_op)
+DO_3SAME_GVEC3_SHIFT(VSHL_U, ushl_op)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         }
         return 1;

-    case NEON_3R_VMUL: /* VMUL */
-        if (u) {
-            /* Polynomial case allows only P8.  */
-            if (size != 0) {
-                return 1;
-            }
-            tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
-                               0, gen_helper_gvec_pmul_b);
-        } else {
-            tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
-                             vec_size, vec_size);
-        }
-        return 0;
-
-    case NEON_3R_VML: /* VMLA, VMLS */
-        tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
-                       u ? &mls_op[size] : &mla_op[size]);
-        return 0;
-
-    case NEON_3R_VSHL:
-        /* Note the operation is vshl vd,vm,vn */
-        tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
-                       u ? &ushl_op[size] : &sshl_op[size]);
-        return 0;
-
     case NEON_3R_VADD_VSUB:
     case NEON_3R_LOGIC:
     case NEON_3R_VMAX:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     case NEON_3R_VCGE:
     case NEON_3R_VQADD:
     case NEON_3R_VQSUB:
+    case NEON_3R_VMUL:
+    case NEON_3R_VML:
+    case NEON_3R_VSHL:
         /* Already handled by decodetree */
         return 1;
     }
--
2.20.1

Implement the MVE interleaving load/store functions VLD2, VLD4, VST2
and VST4.  VLD2 loads 16 bytes of data from memory and writes to 2
consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes
to 4 consecutive Qregs.  The 'pattern' field in the encoding
determines the offset into memory which is accessed and also which
elements in the Qregs are written to.  (The intention is that a
sequence of four consecutive VLD4 with different pattern values
performs a complete de-interleaving load of 64 bytes into all
elements of the 4 Qregs.)  VST2 and VST4 do the same, but for stores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  48 ++++++
 target/arm/mve.decode      |  11 ++
 target/arm/mve_helper.c    | 342 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  94 ++++++++++
 4 files changed, 495 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vld20b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld20h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld20w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld21b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld21h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld21w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld40b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld40h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld40w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld41b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld41h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld41w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld42b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld42h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld42w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld43b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld43h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld43w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst20b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst20h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst20w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst21b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst21h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst21w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst40b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst40h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst40w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst41b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst41h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst41w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst42b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst42h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst42w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst43b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst43h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst43w, TCG_CALL_NO_WG, void, env, i32, i32)
+
 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vabav qn qm rda size
 &vldst_sg qd qm rn size msize os
 &vldst_sg_imm qd qm a w imm
+&vldst_il qd rn size pat w

 # scatter-gather memory size is in bits 6:4
 %sg_msize 6:1 4:1
@@ -XXX,XX +XXX,XX @@
 @vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
               qd=%qd qm=%qn

+# Deinterleaving load/interleaving store
+@vldst_il .... .... .. w:1 . rn:4 .... ... size:2 pat:2 ..... &vldst_il \
+          qd=%qd
+
 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
@@ -XXX,XX +XXX,XX @@ VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
 VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
 VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm

+# deinterleaving loads/interleaving stores
+VLD2 1111 1100 1 .. 1 .... ... 1 111 .. .. 00000 @vldst_il
+VLD4 1111 1100 1 .. 1 .... ... 1 111 .. .. 00001 @vldst_il
+VST2 1111 1100 1 .. 0 .... ... 1 111 .. .. 00000 @vldst_il
+VST4 1111 1100 1 .. 0 .... ... 1 111 .. .. 00001 @vldst_il
+
 # Moves between 2 32-bit vector lanes and 2 general purpose registers
 VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
 VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
 DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
 DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)

+/*
+ * Deinterleaving loads/interleaving stores.
+ *
+ * For these helpers we are passed the index of the first Qreg
+ * (VLD2/VST2 will also access Qn+1, VLD4/VST4 access Qn .. Qn+3)
+ * and the value of the base address register Rn.
+ * The helpers are specialized for pattern and element size, so
+ * for instance vld42h is VLD4 with pattern 2, element size MO_16.
+ *
+ * These insns are beatwise but not predicated, so we must honour ECI,
+ * but need not look at mve_element_mask().
+ *
+ * The pseudocode implements these insns with multiple memory accesses
+ * of the element size, but rules R_VVVG and R_FXDM permit us to make
+ * one 32-bit memory access per beat.
+ */
+#define DO_VLD4B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 4; e++, data >>= 8) { \
+                uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
+                qd[H1(off[beat])] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD4H(OP, O1, O2) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O1, O2, O2 }; \
+        uint32_t addr, data; \
+        int y; /* y counts 0 2 0 2 */ \
+        uint16_t *qd; \
+        for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 8 + (beat & 1) * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
+            qd[H2(off[beat])] = data; \
+            data >>= 16; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
+            qd[H2(off[beat])] = data; \
+        } \
+    }
+
+#define DO_VLD4W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        int y; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            y = (beat + (O1 & 2)) & 3; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
+            qd[H4(off[beat] >> 2)] = data; \
+        } \
+    }
+
+DO_VLD4B(vld40b, 0, 1, 10, 11)
+DO_VLD4B(vld41b, 2, 3, 12, 13)
+DO_VLD4B(vld42b, 4, 5, 14, 15)
+DO_VLD4B(vld43b, 6, 7, 8, 9)
+
+DO_VLD4H(vld40h, 0, 5)
+DO_VLD4H(vld41h, 1, 6)
+DO_VLD4H(vld42h, 2, 7)
+DO_VLD4H(vld43h, 3, 4)
+
+DO_VLD4W(vld40w, 0, 1, 10, 11)
+DO_VLD4W(vld41w, 2, 3, 12, 13)
+DO_VLD4W(vld42w, 4, 5, 14, 15)
+DO_VLD4W(vld43w, 6, 7, 8, 9)
+
+#define DO_VLD2B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint8_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 2; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 4; e++, data >>= 8) { \
+                qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
+                qd[H1(off[beat] + (e >> 1))] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD2H(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        int e; \
+        uint16_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 2; e++, data >>= 16) { \
+                qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
+                qd[H2(off[beat])] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD2W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            qd[H4(off[beat] >> 3)] = data; \
+        } \
+    }
+
+DO_VLD2B(vld20b, 0, 2, 12, 14)
+DO_VLD2B(vld21b, 4, 6, 8, 10)
+
+DO_VLD2H(vld20h, 0, 1, 6, 7)
+DO_VLD2H(vld21h, 2, 3, 4, 5)
+
+DO_VLD2W(vld20w, 0, 4, 24, 28)
+DO_VLD2W(vld21w, 8, 12, 16, 20)
+
+#define DO_VST4B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 8) | qd[H1(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4H(OP, O1, O2) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O1, O2, O2 }; \
+        uint32_t addr, data; \
+        int y; /* y counts 0 2 0 2 */ \
+        uint16_t *qd; \
+        for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 8 + (beat & 1) * 4; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H2(off[beat])]; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
+            data |= qd[H2(off[beat])] << 16; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        int y; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            y = (beat + (O1 & 2)) & 3; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H4(off[beat] >> 2)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST4B(vst40b, 0, 1, 10, 11)
+DO_VST4B(vst41b, 2, 3, 12, 13)
+DO_VST4B(vst42b, 4, 5, 14, 15)
+DO_VST4B(vst43b, 6, 7, 8, 9)
+
+DO_VST4H(vst40h, 0, 5)
+DO_VST4H(vst41h, 1, 6)
+DO_VST4H(vst42h, 2, 7)
+DO_VST4H(vst43h, 3, 4)
+
+DO_VST4W(vst40w, 0, 1, 10, 11)
+DO_VST4W(vst41w, 2, 3, 12, 13)
+DO_VST4W(vst42w, 4, 5, 14, 15)
+DO_VST4W(vst43w, 6, 7, 8, 9)
+
+#define DO_VST2B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint8_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 2; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
+                data = (data << 8) | qd[H1(off[beat] + (e >> 1))]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2H(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        int e; \
+        uint16_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 1; e >= 0; e--) { \
+                qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 16) | qd[H2(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            data = qd[H4(off[beat] >> 3)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST2B(vst20b, 0, 2, 12, 14)
+DO_VST2B(vst21b, 4, 6, 8, 10)
+
+DO_VST2H(vst20h, 0, 1, 6, 7)
+DO_VST2H(vst21h, 2, 3, 4, 5)
+
+DO_VST2W(vst20w, 0, 4, 24, 28)
+DO_VST2W(vst21w, 8, 12, 16, 20)
+
 /*
  * The mergemask(D, R, M) macro performs the operation "*D = R" but
  * storing only the bytes which correspond to 1 bits in M,
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenLdStIlFn(TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
     return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
 }

+static bool do_vldst_il(DisasContext *s, arg_vldst_il *a, MVEGenLdStIlFn *fn,
+                        int addrinc)
+{
+    TCGv_i32 rn;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd) ||
+        !fn || (a->rn == 13 && a->w) || a->rn == 15) {
+        /* Variously UNPREDICTABLE or UNDEF or related-encoding */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    rn = load_reg(s, a->rn);
+    /*
+     * We pass the index of Qd, not a pointer, because the helper must
+     * access multiple Q registers starting at Qd and working up.
+     */
+    fn(cpu_env, tcg_constant_i32(a->qd), rn);
+
+    if (a->w) {
+        tcg_gen_addi_i32(rn, rn, addrinc);
+        store_reg(s, a->rn, rn);
+    } else {
+        tcg_temp_free_i32(rn);
+    }
+    mve_update_and_store_eci(s);
+    return true;
+}
+
+/* This macro is just to make the arrays more compact in these functions */
+#define F(N) gen_helper_mve_##N
+
+static bool trans_VLD2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld20b), F(vld20h), F(vld20w), NULL, },
+        { F(vld21b), F(vld21h), F(vld21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VLD4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld40b), F(vld40h), F(vld40w), NULL, },
+        { F(vld41b), F(vld41h), F(vld41w), NULL, },
+        { F(vld42b), F(vld42h), F(vld42w), NULL, },
+        { F(vld43b), F(vld43h), F(vld43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+static bool trans_VST2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst20b), F(vst20h), F(vst20w), NULL, },
+        { F(vst21b), F(vst21h), F(vst21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VST4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst40b), F(vst40h), F(vst40w), NULL, },
+        { F(vst41b), F(vst41h), F(vst41w), NULL, },
+        { F(vst42b), F(vst42h), F(vst42w), NULL, },
+        { F(vst43b), F(vst43h), F(vst43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+#undef F
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1
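
To make the off[] tables in the patch above easier to check
(illustration only, with offsets taken from the DO_VLD4B
instantiations): each pattern reads four 32-bit chunks, and the four
patterns together tile the 64-byte region exactly once.

    /*
     * Illustration: for VLD4.8 the words accessed are off[beat] * 4:
     *   vld40b: 0, 4, 40, 44
     *   vld41b: 8, 12, 48, 52
     *   vld42b: 16, 20, 56, 60
     *   vld43b: 24, 28, 32, 36
     * so a sequence of the four patterns covers bytes [base, base+63]
     * exactly once, the "complete de-interleaving load of 64 bytes"
     * described in the commit message.
     */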
For ARMv8.2-TTS2UXN, the stage 2 page table walk wants to know
whether the stage 1 access is for EL0 or not, because whether
exec permission is given can depend on whether this is an EL0
or EL1 access.  Add a new argument to get_phys_addr_lpae() so
the call sites can pass this information in.

Since get_phys_addr_lpae() doesn't already have a doc comment,
add one so we have a place to put the documentation of the
semantics of the new s1_is_el0 argument.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@

 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool s1_is_el0,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
                                target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     }

     ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
+                             false,
                              &s2pa, &txattrs, &s2prot, &s2size, fi,
                              pcacheattrs);
     if (ret) {
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
     };
 }

+/**
+ * get_phys_addr_lpae: perform one stage of page table walk, LPAE format
+ *
+ * Returns false if the translation was successful.  Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a long-format
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * the WnR bit is never set (the caller must do this).
+ *
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
+ * @mmu_idx: MMU index indicating required translation regime
+ * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page table
+ *             walk), must be true if this is stage 2 of a stage 1+2 walk for an
+ *             EL0 access.  If @mmu_idx is anything else, @s1_is_el0 is ignored.
+ * @phys_ptr: set to the physical address corresponding to the virtual address
+ * @attrs: set to the memory transaction attributes to use
+ * @prot: set to the permissions for the page containing phys_ptr
+ * @page_size_ptr: set to the size of the page containing phys_ptr
+ * @fi: set to fault info if the translation fails
+ * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
+ */
 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool s1_is_el0,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
                                target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

         /* S1 is done. Now do S2 translation.  */
         ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_Stage2,
+                                 mmu_idx == ARMMMUIdx_E10_0,
                                  phys_ptr, attrs, &s2_prot,
                                  page_size, fi,
                                  cacheattrs != NULL ? &cacheattrs2 : NULL);
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }

     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
                                   phys_ptr, attrs, prot, page_size,
                                   fi, cacheattrs);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
--
2.20.1

We're about to make a code change to the sdiv and udiv helper
functions, so first fix their indentation and coding style.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)

 int32_t HELPER(sdiv)(int32_t num, int32_t den)
 {
-    if (den == 0)
-        return 0;
-    if (num == INT_MIN && den == -1)
-        return INT_MIN;
+    if (den == 0) {
+        return 0;
+    }
+    if (num == INT_MIN && den == -1) {
+        return INT_MIN;
+    }
     return num / den;
 }

 uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
 {
-    if (den == 0)
-        return 0;
+    if (den == 0) {
+        return 0;
+    }
     return num / den;
 }

--
2.20.1
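
The style fix above is groundwork for the DIV_0_TRP change in the patch
that follows. As a sketch of the guest-visible behaviour being
implemented (illustration only; the CMSIS-style register and bit names
here are assumptions, not part of the patch):

    /* M-profile guest code, CMSIS-style names assumed */
    SCB->CCR |= SCB_CCR_DIV_0_TRP_Msk;   /* opt in to trapping */
    int q = num / den;   /* if den == 0: UsageFault (DIVBYZERO)
                            instead of the default result q = 0 */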
Unlike A-profile, for M-profile the UDIV and SDIV insns can be
configured to raise an exception on division by zero, using the CCR
DIV_0_TRP bit.

Implement support for setting this bit by making the helper functions
raise the appropriate exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
---
 target/arm/cpu.h       |  1 +
 target/arm/helper.h    |  4 ++--
 target/arm/helper.c    | 19 +++++++++++++++++--
 target/arm/m_helper.c  |  4 ++++
 target/arm/translate.c |  4 ++--
 5 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
 #define EXCP_LSERR 21 /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
+#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */

 #define ARMV7M_EXCP_RESET 1
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(add_saturate, i32, env, i32, i32)
 DEF_HELPER_3(sub_saturate, i32, env, i32, i32)
 DEF_HELPER_3(add_usaturate, i32, env, i32, i32)
 DEF_HELPER_3(sub_usaturate, i32, env, i32, i32)
-DEF_HELPER_FLAGS_2(sdiv, TCG_CALL_NO_RWG_SE, s32, s32, s32)
-DEF_HELPER_FLAGS_2(udiv, TCG_CALL_NO_RWG_SE, i32, i32, i32)
+DEF_HELPER_FLAGS_3(sdiv, TCG_CALL_NO_RWG, s32, env, s32, s32)
+DEF_HELPER_FLAGS_3(udiv, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_1(rbit, TCG_CALL_NO_RWG_SE, i32, i32)

 #define PAS_OP(pfx) \
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sxtb16)(uint32_t x)
     return res;
 }

+static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra)
+{
+    /*
+     * Take a division-by-zero exception if necessary; otherwise return
+     * to get the usual non-trapping division behaviour (result of 0)
+     */
+    if (arm_feature(env, ARM_FEATURE_M)
+        && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
+        raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra);
+    }
+}
+
 uint32_t HELPER(uxtb16)(uint32_t x)
 {
     uint32_t res;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)
     return res;
 }

-int32_t HELPER(sdiv)(int32_t num, int32_t den)
+int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
 {
     if (den == 0) {
+        handle_possible_div0_trap(env, GETPC());
         return 0;

We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU
TLB.  However we never actually use the TLB -- all stage 2 lookups
are done by direct calls to get_phys_addr_lpae() followed by a
physical address load via address_space_ld*().

Remove Stage2 from the list of ARM MMU indexes which correspond to
real core MMU indexes, and instead put it in the set of "NOTLB" ARM
MMU indexes.

This allows us to drop NB_MMU_MODES to 11.  It also means we can
safely add support for the ARMv8.2-TTS2UXN extension, which adds
permission bits to the stage 2 descriptors which define execute
permission separately for EL0 and EL1; supporting that while keeping
Stage2 in a QEMU TLB would require us to use separate TLBs for
"Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a
lot of extra complication given we aren't even using the QEMU TLB.

In the process of updating the comment on our MMU index use,
fix a couple of other minor errors:
 * NS EL2 EL2&0 was missing from the list in the comment
 * some text hadn't been updated from when we bumped NB_MMU_MODES
   above 8

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-2-peter.maydell@linaro.org
---
 target/arm/cpu-param.h |   2 +-
 target/arm/cpu.h       |  21 +++++---
 target/arm/helper.c    | 112 ++++-------------------------------------
 3 files changed, 27 insertions(+), 108 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif

-#define NB_MMU_MODES 12
+#define NB_MMU_MODES 11

 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  * handling via the TLB.  The only way to do a stage 1 translation without
  * the immediate stage 2 translation is via the ATS or AT system insns,
  * which can be slow-pathed and always do a page table walk.
+ * The only use of stage 2 translations is either as part of an s1+2
+ * lookup or when loading the descriptors during a stage 1 page table walk,
+ * and in both those cases we don't use the TLB.
  * 4. we can also safely fold together the "32 bit EL3" and "64 bit EL3"
  * translation regimes, because they map reasonably well to each other
  * and they can't both be active at the same time.
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
  * NS EL1 EL1&0 stage 1+2 +PAN
  * NS EL0 EL2&0
+ * NS EL2 EL2&0
  * NS EL2 EL2&0 +PAN
  * NS EL2 (aka NS PL2)
  * S EL0 EL1&0 (aka S PL0)
  * S EL1 EL1&0 (not used if EL3 is 32 bit)
  * S EL1 EL1&0 +PAN
  * S EL3 (aka S PL1)
- * NS EL1&0 stage 2
  *
- * for a total of 12 different mmu_idx.
+ * for a total of 11 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile.  They only need to distinguish NS EL0 and NS EL1 (and
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  * are not quite the same -- different CPU types (most notably M profile
  * vs A/R profile) would like to use MMU indexes with different semantics,
  * but since we don't ever need to use all of those in a single CPU we
- * can avoid setting NB_MMU_MODES to more than 8.  The lower bits of
+ * can avoid having to set NB_MMU_MODES to "total number of A profile MMU
+ * modes + total number of M profile MMU modes".  The lower bits of
  * ARMMMUIdx are the core TLB mmu index, and the higher bits are always
  * the same for any particular CPU.
  * Variables of type ARMMUIdx are always full values, and the core
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
     ARMMMUIdx_SE3 = 10 | ARM_MMU_IDX_A,

-    ARMMMUIdx_Stage2 = 11 | ARM_MMU_IDX_A,
-
     /*
      * These are not allocated TLBs and are used only for AT system
      * instructions or for the first stage of an S12 page table walk.
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
+    /*
+     * Not allocated a TLB: used only for second stage of an S12 page
+     * table walk, or for descriptor loads during first stage of an S1
+     * page table walk.  Note that if we ever want to have a TLB for this
+     * then various TLB flush insns which currently are no-ops or flush
+     * only stage 1 MMU indexes will need to change to flush stage 2.
+     */
+    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,

     /*
      * M-profile.
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(SE10_1),
     TO_CORE_BIT(SE10_1_PAN),
     TO_CORE_BIT(SE3),
-    TO_CORE_BIT(Stage2),

     TO_CORE_BIT(MUser),
     TO_CORE_BIT(MPriv),
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx(cs,
                         ARMMMUIdxBit_E10_1 |
                         ARMMMUIdxBit_E10_1_PAN |
-                        ARMMMUIdxBit_E10_0 |
-                        ARMMMUIdxBit_Stage2);
+                        ARMMMUIdxBit_E10_0);
 }

 static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx_all_cpus_synced(cs,
                                         ARMMMUIdxBit_E10_1 |
                                         ARMMMUIdxBit_E10_1_PAN |
-                                        ARMMMUIdxBit_E10_0 |
-                                        ARMMMUIdxBit_Stage2);
+                                        ARMMMUIdxBit_E10_0);
 }

-static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                            uint64_t value)
-{
-    /* Invalidate by IPA. This has to invalidate any structures that
-     * contain only stage 2 translation information, but does not need
-     * to apply to structures that contain combined stage 1 and stage 2
-     * translation information.
-     * This must NOP if EL2 isn't implemented or SCR_EL3.NS is zero.
-     */
-    CPUState *cs = env_cpu(env);
-    uint64_t pageaddr;
-
-    if (!arm_feature(env, ARM_FEATURE_EL2) || !(env->cp15.scr_el3 & SCR_NS)) {
-        return;
-    }
-
-    pageaddr = sextract64(value << 12, 0, 40);
-
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
-}
-
-static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                               uint64_t value)
-{
-    CPUState *cs = env_cpu(env);
-    uint64_t pageaddr;
-
-    if (!arm_feature(env, ARM_FEATURE_EL2) || !(env->cp15.scr_el3 & SCR_NS)) {
-        return;
-    }
-
-    pageaddr = sextract64(value << 12, 0, 40);
-
-    tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                             ARMMMUIdxBit_Stage2);
-}

 static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
                               uint64_t value)
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
183
tlb_flush_by_mmuidx(cs,
184
ARMMMUIdxBit_E10_1 |
185
ARMMMUIdxBit_E10_1_PAN |
186
- ARMMMUIdxBit_E10_0 |
187
- ARMMMUIdxBit_Stage2);
188
+ ARMMMUIdxBit_E10_0);
189
raw_write(env, ri, value);
190
}
79
}
80
if (num == INT_MIN && den == -1) {
81
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sdiv)(int32_t num, int32_t den)
82
return num / den;
191
}
83
}
192
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
84
193
return ARMMMUIdxBit_SE10_1 |
85
-uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
194
ARMMMUIdxBit_SE10_1_PAN |
86
+uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
195
ARMMMUIdxBit_SE10_0;
87
{
196
- } else if (arm_feature(env, ARM_FEATURE_EL2)) {
88
if (den == 0) {
197
- return ARMMMUIdxBit_E10_1 |
89
+ handle_possible_div0_trap(env, GETPC());
198
- ARMMMUIdxBit_E10_1_PAN |
90
return 0;
199
- ARMMMUIdxBit_E10_0 |
91
}
200
- ARMMMUIdxBit_Stage2;
92
return num / den;
93
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx)
94
[EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
95
[EXCP_LSERR] = "v8M LSERR UsageFault",
96
[EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
97
+ [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
98
};
99
100
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
101
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
102
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/m_helper.c
104
+++ b/target/arm/m_helper.c
105
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
108
break;
109
+ case EXCP_DIVBYZERO:
110
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
111
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_DIVBYZERO_MASK;
112
+ break;
113
case EXCP_SWI:
114
/* The PC already points to the next instruction. */
115
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
116
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate.c
119
+++ b/target/arm/translate.c
120
@@ -XXX,XX +XXX,XX @@ static bool op_div(DisasContext *s, arg_rrr *a, bool u)
121
t1 = load_reg(s, a->rn);
122
t2 = load_reg(s, a->rm);
123
if (u) {
124
- gen_helper_udiv(t1, t1, t2);
125
+ gen_helper_udiv(t1, cpu_env, t1, t2);
201
} else {
126
} else {
202
return ARMMMUIdxBit_E10_1 |
127
- gen_helper_sdiv(t1, t1, t2);
203
ARMMMUIdxBit_E10_1_PAN |
128
+ gen_helper_sdiv(t1, cpu_env, t1, t2);
204
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
129
}
205
ARMMMUIdxBit_SE3);
130
tcg_temp_free_i32(t2);
206
}
131
store_reg(s, a->rd, t1);
207
208
-static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
209
- uint64_t value)
210
-{
211
- /* Invalidate by IPA. This has to invalidate any structures that
212
- * contain only stage 2 translation information, but does not need
213
- * to apply to structures that contain combined stage 1 and stage 2
214
- * translation information.
215
- * This must NOP if EL2 isn't implemented or SCR_EL3.NS is zero.
216
- */
217
- ARMCPU *cpu = env_archcpu(env);
218
- CPUState *cs = CPU(cpu);
219
- uint64_t pageaddr;
220
-
221
- if (!arm_feature(env, ARM_FEATURE_EL2) || !(env->cp15.scr_el3 & SCR_NS)) {
222
- return;
223
- }
224
-
225
- pageaddr = sextract64(value << 12, 0, 48);
226
-
227
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
228
-}
229
-
230
-static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
231
- uint64_t value)
232
-{
233
- CPUState *cs = env_cpu(env);
234
- uint64_t pageaddr;
235
-
236
- if (!arm_feature(env, ARM_FEATURE_EL2) || !(env->cp15.scr_el3 & SCR_NS)) {
237
- return;
238
- }
239
-
240
- pageaddr = sextract64(value << 12, 0, 48);
241
-
242
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
243
- ARMMMUIdxBit_Stage2);
244
-}
245
-
246
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
247
bool isread)
248
{
249
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
250
.writefn = tlbi_aa64_vae1_write },
251
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
252
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
253
- .access = PL2_W, .type = ARM_CP_NO_RAW,
254
- .writefn = tlbi_aa64_ipas2e1is_write },
255
+ .access = PL2_W, .type = ARM_CP_NOP },
256
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
257
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
258
- .access = PL2_W, .type = ARM_CP_NO_RAW,
259
- .writefn = tlbi_aa64_ipas2e1is_write },
260
+ .access = PL2_W, .type = ARM_CP_NOP },
261
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
262
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
263
.access = PL2_W, .type = ARM_CP_NO_RAW,
264
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
265
.writefn = tlbi_aa64_alle1is_write },
266
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
267
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
268
- .access = PL2_W, .type = ARM_CP_NO_RAW,
269
- .writefn = tlbi_aa64_ipas2e1_write },
270
+ .access = PL2_W, .type = ARM_CP_NOP },
271
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
272
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
273
- .access = PL2_W, .type = ARM_CP_NO_RAW,
274
- .writefn = tlbi_aa64_ipas2e1_write },
275
+ .access = PL2_W, .type = ARM_CP_NOP },
276
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
277
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
278
.access = PL2_W, .type = ARM_CP_NO_RAW,
279
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
280
.writefn = tlbimva_hyp_is_write },
281
{ .name = "TLBIIPAS2",
282
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
283
- .type = ARM_CP_NO_RAW, .access = PL2_W,
284
- .writefn = tlbiipas2_write },
285
+ .type = ARM_CP_NOP, .access = PL2_W },
286
{ .name = "TLBIIPAS2IS",
287
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
288
- .type = ARM_CP_NO_RAW, .access = PL2_W,
289
- .writefn = tlbiipas2_is_write },
290
+ .type = ARM_CP_NOP, .access = PL2_W },
291
{ .name = "TLBIIPAS2L",
292
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
293
- .type = ARM_CP_NO_RAW, .access = PL2_W,
294
- .writefn = tlbiipas2_write },
295
+ .type = ARM_CP_NOP, .access = PL2_W },
296
{ .name = "TLBIIPAS2LIS",
297
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
298
- .type = ARM_CP_NO_RAW, .access = PL2_W,
299
- .writefn = tlbiipas2_is_write },
300
+ .type = ARM_CP_NOP, .access = PL2_W },
301
/* 32 bit cache operations */
302
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
303
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
304
--
132
--
305
2.20.1
133
2.20.1
306
134
307
135
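To make the new division semantics concrete, here is a minimal, self-contained sketch (not QEMU code; the DivResult and emulate_sdiv names are invented for illustration) of the rules the sdiv helper now implements: a zero divisor yields 0 unless CCR.DIV_0_TRP is set, in which case a DIVBYZERO UsageFault is taken, while INT_MIN / -1 returns INT_MIN and never traps.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration only: models the v7M/v8M SDIV rules the patch implements. */
typedef struct {
    bool trapped;    /* would raise a DIVBYZERO UsageFault in QEMU */
    int32_t value;   /* result when no trap is taken */
} DivResult;

static DivResult emulate_sdiv(int32_t num, int32_t den, bool ccr_div_0_trp)
{
    DivResult r = { .trapped = false, .value = 0 };

    if (den == 0) {
        /* Trap only when CCR.DIV_0_TRP is set; otherwise the result is 0 */
        r.trapped = ccr_div_0_trp;
        return r;
    }
    if (num == INT32_MIN && den == -1) {
        /* Overflow case: result is INT32_MIN, and it never traps */
        r.value = INT32_MIN;
        return r;
    }
    r.value = num / den;
    return r;
}

int main(void)
{
    DivResult quiet = emulate_sdiv(7, 0, false);
    DivResult trap = emulate_sdiv(7, 0, true);
    DivResult ovfl = emulate_sdiv(INT32_MIN, -1, true);

    printf("quiet: %" PRId32 ", trap: %d, overflow: %" PRId32 "\n",
           quiet.value, trap.trapped, ovfl.value);
    return 0;
}
```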
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the APUs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-8-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/xlnx-versal.h | 2 +-
hw/arm/xlnx-versal-virt.c | 4 ++--
hw/arm/xlnx-versal.c | 19 +++++--------------
3 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
struct {
struct {
MemoryRegion mr;
- ARMCPU *cpu[XLNX_VERSAL_NR_ACPUS];
+ ARMCPU cpu[XLNX_VERSAL_NR_ACPUS];
GICv3State gic;
} apu;
} fpd;
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
s->binfo.get_dtb = versal_virt_get_dtb;
s->binfo.modify_dtb = versal_virt_modify_dtb;
if (machine->kernel_filename) {
- arm_load_kernel(s->soc.fpd.apu.cpu[0], machine, &s->binfo);
+ arm_load_kernel(&s->soc.fpd.apu.cpu[0], machine, &s->binfo);
} else {
- AddressSpace *as = arm_boot_address_space(s->soc.fpd.apu.cpu[0],
+ AddressSpace *as = arm_boot_address_space(&s->soc.fpd.apu.cpu[0],
&s->binfo);
/* Some boot-loaders (e.g u-boot) don't like blobs at address 0 (NULL).
* Offset things by 4K. */
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)

for (i = 0; i < ARRAY_SIZE(s->fpd.apu.cpu); i++) {
Object *obj;
- char *name;
-
- obj = object_new(XLNX_VERSAL_ACPU_TYPE);
- if (!obj) {
- error_report("Unable to create apu.cpu[%d] of type %s",
- i, XLNX_VERSAL_ACPU_TYPE);
- exit(EXIT_FAILURE);
- }
-
- name = g_strdup_printf("apu-cpu[%d]", i);
- object_property_add_child(OBJECT(s), name, obj, &error_fatal);
- g_free(name);

+ object_initialize_child(OBJECT(s), "apu-cpu[*]",
+ &s->fpd.apu.cpu[i], sizeof(s->fpd.apu.cpu[i]),
+ XLNX_VERSAL_ACPU_TYPE, &error_abort, NULL);
+ obj = OBJECT(&s->fpd.apu.cpu[i]);
object_property_set_int(obj, s->cfg.psci_conduit,
"psci-conduit", &error_abort);
if (i) {
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)
object_property_set_link(obj, OBJECT(&s->fpd.apu.mr), "memory",
&error_abort);
object_property_set_bool(obj, true, "realized", &error_fatal);
- s->fpd.apu.cpu[i] = ARM_CPU(obj);
}
}

@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_gic(Versal *s, qemu_irq *pic)
}

for (i = 0; i < nr_apu_cpus; i++) {
- DeviceState *cpudev = DEVICE(s->fpd.apu.cpu[i]);
+ DeviceState *cpudev = DEVICE(&s->fpd.apu.cpu[i]);
int ppibase = XLNX_VERSAL_NR_IRQS + i * GIC_INTERNAL + GIC_NR_SGIS;
qemu_irq maint_irq;
int ti;
--
2.20.1

From: Hamza Mahfooz <someguy@effective-light.com>

As per commit 5626f8c6d468 ("rcu: Add automatically released rcu_read_lock
variants"), RCU_READ_LOCK_GUARD() should be used instead of
rcu_read_{un}lock().

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210727235201.11491-1-someguy@effective-light.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/kvm.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
hwaddr xlat, len, doorbell_gpa;
MemoryRegionSection mrs;
MemoryRegion *mr;
- int ret = 1;

if (as == &address_space_memory) {
return 0;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

/* MSI doorbell address is translated by an IOMMU */

- rcu_read_lock();
+ RCU_READ_LOCK_GUARD();
+
mr = address_space_translate(as, address, &xlat, &len, true,
MEMTXATTRS_UNSPECIFIED);
+
if (!mr) {
- goto unlock;
+ return 1;
}
+
mrs = memory_region_find(mr, xlat, 1);
+
if (!mrs.mr) {
- goto unlock;
+ return 1;
}

doorbell_gpa = mrs.offset_within_address_space;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

trace_kvm_arm_fixup_msi_route(address, doorbell_gpa);

- ret = 0;
-
-unlock:
- rcu_read_unlock();
- return ret;
+ return 0;
}

int kvm_arch_add_msi_route_post(struct kvm_irq_routing_entry *route,
--
2.20.1
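The pattern behind this cleanup is worth spelling out: RCU_READ_LOCK_GUARD() turns the read lock into a scope guard, so every early return implicitly drops the lock and the goto/unlock exit path disappears. A rough stand-alone illustration of how such a guard can be built on GCC/Clang's cleanup attribute (QEMU's real macro lives in include/qemu/rcu.h; the my_* names here are stand-ins):

```c
#include <stdio.h>

/* Stand-ins for the real RCU primitives. */
static void my_rcu_read_lock(void)   { puts("rcu_read_lock"); }
static void my_rcu_read_unlock(void) { puts("rcu_read_unlock"); }

static void my_rcu_guard_cleanup(int *unused)
{
    (void)unused;
    my_rcu_read_unlock();
}

/* The unlock runs automatically whenever the enclosing scope is left. */
#define MY_RCU_READ_LOCK_GUARD() \
    __attribute__((cleanup(my_rcu_guard_cleanup))) \
    int my_rcu_guard_ = (my_rcu_read_lock(), 0)

static int lookup(int fail_early)
{
    MY_RCU_READ_LOCK_GUARD();

    if (fail_early) {
        return 1;   /* no goto/unlock label needed: cleanup fires here */
    }
    return 0;       /* ...and here */
}

int main(void)
{
    lookup(1);
    lookup(0);
    return 0;
}
```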
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the UARTs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-5-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/xlnx-versal.h | 3 ++-
hw/arm/xlnx-versal.c | 12 ++++++------
2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
#include "hw/sysbus.h"
#include "hw/arm/boot.h"
#include "hw/intc/arm_gicv3.h"
+#include "hw/char/pl011.h"

#define TYPE_XLNX_VERSAL "xlnx-versal"
#define XLNX_VERSAL(obj) OBJECT_CHECK(Versal, (obj), TYPE_XLNX_VERSAL)
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
MemoryRegion mr_ocm;

struct {
- SysBusDevice *uart[XLNX_VERSAL_NR_UARTS];
+ PL011State uart[XLNX_VERSAL_NR_UARTS];
SysBusDevice *gem[XLNX_VERSAL_NR_GEMS];
SysBusDevice *adma[XLNX_VERSAL_NR_ADMAS];
} iou;
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@
#include "kvm_arm.h"
#include "hw/misc/unimp.h"
#include "hw/arm/xlnx-versal.h"
-#include "hw/char/pl011.h"

#define XLNX_VERSAL_ACPU_TYPE ARM_CPU_TYPE_NAME("cortex-a72")
#define GEM_REVISION 0x40070106
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
DeviceState *dev;
MemoryRegion *mr;

- dev = qdev_create(NULL, TYPE_PL011);
- s->lpd.iou.uart[i] = SYS_BUS_DEVICE(dev);
+ sysbus_init_child_obj(OBJECT(s), name,
+ &s->lpd.iou.uart[i], sizeof(s->lpd.iou.uart[i]),
+ TYPE_PL011);
+ dev = DEVICE(&s->lpd.iou.uart[i]);
qdev_prop_set_chr(dev, "chardev", serial_hd(i));
- object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
qdev_init_nofail(dev);

- mr = sysbus_mmio_get_region(s->lpd.iou.uart[i], 0);
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
memory_region_add_subregion(&s->mr_ps, addrs[i], mr);

- sysbus_connect_irq(s->lpd.iou.uart[i], 0, pic[irqs[i]]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irqs[i]]);
g_free(name);
}
}
--
2.20.1

From: Jan Luebbe <jlu@pengutronix.de>

Break events are currently only handled by chardev/char-serial.c, so we
just ignore errors, which results in no behaviour change for other
chardevs.

Signed-off-by: Jan Luebbe <jlu@pengutronix.de>
Message-id: 20210806144700.3751979-1-jlu@pengutronix.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/pl011.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@
#include "hw/qdev-properties-system.h"
#include "migration/vmstate.h"
#include "chardev/char-fe.h"
+#include "chardev/char-serial.h"
#include "qemu/log.h"
#include "qemu/module.h"
#include "trace.h"
@@ -XXX,XX +XXX,XX @@ static void pl011_write(void *opaque, hwaddr offset,
s->read_count = 0;
s->read_pos = 0;
}
+ if ((s->lcr ^ value) & 0x1) {
+ int break_enable = value & 0x1;
+ qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_BREAK,
+ &break_enable);
+ }
s->lcr = value;
pl011_set_read_trigger(s);
break;
--
2.20.1
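For context, what drives this code path: a guest asserts a break condition by setting bit 0 (BRK) of the PL011 line control register, and the hunk above forwards that edge to the host chardev via CHR_IOCTL_SERIAL_SET_BREAK. A bare-metal guest-side sketch (the 0x09000000 base is QEMU's aarch64 'virt' board PL011, which is an assumption here; the 0x2C offset is UARTLCR_H per the PL011 TRM):

```c
#include <stdint.h>

/* PL011 base on QEMU's aarch64 'virt' board -- a board-specific assumption. */
#define UART0_BASE  0x09000000UL
#define UARTLCR_H   (*(volatile uint32_t *)(UART0_BASE + 0x2C))
#define LCR_H_BRK   (1u << 0)   /* break is sent for as long as this is set */

static void spin(volatile int n) { while (n--) { } }

void uart_send_break(void)
{
    UARTLCR_H |= LCR_H_BRK;    /* start of break: triggers the ioctl above */
    spin(100000);              /* hold the line in break state briefly */
    UARTLCR_H &= ~LCR_H_BRK;   /* end of break */
}
```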
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Move misplaced comment.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-3-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/xlnx-versal.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)

obj = object_new(XLNX_VERSAL_ACPU_TYPE);
if (!obj) {
- /* Secondary CPUs start in PSCI powered-down state */
error_report("Unable to create apu.cpu[%d] of type %s",
i, XLNX_VERSAL_ACPU_TYPE);
exit(EXIT_FAILURE);
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)
object_property_set_int(obj, s->cfg.psci_conduit,
"psci-conduit", &error_abort);
if (i) {
+ /* Secondary CPUs start in PSCI powered-down state */
object_property_set_bool(obj, true,
"start-powered-off", &error_abort);
}
--
2.20.1

From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 and ASRC as unimplemented devices to avoid random
Linux kernel crashes, such as

Unhandled fault: external abort on non-linefetch (0x808) at 0xd1580010
pgd = (ptrval)
[d1580010] *pgd=8231b811, *pte=02034653, *ppte=02034453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c095837c>] (_regmap_update_bits+0xe4/0xec)
[<c095837c>] (_regmap_update_bits) from [<c09599b4>] (regmap_update_bits_base+0x50/0x74)
[<c09599b4>] (regmap_update_bits_base) from [<c0d3e9e4>] (fsl_asrc_runtime_resume+0x1e4/0x21c)
[<c0d3e9e4>] (fsl_asrc_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d3ecc4>] (fsl_asrc_probe+0x2a8/0x708)
[<c0d3ecc4>] (fsl_asrc_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

or

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810160318.87376-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/fsl-imx6ul.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6ul.c
+++ b/hw/arm/fsl-imx6ul.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
*/
create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000);

+ /*
+ * SAI (Audio SSI (Synchronous Serial Interface))
+ */
+ create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
+ create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
+ create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
+
/*
* PWM
*/
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000);
create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000);

+ /*
+ * Audio ASRC (asynchronous sample rate converter)
+ */
+ create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
+
/*
* CAN
*/
--
2.20.1
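create_unimplemented_device() maps a low-priority background MMIO region whose reads return zero and whose writes are discarded, optionally traced with -d unimp; that is what turns the external abort above into a harmless no-op. A sketch of the idiom in isolation (an illustrative fragment, not the actual fsl_imx6ul_realize() code):

```c
#include "qemu/osdep.h"
#include "hw/misc/unimp.h"
#include "hw/arm/fsl-imx6ul.h"   /* for the FSL_IMX6UL_*_ADDR constants */

/* Illustrative fragment only; the real calls live in fsl_imx6ul_realize(). */
static void map_audio_stubs(void)
{
    /* Reads return 0, writes are discarded; trace accesses with -d unimp */
    create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
    create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
    create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
    create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
}
```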
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add support for SD.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-11-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/xlnx-versal-virt.c | 46 +++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)

diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/sysbus-fdt.h"
#include "hw/arm/fdt.h"
#include "cpu.h"
+#include "hw/qdev-properties.h"
#include "hw/arm/xlnx-versal.h"

#define TYPE_XLNX_VERSAL_VIRT_MACHINE MACHINE_TYPE_NAME("xlnx-versal-virt")
@@ -XXX,XX +XXX,XX @@ static void fdt_add_zdma_nodes(VersalVirt *s)
}

+static void fdt_add_sd_nodes(VersalVirt *s)
+{
+ const char clocknames[] = "clk_xin\0clk_ahb";
+ const char compat[] = "arasan,sdhci-8.9a";
+ int i;
+
+ for (i = ARRAY_SIZE(s->soc.pmc.iou.sd) - 1; i >= 0; i--) {
+ uint64_t addr = MM_PMC_SD0 + MM_PMC_SD0_SIZE * i;
+ char *name = g_strdup_printf("/sdhci@%" PRIx64, addr);
+
+ qemu_fdt_add_subnode(s->fdt, name);
+
+ qemu_fdt_setprop_cells(s->fdt, name, "clocks",
+ s->phandle.clk_25Mhz, s->phandle.clk_25Mhz);
+ qemu_fdt_setprop(s->fdt, name, "clock-names",
+ clocknames, sizeof(clocknames));
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
+ GIC_FDT_IRQ_TYPE_SPI, VERSAL_SD0_IRQ_0 + i * 2,
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
+ 2, addr, 2, MM_PMC_SD0_SIZE);
+ qemu_fdt_setprop(s->fdt, name, "compatible", compat, sizeof(compat));
+ g_free(name);
+ }
+}
+
static void fdt_nop_memory_nodes(void *fdt, Error **errp)
{
Error *err = NULL;
@@ -XXX,XX +XXX,XX @@ static void create_virtio_regions(VersalVirt *s)
}
}

+static void sd_plugin_card(SDHCIState *sd, DriveInfo *di)
+{
+ BlockBackend *blk = di ? blk_by_legacy_dinfo(di) : NULL;
+ DeviceState *card;
+
+ card = qdev_create(qdev_get_child_bus(DEVICE(sd), "sd-bus"), TYPE_SD_CARD);
+ object_property_add_child(OBJECT(sd), "card[*]", OBJECT(card),
+ &error_fatal);
+ qdev_prop_set_drive(card, "drive", blk, &error_fatal);
+ object_property_set_bool(OBJECT(card), true, "realized", &error_fatal);
+}
+
static void versal_virt_init(MachineState *machine)
{
VersalVirt *s = XLNX_VERSAL_VIRT_MACHINE(machine);
int psci_conduit = QEMU_PSCI_CONDUIT_DISABLED;
+ int i;

/*
* If the user provides an Operating System to be loaded, we expect them
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
fdt_add_gic_nodes(s);
fdt_add_timer_nodes(s);
fdt_add_zdma_nodes(s);
+ fdt_add_sd_nodes(s);
fdt_add_cpu_nodes(s, psci_conduit);
fdt_add_clk_node(s, "/clk125", 125000000, s->phandle.clk_125Mhz);
fdt_add_clk_node(s, "/clk25", 25000000, s->phandle.clk_25Mhz);
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
memory_region_add_subregion_overlap(get_system_memory(),
0, &s->soc.fpd.apu.mr, 0);

+ /* Plugin SD cards. */
+ for (i = 0; i < ARRAY_SIZE(s->soc.pmc.iou.sd); i++) {
+ sd_plugin_card(&s->soc.pmc.iou.sd[i], drive_get_next(IF_SD));
+ }
+
s->binfo.ram_size = machine->ram_size;
s->binfo.loader_start = 0x0;
s->binfo.get_dtb = versal_virt_get_dtb;
--
2.20.1

From: "Wen, Jianxian" <Jianxian.Wen@verisilicon.com>

Add a 'memory' link property which can be connected to an IOMMU
memory region to support SMMU translation.

Signed-off-by: Jianxian Wen <jianxian.wen@verisilicon.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 4C23C17B8E87E74E906A25A3254A03F4FA1FEC31@SHASXM03.verisilicon.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/exynos4210.c | 3 +++
hw/arm/xilinx_zynq.c | 3 +++
hw/dma/pl330.c | 26 ++++++++++++++++++++++----
3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/exynos4210.c
+++ b/hw/arm/exynos4210.c
@@ -XXX,XX +XXX,XX @@ static DeviceState *pl330_create(uint32_t base, qemu_or_irq *orgate,
int i;

dev = qdev_new("pl330");
+ object_property_set_link(OBJECT(dev), "memory",
+ OBJECT(get_system_memory()),
+ &error_fatal);
qdev_prop_set_uint8(dev, "num_events", nevents);
qdev_prop_set_uint8(dev, "num_chnls", 8);
qdev_prop_set_uint8(dev, "num_periph_req", nreq);
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[39-IRQ_OFFSET]);

dev = qdev_new("pl330");
+ object_property_set_link(OBJECT(dev), "memory",
+ OBJECT(address_space_mem),
+ &error_fatal);
qdev_prop_set_uint8(dev, "num_chnls", 8);
qdev_prop_set_uint8(dev, "num_periph_req", 4);
qdev_prop_set_uint8(dev, "num_events", 16);
diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/pl330.c
+++ b/hw/dma/pl330.c
@@ -XXX,XX +XXX,XX @@ struct PL330State {
uint8_t num_faulting;
uint8_t periph_busy[PL330_PERIPH_NUM];

+ /* Memory region that DMA operation access */
+ MemoryRegion *mem_mr;
+ AddressSpace *mem_as;
};

#define TYPE_PL330 "pl330"
@@ -XXX,XX +XXX,XX @@ static inline const PL330InsnDesc *pl330_fetch_insn(PL330Chan *ch)
uint8_t opcode;
int i;

- dma_memory_read(&address_space_memory, ch->pc, &opcode, 1);
+ dma_memory_read(ch->parent->mem_as, ch->pc, &opcode, 1);
for (i = 0; insn_desc[i].size; i++) {
if ((opcode & insn_desc[i].opmask) == insn_desc[i].opcode) {
return &insn_desc[i];
@@ -XXX,XX +XXX,XX @@ static inline void pl330_exec_insn(PL330Chan *ch, const PL330InsnDesc *insn)
uint8_t buf[PL330_INSN_MAXSIZE];

assert(insn->size <= PL330_INSN_MAXSIZE);
- dma_memory_read(&address_space_memory, ch->pc, buf, insn->size);
+ dma_memory_read(ch->parent->mem_as, ch->pc, buf, insn->size);
insn->exec(ch, buf[0], &buf[1], insn->size - 1);
}

@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
if (q != NULL && q->len <= pl330_fifo_num_free(&s->fifo)) {
int len = q->len - (q->addr & (q->len - 1));

- dma_memory_read(&address_space_memory, q->addr, buf, len);
+ dma_memory_read(s->mem_as, q->addr, buf, len);
trace_pl330_exec_cycle(q->addr, len);
if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
pl330_hexdump(buf, len);
@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
fifo_res = pl330_fifo_get(&s->fifo, buf, len, q->tag);
}
if (fifo_res == PL330_FIFO_OK || q->z) {
- dma_memory_write(&address_space_memory, q->addr, buf, len);
+ dma_memory_write(s->mem_as, q->addr, buf, len);
trace_pl330_exec_cycle(q->addr, len);
if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
pl330_hexdump(buf, len);
@@ -XXX,XX +XXX,XX @@ static void pl330_realize(DeviceState *dev, Error **errp)
"dma", PL330_IOMEM_SIZE);
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);

+ if (!s->mem_mr) {
+ error_setg(errp, "'memory' link is not set");
+ return;
+ } else if (s->mem_mr == get_system_memory()) {
+ /* Avoid creating new AS for system memory. */
+ s->mem_as = &address_space_memory;
+ } else {
+ s->mem_as = g_new0(AddressSpace, 1);
+ address_space_init(s->mem_as, s->mem_mr,
+ memory_region_name(s->mem_mr));
+ }

s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pl330_exec_cycle_timer, s);
s->cfg[0] = (s->mgr_ns_at_rst ? 0x4 : 0) |
@@ -XXX,XX +XXX,XX @@ static Property pl330_properties[] = {
DEFINE_PROP_UINT8("rd_q_dep", PL330State, rd_q_dep, 16),
DEFINE_PROP_UINT16("data_buffer_dep", PL330State, data_buffer_dep, 256),

+ DEFINE_PROP_LINK("memory", PL330State, mem_mr,
+ TYPE_MEMORY_REGION, MemoryRegion *),
+
DEFINE_PROP_END_OF_LIST(),
};
--
2.20.1
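Because the new "memory" property is a QOM link, a board can hand the PL330 any memory region before realize, including an IOMMU's downstream region, which is the SMMU use case the commit message mentions. A hedged sketch of such wiring (the smmu_mr parameter is a placeholder for whatever region a real board exposes; this is not code from the patch):

```c
#include "qemu/osdep.h"
#include "hw/qdev-properties.h"
#include "hw/sysbus.h"
#include "qapi/error.h"

/* Illustrative board fragment: attach a pl330 behind an IOMMU region. */
static DeviceState *create_pl330_behind_iommu(MemoryRegion *smmu_mr,
                                              qemu_irq irq, hwaddr base)
{
    DeviceState *dev = qdev_new("pl330");

    /* Must be set before realize; pl330_realize() errors out otherwise */
    object_property_set_link(OBJECT(dev), "memory", OBJECT(smmu_mr),
                             &error_fatal);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, irq);
    return dev;
}
```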
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

By using the TYPE_* definitions for devices, we can:
- quickly find where devices are used with 'git-grep'
- easily rename a device (one-line change).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200428154650.21991-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/mps2-tz.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
exit(EXIT_FAILURE);
}

- sysbus_init_child_obj(OBJECT(machine), "iotkit", &mms->iotkit,
+ sysbus_init_child_obj(OBJECT(machine), TYPE_IOTKIT, &mms->iotkit,
sizeof(mms->iotkit), mmc->armsse_type);
iotkitdev = DEVICE(&mms->iotkit);
object_property_set_link(OBJECT(&mms->iotkit), OBJECT(system_memory),
--
2.20.1

From: Eduardo Habkost <ehabkost@redhat.com>

The SBSA_GWDT enum value conflicts with the SBSA_GWDT() QOM type
checking helper, preventing us from using an OBJECT_DEFINE* or
DEFINE_INSTANCE_CHECKER macro for the SBSA_GWDT() wrapper.

If I understand the SBSA 6.0 specification correctly, the signal
being connected to IRQ 16 is the WS0 output signal from the
Generic Watchdog. Rename the enum value to SBSA_GWDT_WS0 to be
more explicit and avoid the name conflict.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20210806023119.431680-1-ehabkost@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/sbsa-ref.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ enum {
SBSA_GIC_DIST,
SBSA_GIC_REDIST,
SBSA_SECURE_EC,
- SBSA_GWDT,
+ SBSA_GWDT_WS0,
SBSA_GWDT_REFRESH,
SBSA_GWDT_CONTROL,
SBSA_SMMU,
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
[SBSA_AHCI] = 10,
[SBSA_EHCI] = 11,
[SBSA_SMMU] = 12, /* ... to 15 */
- [SBSA_GWDT] = 16,
+ [SBSA_GWDT_WS0] = 16,
};

static const char * const valid_cpus[] = {
@@ -XXX,XX +XXX,XX @@ static void create_wdt(const SBSAMachineState *sms)
hwaddr cbase = sbsa_ref_memmap[SBSA_GWDT_CONTROL].base;
DeviceState *dev = qdev_new(TYPE_WDT_SBSA);
SysBusDevice *s = SYS_BUS_DEVICE(dev);
- int irq = sbsa_ref_irqmap[SBSA_GWDT];
+ int irq = sbsa_ref_irqmap[SBSA_GWDT_WS0];

sysbus_realize_and_unref(s, &error_fatal);
sysbus_mmio_map(s, 0, rbase);
--
2.20.1
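The conflict being avoided here is a plain C identifier clash: DEFINE_INSTANCE_CHECKER generates an inline function named SBSA_GWDT(), which cannot coexist with an enumerator of the same name in the same translation unit. A deliberately non-compiling sketch of the clash (names abbreviated; not the real headers):

```c
/* What the pre-rename code effectively contained: */
enum {
    SBSA_GWDT = 16,                /* an interrupt-map index */
};

typedef struct SBSAGWDTState SBSAGWDTState;

/* ...while DEFINE_INSTANCE_CHECKER wants to emit, roughly: */
static inline SBSAGWDTState *SBSA_GWDT(void *obj);
/* error: 'SBSA_GWDT' redeclared as different kind of symbol */
```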
From: Fredrik Strupe <fredrik@strupe.net>

According to the Arm ARM, VQDMULL is only valid when U=0, while having
U=1 is unallocated.

Signed-off-by: Fredrik Strupe <fredrik@strupe.net>
Fixes: 695272dcb976 ("target-arm: Handle UNDEF cases for Neon 3-regs-different-widths")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
{0, 0, 0, 0}, /* VMLSL */
{0, 0, 0, 9}, /* VQDMLSL */
{0, 0, 0, 0}, /* Integer VMULL */
- {0, 0, 0, 1}, /* VQDMULL */
+ {0, 0, 0, 9}, /* VQDMULL */
{0, 0, 0, 0xa}, /* Polynomial VMULL */
{0, 0, 0, 7}, /* Reserved: always UNDEF */
};
--
2.20.1

From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 as unimplemented devices to avoid Linux kernel crashes
such as the following.

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-rc5 #1
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810175607.538090-1-linux@roeck-us.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/fsl-imx7.h | 5 +++++
hw/arm/fsl-imx7.c | 7 +++++++
2 files changed, 12 insertions(+)

diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx7.h
+++ b/include/hw/arm/fsl-imx7.h
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
FSL_IMX7_UART6_ADDR = 0x30A80000,
FSL_IMX7_UART7_ADDR = 0x30A90000,

+ FSL_IMX7_SAI1_ADDR = 0x308A0000,
+ FSL_IMX7_SAI2_ADDR = 0x308B0000,
+ FSL_IMX7_SAI3_ADDR = 0x308C0000,
+ FSL_IMX7_SAIn_SIZE = 0x10000,
+
FSL_IMX7_ENET1_ADDR = 0x30BE0000,
FSL_IMX7_ENET2_ADDR = 0x30BF0000,

diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx7.c
+++ b/hw/arm/fsl-imx7.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);

+ /*
+ * SAI (Audio SSI (Synchronous Serial Interface))
+ */
+ create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE);
+ create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE);
+ create_unimplemented_device("sai3", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE);
+
/*
* OCOTP
*/
--
2.20.1
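Functionally the change converts a bus error into read-as-zero/write-ignored semantics. Seen from the guest (an illustrative bare-metal fragment, not code from the patch; 0x308A0000 is FSL_IMX7_SAI1_ADDR from the hunk above):

```c
#include <stdint.h>

#define IMX7_SAI1_BASE 0x308A0000UL   /* FSL_IMX7_SAI1_ADDR in the patch */

/*
 * Before the patch this load faults (the external abort in the kernel
 * backtrace above). With the unimplemented-device stub the write below
 * is silently discarded and the read returns 0.
 */
uint32_t probe_sai1(void)
{
    volatile uint32_t *sai = (volatile uint32_t *)IMX7_SAI1_BASE;

    sai[0] = 0x1;        /* write: ignored by the stub */
    return sai[0];       /* read: returns 0 */
}
```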
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Remove the inclusion of arm_gicv3_common.h; it already gets
included via xlnx-versal.h.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-2-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/xlnx-versal.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/boot.h"
#include "kvm_arm.h"
#include "hw/misc/unimp.h"
-#include "hw/intc/arm_gicv3_common.h"
#include "hw/arm/xlnx-versal.h"
#include "hw/char/pl011.h"
--
2.20.1

From: Sebastian Meyer <meyer@absint.com>

With gdb 9.0 and better it is possible to connect to a gdbstub
over unix sockets, which is better than a TCP socket connection
in some situations. The QEMU command line to set this up is
non-obvious; document it.

Signed-off-by: Sebastian Meyer <meyer@absint.com>
Message-id: 162867284829.27377.4784930719350564918-0@git.sr.ht
[PMM: Tweaked commit message; adjusted wording in a couple of
places; fixed rST formatting issue; moved section up out of
the 'advanced debugging options' subsection]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/gdb.rst | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/docs/system/gdb.rst b/docs/system/gdb.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/gdb.rst
+++ b/docs/system/gdb.rst
@@ -XXX,XX +XXX,XX @@ The ``-s`` option will make QEMU listen for an incoming connection
from gdb on TCP port 1234, and ``-S`` will make QEMU not start the
guest until you tell it to from gdb. (If you want to specify which
TCP port to use or to use something other than TCP for the gdbstub
-connection, use the ``-gdb dev`` option instead of ``-s``.)
+connection, use the ``-gdb dev`` option instead of ``-s``. See
+`Using unix sockets`_ for an example.)

.. parsed-literal::

@@ -XXX,XX +XXX,XX @@ not just those in the cluster you are currently working on::

 (gdb) set schedule-multiple on

+Using unix sockets
+==================
+
+An alternate method for connecting gdb to the QEMU gdbstub is to use
+a unix socket (if supported by your operating system). This is useful when
+running several tests in parallel, or if you do not have a known free TCP
+port (e.g. when running automated tests).
+
+First create a chardev with the appropriate options, then
+instruct the gdbserver to use that device:
+
+.. parsed-literal::
+
+ |qemu_system| -chardev socket,path=/tmp/gdb-socket,server=on,wait=off,id=gdb0 -gdb chardev:gdb0 -S ...
+
+Start gdb as before, but this time connect using the path to
+the socket::
+
+ (gdb) target remote /tmp/gdb-socket
+
+Note that to use a unix socket for the connection you will need
+gdb version 9.0 or newer.
+
Advanced debugging options
==========================
--
2.20.1