target-arm queue, mostly SME preliminaries.

In the unlikely event we don't land the rest of SME before freeze
for 7.1 we can revert the docs/property changes included here.

-- PMM

The following changes since commit 097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc:

Merge tag 'qemu-sparc-20220626' of https://github.com/mcayland/qemu into staging (2022-06-27 05:21:05 +0530)

are available in the Git repository at:

https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220627

for you to fetch changes up to 59e1b8a22ea9f947d038ccac784de1020f266e14:

target/arm: Check V7VE as well as LPAE in arm_pamax (2022-06-27 11:18:17 +0100)

----------------------------------------------------------------
target-arm queue:
* sphinx: change default language to 'en'
* Diagnose attempts to emulate EL3 in hvf as well as kvm
* More SME groundwork patches
* virt: Fix calculation of physical address space size
  for v7VE CPUs (eg cortex-a15)

----------------------------------------------------------------
Alexander Graf (2):
accel: Introduce current_accel_name()
target/arm: Catch invalid kvm state also for hvf

Martin Liška (1):
sphinx: change default language to 'en'

Richard Henderson (22):
target/arm: Implement TPIDR2_EL0
target/arm: Add SMEEXC_EL to TB flags
target/arm: Add syn_smetrap
target/arm: Add ARM_CP_SME
target/arm: Add SVCR
target/arm: Add SMCR_ELx
target/arm: Add SMIDR_EL1, SMPRI_EL1, SMPRIMAP_EL2
target/arm: Add PSTATE.{SM,ZA} to TB flags
target/arm: Add the SME ZA storage to CPUARMState
target/arm: Implement SMSTART, SMSTOP
target/arm: Move error for sve%d property to arm_cpu_sve_finalize
target/arm: Create ARMVQMap
target/arm: Generalize cpu_arm_{get,set}_vq
target/arm: Generalize cpu_arm_{get, set}_default_vec_len
target/arm: Move arm_cpu_*_finalize to internals.h
target/arm: Unexport aarch64_add_*_properties
target/arm: Add cpu properties for SME
target/arm: Introduce sve_vqm1_for_el_sm
target/arm: Add SVL to TB flags
target/arm: Move pred_{full, gvec}_reg_{offset, size} to translate-a64.h
target/arm: Extend arm_pamax to more than aarch64
target/arm: Check V7VE as well as LPAE in arm_pamax

docs/conf.py | 2 +-
docs/system/arm/cpu-features.rst | 56 ++++++++++
include/qemu/accel.h | 1 +
target/arm/cpregs.h | 5 +
target/arm/cpu.h | 103 ++++++++++++++-----
target/arm/helper-sme.h | 21 ++++
target/arm/helper.h | 1 +
target/arm/internals.h | 4 +
target/arm/syndrome.h | 14 +++
target/arm/translate-a64.h | 38 +++++++
target/arm/translate.h | 6 ++
accel/accel-common.c | 8 ++
hw/arm/virt.c | 10 +-
softmmu/vl.c | 3 +-
target/arm/cpu.c | 32 ++++--
target/arm/cpu64.c | 205 ++++++++++++++++++++++++++++---------
target/arm/helper.c | 213 +++++++++++++++++++++++++++++++++++++--
target/arm/kvm64.c | 2 +-
target/arm/machine.c | 34 +++++++
target/arm/ptw.c | 26 +++--
target/arm/sme_helper.c | 61 +++++++++++
target/arm/translate-a64.c | 46 +++++++
target/arm/translate-sve.c | 36 -------
target/arm/meson.build | 1 +
24 files changed, 782 insertions(+), 146 deletions(-)
create mode 100644 target/arm/helper-sme.h
create mode 100644 target/arm/sme_helper.c

Big fat pullreq this time around, because it has all of RTH's
SVE2 emulation patchset in it.

-- PMM

The following changes since commit 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049:

Merge remote-tracking branch 'remotes/stefanha-gitlab/tags/block-pull-request' into staging (2021-05-24 15:48:08 +0100)

are available in the Git repository at:

https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210525

for you to fetch changes up to f8680aaa6e5bfc6022b75157c23db7d2ea98ab11:

target/arm: Enable SVE2 and related extensions (2021-05-25 16:01:44 +0100)

----------------------------------------------------------------
target-arm queue:
* Implement SVE2 emulation
* Implement integer matrix multiply accumulate
* Implement FEAT_TLBIOS
* Implement FEAT_TLBRANGE
* disas/libvixl: Protect C system header for C++ compiler
* Use correct SP in M-profile exception return
* AN524, AN547: Correct modelling of internal SRAMs
* hw/intc/arm_gicv3_cpuif: Fix EOIR write access check logic
* hw/arm/smmuv3: Another range invalidation fix

----------------------------------------------------------------
Eric Auger (1):
hw/arm/smmuv3: Another range invalidation fix

Peter Maydell (8):
hw/intc/arm_gicv3_cpuif: Fix EOIR write access check logic
hw/arm/mps2-tz: Don't duplicate modelling of SRAM in AN524
hw/arm/mps2-tz: Make SRAM_ADDR_WIDTH board-specific
hw/arm/armsse.c: Correct modelling of SSE-300 internal SRAMs
hw/arm/armsse: Convert armsse_realize() to use ERRP_GUARD
hw/arm/mps2-tz: Allow board to specify a boot RAM size
hw/arm: Model TCMs in the SSE-300, not the AN547
target/arm: Use correct SP in M-profile exception return

Philippe Mathieu-Daudé (1):
disas/libvixl: Protect C system header for C++ compiler

Rebecca Cran (3):
target/arm: Add support for FEAT_TLBIRANGE
target/arm: Add support for FEAT_TLBIOS
target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type

Richard Henderson (84):
53
accel/tcg: Replace g_new() + memcpy() by g_memdup()
54
accel/tcg: Pass length argument to tlb_flush_range_locked()
55
accel/tlb: Rename TLBFlushPageBitsByMMUIdxData -> TLBFlushRangeData
56
accel/tcg: Remove {encode,decode}_pbm_to_runon
57
accel/tcg: Add tlb_flush_range_by_mmuidx()
58
accel/tcg: Add tlb_flush_range_by_mmuidx_all_cpus()
59
accel/tlb: Add tlb_flush_range_by_mmuidx_all_cpus_synced()
60
accel/tcg: Rename tlb_flush_[page_bits -> range]_by_mmuidx_async_0
61
accel/tlb: Rename tlb_flush_[page_bits > range]_by_mmuidx_async_[2 > 1]
62
target/arm: Add ID_AA64ZFR0 fields and isar_feature_aa64_sve2
63
target/arm: Implement SVE2 Integer Multiply - Unpredicated
64
target/arm: Implement SVE2 integer pairwise add and accumulate long
65
target/arm: Implement SVE2 integer unary operations (predicated)
66
target/arm: Split out saturating/rounding shifts from neon
67
target/arm: Implement SVE2 saturating/rounding bitwise shift left (predicated)
68
target/arm: Implement SVE2 integer halving add/subtract (predicated)
69
target/arm: Implement SVE2 integer pairwise arithmetic
70
target/arm: Implement SVE2 saturating add/subtract (predicated)
71
target/arm: Implement SVE2 integer add/subtract long
72
target/arm: Implement SVE2 integer add/subtract interleaved long
73
target/arm: Implement SVE2 integer add/subtract wide
74
target/arm: Implement SVE2 integer multiply long
75
target/arm: Implement SVE2 PMULLB, PMULLT
76
target/arm: Implement SVE2 bitwise shift left long
77
target/arm: Implement SVE2 bitwise exclusive-or interleaved
78
target/arm: Implement SVE2 bitwise permute
79
target/arm: Implement SVE2 complex integer add
80
target/arm: Implement SVE2 integer absolute difference and accumulate long
81
target/arm: Implement SVE2 integer add/subtract long with carry
82
target/arm: Implement SVE2 bitwise shift right and accumulate
83
target/arm: Implement SVE2 bitwise shift and insert
84
target/arm: Implement SVE2 integer absolute difference and accumulate
85
target/arm: Implement SVE2 saturating extract narrow
86
target/arm: Implement SVE2 SHRN, RSHRN
87
target/arm: Implement SVE2 SQSHRUN, SQRSHRUN
88
target/arm: Implement SVE2 UQSHRN, UQRSHRN
89
target/arm: Implement SVE2 SQSHRN, SQRSHRN
90
target/arm: Implement SVE2 WHILEGT, WHILEGE, WHILEHI, WHILEHS
91
target/arm: Implement SVE2 WHILERW, WHILEWR
92
target/arm: Implement SVE2 bitwise ternary operations
93
target/arm: Implement SVE2 saturating multiply-add long
94
target/arm: Implement SVE2 saturating multiply-add high
95
target/arm: Implement SVE2 integer multiply-add long
96
target/arm: Implement SVE2 complex integer multiply-add
97
target/arm: Implement SVE2 XAR
98
target/arm: Use correct output type for gvec_sdot_*_b
99
target/arm: Pass separate addend to {U, S}DOT helpers
100
target/arm: Pass separate addend to FCMLA helpers
101
target/arm: Split out formats for 2 vectors + 1 index
102
target/arm: Split out formats for 3 vectors + 1 index
103
target/arm: Implement SVE2 integer multiply (indexed)
104
target/arm: Implement SVE2 integer multiply-add (indexed)
105
target/arm: Implement SVE2 saturating multiply-add high (indexed)
106
target/arm: Implement SVE2 saturating multiply-add (indexed)
107
target/arm: Implement SVE2 saturating multiply (indexed)
108
target/arm: Implement SVE2 signed saturating doubling multiply high
109
target/arm: Implement SVE2 saturating multiply high (indexed)
110
target/arm: Implement SVE2 multiply-add long (indexed)
111
target/arm: Implement SVE2 integer multiply long (indexed)
112
target/arm: Implement SVE2 complex integer multiply-add (indexed)
113
target/arm: Implement SVE2 complex integer dot product
114
target/arm: Macroize helper_gvec_{s,u}dot_{b,h}
115
target/arm: Macroize helper_gvec_{s,u}dot_idx_{b,h}
116
target/arm: Implement SVE mixed sign dot product (indexed)
117
target/arm: Implement SVE mixed sign dot product
118
target/arm: Implement SVE2 crypto unary operations
119
target/arm: Implement SVE2 crypto destructive binary operations
120
target/arm: Implement SVE2 crypto constructive binary operations
121
target/arm: Implement SVE2 FCVTNT
122
target/arm: Share table of sve load functions
123
target/arm: Tidy do_ldrq
124
target/arm: Implement SVE2 LD1RO
125
target/arm: Implement 128-bit ZIP, UZP, TRN
126
target/arm: Move endian adjustment macros to vec_internal.h
127
target/arm: Implement aarch64 SUDOT, USDOT
128
target/arm: Split out do_neon_ddda_fpst
129
target/arm: Remove unused fpst from VDOT_scalar
130
target/arm: Fix decode for VDOT (indexed)
131
target/arm: Split out do_neon_ddda
132
target/arm: Split decode of VSDOT and VUDOT
133
target/arm: Implement aarch32 VSUDOT, VUSDOT
134
target/arm: Implement integer matrix multiply accumulate
135
linux-user/aarch64: Enable hwcap bits for sve2 and related extensions
136
target/arm: Enable SVE2 and related extensions
137
138
Stephen Long (17):
139
target/arm: Implement SVE2 floating-point pairwise
140
target/arm: Implement SVE2 MATCH, NMATCH
141
target/arm: Implement SVE2 ADDHNB, ADDHNT
142
target/arm: Implement SVE2 RADDHNB, RADDHNT
143
target/arm: Implement SVE2 SUBHNB, SUBHNT
144
target/arm: Implement SVE2 RSUBHNB, RSUBHNT
145
target/arm: Implement SVE2 HISTCNT, HISTSEG
146
target/arm: Implement SVE2 scatter store insns
147
target/arm: Implement SVE2 gather load insns
148
target/arm: Implement SVE2 FMMLA
149
target/arm: Implement SVE2 SPLICE, EXT
150
target/arm: Implement SVE2 TBL, TBX
151
target/arm: Implement SVE2 FCVTLT
152
target/arm: Implement SVE2 FCVTXNT, FCVTX
153
target/arm: Implement SVE2 FLOGB
154
target/arm: Implement SVE2 bitwise shift immediate
155
target/arm: Implement SVE2 fp multiply-add long
156
157
disas/libvixl/vixl/code-buffer.h | 2 +-
158
disas/libvixl/vixl/globals.h | 16 +-
159
disas/libvixl/vixl/invalset.h | 2 +-
160
disas/libvixl/vixl/platform.h | 2 +
161
disas/libvixl/vixl/utils.h | 2 +-
162
include/exec/exec-all.h | 44 +
163
include/hw/arm/armsse.h | 2 +
164
target/arm/cpu.h | 76 +
165
target/arm/helper-sve.h | 722 ++++++++-
166
target/arm/helper.h | 110 +-
167
target/arm/translate-a64.h | 3 +
168
target/arm/vec_internal.h | 167 ++
169
target/arm/neon-shared.decode | 24 +-
170
target/arm/sve.decode | 574 ++++++-
171
accel/tcg/cputlb.c | 231 ++-
172
hw/arm/armsse.c | 35 +-
173
hw/arm/mps2-tz.c | 39 +-
174
hw/arm/smmuv3.c | 50 +-
175
hw/intc/arm_gicv3_cpuif.c | 48 +-
176
linux-user/elfload.c | 10 +
177
target/arm/cpu.c | 2 +
178
target/arm/cpu64.c | 14 +
179
target/arm/cpu_tcg.c | 1 +
180
target/arm/helper.c | 327 +++-
181
target/arm/kvm64.c | 21 +-
182
target/arm/m_helper.c | 3 +-
183
target/arm/neon_helper.c | 507 +-----
184
target/arm/sve_helper.c | 2110 +++++++++++++++++++++++--
185
target/arm/translate-a64.c | 111 +-
186
target/arm/translate-neon.c | 231 +--
187
target/arm/translate-sve.c | 3200 +++++++++++++++++++++++++++++++++++---
188
target/arm/vec_helper.c | 887 ++++++++---
189
disas/libvixl/vixl/utils.cc | 2 +-
190
33 files changed, 8275 insertions(+), 1300 deletions(-)
191
Deleted patch
From: Eric Auger <eric.auger@redhat.com>

6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
failed to completely fix the misalignment issues with range
invalidation. For instance, invalidation patterns like "invalidate 32
4kB pages starting from 0xff395000" were still not handled correctly,
because the previous fix only made sure the number of invalidated
pages was a power of 2; it did not handle a start address that was
not aligned with the range. This can be noticed when booting a
Fedora 33 guest with a protected virtio-blk-pci device.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Fixes: 6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
17
hw/arm/smmuv3.c | 50 +++++++++++++++++++++++++------------------------
18
1 file changed, 26 insertions(+), 24 deletions(-)
19
20
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/smmuv3.c
23
+++ b/hw/arm/smmuv3.c
24
@@ -XXX,XX +XXX,XX @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
25
26
static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
27
{
28
- uint8_t scale = 0, num = 0, ttl = 0;
29
- dma_addr_t addr = CMD_ADDR(cmd);
30
+ dma_addr_t end, addr = CMD_ADDR(cmd);
31
uint8_t type = CMD_TYPE(cmd);
32
uint16_t vmid = CMD_VMID(cmd);
33
+ uint8_t scale = CMD_SCALE(cmd);
34
+ uint8_t num = CMD_NUM(cmd);
35
+ uint8_t ttl = CMD_TTL(cmd);
36
bool leaf = CMD_LEAF(cmd);
37
uint8_t tg = CMD_TG(cmd);
38
- uint64_t first_page = 0, last_page;
39
- uint64_t num_pages = 1;
40
+ uint64_t num_pages;
41
+ uint8_t granule;
42
int asid = -1;
43
44
- if (tg) {
45
- scale = CMD_SCALE(cmd);
46
- num = CMD_NUM(cmd);
47
- ttl = CMD_TTL(cmd);
48
- num_pages = (num + 1) * BIT_ULL(scale);
49
- }
50
-
51
if (type == SMMU_CMD_TLBI_NH_VA) {
52
asid = CMD_ASID(cmd);
53
}
54
55
+ if (!tg) {
56
+ trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
57
+ smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
58
+ smmu_iotlb_inv_iova(s, asid, addr, tg, 1, ttl);
59
+ return;
60
+ }
61
+
62
+ /* RIL in use */
63
+
64
+ num_pages = (num + 1) * BIT_ULL(scale);
65
+ granule = tg * 2 + 10;
66
+
67
/* Split invalidations into ^2 range invalidations */
68
- last_page = num_pages - 1;
69
- while (num_pages) {
70
- uint8_t granule = tg * 2 + 10;
71
- uint64_t mask, count;
72
+ end = addr + (num_pages << granule) - 1;
73
74
- mask = dma_aligned_pow2_mask(first_page, last_page, 64 - granule);
75
- count = mask + 1;
76
+ while (addr != end + 1) {
77
+ uint64_t mask = dma_aligned_pow2_mask(addr, end, 64);
78
79
- trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, count, ttl, leaf);
80
- smmuv3_inv_notifiers_iova(s, asid, addr, tg, count);
81
- smmu_iotlb_inv_iova(s, asid, addr, tg, count, ttl);
82
-
83
- num_pages -= count;
84
- first_page += count;
85
- addr += count * BIT_ULL(granule);
86
+ num_pages = (mask + 1) >> granule;
87
+ trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
88
+ smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
89
+ smmu_iotlb_inv_iova(s, asid, addr, tg, num_pages, ttl);
90
+ addr += mask + 1;
91
}
92
}
93
94
--
95
2.20.1
96
97
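A minimal, self-contained sketch of the splitting strategy described in the patch above: an arbitrary [addr, end] range is carved into chunks that are both power-of-two sized and naturally aligned, which is what the RIL invalidation interface expects. Here aligned_pow2_mask() is only a simplified stand-in for QEMU's dma_aligned_pow2_mask(), which additionally clamps the result to a supported address width.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Simplified stand-in for dma_aligned_pow2_mask(): return a mask whose
 * size (mask + 1) is the largest naturally aligned power-of-two block
 * starting at @start that does not run past @end.
 */
static uint64_t aligned_pow2_mask(uint64_t start, uint64_t end)
{
    uint64_t align = start ? (start & -start) : (UINT64_C(1) << 63);
    uint64_t len = end - start + 1;

    while (align > len) {
        align >>= 1;
    }
    return align - 1;
}

int main(void)
{
    /* The example from the commit message: 32 4kB pages from 0xff395000 */
    uint64_t addr = 0xff395000;
    unsigned granule = 12;
    uint64_t num_pages = 32;
    uint64_t end = addr + (num_pages << granule) - 1;

    while (addr != end + 1) {
        uint64_t mask = aligned_pow2_mask(addr, end);

        printf("invalidate 0x%" PRIx64 ", %" PRIu64 " page(s)\n",
               addr, (mask + 1) >> granule);
        addr += mask + 1;
    }
    return 0;
}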
Deleted patch
In icc_eoir_write() we assume that we can identify the group of the
IRQ being completed based purely on which register is being written
to and the current CPU state, and that "CPU state matches group
indicated by register" is the only necessary access check.

This isn't correct: if the CPU is not in Secure state then EOIR1 will
only complete Group 1 NS IRQs, but if the CPU is in EL3 it can
complete both Group 1 S and Group 1 NS IRQs. (The pseudocode
ICC_EOIR1_EL1 makes this clear.) We were also missing the logic to
prevent EOIR0 writes completing G0 IRQs when they should not.

Rearrange the logic to first identify the group of the current
highest priority interrupt and then look at whether we should
complete it or ignore the access based on which register was accessed
and the state of the CPU. The resulting behavioural change is:
 * EL3 can now complete G1NS interrupts
 * G0 interrupt completion is now ignored if the GIC
   and the CPU have the security extension enabled and
   the CPU is not secure

Reported-by: Chan Kim <ckim@etri.re.kr>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510150016.24910-1-peter.maydell@linaro.org
---
27
hw/intc/arm_gicv3_cpuif.c | 48 ++++++++++++++++++++++++++-------------
28
1 file changed, 32 insertions(+), 16 deletions(-)
29
30
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/intc/arm_gicv3_cpuif.c
33
+++ b/hw/intc/arm_gicv3_cpuif.c
34
@@ -XXX,XX +XXX,XX @@ static void icc_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
GICv3CPUState *cs = icc_cs_from_env(env);
36
int irq = value & 0xffffff;
37
int grp;
38
+ bool is_eoir0 = ri->crm == 8;
39
40
- if (icv_access(env, ri->crm == 8 ? HCR_FMO : HCR_IMO)) {
41
+ if (icv_access(env, is_eoir0 ? HCR_FMO : HCR_IMO)) {
42
icv_eoir_write(env, ri, value);
43
return;
44
}
45
46
- trace_gicv3_icc_eoir_write(ri->crm == 8 ? 0 : 1,
47
+ trace_gicv3_icc_eoir_write(is_eoir0 ? 0 : 1,
48
gicv3_redist_affid(cs), value);
49
50
- if (ri->crm == 8) {
51
- /* EOIR0 */
52
- grp = GICV3_G0;
53
- } else {
54
- /* EOIR1 */
55
- if (arm_is_secure(env)) {
56
- grp = GICV3_G1;
57
- } else {
58
- grp = GICV3_G1NS;
59
- }
60
- }
61
-
62
if (irq >= cs->gic->num_irq) {
63
/* This handles two cases:
64
* 1. If software writes the ID of a spurious interrupt [ie 1020-1023]
65
@@ -XXX,XX +XXX,XX @@ static void icc_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
66
return;
67
}
68
69
- if (icc_highest_active_group(cs) != grp) {
70
- return;
71
+ grp = icc_highest_active_group(cs);
72
+ switch (grp) {
73
+ case GICV3_G0:
74
+ if (!is_eoir0) {
75
+ return;
76
+ }
77
+ if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS)
78
+ && arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env)) {
79
+ return;
80
+ }
81
+ break;
82
+ case GICV3_G1:
83
+ if (is_eoir0) {
84
+ return;
85
+ }
86
+ if (!arm_is_secure(env)) {
87
+ return;
88
+ }
89
+ break;
90
+ case GICV3_G1NS:
91
+ if (is_eoir0) {
92
+ return;
93
+ }
94
+ if (!arm_is_el3_or_mon(env) && arm_is_secure(env)) {
95
+ return;
96
+ }
97
+ break;
98
+ default:
99
+ g_assert_not_reached();
100
}
101
102
icc_drop_prio(cs, grp);
103
--
104
2.20.1
105
106
Deleted patch
The SRAM at 0x2000_0000 is part of the SSE-200 itself, and we model
it that way in hw/arm/armsse.c (along with the associated MPCs). We
incorrectly also added an entry to the RAMInfo array for the AN524 in
hw/arm/mps2-tz.c, which was pointless because the CPU would never see
it. Delete it.

The bug had no guest-visible effect because devices in the SSE-200
take priority over those in the board model (armsse.c maps
s->board_memory at priority -2).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-2-peter.maydell@linaro.org
---
15
hw/arm/mps2-tz.c | 8 +-------
16
1 file changed, 1 insertion(+), 7 deletions(-)
17
18
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/mps2-tz.c
21
+++ b/hw/arm/mps2-tz.c
22
@@ -XXX,XX +XXX,XX @@ static const RAMInfo an524_raminfo[] = { {
23
.size = 512 * KiB,
24
.mpc = 0,
25
.mrindex = 0,
26
- }, {
27
- .name = "sram",
28
- .base = 0x20000000,
29
- .size = 32 * 4 * KiB,
30
- .mpc = -1,
31
- .mrindex = 1,
32
}, {
33
/* We don't model QSPI flash yet; for now expose it as simple ROM */
34
.name = "QSPI",
35
.base = 0x28000000,
36
.size = 8 * MiB,
37
.mpc = 1,
38
- .mrindex = 2,
39
+ .mrindex = 1,
40
.flags = IS_ROM,
41
}, {
42
.name = "DDR",
43
--
44
2.20.1
45
46
Deleted patch
The AN547 sets the SRAM_ADDR_WIDTH for the SSE-300 to 21;
since this is not the default value for the SSE-300, model this
in mps2-tz.c as a per-board value.

Reported-by: Devaraj Ranganna <devaraj.ranganna@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-3-peter.maydell@linaro.org
---
10
hw/arm/mps2-tz.c | 6 ++++++
11
1 file changed, 6 insertions(+)
12
13
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/mps2-tz.c
16
+++ b/hw/arm/mps2-tz.c
17
@@ -XXX,XX +XXX,XX @@ struct MPS2TZMachineClass {
18
int numirq; /* Number of external interrupts */
19
int uart_overflow_irq; /* number of the combined UART overflow IRQ */
20
uint32_t init_svtor; /* init-svtor setting for SSE */
21
+ uint32_t sram_addr_width; /* SRAM_ADDR_WIDTH setting for SSE */
22
const RAMInfo *raminfo;
23
const char *armsse_type;
24
};
25
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
26
OBJECT(system_memory), &error_abort);
27
qdev_prop_set_uint32(iotkitdev, "EXP_NUMIRQ", mmc->numirq);
28
qdev_prop_set_uint32(iotkitdev, "init-svtor", mmc->init_svtor);
29
+ qdev_prop_set_uint32(iotkitdev, "SRAM_ADDR_WIDTH", mmc->sram_addr_width);
30
qdev_connect_clock_in(iotkitdev, "MAINCLK", mms->sysclk);
31
qdev_connect_clock_in(iotkitdev, "S32KCLK", mms->s32kclk);
32
sysbus_realize(SYS_BUS_DEVICE(&mms->iotkit), &error_fatal);
33
@@ -XXX,XX +XXX,XX @@ static void mps2tz_an505_class_init(ObjectClass *oc, void *data)
34
mmc->numirq = 92;
35
mmc->uart_overflow_irq = 47;
36
mmc->init_svtor = 0x10000000;
37
+ mmc->sram_addr_width = 15;
38
mmc->raminfo = an505_raminfo;
39
mmc->armsse_type = TYPE_IOTKIT;
40
mps2tz_set_default_ram_info(mmc);
41
@@ -XXX,XX +XXX,XX @@ static void mps2tz_an521_class_init(ObjectClass *oc, void *data)
42
mmc->numirq = 92;
43
mmc->uart_overflow_irq = 47;
44
mmc->init_svtor = 0x10000000;
45
+ mmc->sram_addr_width = 15;
46
mmc->raminfo = an505_raminfo; /* AN521 is the same as AN505 here */
47
mmc->armsse_type = TYPE_SSE200;
48
mps2tz_set_default_ram_info(mmc);
49
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an524_class_init(ObjectClass *oc, void *data)
50
mmc->numirq = 95;
51
mmc->uart_overflow_irq = 47;
52
mmc->init_svtor = 0x10000000;
53
+ mmc->sram_addr_width = 15;
54
mmc->raminfo = an524_raminfo;
55
mmc->armsse_type = TYPE_SSE200;
56
mps2tz_set_default_ram_info(mmc);
57
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an547_class_init(ObjectClass *oc, void *data)
58
mmc->numirq = 96;
59
mmc->uart_overflow_irq = 48;
60
mmc->init_svtor = 0x00000000;
61
+ mmc->sram_addr_width = 21;
62
mmc->raminfo = an547_raminfo;
63
mmc->armsse_type = TYPE_SSE300;
64
mps2tz_set_default_ram_info(mmc);
65
--
66
2.20.1
67
68
Deleted patch
The SSE-300 was not correctly modelling its internal SRAMs:
 * the SRAM address width default is 18
 * the SRAM is mapped at 0x2100_0000, not 0x2000_0000 like
   the SSE-200 and IoTKit

The default address width is no longer guest-visible since
our only SSE-300 board sets it explicitly to a non-default
value, but following the hardware's default will help for
any future boards we need to model.

Reported-by: Devaraj Ranganna <devaraj.ranganna@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-4-peter.maydell@linaro.org
---
16
hw/arm/armsse.c | 8 ++++++--
17
1 file changed, 6 insertions(+), 2 deletions(-)
18
19
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/armsse.c
22
+++ b/hw/arm/armsse.c
23
@@ -XXX,XX +XXX,XX @@ struct ARMSSEInfo {
24
const char *cpu_type;
25
uint32_t sse_version;
26
int sram_banks;
27
+ uint32_t sram_bank_base;
28
int num_cpus;
29
uint32_t sys_version;
30
uint32_t iidr;
31
@@ -XXX,XX +XXX,XX @@ static Property sse300_properties[] = {
32
DEFINE_PROP_LINK("memory", ARMSSE, board_memory, TYPE_MEMORY_REGION,
33
MemoryRegion *),
34
DEFINE_PROP_UINT32("EXP_NUMIRQ", ARMSSE, exp_numirq, 64),
35
- DEFINE_PROP_UINT32("SRAM_ADDR_WIDTH", ARMSSE, sram_addr_width, 15),
36
+ DEFINE_PROP_UINT32("SRAM_ADDR_WIDTH", ARMSSE, sram_addr_width, 18),
37
DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
38
DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
39
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
40
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
41
.sse_version = ARMSSE_IOTKIT,
42
.cpu_type = ARM_CPU_TYPE_NAME("cortex-m33"),
43
.sram_banks = 1,
44
+ .sram_bank_base = 0x20000000,
45
.num_cpus = 1,
46
.sys_version = 0x41743,
47
.iidr = 0,
48
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
49
.sse_version = ARMSSE_SSE200,
50
.cpu_type = ARM_CPU_TYPE_NAME("cortex-m33"),
51
.sram_banks = 4,
52
+ .sram_bank_base = 0x20000000,
53
.num_cpus = 2,
54
.sys_version = 0x22041743,
55
.iidr = 0,
56
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
57
.sse_version = ARMSSE_SSE300,
58
.cpu_type = ARM_CPU_TYPE_NAME("cortex-m55"),
59
.sram_banks = 2,
60
+ .sram_bank_base = 0x21000000,
61
.num_cpus = 1,
62
.sys_version = 0x7e00043b,
63
.iidr = 0x74a0043b,
64
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
65
/* Map the upstream end of the MPC into the right place... */
66
sbd_mpc = SYS_BUS_DEVICE(&s->mpc[i]);
67
memory_region_add_subregion(&s->container,
68
- 0x20000000 + i * sram_bank_size,
69
+ info->sram_bank_base + i * sram_bank_size,
70
sysbus_mmio_get_region(sbd_mpc, 1));
71
/* ...and its register interface */
72
memory_region_add_subregion(&s->container, 0x50083000 + i * 0x1000,
73
--
74
2.20.1
75
76
Deleted patch
Convert armsse_realize() to use ERRP_GUARD(), following
the rules in include/qapi/error.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-5-peter.maydell@linaro.org
---
8
hw/arm/armsse.c | 8 ++++----
9
1 file changed, 4 insertions(+), 4 deletions(-)
10
11
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/armsse.c
14
+++ b/hw/arm/armsse.c
15
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
16
const ARMSSEDeviceInfo *devinfo;
17
int i;
18
MemoryRegion *mr;
19
- Error *err = NULL;
20
SysBusDevice *sbd_apb_ppc0;
21
SysBusDevice *sbd_secctl;
22
DeviceState *dev_apb_ppc0;
23
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
24
DeviceState *dev_splitter;
25
uint32_t addr_width_max;
26
27
+ ERRP_GUARD();
28
+
29
if (!s->board_memory) {
30
error_setg(errp, "memory property was not set");
31
return;
32
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
33
uint32_t sram_bank_size = 1 << s->sram_addr_width;
34
35
memory_region_init_ram(&s->sram[i], NULL, ramname,
36
- sram_bank_size, &err);
37
+ sram_bank_size, errp);
38
g_free(ramname);
39
- if (err) {
40
- error_propagate(errp, err);
41
+ if (*errp) {
42
return;
43
}
44
object_property_set_link(OBJECT(&s->mpc[i]), "downstream",
45
--
46
2.20.1
47
48
Deleted patch
Currently we model the ITCM in the AN547's RAMInfo list. This is incorrect
because this RAM is really a part of the SSE-300. We can't just delete
it from the RAMInfo list, though, because this would make boot_ram_size()
assert because it wouldn't be able to find an entry in the list covering
guest address 0.

Allow a board to specify a boot RAM size manually if it doesn't have
any RAM itself at address 0 and is relying on the SSE for that, and
set the correct value for the AN547. The other boards can continue
to use the "look it up from the RAMInfo list" logic.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-6-peter.maydell@linaro.org
---
16
hw/arm/mps2-tz.c | 13 +++++++++++++
17
1 file changed, 13 insertions(+)
18
19
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/mps2-tz.c
22
+++ b/hw/arm/mps2-tz.c
23
@@ -XXX,XX +XXX,XX @@ struct MPS2TZMachineClass {
24
uint32_t sram_addr_width; /* SRAM_ADDR_WIDTH setting for SSE */
25
const RAMInfo *raminfo;
26
const char *armsse_type;
27
+ uint32_t boot_ram_size; /* size of ram at address 0; 0 == find in raminfo */
28
};
29
30
struct MPS2TZMachineState {
31
@@ -XXX,XX +XXX,XX @@ static uint32_t boot_ram_size(MPS2TZMachineState *mms)
32
const RAMInfo *p;
33
MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
34
35
+ /*
36
+ * Use a per-board specification (for when the boot RAM is in
37
+ * the SSE and so doesn't have a RAMInfo list entry)
38
+ */
39
+ if (mmc->boot_ram_size) {
40
+ return mmc->boot_ram_size;
41
+ }
42
+
43
for (p = mmc->raminfo; p->name; p++) {
44
if (p->base == boot_mem_base(mms)) {
45
return p->size;
46
@@ -XXX,XX +XXX,XX @@ static void mps2tz_an505_class_init(ObjectClass *oc, void *data)
47
mmc->sram_addr_width = 15;
48
mmc->raminfo = an505_raminfo;
49
mmc->armsse_type = TYPE_IOTKIT;
50
+ mmc->boot_ram_size = 0;
51
mps2tz_set_default_ram_info(mmc);
52
}
53
54
@@ -XXX,XX +XXX,XX @@ static void mps2tz_an521_class_init(ObjectClass *oc, void *data)
55
mmc->sram_addr_width = 15;
56
mmc->raminfo = an505_raminfo; /* AN521 is the same as AN505 here */
57
mmc->armsse_type = TYPE_SSE200;
58
+ mmc->boot_ram_size = 0;
59
mps2tz_set_default_ram_info(mmc);
60
}
61
62
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an524_class_init(ObjectClass *oc, void *data)
63
mmc->sram_addr_width = 15;
64
mmc->raminfo = an524_raminfo;
65
mmc->armsse_type = TYPE_SSE200;
66
+ mmc->boot_ram_size = 0;
67
mps2tz_set_default_ram_info(mmc);
68
69
object_class_property_add_str(oc, "remap", mps2_get_remap, mps2_set_remap);
70
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an547_class_init(ObjectClass *oc, void *data)
71
mmc->sram_addr_width = 21;
72
mmc->raminfo = an547_raminfo;
73
mmc->armsse_type = TYPE_SSE300;
74
+ mmc->boot_ram_size = 512 * KiB;
75
mps2tz_set_default_ram_info(mmc);
76
}
77
78
--
79
2.20.1
80
81
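A small sketch of the lookup order this patch introduces, using simplified stand-in types rather than the real RAMInfo/MPS2TZ structures: a non-zero per-board boot RAM size wins, otherwise the list is scanned for the region that starts at the boot address.

#include <stdint.h>

/* Simplified stand-in for the board's RAMInfo entries */
typedef struct {
    const char *name;   /* NULL name terminates the list */
    uint32_t base;
    uint32_t size;
} RamRegion;

static uint32_t boot_ram_size_sketch(uint32_t board_boot_ram_size,
                                     const RamRegion *raminfo,
                                     uint32_t boot_base)
{
    /* Per-board override: the boot RAM lives inside the SSE, not the board */
    if (board_boot_ram_size) {
        return board_boot_ram_size;
    }
    /* Otherwise find the board RAM region that starts at the boot address */
    for (const RamRegion *p = raminfo; p->name; p++) {
        if (p->base == boot_base) {
            return p->size;
        }
    }
    return 0;   /* the real function asserts instead of returning 0 */
}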
Deleted patch
The SSE-300 has an ITCM at 0x0000_0000 and a DTCM at 0x2000_0000.
Currently we model these in the AN547 board, but this is conceptually
wrong, because they are a part of the SSE-300 itself. Move the
modelling of the TCMs out of mps2-tz.c into armsse.c.

This has no guest-visible effects.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-7-peter.maydell@linaro.org
---
12
include/hw/arm/armsse.h | 2 ++
13
hw/arm/armsse.c | 19 +++++++++++++++++++
14
hw/arm/mps2-tz.c | 12 ------------
15
3 files changed, 21 insertions(+), 12 deletions(-)
16
17
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/arm/armsse.h
20
+++ b/include/hw/arm/armsse.h
21
@@ -XXX,XX +XXX,XX @@ struct ARMSSE {
22
MemoryRegion alias2;
23
MemoryRegion alias3[SSE_MAX_CPUS];
24
MemoryRegion sram[MAX_SRAM_BANKS];
25
+ MemoryRegion itcm;
26
+ MemoryRegion dtcm;
27
28
qemu_irq *exp_irqs[SSE_MAX_CPUS];
29
qemu_irq ppc0_irq;
30
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/arm/armsse.c
33
+++ b/hw/arm/armsse.c
34
@@ -XXX,XX +XXX,XX @@
35
#include "qemu/log.h"
36
#include "qemu/module.h"
37
#include "qemu/bitops.h"
38
+#include "qemu/units.h"
39
#include "qapi/error.h"
40
#include "trace.h"
41
#include "hw/sysbus.h"
42
@@ -XXX,XX +XXX,XX @@ struct ARMSSEInfo {
43
bool has_cpuid;
44
bool has_cpu_pwrctrl;
45
bool has_sse_counter;
46
+ bool has_tcms;
47
Property *props;
48
const ARMSSEDeviceInfo *devinfo;
49
const bool *irq_is_common;
50
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
51
.has_cpuid = false,
52
.has_cpu_pwrctrl = false,
53
.has_sse_counter = false,
54
+ .has_tcms = false,
55
.props = iotkit_properties,
56
.devinfo = iotkit_devices,
57
.irq_is_common = sse200_irq_is_common,
58
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
59
.has_cpuid = true,
60
.has_cpu_pwrctrl = false,
61
.has_sse_counter = false,
62
+ .has_tcms = false,
63
.props = sse200_properties,
64
.devinfo = sse200_devices,
65
.irq_is_common = sse200_irq_is_common,
66
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
67
.has_cpuid = true,
68
.has_cpu_pwrctrl = true,
69
.has_sse_counter = true,
70
+ .has_tcms = true,
71
.props = sse300_properties,
72
.devinfo = sse300_devices,
73
.irq_is_common = sse300_irq_is_common,
74
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
75
sysbus_mmio_get_region(sbd, 1));
76
}
77
78
+ if (info->has_tcms) {
79
+ /* The SSE-300 has an ITCM at 0x0000_0000 and a DTCM at 0x2000_0000 */
80
+ memory_region_init_ram(&s->itcm, NULL, "sse300-itcm", 512 * KiB, errp);
81
+ if (*errp) {
82
+ return;
83
+ }
84
+ memory_region_init_ram(&s->dtcm, NULL, "sse300-dtcm", 512 * KiB, errp);
85
+ if (*errp) {
86
+ return;
87
+ }
88
+ memory_region_add_subregion(&s->container, 0x00000000, &s->itcm);
89
+ memory_region_add_subregion(&s->container, 0x20000000, &s->dtcm);
90
+ }
91
+
92
/* Devices behind APB PPC0:
93
* 0x40000000: timer0
94
* 0x40001000: timer1
95
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/hw/arm/mps2-tz.c
98
+++ b/hw/arm/mps2-tz.c
99
@@ -XXX,XX +XXX,XX @@ static const RAMInfo an524_raminfo[] = { {
100
};
101
102
static const RAMInfo an547_raminfo[] = { {
103
- .name = "itcm",
104
- .base = 0x00000000,
105
- .size = 512 * KiB,
106
- .mpc = -1,
107
- .mrindex = 0,
108
- }, {
109
.name = "sram",
110
.base = 0x01000000,
111
.size = 2 * MiB,
112
.mpc = 0,
113
.mrindex = 1,
114
- }, {
115
- .name = "dtcm",
116
- .base = 0x20000000,
117
- .size = 4 * 128 * KiB,
118
- .mpc = -1,
119
- .mrindex = 2,
120
}, {
121
.name = "sram 2",
122
.base = 0x21000000,
123
--
124
2.20.1
125
126
Deleted patch
When an M-profile CPU is restoring registers from the stack on
exception return, the stack pointer to use is determined based on
bits in the magic exception return type value. We were not getting
this logic entirely correct.

Whether we use one of the Secure stack pointers or one of the
Non-Secure stack pointers depends on the EXCRET.S bit. However,
whether we use the MSP or the PSP then depends on the SPSEL bit in
either the CONTROL_S or CONTROL_NS register. We were incorrectly
selecting MSP vs PSP based on the EXCRET.SPSEL bit.

(In the pseudocode this is in the PopStack() function, which calls
LookUpSp_with_security_mode() which in turn looks at the relevant
CONTROL.SPSEL bit.)

The buggy behaviour wasn't noticeable in most cases, because we write
EXCRET.SPSEL to the CONTROL.SPSEL bit for the S/NS register selected
by EXCRET.ES, so we only do the wrong thing when EXCRET.S and
EXCRET.ES are different. This will happen when secure code takes a
secure exception, which then tail-chains to a non-secure exception
which finally returns to the original secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520130905.2049-1-peter.maydell@linaro.org
---
27
target/arm/m_helper.c | 3 ++-
28
1 file changed, 2 insertions(+), 1 deletion(-)
29
30
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/m_helper.c
33
+++ b/target/arm/m_helper.c
34
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
35
* We use this limited C variable scope so we don't accidentally
36
* use 'frame_sp_p' after we do something that makes it invalid.
37
*/
38
+ bool spsel = env->v7m.control[return_to_secure] & R_V7M_CONTROL_SPSEL_MASK;
39
uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
40
return_to_secure,
41
!return_to_handler,
42
- return_to_sp_process);
43
+ spsel);
44
uint32_t frameptr = *frame_sp_p;
45
bool pop_ok = true;
46
ARMMMUIdx mmu_idx;
47
--
48
2.20.1
49
50
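A toy model of the selection rule described above, with assumed simplified types (this is not QEMU's get_v7m_sp_ptr()): the security bank comes from EXCRET.S, handler mode always uses MSP, and in thread mode MSP vs PSP follows the SPSEL bit of CONTROL for that security state rather than EXCRET.SPSEL.

#include <stdbool.h>
#include <stdint.h>

/* Assumed, simplified view of the four banked stack pointers */
struct v7m_sp_banks {
    uint32_t msp_s, psp_s, msp_ns, psp_ns;
    bool control_spsel_s, control_spsel_ns;
};

static uint32_t *return_stack_sp(struct v7m_sp_banks *cpu,
                                 bool excret_s, bool threadmode)
{
    /* Handler mode always unstacks from the main stack pointer */
    bool spsel = threadmode &&
                 (excret_s ? cpu->control_spsel_s : cpu->control_spsel_ns);

    if (excret_s) {
        return spsel ? &cpu->psp_s : &cpu->msp_s;
    }
    return spsel ? &cpu->psp_ns : &cpu->msp_ns;
}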
From: Richard Henderson <richard.henderson@linaro.org>

We're about to add more variations on this theme.
Accept the inner loop for the _h variants, rather
than keep it unrolled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-66-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

From: Martin Liška <mliska@suse.cz>

Fixes the following Sphinx warning (treated as error) starting
with 5.0 release:

Warning, treated as error:
Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).

Signed-off-by: Martin Liska <mliska@suse.cz>
Message-id: e91e51ee-48ac-437e-6467-98b56ee40042@suse.cz
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
target/arm/vec_helper.c | 160 ++++++++--------------------------------
14
docs/conf.py | 2 +-
13
1 file changed, 29 insertions(+), 131 deletions(-)
15
1 file changed, 1 insertion(+), 1 deletion(-)
14
16
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
17
diff --git a/docs/conf.py b/docs/conf.py
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_helper.c
19
--- a/docs/conf.py
18
+++ b/target/arm/vec_helper.c
20
+++ b/docs/conf.py
19
@@ -XXX,XX +XXX,XX @@ DO_DOT(gvec_udot_b, uint32_t, uint8_t, uint8_t)
21
@@ -XXX,XX +XXX,XX @@
20
DO_DOT(gvec_sdot_h, int64_t, int16_t, int16_t)
22
#
21
DO_DOT(gvec_udot_h, uint64_t, uint16_t, uint16_t)
23
# This is also used if you do content translation via gettext catalogs.
22
24
# Usually you set "language" from the command line for these cases.
23
-void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm,
25
-language = None
24
- void *va, uint32_t desc)
26
+language = 'en'
25
-{
27
26
- intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
28
# List of patterns, relative to source directory, that match files and
27
- intptr_t index = simd_data(desc);
29
# directories to ignore when looking for source files.
28
- int32_t *d = vd, *a = va;
29
- int8_t *n = vn;
30
- int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
31
-
32
- /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
33
- * Otherwise opr_sz is a multiple of 16.
34
- */
35
- segend = MIN(4, opr_sz_4);
36
- i = 0;
37
- do {
38
- int8_t m0 = m_indexed[i * 4 + 0];
39
- int8_t m1 = m_indexed[i * 4 + 1];
40
- int8_t m2 = m_indexed[i * 4 + 2];
41
- int8_t m3 = m_indexed[i * 4 + 3];
42
-
43
- do {
44
- d[i] = (a[i] +
45
- n[i * 4 + 0] * m0 +
46
- n[i * 4 + 1] * m1 +
47
- n[i * 4 + 2] * m2 +
48
- n[i * 4 + 3] * m3);
49
- } while (++i < segend);
50
- segend = i + 4;
51
- } while (i < opr_sz_4);
52
-
53
- clear_tail(d, opr_sz, simd_maxsz(desc));
54
+#define DO_DOT_IDX(NAME, TYPED, TYPEN, TYPEM, HD) \
55
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
56
+{ \
57
+ intptr_t i = 0, opr_sz = simd_oprsz(desc); \
58
+ intptr_t opr_sz_n = opr_sz / sizeof(TYPED); \
59
+ intptr_t segend = MIN(16 / sizeof(TYPED), opr_sz_n); \
60
+ intptr_t index = simd_data(desc); \
61
+ TYPED *d = vd, *a = va; \
62
+ TYPEN *n = vn; \
63
+ TYPEM *m_indexed = (TYPEM *)vm + HD(index) * 4; \
64
+ do { \
65
+ TYPED m0 = m_indexed[i * 4 + 0]; \
66
+ TYPED m1 = m_indexed[i * 4 + 1]; \
67
+ TYPED m2 = m_indexed[i * 4 + 2]; \
68
+ TYPED m3 = m_indexed[i * 4 + 3]; \
69
+ do { \
70
+ d[i] = (a[i] + \
71
+ n[i * 4 + 0] * m0 + \
72
+ n[i * 4 + 1] * m1 + \
73
+ n[i * 4 + 2] * m2 + \
74
+ n[i * 4 + 3] * m3); \
75
+ } while (++i < segend); \
76
+ segend = i + 4; \
77
+ } while (i < opr_sz_n); \
78
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
79
}
80
81
-void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm,
82
- void *va, uint32_t desc)
83
-{
84
- intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
85
- intptr_t index = simd_data(desc);
86
- uint32_t *d = vd, *a = va;
87
- uint8_t *n = vn;
88
- uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
89
-
90
- /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
91
- * Otherwise opr_sz is a multiple of 16.
92
- */
93
- segend = MIN(4, opr_sz_4);
94
- i = 0;
95
- do {
96
- uint8_t m0 = m_indexed[i * 4 + 0];
97
- uint8_t m1 = m_indexed[i * 4 + 1];
98
- uint8_t m2 = m_indexed[i * 4 + 2];
99
- uint8_t m3 = m_indexed[i * 4 + 3];
100
-
101
- do {
102
- d[i] = (a[i] +
103
- n[i * 4 + 0] * m0 +
104
- n[i * 4 + 1] * m1 +
105
- n[i * 4 + 2] * m2 +
106
- n[i * 4 + 3] * m3);
107
- } while (++i < segend);
108
- segend = i + 4;
109
- } while (i < opr_sz_4);
110
-
111
- clear_tail(d, opr_sz, simd_maxsz(desc));
112
-}
113
-
114
-void HELPER(gvec_sdot_idx_h)(void *vd, void *vn, void *vm,
115
- void *va, uint32_t desc)
116
-{
117
- intptr_t i, opr_sz = simd_oprsz(desc), opr_sz_8 = opr_sz / 8;
118
- intptr_t index = simd_data(desc);
119
- int64_t *d = vd, *a = va;
120
- int16_t *n = vn;
121
- int16_t *m_indexed = (int16_t *)vm + index * 4;
122
-
123
- /* This is supported by SVE only, so opr_sz is always a multiple of 16.
124
- * Process the entire segment all at once, writing back the results
125
- * only after we've consumed all of the inputs.
126
- */
127
- for (i = 0; i < opr_sz_8; i += 2) {
128
- int64_t d0, d1;
129
-
130
- d0 = a[i + 0];
131
- d0 += n[i * 4 + 0] * (int64_t)m_indexed[i * 4 + 0];
132
- d0 += n[i * 4 + 1] * (int64_t)m_indexed[i * 4 + 1];
133
- d0 += n[i * 4 + 2] * (int64_t)m_indexed[i * 4 + 2];
134
- d0 += n[i * 4 + 3] * (int64_t)m_indexed[i * 4 + 3];
135
-
136
- d1 = a[i + 1];
137
- d1 += n[i * 4 + 4] * (int64_t)m_indexed[i * 4 + 0];
138
- d1 += n[i * 4 + 5] * (int64_t)m_indexed[i * 4 + 1];
139
- d1 += n[i * 4 + 6] * (int64_t)m_indexed[i * 4 + 2];
140
- d1 += n[i * 4 + 7] * (int64_t)m_indexed[i * 4 + 3];
141
-
142
- d[i + 0] = d0;
143
- d[i + 1] = d1;
144
- }
145
- clear_tail(d, opr_sz, simd_maxsz(desc));
146
-}
147
-
148
-void HELPER(gvec_udot_idx_h)(void *vd, void *vn, void *vm,
149
- void *va, uint32_t desc)
150
-{
151
- intptr_t i, opr_sz = simd_oprsz(desc), opr_sz_8 = opr_sz / 8;
152
- intptr_t index = simd_data(desc);
153
- uint64_t *d = vd, *a = va;
154
- uint16_t *n = vn;
155
- uint16_t *m_indexed = (uint16_t *)vm + index * 4;
156
-
157
- /* This is supported by SVE only, so opr_sz is always a multiple of 16.
158
- * Process the entire segment all at once, writing back the results
159
- * only after we've consumed all of the inputs.
160
- */
161
- for (i = 0; i < opr_sz_8; i += 2) {
162
- uint64_t d0, d1;
163
-
164
- d0 = a[i + 0];
165
- d0 += n[i * 4 + 0] * (uint64_t)m_indexed[i * 4 + 0];
166
- d0 += n[i * 4 + 1] * (uint64_t)m_indexed[i * 4 + 1];
167
- d0 += n[i * 4 + 2] * (uint64_t)m_indexed[i * 4 + 2];
168
- d0 += n[i * 4 + 3] * (uint64_t)m_indexed[i * 4 + 3];
169
-
170
- d1 = a[i + 1];
171
- d1 += n[i * 4 + 4] * (uint64_t)m_indexed[i * 4 + 0];
172
- d1 += n[i * 4 + 5] * (uint64_t)m_indexed[i * 4 + 1];
173
- d1 += n[i * 4 + 6] * (uint64_t)m_indexed[i * 4 + 2];
174
- d1 += n[i * 4 + 7] * (uint64_t)m_indexed[i * 4 + 3];
175
-
176
- d[i + 0] = d0;
177
- d[i + 1] = d1;
178
- }
179
- clear_tail(d, opr_sz, simd_maxsz(desc));
180
-}
181
+DO_DOT_IDX(gvec_sdot_idx_b, int32_t, int8_t, int8_t, H4)
182
+DO_DOT_IDX(gvec_udot_idx_b, uint32_t, uint8_t, uint8_t, H4)
183
+DO_DOT_IDX(gvec_sdot_idx_h, int64_t, int16_t, int16_t, )
184
+DO_DOT_IDX(gvec_udot_idx_h, uint64_t, uint16_t, uint16_t, )
185
186
void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
187
void *vfpst, uint32_t desc)
188
--
30
--
189
2.20.1
31
2.25.1
190
32
191
33
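For readers decoding the DO_DOT_IDX macro above, here is a plain scalar reference of what the generated byte sdot-by-index helper computes; this is an illustration only (the function name is invented here) and it ignores the host-endianness adjustment (H4) used by the real helpers.

#include <stdint.h>
#include <stddef.h>

/*
 * Scalar reference of an indexed byte dot product: within each 128-bit
 * (16-byte) segment, every 32-bit result element d[i] is a[i] plus the
 * dot product of four signed bytes of n with the group of four signed
 * bytes of m selected by @index inside that same segment.
 */
static void sdot_idx_b_ref(int32_t *d, const int8_t *n, const int8_t *m,
                           const int32_t *a, size_t opr_sz, unsigned index)
{
    size_t elements = opr_sz / 4;       /* number of 32-bit elements */

    for (size_t i = 0; i < elements; i++) {
        size_t seg = i / 4;             /* 4 elements per 16-byte segment */
        const int8_t *msel = m + seg * 16 + index * 4;
        int32_t sum = a[i];

        for (int j = 0; j < 4; j++) {
            sum += n[i * 4 + j] * msel[j];
        }
        d[i] = sum;
    }
}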
From: Richard Henderson <richard.henderson@linaro.org>

Rename the existing sve_while (less-than) helper to sve_whilel
to make room for a new sve_whileg helper for greater-than.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

From: Alexander Graf <agraf@csgraf.de>

We will need to fetch the name of the current accelerator in more and
more error messages going forward. Let's create a helper that gives it
to us without casting in the target code.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-1-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/helper-sve.h | 3 +-
12
include/qemu/accel.h | 1 +
12
target/arm/sve.decode | 2 +-
13
accel/accel-common.c | 8 ++++++++
13
target/arm/sve_helper.c | 38 +++++++++++++++++++++++++-
14
softmmu/vl.c | 3 +--
14
target/arm/translate-sve.c | 56 ++++++++++++++++++++++++++++----------
15
3 files changed, 10 insertions(+), 2 deletions(-)
15
4 files changed, 82 insertions(+), 17 deletions(-)
16
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
17
diff --git a/include/qemu/accel.h b/include/qemu/accel.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
19
--- a/include/qemu/accel.h
20
+++ b/target/arm/helper-sve.h
20
+++ b/include/qemu/accel.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
21
@@ -XXX,XX +XXX,XX @@ typedef struct AccelClass {
22
22
23
DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
23
AccelClass *accel_find(const char *opt_name);
24
24
AccelState *current_accel(void);
25
-DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
25
+const char *current_accel_name(void);
26
+DEF_HELPER_FLAGS_3(sve_whilel, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
26
27
+DEF_HELPER_FLAGS_3(sve_whileg, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
27
void accel_init_interfaces(AccelClass *ac);
28
28
29
DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
29
diff --git a/accel/accel-common.c b/accel/accel-common.c
30
DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
31
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
32
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/sve.decode
31
--- a/accel/accel-common.c
34
+++ b/target/arm/sve.decode
32
+++ b/accel/accel-common.c
35
@@ -XXX,XX +XXX,XX @@ SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
33
@@ -XXX,XX +XXX,XX @@ AccelClass *accel_find(const char *opt_name)
36
CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
34
return ac;
37
38
# SVE integer compare scalar count and limit
39
-WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
40
+WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 lt:1 rn:5 eq:1 rd:4
41
42
### SVE Integer Wide Immediate - Unpredicated Group
43
44
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/sve_helper.c
47
+++ b/target/arm/sve_helper.c
48
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
49
return sum;
50
}
35
}
51
36
52
-uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
37
+/* Return the name of the current accelerator */
53
+uint32_t HELPER(sve_whilel)(void *vd, uint32_t count, uint32_t pred_desc)
38
+const char *current_accel_name(void)
54
{
55
intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
56
intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
57
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
58
return predtest_ones(d, oprsz, esz_mask);
59
}
60
61
+uint32_t HELPER(sve_whileg)(void *vd, uint32_t count, uint32_t pred_desc)
62
+{
39
+{
63
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
40
+ AccelClass *ac = ACCEL_GET_CLASS(current_accel());
64
+ intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
65
+ uint64_t esz_mask = pred_esz_masks[esz];
66
+ ARMPredicateReg *d = vd;
67
+ intptr_t i, invcount, oprbits;
68
+ uint64_t bits;
69
+
41
+
70
+ if (count == 0) {
42
+ return ac->name;
71
+ return do_zero(d, oprsz);
72
+ }
73
+
74
+ oprbits = oprsz * 8;
75
+ tcg_debug_assert(count <= oprbits);
76
+
77
+ bits = esz_mask;
78
+ if (oprbits & 63) {
79
+ bits &= MAKE_64BIT_MASK(0, oprbits & 63);
80
+ }
81
+
82
+ invcount = oprbits - count;
83
+ for (i = (oprsz - 1) / 8; i > invcount / 64; --i) {
84
+ d->p[i] = bits;
85
+ bits = esz_mask;
86
+ }
87
+
88
+ d->p[i] = bits & MAKE_64BIT_MASK(invcount & 63, 64);
89
+
90
+ while (--i >= 0) {
91
+ d->p[i] = 0;
92
+ }
93
+
94
+ return predtest_ones(d, oprsz, esz_mask);
95
+}
43
+}
96
+
44
+
97
/* Recursive reduction on a function;
45
static void accel_init_cpu_int_aux(ObjectClass *klass, void *opaque)
98
* C.f. the ARM ARM function ReducePredicated.
46
{
99
*
47
CPUClass *cc = CPU_CLASS(klass);
100
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
48
diff --git a/softmmu/vl.c b/softmmu/vl.c
101
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/translate-sve.c
50
--- a/softmmu/vl.c
103
+++ b/target/arm/translate-sve.c
51
+++ b/softmmu/vl.c
104
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
52
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
105
unsigned vsz = vec_full_reg_size(s);
106
unsigned desc = 0;
107
TCGCond cond;
108
+ uint64_t maxval;
109
+ /* Note that GE/HS has a->eq == 0 and GT/HI has a->eq == 1. */
110
+ bool eq = a->eq == a->lt;
111
112
+ /* The greater-than conditions are all SVE2. */
113
+ if (!a->lt && !dc_isar_feature(aa64_sve2, s)) {
114
+ return false;
115
+ }
116
if (!sve_access_check(s)) {
117
return true;
118
}
53
}
119
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
54
120
*/
55
if (init_failed && !qtest_chrdev) {
121
t0 = tcg_temp_new_i64();
56
- AccelClass *ac = ACCEL_GET_CLASS(current_accel());
122
t1 = tcg_temp_new_i64();
57
- error_report("falling back to %s", ac->name);
123
- tcg_gen_sub_i64(t0, op1, op0);
58
+ error_report("falling back to %s", current_accel_name());
124
+
125
+ if (a->lt) {
126
+ tcg_gen_sub_i64(t0, op1, op0);
127
+ if (a->u) {
128
+ maxval = a->sf ? UINT64_MAX : UINT32_MAX;
129
+ cond = eq ? TCG_COND_LEU : TCG_COND_LTU;
130
+ } else {
131
+ maxval = a->sf ? INT64_MAX : INT32_MAX;
132
+ cond = eq ? TCG_COND_LE : TCG_COND_LT;
133
+ }
134
+ } else {
135
+ tcg_gen_sub_i64(t0, op0, op1);
136
+ if (a->u) {
137
+ maxval = 0;
138
+ cond = eq ? TCG_COND_GEU : TCG_COND_GTU;
139
+ } else {
140
+ maxval = a->sf ? INT64_MIN : INT32_MIN;
141
+ cond = eq ? TCG_COND_GE : TCG_COND_GT;
142
+ }
143
+ }
144
145
tmax = tcg_const_i64(vsz >> a->esz);
146
- if (a->eq) {
147
+ if (eq) {
148
/* Equality means one more iteration. */
149
tcg_gen_addi_i64(t0, t0, 1);
150
151
- /* If op1 is max (un)signed integer (and the only time the addition
152
- * above could overflow), then we produce an all-true predicate by
153
- * setting the count to the vector length. This is because the
154
- * pseudocode is described as an increment + compare loop, and the
155
- * max integer would always compare true.
156
+ /*
157
+ * For the less-than while, if op1 is maxval (and the only time
158
+ * the addition above could overflow), then we produce an all-true
159
+ * predicate by setting the count to the vector length. This is
160
+ * because the pseudocode is described as an increment + compare
161
+ * loop, and the maximum integer would always compare true.
162
+ * Similarly, the greater-than while has the same issue with the
163
+ * minimum integer due to the decrement + compare loop.
164
*/
165
- tcg_gen_movi_i64(t1, (a->sf
166
- ? (a->u ? UINT64_MAX : INT64_MAX)
167
- : (a->u ? UINT32_MAX : INT32_MAX)));
168
+ tcg_gen_movi_i64(t1, maxval);
169
tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
170
}
59
}
171
60
172
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
61
if (icount_enabled() && !tcg_enabled()) {
173
tcg_temp_free_i64(tmax);
174
175
/* Set the count to zero if the condition is false. */
176
- cond = (a->u
177
- ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
178
- : (a->eq ? TCG_COND_LE : TCG_COND_LT));
179
tcg_gen_movi_i64(t1, 0);
180
tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
181
tcg_temp_free_i64(t1);
182
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
183
ptr = tcg_temp_new_ptr();
184
tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
185
186
- gen_helper_sve_while(t2, ptr, t2, t3);
187
+ if (a->lt) {
188
+ gen_helper_sve_whilel(t2, ptr, t2, t3);
189
+ } else {
190
+ gen_helper_sve_whileg(t2, ptr, t2, t3);
191
+ }
192
do_pred_flags(t2);
193
194
tcg_temp_free_ptr(ptr);
195
--
62
--
196
2.20.1
63
2.25.1
197
198
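A scalar model of the predicate semantics being generalised above, for illustration only (function names are invented here, and the max/min-integer overflow corner handled in translate-sve.c is ignored): the existing less-than forms count upwards from rn towards rm, while the new SVE2 greater-than forms count downwards.

#include <stdbool.h>
#include <stdint.h>

/* WHILELT / WHILELE: element k is active while (rn + k) is below rm */
static void whilelt_ref(bool *pred, unsigned elements,
                        int64_t rn, int64_t rm, bool or_equal)
{
    for (unsigned k = 0; k < elements; k++) {
        int64_t v = rn + (int64_t)k;
        pred[k] = or_equal ? (v <= rm) : (v < rm);
    }
}

/* WHILEGT / WHILEGE (new in SVE2): element k is active while (rn - k) is above rm */
static void whilegt_ref(bool *pred, unsigned elements,
                        int64_t rn, int64_t rm, bool or_equal)
{
    for (unsigned k = 0; k < elements; k++) {
        int64_t v = rn - (int64_t)k;
        pred[k] = or_equal ? (v >= rm) : (v > rm);
    }
}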
From: Richard Henderson <richard.henderson@linaro.org>

We will not be able to fit address + length into a 64-bit packet.
Drop this optimization before re-organizing this code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-10-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: Moved patch earlier in the series]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

From: Alexander Graf <agraf@csgraf.de>

Some features such as running in EL3 or running M profile code are
incompatible with virtualization as QEMU implements it today. To prevent
users from picking invalid configurations on other virt solutions like
Hvf, let's run the same checks there too.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1073
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-2-agraf@csgraf.de
[PMM: Allow qtest accelerator too; tweak comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
14
---
15
accel/tcg/cputlb.c | 86 +++++++++++-----------------------------------
15
target/arm/cpu.c | 16 ++++++++++++----
16
1 file changed, 20 insertions(+), 66 deletions(-)
16
1 file changed, 12 insertions(+), 4 deletions(-)
17
17
18
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
19
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
20
--- a/accel/tcg/cputlb.c
20
--- a/target/arm/cpu.c
21
+++ b/accel/tcg/cputlb.c
21
+++ b/target/arm/cpu.c
22
@@ -XXX,XX +XXX,XX @@ tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
22
@@ -XXX,XX +XXX,XX @@
23
}
23
#include "hw/boards.h"
24
}
24
#endif
25
25
#include "sysemu/tcg.h"
26
-static bool encode_pbm_to_runon(run_on_cpu_data *out,
26
+#include "sysemu/qtest.h"
27
- TLBFlushRangeData d)
27
#include "sysemu/hw_accel.h"
28
-{
28
#include "kvm_arm.h"
29
- /* We need 6 bits to hold to hold @bits up to 63. */
29
#include "disas/capstone.h"
30
- if (d.idxmap <= MAKE_64BIT_MASK(0, TARGET_PAGE_BITS - 6)) {
30
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
31
- *out = RUN_ON_CPU_TARGET_PTR(d.addr | (d.idxmap << 6) | d.bits);
32
- return true;
33
- }
34
- return false;
35
-}
36
-
37
-static TLBFlushRangeData
38
-decode_runon_to_pbm(run_on_cpu_data data)
39
-{
40
- target_ulong addr_map_bits = (target_ulong) data.target_ptr;
41
- return (TLBFlushRangeData){
42
- .addr = addr_map_bits & TARGET_PAGE_MASK,
43
- .idxmap = (addr_map_bits & ~TARGET_PAGE_MASK) >> 6,
44
- .bits = addr_map_bits & 0x3f
45
- };
46
-}
47
-
48
-static void tlb_flush_page_bits_by_mmuidx_async_1(CPUState *cpu,
49
- run_on_cpu_data runon)
50
-{
51
- tlb_flush_page_bits_by_mmuidx_async_0(cpu, decode_runon_to_pbm(runon));
52
-}
53
-
54
static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
55
run_on_cpu_data data)
56
{
57
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
58
uint16_t idxmap, unsigned bits)
59
{
60
TLBFlushRangeData d;
61
- run_on_cpu_data runon;
62
63
/* If all bits are significant, this devolves to tlb_flush_page. */
64
if (bits >= TARGET_LONG_BITS) {
65
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
66
67
if (qemu_cpu_is_self(cpu)) {
68
tlb_flush_page_bits_by_mmuidx_async_0(cpu, d);
69
- } else if (encode_pbm_to_runon(&runon, d)) {
70
- async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
71
} else {
72
/* Otherwise allocate a structure, freed by the worker. */
73
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
74
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
75
unsigned bits)
76
{
77
TLBFlushRangeData d;
78
- run_on_cpu_data runon;
79
+ CPUState *dst_cpu;
80
81
/* If all bits are significant, this devolves to tlb_flush_page. */
82
if (bits >= TARGET_LONG_BITS) {
83
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
84
d.idxmap = idxmap;
85
d.bits = bits;
86
87
- if (encode_pbm_to_runon(&runon, d)) {
88
- flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
89
- } else {
90
- CPUState *dst_cpu;
91
-
92
- /* Allocate a separate data block for each destination cpu. */
93
- CPU_FOREACH(dst_cpu) {
94
- if (dst_cpu != src_cpu) {
95
- TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
96
- async_run_on_cpu(dst_cpu,
97
- tlb_flush_page_bits_by_mmuidx_async_2,
98
- RUN_ON_CPU_HOST_PTR(p));
99
- }
100
+ /* Allocate a separate data block for each destination cpu. */
101
+ CPU_FOREACH(dst_cpu) {
102
+ if (dst_cpu != src_cpu) {
103
+ TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
104
+ async_run_on_cpu(dst_cpu,
105
+ tlb_flush_page_bits_by_mmuidx_async_2,
106
+ RUN_ON_CPU_HOST_PTR(p));
107
}
31
}
108
}
32
}
109
33
110
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
34
- if (kvm_enabled()) {
111
uint16_t idxmap,
35
+ if (!tcg_enabled() && !qtest_enabled()) {
112
unsigned bits)
36
/*
113
{
37
+ * We assume that no accelerator except TCG (and the "not really an
114
- TLBFlushRangeData d;
38
+ * accelerator" qtest) can handle these features, because Arm hardware
115
- run_on_cpu_data runon;
39
+ * virtualization can't virtualize them.
116
+ TLBFlushRangeData d, *p;
40
+ *
117
+ CPUState *dst_cpu;
41
* Catch all the cases which might cause us to create more than one
118
42
* address space for the CPU (otherwise we will assert() later in
119
/* If all bits are significant, this devolves to tlb_flush_page. */
43
* cpu_address_space_init()).
120
if (bits >= TARGET_LONG_BITS) {
44
*/
121
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
45
if (arm_feature(env, ARM_FEATURE_M)) {
122
d.idxmap = idxmap;
46
error_setg(errp,
123
d.bits = bits;
47
- "Cannot enable KVM when using an M-profile guest CPU");
124
48
+ "Cannot enable %s when using an M-profile guest CPU",
125
- if (encode_pbm_to_runon(&runon, d)) {
49
+ current_accel_name());
126
- flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
50
return;
127
- async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1,
128
- runon);
129
- } else {
130
- CPUState *dst_cpu;
131
- TLBFlushRangeData *p;
132
-
133
- /* Allocate a separate data block for each destination cpu. */
134
- CPU_FOREACH(dst_cpu) {
135
- if (dst_cpu != src_cpu) {
136
- p = g_memdup(&d, sizeof(d));
137
- async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
138
- RUN_ON_CPU_HOST_PTR(p));
139
- }
140
+ /* Allocate a separate data block for each destination cpu. */
141
+ CPU_FOREACH(dst_cpu) {
142
+ if (dst_cpu != src_cpu) {
143
+ p = g_memdup(&d, sizeof(d));
144
+ async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
145
+ RUN_ON_CPU_HOST_PTR(p));
146
}
51
}
147
-
52
if (cpu->has_el3) {
148
- p = g_memdup(&d, sizeof(d));
53
error_setg(errp,
149
- async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
54
- "Cannot enable KVM when guest CPU has EL3 enabled");
150
- RUN_ON_CPU_HOST_PTR(p));
55
+ "Cannot enable %s when guest CPU has EL3 enabled",
56
+ current_accel_name());
57
return;
58
}
59
if (cpu->tag_memory) {
60
error_setg(errp,
61
- "Cannot enable KVM when guest CPUs has MTE enabled");
62
+ "Cannot enable %s when guest CPUs has MTE enabled",
63
+ current_accel_name());
64
return;
65
}
151
}
66
}
152
+
153
+ p = g_memdup(&d, sizeof(d));
154
+ async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
155
+ RUN_ON_CPU_HOST_PTR(p));
156
}
157
158
/* update the TLBs so that writes to code in the virtual page 'addr'
159
--
67
--
160
2.20.1
68
2.25.1
161
162
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
This register is part of SME, but isn't closely related to the
rest of the extension.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-71-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
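
Illustration (hypothetical guest-side code, not part of this patch): EL0
software reaches the new register with MRS/MSR on the encoding registered
below (op0=3, op1=3, CRn=13, CRm=0, op2=5), and the access only succeeds once
SCTLR_EL1.EnTP2 (and SCR_EL3.EnTP2, where EL3 exists) allow it:

    /* Build with an AArch64 toolchain; uses the generic
     * S<op0>_<op1>_C<n>_C<m>_<op2> sysreg syntax rather than the
     * TPIDR2_EL0 name, for the benefit of older assemblers. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t read_tpidr2_el0(void)
    {
        uint64_t v;
        __asm__ volatile("mrs %0, S3_3_C13_C0_5" : "=r"(v));
        return v;
    }

    static inline void write_tpidr2_el0(uint64_t v)
    {
        __asm__ volatile("msr S3_3_C13_C0_5, %0" : : "r"(v));
    }

    int main(void)
    {
        write_tpidr2_el0(0x1234);
        printf("TPIDR2_EL0 = %#" PRIx64 "\n", read_tpidr2_el0());
        return 0;
    }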
7
---
10
---
8
target/arm/cpu.h | 5 +++++
11
target/arm/cpu.h | 1 +
9
target/arm/sve.decode | 4 ++++
12
target/arm/helper.c | 32 ++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 16 ++++++++++++++++
13
2 files changed, 33 insertions(+)
11
3 files changed, 25 insertions(+)
12
14
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
18
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
20
};
19
}
21
uint64_t tpidr_el[4];
20
22
};
21
+static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
23
+ uint64_t tpidr2_el0;
24
/* The secure banks of these registers don't map anywhere */
25
uint64_t tpidrurw_s;
26
uint64_t tpidrprw_s;
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
30
+++ b/target/arm/helper.c
31
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_reginfo[] = {
32
.writefn = zcr_write, .raw_writefn = raw_write },
33
};
34
35
+#ifdef TARGET_AARCH64
36
+static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
37
+ bool isread)
22
+{
38
+{
23
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
39
+ int el = arm_current_el(env);
40
+
41
+ if (el == 0) {
42
+ uint64_t sctlr = arm_sctlr(env, el);
43
+ if (!(sctlr & SCTLR_EnTP2)) {
44
+ return CP_ACCESS_TRAP;
45
+ }
46
+ }
47
+ /* TODO: FEAT_FGT */
48
+ if (el < 3
49
+ && arm_feature(env, ARM_FEATURE_EL3)
50
+ && !(env->cp15.scr_el3 & SCR_ENTP2)) {
51
+ return CP_ACCESS_TRAP_EL3;
52
+ }
53
+ return CP_ACCESS_OK;
24
+}
54
+}
25
+
55
+
26
static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
56
+static const ARMCPRegInfo sme_reginfo[] = {
57
+ { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
58
+ .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
59
+ .access = PL0_RW, .accessfn = access_tpidr2,
60
+ .fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
61
+};
62
+#endif /* TARGET_AARCH64 */
63
+
64
void hw_watchpoint_update(ARMCPU *cpu, int n)
27
{
65
{
28
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
66
CPUARMState *env = &cpu->env;
29
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
67
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
30
index XXXXXXX..XXXXXXX 100644
68
}
31
--- a/target/arm/sve.decode
69
32
+++ b/target/arm/sve.decode
70
#ifdef TARGET_AARCH64
33
@@ -XXX,XX +XXX,XX @@ AESMC 01000101 00 10000011100 decrypt:1 00000 rd:5
71
+ if (cpu_isar_feature(aa64_sme, cpu)) {
34
AESE 01000101 00 10001 0 11100 0 ..... ..... @rdn_rm_e0
72
+ define_arm_cp_regs(cpu, sme_reginfo);
35
AESD 01000101 00 10001 0 11100 1 ..... ..... @rdn_rm_e0
36
SM4E 01000101 00 10001 1 11100 0 ..... ..... @rdn_rm_e0
37
+
38
+# SVE2 crypto constructive binary operations
39
+SM4EKEY 01000101 00 1 ..... 11110 0 ..... ..... @rd_rn_rm_e0
40
+RAX1 01000101 00 1 ..... 11110 1 ..... ..... @rd_rn_rm_e0
41
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/translate-sve.c
44
+++ b/target/arm/translate-sve.c
45
@@ -XXX,XX +XXX,XX @@ static bool trans_SM4E(DisasContext *s, arg_rrr_esz *a)
46
{
47
return do_sm4(s, a, gen_helper_crypto_sm4e);
48
}
49
+
50
+static bool trans_SM4EKEY(DisasContext *s, arg_rrr_esz *a)
51
+{
52
+ return do_sm4(s, a, gen_helper_crypto_sm4ekey);
53
+}
54
+
55
+static bool trans_RAX1(DisasContext *s, arg_rrr_esz *a)
56
+{
57
+ if (!dc_isar_feature(aa64_sve2_sha3, s)) {
58
+ return false;
59
+ }
73
+ }
60
+ if (sve_access_check(s)) {
74
if (cpu_isar_feature(aa64_pauth, cpu)) {
61
+ gen_gvec_fn_zzz(s, gen_gvec_rax1, MO_64, a->rd, a->rn, a->rm);
75
define_arm_cp_regs(cpu, pauth_reginfo);
62
+ }
76
}
63
+ return true;
64
+}
65
--
77
--
66
2.20.1
78
2.25.1
67
68
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Split these operations out into a header that can be shared
between neon and sve. The "sat" pointer acts both as a boolean
for control of saturating behavior and controls the difference
in behavior between neon and sve -- QC bit or no QC bit.

Widen the shift operand in the new helpers, as the SVE2 insns treat
the whole input element as significant. For the neon uses, truncate
the shift to int8_t while passing the parameter.

Implement right-shift rounding as

    tmp = src >> (shift - 1);
    dst = (tmp >> 1) + (tmp & 1);

This is the same number of instructions as the current

    tmp = 1 << (shift - 1);
    dst = (src + tmp) >> shift;

without any possibility of intermediate overflow.

This is CheckSMEAccess, which is the basis for a set of
related tests for various SME cpregs and instructions.
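
A throwaway check of the rounding claim above (not part of the patch; it
assumes arithmetic >> on negative values, which this code relies on anyway):
the new formulation matches the old one when the old one is evaluated widely
enough not to overflow, including at the INT32_MIN/INT32_MAX corners where a
32-bit intermediate addition would wrap:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static int32_t round_shr_new(int32_t src, int shift)   /* 1 <= shift <= 31 */
    {
        int32_t tmp = src >> (shift - 1);
        return (tmp >> 1) + (tmp & 1);
    }

    static int32_t round_shr_old(int32_t src, int shift)
    {
        /* Old formulation, done in 64 bits so the addition cannot overflow. */
        int64_t tmp = (int64_t)1 << (shift - 1);
        return (int32_t)(((int64_t)src + tmp) >> shift);
    }

    int main(void)
    {
        static const int32_t vals[] = {
            0, 1, -1, 3, -3, 12345, -12345, INT32_MAX, INT32_MIN
        };

        for (int shift = 1; shift <= 31; shift++) {
            for (unsigned i = 0; i < sizeof(vals) / sizeof(vals[0]); i++) {
                assert(round_shr_new(vals[i], shift) ==
                       round_shr_old(vals[i], shift));
            }
        }
        printf("rounded right shifts agree\n");
        return 0;
    }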
23
5
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-id: 20210525010358.152808-6-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-3-richard.henderson@linaro.org
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
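
For the sme_exception_el() hunk below, the EL0/EL1 leg of the check reads as a
small decision table. A stand-alone model (a simplification that ignores the
el_is_in_host(), EL2 and EL3 legs) of the switch-with-fallthrough on
CPACR_EL1.SMEN:

    #include <assert.h>
    #include <stdio.h>

    /* Return the EL an SME access from 'el' (0 or 1) traps to, 0 if no trap,
     * given the 2-bit CPACR_EL1.SMEN field. */
    static int smen_trap_el(int el, int smen)
    {
        switch (smen) {
        case 1:
            if (el != 0) {
                break;          /* SMEN == 1: only EL0 accesses trap */
            }
            /* fall through */
        case 0:
        case 2:
            return 1;           /* SMEN == 0 or 2: EL0 and EL1 both trap */
        }
        return 0;               /* SMEN == 3: no trap at this level */
    }

    int main(void)
    {
        assert(smen_trap_el(0, 0) == 1 && smen_trap_el(1, 0) == 1);
        assert(smen_trap_el(0, 1) == 1 && smen_trap_el(1, 1) == 0);
        assert(smen_trap_el(0, 2) == 1 && smen_trap_el(1, 2) == 1);
        assert(smen_trap_el(0, 3) == 0 && smen_trap_el(1, 3) == 0);
        printf("CPACR_EL1.SMEN gating matches the description\n");
        return 0;
    }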
28
---
10
---
29
target/arm/vec_internal.h | 138 +++++++++++
11
target/arm/cpu.h | 2 ++
30
target/arm/neon_helper.c | 507 +++++++-------------------------------
12
target/arm/translate.h | 1 +
31
2 files changed, 221 insertions(+), 424 deletions(-)
13
target/arm/helper.c | 52 ++++++++++++++++++++++++++++++++++++++
14
target/arm/translate-a64.c | 1 +
15
4 files changed, 56 insertions(+)
32
16
33
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/vec_internal.h
19
--- a/target/arm/cpu.h
36
+++ b/target/arm/vec_internal.h
20
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
21
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env);
38
}
22
23
int fp_exception_el(CPUARMState *env, int cur_el);
24
int sve_exception_el(CPUARMState *env, int cur_el);
25
+int sme_exception_el(CPUARMState *env, int cur_el);
26
27
/**
28
* sve_vqm1_for_el:
29
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, ATA, 15, 1)
30
FIELD(TBFLAG_A64, TCMA, 16, 2)
31
FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
32
FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
33
+FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
34
35
/*
36
* Helpers for using the above.
37
diff --git a/target/arm/translate.h b/target/arm/translate.h
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate.h
40
+++ b/target/arm/translate.h
41
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
42
bool ns; /* Use non-secure CPREG bank on access */
43
int fp_excp_el; /* FP exception EL or 0 if enabled */
44
int sve_excp_el; /* SVE exception EL or 0 if enabled */
45
+ int sme_excp_el; /* SME exception EL or 0 if enabled */
46
int vl; /* current vector length in bytes */
47
bool vfp_enabled; /* FP enabled via FPSCR.EN */
48
int vec_len;
49
diff --git a/target/arm/helper.c b/target/arm/helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/helper.c
52
+++ b/target/arm/helper.c
53
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
54
return 0;
39
}
55
}
40
56
41
+static inline int32_t do_sqrshl_bhs(int32_t src, int32_t shift, int bits,
57
+/*
42
+ bool round, uint32_t *sat)
58
+ * Return the exception level to which exceptions should be taken for SME.
59
+ * C.f. the ARM pseudocode function CheckSMEAccess.
60
+ */
61
+int sme_exception_el(CPUARMState *env, int el)
43
+{
62
+{
44
+ if (shift <= -bits) {
63
+#ifndef CONFIG_USER_ONLY
45
+ /* Rounding the sign bit always produces 0. */
64
+ if (el <= 1 && !el_is_in_host(env, el)) {
46
+ if (round) {
65
+ switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, SMEN)) {
47
+ return 0;
66
+ case 1:
67
+ if (el != 0) {
68
+ break;
69
+ }
70
+ /* fall through */
71
+ case 0:
72
+ case 2:
73
+ return 1;
48
+ }
74
+ }
49
+ return src >> 31;
75
+ }
50
+ } else if (shift < 0) {
76
+
51
+ if (round) {
77
+ if (el <= 2 && arm_is_el2_enabled(env)) {
52
+ src >>= -shift - 1;
78
+ /* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE). */
53
+ return (src >> 1) + (src & 1);
79
+ if (env->cp15.hcr_el2 & HCR_E2H) {
54
+ }
80
+ switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, SMEN)) {
55
+ return src >> -shift;
81
+ case 1:
56
+ } else if (shift < bits) {
82
+ if (el != 0 || !(env->cp15.hcr_el2 & HCR_TGE)) {
57
+ int32_t val = src << shift;
83
+ break;
58
+ if (bits == 32) {
84
+ }
59
+ if (!sat || val >> shift == src) {
85
+ /* fall through */
60
+ return val;
86
+ case 0:
87
+ case 2:
88
+ return 2;
61
+ }
89
+ }
62
+ } else {
90
+ } else {
63
+ int32_t extval = sextract32(val, 0, bits);
91
+ if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TSM)) {
64
+ if (!sat || val == extval) {
92
+ return 2;
65
+ return extval;
66
+ }
93
+ }
67
+ }
94
+ }
68
+ } else if (!sat || src == 0) {
69
+ return 0;
70
+ }
95
+ }
71
+
96
+
72
+ *sat = 1;
97
+ /* CPTR_EL3. Since ESM is negative we must check for EL3. */
73
+ return (1u << (bits - 1)) - (src >= 0);
98
+ if (arm_feature(env, ARM_FEATURE_EL3)
99
+ && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
100
+ return 3;
101
+ }
102
+#endif
103
+ return 0;
74
+}
104
+}
75
+
105
+
76
+static inline uint32_t do_uqrshl_bhs(uint32_t src, int32_t shift, int bits,
106
/*
77
+ bool round, uint32_t *sat)
107
* Given that SVE is enabled, return the vector length for EL.
78
+{
108
*/
79
+ if (shift <= -(bits + round)) {
109
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
80
+ return 0;
110
}
81
+ } else if (shift < 0) {
111
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
82
+ if (round) {
112
}
83
+ src >>= -shift - 1;
113
+ if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
84
+ return (src >> 1) + (src & 1);
114
+ DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
85
+ }
86
+ return src >> -shift;
87
+ } else if (shift < bits) {
88
+ uint32_t val = src << shift;
89
+ if (bits == 32) {
90
+ if (!sat || val >> shift == src) {
91
+ return val;
92
+ }
93
+ } else {
94
+ uint32_t extval = extract32(val, 0, bits);
95
+ if (!sat || val == extval) {
96
+ return extval;
97
+ }
98
+ }
99
+ } else if (!sat || src == 0) {
100
+ return 0;
101
+ }
115
+ }
102
+
116
103
+ *sat = 1;
117
sctlr = regime_sctlr(env, stage1);
104
+ return MAKE_64BIT_MASK(0, bits);
118
105
+}
119
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
106
+
107
+static inline int32_t do_suqrshl_bhs(int32_t src, int32_t shift, int bits,
108
+ bool round, uint32_t *sat)
109
+{
110
+ if (sat && src < 0) {
111
+ *sat = 1;
112
+ return 0;
113
+ }
114
+ return do_uqrshl_bhs(src, shift, bits, round, sat);
115
+}
116
+
117
+static inline int64_t do_sqrshl_d(int64_t src, int64_t shift,
118
+ bool round, uint32_t *sat)
119
+{
120
+ if (shift <= -64) {
121
+ /* Rounding the sign bit always produces 0. */
122
+ if (round) {
123
+ return 0;
124
+ }
125
+ return src >> 63;
126
+ } else if (shift < 0) {
127
+ if (round) {
128
+ src >>= -shift - 1;
129
+ return (src >> 1) + (src & 1);
130
+ }
131
+ return src >> -shift;
132
+ } else if (shift < 64) {
133
+ int64_t val = src << shift;
134
+ if (!sat || val >> shift == src) {
135
+ return val;
136
+ }
137
+ } else if (!sat || src == 0) {
138
+ return 0;
139
+ }
140
+
141
+ *sat = 1;
142
+ return src < 0 ? INT64_MIN : INT64_MAX;
143
+}
144
+
145
+static inline uint64_t do_uqrshl_d(uint64_t src, int64_t shift,
146
+ bool round, uint32_t *sat)
147
+{
148
+ if (shift <= -(64 + round)) {
149
+ return 0;
150
+ } else if (shift < 0) {
151
+ if (round) {
152
+ src >>= -shift - 1;
153
+ return (src >> 1) + (src & 1);
154
+ }
155
+ return src >> -shift;
156
+ } else if (shift < 64) {
157
+ uint64_t val = src << shift;
158
+ if (!sat || val >> shift == src) {
159
+ return val;
160
+ }
161
+ } else if (!sat || src == 0) {
162
+ return 0;
163
+ }
164
+
165
+ *sat = 1;
166
+ return UINT64_MAX;
167
+}
168
+
169
+static inline int64_t do_suqrshl_d(int64_t src, int64_t shift,
170
+ bool round, uint32_t *sat)
171
+{
172
+ if (sat && src < 0) {
173
+ *sat = 1;
174
+ return 0;
175
+ }
176
+ return do_uqrshl_d(src, shift, round, sat);
177
+}
178
+
179
#endif /* TARGET_ARM_VEC_INTERNALS_H */
180
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
181
index XXXXXXX..XXXXXXX 100644
120
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/neon_helper.c
121
--- a/target/arm/translate-a64.c
183
+++ b/target/arm/neon_helper.c
122
+++ b/target/arm/translate-a64.c
184
@@ -XXX,XX +XXX,XX @@
123
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
185
#include "cpu.h"
124
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
186
#include "exec/helper-proto.h"
125
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
187
#include "fpu/softfloat.h"
126
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
188
+#include "vec_internal.h"
127
+ dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
189
128
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
190
#define SIGNBIT (uint32_t)0x80000000
129
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
191
#define SIGNBIT64 ((uint64_t)1 << 63)
130
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
192
@@ -XXX,XX +XXX,XX @@ NEON_POP(pmax_s16, neon_s16, 2)
193
NEON_POP(pmax_u16, neon_u16, 2)
194
#undef NEON_FN
195
196
-#define NEON_FN(dest, src1, src2) do { \
197
- int8_t tmp; \
198
- tmp = (int8_t)src2; \
199
- if (tmp >= (ssize_t)sizeof(src1) * 8 || \
200
- tmp <= -(ssize_t)sizeof(src1) * 8) { \
201
- dest = 0; \
202
- } else if (tmp < 0) { \
203
- dest = src1 >> -tmp; \
204
- } else { \
205
- dest = src1 << tmp; \
206
- }} while (0)
207
+#define NEON_FN(dest, src1, src2) \
208
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, false, NULL))
209
NEON_VOP(shl_u16, neon_u16, 2)
210
#undef NEON_FN
211
212
-#define NEON_FN(dest, src1, src2) do { \
213
- int8_t tmp; \
214
- tmp = (int8_t)src2; \
215
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
216
- dest = 0; \
217
- } else if (tmp <= -(ssize_t)sizeof(src1) * 8) { \
218
- dest = src1 >> (sizeof(src1) * 8 - 1); \
219
- } else if (tmp < 0) { \
220
- dest = src1 >> -tmp; \
221
- } else { \
222
- dest = src1 << tmp; \
223
- }} while (0)
224
+#define NEON_FN(dest, src1, src2) \
225
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, false, NULL))
226
NEON_VOP(shl_s16, neon_s16, 2)
227
#undef NEON_FN
228
229
-#define NEON_FN(dest, src1, src2) do { \
230
- int8_t tmp; \
231
- tmp = (int8_t)src2; \
232
- if ((tmp >= (ssize_t)sizeof(src1) * 8) \
233
- || (tmp <= -(ssize_t)sizeof(src1) * 8)) { \
234
- dest = 0; \
235
- } else if (tmp < 0) { \
236
- dest = (src1 + (1 << (-1 - tmp))) >> -tmp; \
237
- } else { \
238
- dest = src1 << tmp; \
239
- }} while (0)
240
+#define NEON_FN(dest, src1, src2) \
241
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
242
NEON_VOP(rshl_s8, neon_s8, 4)
243
+#undef NEON_FN
244
+
245
+#define NEON_FN(dest, src1, src2) \
246
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
247
NEON_VOP(rshl_s16, neon_s16, 2)
248
#undef NEON_FN
249
250
-/* The addition of the rounding constant may overflow, so we use an
251
- * intermediate 64 bit accumulator. */
252
-uint32_t HELPER(neon_rshl_s32)(uint32_t valop, uint32_t shiftop)
253
+uint32_t HELPER(neon_rshl_s32)(uint32_t val, uint32_t shift)
254
{
255
- int32_t dest;
256
- int32_t val = (int32_t)valop;
257
- int8_t shift = (int8_t)shiftop;
258
- if ((shift >= 32) || (shift <= -32)) {
259
- dest = 0;
260
- } else if (shift < 0) {
261
- int64_t big_dest = ((int64_t)val + (1 << (-1 - shift)));
262
- dest = big_dest >> -shift;
263
- } else {
264
- dest = val << shift;
265
- }
266
- return dest;
267
+ return do_sqrshl_bhs(val, (int8_t)shift, 32, true, NULL);
268
}
269
270
-/* Handling addition overflow with 64 bit input values is more
271
- * tricky than with 32 bit values. */
272
-uint64_t HELPER(neon_rshl_s64)(uint64_t valop, uint64_t shiftop)
273
+uint64_t HELPER(neon_rshl_s64)(uint64_t val, uint64_t shift)
274
{
275
- int8_t shift = (int8_t)shiftop;
276
- int64_t val = valop;
277
- if ((shift >= 64) || (shift <= -64)) {
278
- val = 0;
279
- } else if (shift < 0) {
280
- val >>= (-shift - 1);
281
- if (val == INT64_MAX) {
282
- /* In this case, it means that the rounding constant is 1,
283
- * and the addition would overflow. Return the actual
284
- * result directly. */
285
- val = 0x4000000000000000LL;
286
- } else {
287
- val++;
288
- val >>= 1;
289
- }
290
- } else {
291
- val <<= shift;
292
- }
293
- return val;
294
+ return do_sqrshl_d(val, (int8_t)shift, true, NULL);
295
}
296
297
-#define NEON_FN(dest, src1, src2) do { \
298
- int8_t tmp; \
299
- tmp = (int8_t)src2; \
300
- if (tmp >= (ssize_t)sizeof(src1) * 8 || \
301
- tmp < -(ssize_t)sizeof(src1) * 8) { \
302
- dest = 0; \
303
- } else if (tmp == -(ssize_t)sizeof(src1) * 8) { \
304
- dest = src1 >> (-tmp - 1); \
305
- } else if (tmp < 0) { \
306
- dest = (src1 + (1 << (-1 - tmp))) >> -tmp; \
307
- } else { \
308
- dest = src1 << tmp; \
309
- }} while (0)
310
+#define NEON_FN(dest, src1, src2) \
311
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
312
NEON_VOP(rshl_u8, neon_u8, 4)
313
+#undef NEON_FN
314
+
315
+#define NEON_FN(dest, src1, src2) \
316
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
317
NEON_VOP(rshl_u16, neon_u16, 2)
318
#undef NEON_FN
319
320
-/* The addition of the rounding constant may overflow, so we use an
321
- * intermediate 64 bit accumulator. */
322
-uint32_t HELPER(neon_rshl_u32)(uint32_t val, uint32_t shiftop)
323
+uint32_t HELPER(neon_rshl_u32)(uint32_t val, uint32_t shift)
324
{
325
- uint32_t dest;
326
- int8_t shift = (int8_t)shiftop;
327
- if (shift >= 32 || shift < -32) {
328
- dest = 0;
329
- } else if (shift == -32) {
330
- dest = val >> 31;
331
- } else if (shift < 0) {
332
- uint64_t big_dest = ((uint64_t)val + (1 << (-1 - shift)));
333
- dest = big_dest >> -shift;
334
- } else {
335
- dest = val << shift;
336
- }
337
- return dest;
338
+ return do_uqrshl_bhs(val, (int8_t)shift, 32, true, NULL);
339
}
340
341
-/* Handling addition overflow with 64 bit input values is more
342
- * tricky than with 32 bit values. */
343
-uint64_t HELPER(neon_rshl_u64)(uint64_t val, uint64_t shiftop)
344
+uint64_t HELPER(neon_rshl_u64)(uint64_t val, uint64_t shift)
345
{
346
- int8_t shift = (uint8_t)shiftop;
347
- if (shift >= 64 || shift < -64) {
348
- val = 0;
349
- } else if (shift == -64) {
350
- /* Rounding a 1-bit result just preserves that bit. */
351
- val >>= 63;
352
- } else if (shift < 0) {
353
- val >>= (-shift - 1);
354
- if (val == UINT64_MAX) {
355
- /* In this case, it means that the rounding constant is 1,
356
- * and the addition would overflow. Return the actual
357
- * result directly. */
358
- val = 0x8000000000000000ULL;
359
- } else {
360
- val++;
361
- val >>= 1;
362
- }
363
- } else {
364
- val <<= shift;
365
- }
366
- return val;
367
+ return do_uqrshl_d(val, (int8_t)shift, true, NULL);
368
}
369
370
-#define NEON_FN(dest, src1, src2) do { \
371
- int8_t tmp; \
372
- tmp = (int8_t)src2; \
373
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
374
- if (src1) { \
375
- SET_QC(); \
376
- dest = ~0; \
377
- } else { \
378
- dest = 0; \
379
- } \
380
- } else if (tmp <= -(ssize_t)sizeof(src1) * 8) { \
381
- dest = 0; \
382
- } else if (tmp < 0) { \
383
- dest = src1 >> -tmp; \
384
- } else { \
385
- dest = src1 << tmp; \
386
- if ((dest >> tmp) != src1) { \
387
- SET_QC(); \
388
- dest = ~0; \
389
- } \
390
- }} while (0)
391
+#define NEON_FN(dest, src1, src2) \
392
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
393
NEON_VOP_ENV(qshl_u8, neon_u8, 4)
394
+#undef NEON_FN
395
+
396
+#define NEON_FN(dest, src1, src2) \
397
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
398
NEON_VOP_ENV(qshl_u16, neon_u16, 2)
399
-NEON_VOP_ENV(qshl_u32, neon_u32, 1)
400
#undef NEON_FN
401
402
-uint64_t HELPER(neon_qshl_u64)(CPUARMState *env, uint64_t val, uint64_t shiftop)
403
+uint32_t HELPER(neon_qshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
404
{
405
- int8_t shift = (int8_t)shiftop;
406
- if (shift >= 64) {
407
- if (val) {
408
- val = ~(uint64_t)0;
409
- SET_QC();
410
- }
411
- } else if (shift <= -64) {
412
- val = 0;
413
- } else if (shift < 0) {
414
- val >>= -shift;
415
- } else {
416
- uint64_t tmp = val;
417
- val <<= shift;
418
- if ((val >> shift) != tmp) {
419
- SET_QC();
420
- val = ~(uint64_t)0;
421
- }
422
- }
423
- return val;
424
+ return do_uqrshl_bhs(val, (int8_t)shift, 32, false, env->vfp.qc);
425
}
426
427
-#define NEON_FN(dest, src1, src2) do { \
428
- int8_t tmp; \
429
- tmp = (int8_t)src2; \
430
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
431
- if (src1) { \
432
- SET_QC(); \
433
- dest = (uint32_t)(1 << (sizeof(src1) * 8 - 1)); \
434
- if (src1 > 0) { \
435
- dest--; \
436
- } \
437
- } else { \
438
- dest = src1; \
439
- } \
440
- } else if (tmp <= -(ssize_t)sizeof(src1) * 8) { \
441
- dest = src1 >> 31; \
442
- } else if (tmp < 0) { \
443
- dest = src1 >> -tmp; \
444
- } else { \
445
- dest = src1 << tmp; \
446
- if ((dest >> tmp) != src1) { \
447
- SET_QC(); \
448
- dest = (uint32_t)(1 << (sizeof(src1) * 8 - 1)); \
449
- if (src1 > 0) { \
450
- dest--; \
451
- } \
452
- } \
453
- }} while (0)
454
+uint64_t HELPER(neon_qshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
455
+{
456
+ return do_uqrshl_d(val, (int8_t)shift, false, env->vfp.qc);
457
+}
458
+
459
+#define NEON_FN(dest, src1, src2) \
460
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
461
NEON_VOP_ENV(qshl_s8, neon_s8, 4)
462
+#undef NEON_FN
463
+
464
+#define NEON_FN(dest, src1, src2) \
465
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
466
NEON_VOP_ENV(qshl_s16, neon_s16, 2)
467
-NEON_VOP_ENV(qshl_s32, neon_s32, 1)
468
#undef NEON_FN
469
470
-uint64_t HELPER(neon_qshl_s64)(CPUARMState *env, uint64_t valop, uint64_t shiftop)
471
+uint32_t HELPER(neon_qshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
472
{
473
- int8_t shift = (uint8_t)shiftop;
474
- int64_t val = valop;
475
- if (shift >= 64) {
476
- if (val) {
477
- SET_QC();
478
- val = (val >> 63) ^ ~SIGNBIT64;
479
- }
480
- } else if (shift <= -64) {
481
- val >>= 63;
482
- } else if (shift < 0) {
483
- val >>= -shift;
484
- } else {
485
- int64_t tmp = val;
486
- val <<= shift;
487
- if ((val >> shift) != tmp) {
488
- SET_QC();
489
- val = (tmp >> 63) ^ ~SIGNBIT64;
490
- }
491
- }
492
- return val;
493
+ return do_sqrshl_bhs(val, (int8_t)shift, 32, false, env->vfp.qc);
494
}
495
496
-#define NEON_FN(dest, src1, src2) do { \
497
- if (src1 & (1 << (sizeof(src1) * 8 - 1))) { \
498
- SET_QC(); \
499
- dest = 0; \
500
- } else { \
501
- int8_t tmp; \
502
- tmp = (int8_t)src2; \
503
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
504
- if (src1) { \
505
- SET_QC(); \
506
- dest = ~0; \
507
- } else { \
508
- dest = 0; \
509
- } \
510
- } else if (tmp <= -(ssize_t)sizeof(src1) * 8) { \
511
- dest = 0; \
512
- } else if (tmp < 0) { \
513
- dest = src1 >> -tmp; \
514
- } else { \
515
- dest = src1 << tmp; \
516
- if ((dest >> tmp) != src1) { \
517
- SET_QC(); \
518
- dest = ~0; \
519
- } \
520
- } \
521
- }} while (0)
522
-NEON_VOP_ENV(qshlu_s8, neon_u8, 4)
523
-NEON_VOP_ENV(qshlu_s16, neon_u16, 2)
524
+uint64_t HELPER(neon_qshl_s64)(CPUARMState *env, uint64_t val, uint64_t shift)
525
+{
526
+ return do_sqrshl_d(val, (int8_t)shift, false, env->vfp.qc);
527
+}
528
+
529
+#define NEON_FN(dest, src1, src2) \
530
+ (dest = do_suqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
531
+NEON_VOP_ENV(qshlu_s8, neon_s8, 4)
532
#undef NEON_FN
533
534
-uint32_t HELPER(neon_qshlu_s32)(CPUARMState *env, uint32_t valop, uint32_t shiftop)
535
+#define NEON_FN(dest, src1, src2) \
536
+ (dest = do_suqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
537
+NEON_VOP_ENV(qshlu_s16, neon_s16, 2)
538
+#undef NEON_FN
539
+
540
+uint32_t HELPER(neon_qshlu_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
541
{
542
- if ((int32_t)valop < 0) {
543
- SET_QC();
544
- return 0;
545
- }
546
- return helper_neon_qshl_u32(env, valop, shiftop);
547
+ return do_suqrshl_bhs(val, (int8_t)shift, 32, false, env->vfp.qc);
548
}
549
550
-uint64_t HELPER(neon_qshlu_s64)(CPUARMState *env, uint64_t valop, uint64_t shiftop)
551
+uint64_t HELPER(neon_qshlu_s64)(CPUARMState *env, uint64_t val, uint64_t shift)
552
{
553
- if ((int64_t)valop < 0) {
554
- SET_QC();
555
- return 0;
556
- }
557
- return helper_neon_qshl_u64(env, valop, shiftop);
558
+ return do_suqrshl_d(val, (int8_t)shift, false, env->vfp.qc);
559
}
560
561
-#define NEON_FN(dest, src1, src2) do { \
562
- int8_t tmp; \
563
- tmp = (int8_t)src2; \
564
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
565
- if (src1) { \
566
- SET_QC(); \
567
- dest = ~0; \
568
- } else { \
569
- dest = 0; \
570
- } \
571
- } else if (tmp < -(ssize_t)sizeof(src1) * 8) { \
572
- dest = 0; \
573
- } else if (tmp == -(ssize_t)sizeof(src1) * 8) { \
574
- dest = src1 >> (sizeof(src1) * 8 - 1); \
575
- } else if (tmp < 0) { \
576
- dest = (src1 + (1 << (-1 - tmp))) >> -tmp; \
577
- } else { \
578
- dest = src1 << tmp; \
579
- if ((dest >> tmp) != src1) { \
580
- SET_QC(); \
581
- dest = ~0; \
582
- } \
583
- }} while (0)
584
+#define NEON_FN(dest, src1, src2) \
585
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
586
NEON_VOP_ENV(qrshl_u8, neon_u8, 4)
587
+#undef NEON_FN
588
+
589
+#define NEON_FN(dest, src1, src2) \
590
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
591
NEON_VOP_ENV(qrshl_u16, neon_u16, 2)
592
#undef NEON_FN
593
594
-/* The addition of the rounding constant may overflow, so we use an
595
- * intermediate 64 bit accumulator. */
596
-uint32_t HELPER(neon_qrshl_u32)(CPUARMState *env, uint32_t val, uint32_t shiftop)
597
+uint32_t HELPER(neon_qrshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
598
{
599
- uint32_t dest;
600
- int8_t shift = (int8_t)shiftop;
601
- if (shift >= 32) {
602
- if (val) {
603
- SET_QC();
604
- dest = ~0;
605
- } else {
606
- dest = 0;
607
- }
608
- } else if (shift < -32) {
609
- dest = 0;
610
- } else if (shift == -32) {
611
- dest = val >> 31;
612
- } else if (shift < 0) {
613
- uint64_t big_dest = ((uint64_t)val + (1 << (-1 - shift)));
614
- dest = big_dest >> -shift;
615
- } else {
616
- dest = val << shift;
617
- if ((dest >> shift) != val) {
618
- SET_QC();
619
- dest = ~0;
620
- }
621
- }
622
- return dest;
623
+ return do_uqrshl_bhs(val, (int8_t)shift, 32, true, env->vfp.qc);
624
}
625
626
-/* Handling addition overflow with 64 bit input values is more
627
- * tricky than with 32 bit values. */
628
-uint64_t HELPER(neon_qrshl_u64)(CPUARMState *env, uint64_t val, uint64_t shiftop)
629
+uint64_t HELPER(neon_qrshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
630
{
631
- int8_t shift = (int8_t)shiftop;
632
- if (shift >= 64) {
633
- if (val) {
634
- SET_QC();
635
- val = ~0;
636
- }
637
- } else if (shift < -64) {
638
- val = 0;
639
- } else if (shift == -64) {
640
- val >>= 63;
641
- } else if (shift < 0) {
642
- val >>= (-shift - 1);
643
- if (val == UINT64_MAX) {
644
- /* In this case, it means that the rounding constant is 1,
645
- * and the addition would overflow. Return the actual
646
- * result directly. */
647
- val = 0x8000000000000000ULL;
648
- } else {
649
- val++;
650
- val >>= 1;
651
- }
652
- } else { \
653
- uint64_t tmp = val;
654
- val <<= shift;
655
- if ((val >> shift) != tmp) {
656
- SET_QC();
657
- val = ~0;
658
- }
659
- }
660
- return val;
661
+ return do_uqrshl_d(val, (int8_t)shift, true, env->vfp.qc);
662
}
663
664
-#define NEON_FN(dest, src1, src2) do { \
665
- int8_t tmp; \
666
- tmp = (int8_t)src2; \
667
- if (tmp >= (ssize_t)sizeof(src1) * 8) { \
668
- if (src1) { \
669
- SET_QC(); \
670
- dest = (typeof(dest))(1 << (sizeof(src1) * 8 - 1)); \
671
- if (src1 > 0) { \
672
- dest--; \
673
- } \
674
- } else { \
675
- dest = 0; \
676
- } \
677
- } else if (tmp <= -(ssize_t)sizeof(src1) * 8) { \
678
- dest = 0; \
679
- } else if (tmp < 0) { \
680
- dest = (src1 + (1 << (-1 - tmp))) >> -tmp; \
681
- } else { \
682
- dest = src1 << tmp; \
683
- if ((dest >> tmp) != src1) { \
684
- SET_QC(); \
685
- dest = (uint32_t)(1 << (sizeof(src1) * 8 - 1)); \
686
- if (src1 > 0) { \
687
- dest--; \
688
- } \
689
- } \
690
- }} while (0)
691
+#define NEON_FN(dest, src1, src2) \
692
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
693
NEON_VOP_ENV(qrshl_s8, neon_s8, 4)
694
+#undef NEON_FN
695
+
696
+#define NEON_FN(dest, src1, src2) \
697
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
698
NEON_VOP_ENV(qrshl_s16, neon_s16, 2)
699
#undef NEON_FN
700
701
-/* The addition of the rounding constant may overflow, so we use an
702
- * intermediate 64 bit accumulator. */
703
-uint32_t HELPER(neon_qrshl_s32)(CPUARMState *env, uint32_t valop, uint32_t shiftop)
704
+uint32_t HELPER(neon_qrshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
705
{
706
- int32_t dest;
707
- int32_t val = (int32_t)valop;
708
- int8_t shift = (int8_t)shiftop;
709
- if (shift >= 32) {
710
- if (val) {
711
- SET_QC();
712
- dest = (val >> 31) ^ ~SIGNBIT;
713
- } else {
714
- dest = 0;
715
- }
716
- } else if (shift <= -32) {
717
- dest = 0;
718
- } else if (shift < 0) {
719
- int64_t big_dest = ((int64_t)val + (1 << (-1 - shift)));
720
- dest = big_dest >> -shift;
721
- } else {
722
- dest = val << shift;
723
- if ((dest >> shift) != val) {
724
- SET_QC();
725
- dest = (val >> 31) ^ ~SIGNBIT;
726
- }
727
- }
728
- return dest;
729
+ return do_sqrshl_bhs(val, (int8_t)shift, 32, true, env->vfp.qc);
730
}
731
732
-/* Handling addition overflow with 64 bit input values is more
733
- * tricky than with 32 bit values. */
734
-uint64_t HELPER(neon_qrshl_s64)(CPUARMState *env, uint64_t valop, uint64_t shiftop)
735
+uint64_t HELPER(neon_qrshl_s64)(CPUARMState *env, uint64_t val, uint64_t shift)
736
{
737
- int8_t shift = (uint8_t)shiftop;
738
- int64_t val = valop;
739
-
740
- if (shift >= 64) {
741
- if (val) {
742
- SET_QC();
743
- val = (val >> 63) ^ ~SIGNBIT64;
744
- }
745
- } else if (shift <= -64) {
746
- val = 0;
747
- } else if (shift < 0) {
748
- val >>= (-shift - 1);
749
- if (val == INT64_MAX) {
750
- /* In this case, it means that the rounding constant is 1,
751
- * and the addition would overflow. Return the actual
752
- * result directly. */
753
- val = 0x4000000000000000ULL;
754
- } else {
755
- val++;
756
- val >>= 1;
757
- }
758
- } else {
759
- int64_t tmp = val;
760
- val <<= shift;
761
- if ((val >> shift) != tmp) {
762
- SET_QC();
763
- val = (tmp >> 63) ^ ~SIGNBIT64;
764
- }
765
- }
766
- return val;
767
+ return do_sqrshl_d(val, (int8_t)shift, true, env->vfp.qc);
768
}
769
770
uint32_t HELPER(neon_add_u8)(uint32_t a, uint32_t b)
771
--
131
--
772
2.20.1
132
2.25.1
773
774
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
This will be used for raising various traps for SME.
2
4
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-60-richard.henderson@linaro.org
7
Message-id: 20220620175235.60881-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
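
As a concrete illustration (not part of the patch; it assumes ARM_EL_IL is bit
25, matching ARM_EL_IL_SHIFT in the context below): the syndrome for an SME
access trap on a 32-bit instruction is just the EC, the IL bit and the
exception type ORed together:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EC_SMETRAP      0x1d
    #define ARM_EL_EC_SHIFT 26
    #define ARM_EL_IL       (1u << 25)

    /* SMEExceptionType values as added by the patch. */
    enum { SME_ET_AccessTrap, SME_ET_Streaming, SME_ET_NotStreaming, SME_ET_InactiveZA };

    static uint32_t syn_smetrap(int etype, int is_16bit)
    {
        return (EC_SMETRAP << ARM_EL_EC_SHIFT) | (is_16bit ? 0 : ARM_EL_IL) | etype;
    }

    int main(void)
    {
        /* EC = 0x1d, IL = 1, etype = 0 (AccessTrap). */
        assert(syn_smetrap(SME_ET_AccessTrap, 0) == 0x76000000u);
        printf("syn_smetrap(AccessTrap) = %#x\n", syn_smetrap(SME_ET_AccessTrap, 0));
        return 0;
    }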
7
---
9
---
8
target/arm/helper.h | 14 ++++++
10
target/arm/syndrome.h | 14 ++++++++++++++
9
target/arm/sve.decode | 8 ++++
11
1 file changed, 14 insertions(+)
10
target/arm/translate-sve.c | 8 ++++
11
target/arm/vec_helper.c | 88 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 118 insertions(+)
13
12
14
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.h
15
--- a/target/arm/syndrome.h
17
+++ b/target/arm/helper.h
16
+++ b/target/arm/syndrome.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_sqrdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
17
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
19
DEF_HELPER_FLAGS_4(sve2_sqrdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
EC_AA64_SMC = 0x17,
20
DEF_HELPER_FLAGS_4(sve2_sqrdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
EC_SYSTEMREGISTERTRAP = 0x18,
21
20
EC_SVEACCESSTRAP = 0x19,
22
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_h, TCG_CALL_NO_RWG,
21
+ EC_SMETRAP = 0x1d,
23
+ void, ptr, ptr, ptr, i32)
22
EC_INSNABORT = 0x20,
24
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_s, TCG_CALL_NO_RWG,
23
EC_INSNABORT_SAME_EL = 0x21,
25
+ void, ptr, ptr, ptr, i32)
24
EC_PCALIGNMENT = 0x22,
26
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_d, TCG_CALL_NO_RWG,
25
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
27
+ void, ptr, ptr, ptr, i32)
26
EC_AA64_BKPT = 0x3c,
27
};
28
29
+typedef enum {
30
+ SME_ET_AccessTrap,
31
+ SME_ET_Streaming,
32
+ SME_ET_NotStreaming,
33
+ SME_ET_InactiveZA,
34
+} SMEExceptionType;
28
+
35
+
29
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_h, TCG_CALL_NO_RWG,
36
#define ARM_EL_EC_SHIFT 26
30
+ void, ptr, ptr, ptr, i32)
37
#define ARM_EL_IL_SHIFT 25
31
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_s, TCG_CALL_NO_RWG,
38
#define ARM_EL_ISV_SHIFT 24
32
+ void, ptr, ptr, ptr, i32)
39
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_sve_access_trap(void)
33
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_d, TCG_CALL_NO_RWG,
40
return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
34
+ void, ptr, ptr, ptr, i32)
35
+
36
DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
38
#ifdef TARGET_AARCH64
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve.decode
42
+++ b/target/arm/sve.decode
43
@@ -XXX,XX +XXX,XX @@ SQDMULLB_zzx_d 01000100 11 1 ..... 1110.0 ..... ..... @rrx_2a esz=3
44
SQDMULLT_zzx_s 01000100 10 1 ..... 1110.1 ..... ..... @rrx_3a esz=2
45
SQDMULLT_zzx_d 01000100 11 1 ..... 1110.1 ..... ..... @rrx_2a esz=3
46
47
+# SVE2 saturating multiply high (indexed)
48
+SQDMULH_zzx_h 01000100 0. 1 ..... 111100 ..... ..... @rrx_3 esz=1
49
+SQDMULH_zzx_s 01000100 10 1 ..... 111100 ..... ..... @rrx_2 esz=2
50
+SQDMULH_zzx_d 01000100 11 1 ..... 111100 ..... ..... @rrx_1 esz=3
51
+SQRDMULH_zzx_h 01000100 0. 1 ..... 111101 ..... ..... @rrx_3 esz=1
52
+SQRDMULH_zzx_s 01000100 10 1 ..... 111101 ..... ..... @rrx_2 esz=2
53
+SQRDMULH_zzx_d 01000100 11 1 ..... 111101 ..... ..... @rrx_1 esz=3
54
+
55
# SVE2 integer multiply (indexed)
56
MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
57
MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
58
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-sve.c
61
+++ b/target/arm/translate-sve.c
62
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRX(trans_MUL_zzx_h, gen_helper_gvec_mul_idx_h)
63
DO_SVE2_RRX(trans_MUL_zzx_s, gen_helper_gvec_mul_idx_s)
64
DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
65
66
+DO_SVE2_RRX(trans_SQDMULH_zzx_h, gen_helper_sve2_sqdmulh_idx_h)
67
+DO_SVE2_RRX(trans_SQDMULH_zzx_s, gen_helper_sve2_sqdmulh_idx_s)
68
+DO_SVE2_RRX(trans_SQDMULH_zzx_d, gen_helper_sve2_sqdmulh_idx_d)
69
+
70
+DO_SVE2_RRX(trans_SQRDMULH_zzx_h, gen_helper_sve2_sqrdmulh_idx_h)
71
+DO_SVE2_RRX(trans_SQRDMULH_zzx_s, gen_helper_sve2_sqrdmulh_idx_s)
72
+DO_SVE2_RRX(trans_SQRDMULH_zzx_d, gen_helper_sve2_sqrdmulh_idx_d)
73
+
74
#undef DO_SVE2_RRX
75
76
#define DO_SVE2_RRX_TB(NAME, FUNC, TOP) \
77
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/vec_helper.c
80
+++ b/target/arm/vec_helper.c
81
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
82
}
83
}
41
}
84
42
85
+void HELPER(sve2_sqdmulh_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
43
+static inline uint32_t syn_smetrap(SMEExceptionType etype, bool is_16bit)
86
+{
44
+{
87
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
45
+ return (EC_SMETRAP << ARM_EL_EC_SHIFT)
88
+ int idx = simd_data(desc);
46
+ | (is_16bit ? 0 : ARM_EL_IL) | etype;
89
+ int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
90
+ uint32_t discard;
91
+
92
+ for (i = 0; i < opr_sz / 2; i += 16 / 2) {
93
+ int16_t mm = m[i];
94
+ for (j = 0; j < 16 / 2; ++j) {
95
+ d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, false, &discard);
96
+ }
97
+ }
98
+}
47
+}
99
+
48
+
100
+void HELPER(sve2_sqrdmulh_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
49
static inline uint32_t syn_pactrap(void)
101
+{
102
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
103
+ int idx = simd_data(desc);
104
+ int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
105
+ uint32_t discard;
106
+
107
+ for (i = 0; i < opr_sz / 2; i += 16 / 2) {
108
+ int16_t mm = m[i];
109
+ for (j = 0; j < 16 / 2; ++j) {
110
+ d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, true, &discard);
111
+ }
112
+ }
113
+}
114
+
115
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
116
int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
117
bool neg, bool round, uint32_t *sat)
118
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
119
}
120
}
121
122
+void HELPER(sve2_sqdmulh_idx_s)(void *vd, void *vn, void *vm, uint32_t desc)
123
+{
124
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
125
+ int idx = simd_data(desc);
126
+ int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
127
+ uint32_t discard;
128
+
129
+ for (i = 0; i < opr_sz / 4; i += 16 / 4) {
130
+ int32_t mm = m[i];
131
+ for (j = 0; j < 16 / 4; ++j) {
132
+ d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, false, &discard);
133
+ }
134
+ }
135
+}
136
+
137
+void HELPER(sve2_sqrdmulh_idx_s)(void *vd, void *vn, void *vm, uint32_t desc)
138
+{
139
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
140
+ int idx = simd_data(desc);
141
+ int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
142
+ uint32_t discard;
143
+
144
+ for (i = 0; i < opr_sz / 4; i += 16 / 4) {
145
+ int32_t mm = m[i];
146
+ for (j = 0; j < 16 / 4; ++j) {
147
+ d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, true, &discard);
148
+ }
149
+ }
150
+}
151
+
152
/* Signed saturating rounding doubling multiply-accumulate high half, 64-bit */
153
static int64_t do_sat128_d(Int128 r)
154
{
50
{
155
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
51
return EC_PACTRAP << ARM_EL_EC_SHIFT;
156
}
157
}
158
159
+void HELPER(sve2_sqdmulh_idx_d)(void *vd, void *vn, void *vm, uint32_t desc)
160
+{
161
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
162
+ int idx = simd_data(desc);
163
+ int64_t *d = vd, *n = vn, *m = (int64_t *)vm + idx;
164
+
165
+ for (i = 0; i < opr_sz / 8; i += 16 / 8) {
166
+ int64_t mm = m[i];
167
+ for (j = 0; j < 16 / 8; ++j) {
168
+ d[i + j] = do_sqrdmlah_d(n[i + j], mm, 0, false, false);
169
+ }
170
+ }
171
+}
172
+
173
+void HELPER(sve2_sqrdmulh_idx_d)(void *vd, void *vn, void *vm, uint32_t desc)
174
+{
175
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
176
+ int idx = simd_data(desc);
177
+ int64_t *d = vd, *n = vn, *m = (int64_t *)vm + idx;
178
+
179
+ for (i = 0; i < opr_sz / 8; i += 16 / 8) {
180
+ int64_t mm = m[i];
181
+ for (j = 0; j < 16 / 8; ++j) {
182
+ d[i + j] = do_sqrdmlah_d(n[i + j], mm, 0, false, true);
183
+ }
184
+ }
185
+}
186
+
187
/* Integer 8 and 16-bit dot-product.
188
*
189
* Note that for the loops herein, host endianness does not matter
190
--
52
--
191
2.20.1
53
2.25.1
192
193
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For SVE, we potentially have a 4th argument coming from the
movprfx instruction. Currently we do not optimize movprfx,
so the problem is not visible.

This will be used for controlling access to SME cpregs.
6
4
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-51-richard.henderson@linaro.org
7
Message-id: 20220620175235.60881-5-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
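
To make the "4th argument" point concrete: after a movprfx, the accumulator of
the following FCMLA lives in a register other than the destination, so a
helper that re-reads the accumulator from the destination would compute with
stale data once the two instructions are fused. A toy scalar model of the
difference (not the real helper, and ignoring the complex arithmetic itself):

    #include <assert.h>
    #include <stdio.h>

    /* Old shape: the accumulator is re-read from the destination. */
    static float mla_3op(float *d, float n, float m)
    {
        return *d + n * m;
    }

    /* New shape: the accumulator 'a' arrives as its own argument. */
    static float mla_4op(float a, float n, float m)
    {
        return a + n * m;
    }

    int main(void)
    {
        float zd = 100.0f;  /* stale destination contents */
        float za = 1.0f;    /* accumulator named by movprfx */

        /* movprfx zd, za ; fcmla zd, ... should accumulate onto za. */
        float want = mla_4op(za, 2.0f, 3.0f);   /* 7 */
        float oops = mla_3op(&zd, 2.0f, 3.0f);  /* 106: wrong accumulator */

        assert(want == 7.0f && oops == 106.0f);
        printf("separate accumulator: %g, destination as accumulator: %g\n",
               want, oops);
        return 0;
    }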
11
---
9
---
12
target/arm/helper.h | 20 +++++++--------
10
target/arm/cpregs.h | 5 +++++
13
target/arm/translate-a64.c | 28 +++++++++++++++++----
11
target/arm/translate-a64.c | 18 ++++++++++++++++++
14
target/arm/translate-neon.c | 10 +++++---
12
2 files changed, 23 insertions(+)
15
target/arm/translate-sve.c | 5 ++--
16
target/arm/vec_helper.c | 50 +++++++++++++++----------------------
17
5 files changed, 62 insertions(+), 51 deletions(-)
18
13
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
16
--- a/target/arm/cpregs.h
22
+++ b/target/arm/helper.h
17
+++ b/target/arm/cpregs.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
18
@@ -XXX,XX +XXX,XX @@ enum {
24
DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
19
ARM_CP_EL3_NO_EL2_UNDEF = 1 << 16,
25
void, ptr, ptr, ptr, ptr, i32)
20
ARM_CP_EL3_NO_EL2_KEEP = 1 << 17,
26
21
ARM_CP_EL3_NO_EL2_C_NZ = 1 << 18,
27
-DEF_HELPER_FLAGS_5(gvec_fcmlah, TCG_CALL_NO_RWG,
22
+ /*
28
- void, ptr, ptr, ptr, ptr, i32)
23
+ * Flag: Access check for this sysreg is constrained by the
29
-DEF_HELPER_FLAGS_5(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
24
+ * ARM pseudocode function CheckSMEAccess().
30
- void, ptr, ptr, ptr, ptr, i32)
25
+ */
31
-DEF_HELPER_FLAGS_5(gvec_fcmlas, TCG_CALL_NO_RWG,
26
+ ARM_CP_SME = 1 << 19,
32
- void, ptr, ptr, ptr, ptr, i32)
27
};
33
-DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
28
34
- void, ptr, ptr, ptr, ptr, i32)
29
/*
35
-DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
36
- void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_6(gvec_fcmlah, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_6(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_6(gvec_fcmlas, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
46
+ void, ptr, ptr, ptr, ptr, ptr, i32)
47
48
DEF_HELPER_FLAGS_5(neon_paddh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
49
DEF_HELPER_FLAGS_5(neon_pmaxh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
50
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
30
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
51
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-a64.c
32
--- a/target/arm/translate-a64.c
53
+++ b/target/arm/translate-a64.c
33
+++ b/target/arm/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op4_ool(DisasContext *s, bool is_q, int rd, int rn,
34
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
55
is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
35
return fp_access_check(s);
56
}
36
}
57
37
58
+/*
38
+/*
59
+ * Expand a 4-operand + fpstatus pointer + simd data value operation using
39
+ * Check that SME access is enabled, raise an exception if not.
60
+ * an out-of-line helper.
40
+ * Note that this function corresponds to CheckSMEAccess and is
41
+ * only used directly for cpregs.
61
+ */
42
+ */
62
+static void gen_gvec_op4_fpst(DisasContext *s, bool is_q, int rd, int rn,
43
+static bool sme_access_check(DisasContext *s)
63
+ int rm, int ra, bool is_fp16, int data,
64
+ gen_helper_gvec_4_ptr *fn)
65
+{
44
+{
66
+ TCGv_ptr fpst = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
45
+ if (s->sme_excp_el) {
67
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, rd),
46
+ gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
68
+ vec_full_reg_offset(s, rn),
47
+ syn_smetrap(SME_ET_AccessTrap, false),
69
+ vec_full_reg_offset(s, rm),
48
+ s->sme_excp_el);
70
+ vec_full_reg_offset(s, ra), fpst,
49
+ return false;
71
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
50
+ }
72
+ tcg_temp_free_ptr(fpst);
51
+ return true;
73
+}
52
+}
74
+
53
+
75
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
54
/*
76
* than the 32 bit equivalent.
55
* This utility function is for doing register extension with an
77
*/
56
* optional shift. You will likely want to pass a temporary for the
78
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
57
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
79
rot = extract32(opcode, 0, 2);
58
return;
80
switch (size) {
59
} else if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
81
case 1:
60
return;
82
- gen_gvec_op3_fpst(s, is_q, rd, rn, rm, true, rot,
61
+ } else if ((ri->type & ARM_CP_SME) && !sme_access_check(s)) {
83
+ gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, true, rot,
62
+ return;
84
gen_helper_gvec_fcmlah);
85
break;
86
case 2:
87
- gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
88
+ gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, false, rot,
89
gen_helper_gvec_fcmlas);
90
break;
91
case 3:
92
- gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
93
+ gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, false, rot,
94
gen_helper_gvec_fcmlad);
95
break;
96
default:
97
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
98
{
99
int rot = extract32(insn, 13, 2);
100
int data = (index << 2) | rot;
101
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
102
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, rd),
103
vec_full_reg_offset(s, rn),
104
- vec_full_reg_offset(s, rm), fpst,
105
+ vec_full_reg_offset(s, rm),
106
+ vec_full_reg_offset(s, rd), fpst,
107
is_q ? 16 : 8, vec_full_reg_size(s), data,
108
size == MO_64
109
? gen_helper_gvec_fcmlas_idx
110
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-neon.c
113
+++ b/target/arm/translate-neon.c
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
115
{
116
int opr_sz;
117
TCGv_ptr fpst;
118
- gen_helper_gvec_3_ptr *fn_gvec_ptr;
119
+ gen_helper_gvec_4_ptr *fn_gvec_ptr;
120
121
if (!dc_isar_feature(aa32_vcma, s)
122
|| (a->size == MO_16 && !dc_isar_feature(aa32_fp16_arith, s))) {
123
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
124
fpst = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
125
fn_gvec_ptr = (a->size == MO_16) ?
126
gen_helper_gvec_fcmlah : gen_helper_gvec_fcmlas;
127
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
128
+ tcg_gen_gvec_4_ptr(vfp_reg_offset(1, a->vd),
129
vfp_reg_offset(1, a->vn),
130
vfp_reg_offset(1, a->vm),
131
+ vfp_reg_offset(1, a->vd),
132
fpst, opr_sz, opr_sz, a->rot,
133
fn_gvec_ptr);
134
tcg_temp_free_ptr(fpst);
135
@@ -XXX,XX +XXX,XX @@ static bool trans_VFML(DisasContext *s, arg_VFML *a)
136
137
static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
138
{
139
- gen_helper_gvec_3_ptr *fn_gvec_ptr;
140
+ gen_helper_gvec_4_ptr *fn_gvec_ptr;
141
int opr_sz;
142
TCGv_ptr fpst;
143
144
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
145
gen_helper_gvec_fcmlah_idx : gen_helper_gvec_fcmlas_idx;
146
opr_sz = (1 + a->q) * 8;
147
fpst = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
148
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
149
+ tcg_gen_gvec_4_ptr(vfp_reg_offset(1, a->vd),
150
vfp_reg_offset(1, a->vn),
151
vfp_reg_offset(1, a->vm),
152
+ vfp_reg_offset(1, a->vd),
153
fpst, opr_sz, opr_sz,
154
(a->index << 2) | a->rot, fn_gvec_ptr);
155
tcg_temp_free_ptr(fpst);
156
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/translate-sve.c
159
+++ b/target/arm/translate-sve.c
160
@@ -XXX,XX +XXX,XX @@ static bool trans_FCMLA_zpzzz(DisasContext *s, arg_FCMLA_zpzzz *a)
161
162
static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
163
{
164
- static gen_helper_gvec_3_ptr * const fns[2] = {
165
+ static gen_helper_gvec_4_ptr * const fns[2] = {
166
gen_helper_gvec_fcmlah_idx,
167
gen_helper_gvec_fcmlas_idx,
168
};
169
@@ -XXX,XX +XXX,XX @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
170
if (sve_access_check(s)) {
171
unsigned vsz = vec_full_reg_size(s);
172
TCGv_ptr status = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
173
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
174
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
175
vec_full_reg_offset(s, a->rn),
176
vec_full_reg_offset(s, a->rm),
177
+ vec_full_reg_offset(s, a->ra),
178
status, vsz, vsz,
179
a->index * 4 + a->rot,
180
fns[a->esz - 1]);
181
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
182
index XXXXXXX..XXXXXXX 100644
183
--- a/target/arm/vec_helper.c
184
+++ b/target/arm/vec_helper.c
185
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
186
clear_tail(d, opr_sz, simd_maxsz(desc));
187
}
188
189
-void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm,
190
+void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm, void *va,
191
void *vfpst, uint32_t desc)
192
{
193
uintptr_t opr_sz = simd_oprsz(desc);
194
- float16 *d = vd;
195
- float16 *n = vn;
196
- float16 *m = vm;
197
+ float16 *d = vd, *n = vn, *m = vm, *a = va;
198
float_status *fpst = vfpst;
199
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
200
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
201
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm,
202
float16 e4 = e2;
203
float16 e3 = m[H2(i + 1 - flip)] ^ neg_imag;
204
205
- d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
206
- d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
207
+ d[H2(i)] = float16_muladd(e2, e1, a[H2(i)], 0, fpst);
208
+ d[H2(i + 1)] = float16_muladd(e4, e3, a[H2(i + 1)], 0, fpst);
209
}
63
}
210
clear_tail(d, opr_sz, simd_maxsz(desc));
64
211
}
65
if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
212
213
-void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
214
+void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm, void *va,
215
void *vfpst, uint32_t desc)
216
{
217
uintptr_t opr_sz = simd_oprsz(desc);
218
- float16 *d = vd;
219
- float16 *n = vn;
220
- float16 *m = vm;
221
+ float16 *d = vd, *n = vn, *m = vm, *a = va;
222
float_status *fpst = vfpst;
223
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
224
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
225
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
226
float16 e2 = n[H2(j + flip)];
227
float16 e4 = e2;
228
229
- d[H2(j)] = float16_muladd(e2, e1, d[H2(j)], 0, fpst);
230
- d[H2(j + 1)] = float16_muladd(e4, e3, d[H2(j + 1)], 0, fpst);
231
+ d[H2(j)] = float16_muladd(e2, e1, a[H2(j)], 0, fpst);
232
+ d[H2(j + 1)] = float16_muladd(e4, e3, a[H2(j + 1)], 0, fpst);
233
}
234
}
235
clear_tail(d, opr_sz, simd_maxsz(desc));
236
}
237
238
-void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm,
239
+void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm, void *va,
240
void *vfpst, uint32_t desc)
241
{
242
uintptr_t opr_sz = simd_oprsz(desc);
243
- float32 *d = vd;
244
- float32 *n = vn;
245
- float32 *m = vm;
246
+ float32 *d = vd, *n = vn, *m = vm, *a = va;
247
float_status *fpst = vfpst;
248
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
249
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
250
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm,
251
float32 e4 = e2;
252
float32 e3 = m[H4(i + 1 - flip)] ^ neg_imag;
253
254
- d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
255
- d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
256
+ d[H4(i)] = float32_muladd(e2, e1, a[H4(i)], 0, fpst);
257
+ d[H4(i + 1)] = float32_muladd(e4, e3, a[H4(i + 1)], 0, fpst);
258
}
259
clear_tail(d, opr_sz, simd_maxsz(desc));
260
}
261
262
-void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
263
+void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm, void *va,
264
void *vfpst, uint32_t desc)
265
{
266
uintptr_t opr_sz = simd_oprsz(desc);
267
- float32 *d = vd;
268
- float32 *n = vn;
269
- float32 *m = vm;
270
+ float32 *d = vd, *n = vn, *m = vm, *a = va;
271
float_status *fpst = vfpst;
272
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
273
uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
274
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
275
float32 e2 = n[H4(j + flip)];
276
float32 e4 = e2;
277
278
- d[H4(j)] = float32_muladd(e2, e1, d[H4(j)], 0, fpst);
279
- d[H4(j + 1)] = float32_muladd(e4, e3, d[H4(j + 1)], 0, fpst);
280
+ d[H4(j)] = float32_muladd(e2, e1, a[H4(j)], 0, fpst);
281
+ d[H4(j + 1)] = float32_muladd(e4, e3, a[H4(j + 1)], 0, fpst);
282
}
283
}
284
clear_tail(d, opr_sz, simd_maxsz(desc));
285
}
286
287
-void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
288
+void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm, void *va,
289
void *vfpst, uint32_t desc)
290
{
291
uintptr_t opr_sz = simd_oprsz(desc);
292
- float64 *d = vd;
293
- float64 *n = vn;
294
- float64 *m = vm;
295
+ float64 *d = vd, *n = vn, *m = vm, *a = va;
296
float_status *fpst = vfpst;
297
intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
298
uint64_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
299
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
300
float64 e4 = e2;
301
float64 e3 = m[i + 1 - flip] ^ neg_imag;
302
303
- d[i] = float64_muladd(e2, e1, d[i], 0, fpst);
304
- d[i + 1] = float64_muladd(e4, e3, d[i + 1], 0, fpst);
305
+ d[i] = float64_muladd(e2, e1, a[i], 0, fpst);
306
+ d[i + 1] = float64_muladd(e4, e3, a[i + 1], 0, fpst);
307
}
308
clear_tail(d, opr_sz, simd_maxsz(desc));
309
}
310
--
66
--
311
2.20.1
67
2.25.1
312
313
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
This cpreg is used to access two new bits of PSTATE
4
that are not visible via any other mechanism.
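
Illustrative sketch, not part of the patch: the new register keeps PSTATE.SM
in bit 0 and PSTATE.ZA in bit 1 (per the FIELD(SVCR, ...) definitions below),
and writes are masked down to those two bits. A standalone C rendering of that
masking, with all names invented here:

    #include <stdint.h>
    #include <stdio.h>

    /* SVCR.SM is bit 0, SVCR.ZA is bit 1; everything else reads as zero. */
    #define SVCR_SM (1ull << 0)
    #define SVCR_ZA (1ull << 1)

    static uint64_t svcr_write_sketch(uint64_t value)
    {
        /* Only the two architected bits are writable. */
        return value & (SVCR_SM | SVCR_ZA);
    }

    int main(void)
    {
        uint64_t svcr = svcr_write_sketch(0xffull);   /* stray high bits dropped */
        printf("SM=%d ZA=%d\n", (svcr & SVCR_SM) ? 1 : 0, (svcr & SVCR_ZA) ? 1 : 0);
        return 0;
    }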
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-67-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-6-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/cpu.h | 5 +++++
11
target/arm/cpu.h | 6 ++++++
9
target/arm/helper.h | 4 ++++
12
target/arm/helper.c | 13 +++++++++++++
10
target/arm/sve.decode | 4 ++++
13
2 files changed, 19 insertions(+)
11
target/arm/translate-sve.c | 16 ++++++++++++++++
12
target/arm/vec_helper.c | 2 ++
13
5 files changed, 31 insertions(+)
14
14
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
20
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
20
* nRW (also known as M[4]) is kept, inverted, in env->aarch64
21
* DAIF (exception masks) are kept in env->daif
22
* BTYPE is kept in env->btype
23
+ * SM and ZA are kept in env->svcr
24
* all other bits are stored in their correct places in env->pstate
25
*/
26
uint32_t pstate;
27
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
28
uint32_t condexec_bits; /* IT bits. cpsr[15:10,26:25]. */
29
uint32_t btype; /* BTI branch type. spsr[11:10]. */
30
uint64_t daif; /* exception masks, in the bits they are in PSTATE */
31
+ uint64_t svcr; /* PSTATE.{SM,ZA} in the bits they are in SVCR */
32
33
uint64_t elr_el[4]; /* AArch64 exception link regs */
34
uint64_t sp_el[4]; /* AArch64 banked stack pointers */
35
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
36
#define PSTATE_MODE_EL1t 4
37
#define PSTATE_MODE_EL0t 0
38
39
+/* PSTATE bits that are accessed via SVCR and not stored in SPSR_ELx. */
40
+FIELD(SVCR, SM, 0, 1)
41
+FIELD(SVCR, ZA, 1, 1)
42
+
43
/* Write a new value to v7m.exception, thus transitioning into or out
44
* of Handler mode; this may result in a change of active stack pointer.
45
*/
46
diff --git a/target/arm/helper.c b/target/arm/helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/helper.c
49
+++ b/target/arm/helper.c
50
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
51
return CP_ACCESS_OK;
21
}
52
}
22
53
23
+static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
54
+static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
+ uint64_t value)
24
+{
56
+{
25
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
57
+ value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
58
+ /* TODO: Side effects. */
59
+ env->svcr = value;
26
+}
60
+}
27
+
61
+
28
static inline bool isar_feature_aa64_sve_f32mm(const ARMISARegisters *id)
62
static const ARMCPRegInfo sme_reginfo[] = {
29
{
63
{ .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
30
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
64
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
31
diff --git a/target/arm/helper.h b/target/arm/helper.h
65
.access = PL0_RW, .accessfn = access_tpidr2,
32
index XXXXXXX..XXXXXXX 100644
66
.fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
33
--- a/target/arm/helper.h
67
+ { .name = "SVCR", .state = ARM_CP_STATE_AA64,
34
+++ b/target/arm/helper.h
68
+ .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 2,
35
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_sdot_idx_h, TCG_CALL_NO_RWG,
69
+ .access = PL0_RW, .type = ARM_CP_SME,
36
void, ptr, ptr, ptr, ptr, i32)
70
+ .fieldoffset = offsetof(CPUARMState, svcr),
37
DEF_HELPER_FLAGS_5(gvec_udot_idx_h, TCG_CALL_NO_RWG,
71
+ .writefn = svcr_write, .raw_writefn = raw_write },
38
void, ptr, ptr, ptr, ptr, i32)
72
};
39
+DEF_HELPER_FLAGS_5(gvec_sudot_idx_b, TCG_CALL_NO_RWG,
73
#endif /* TARGET_AARCH64 */
40
+ void, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(gvec_usdot_idx_b, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, i32)
43
44
DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
45
void, ptr, ptr, ptr, ptr, i32)
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve.decode
49
+++ b/target/arm/sve.decode
50
@@ -XXX,XX +XXX,XX @@ SQRDMLSH_zzxz_h 01000100 0. 1 ..... 000101 ..... ..... @rrxr_3 esz=1
51
SQRDMLSH_zzxz_s 01000100 10 1 ..... 000101 ..... ..... @rrxr_2 esz=2
52
SQRDMLSH_zzxz_d 01000100 11 1 ..... 000101 ..... ..... @rrxr_1 esz=3
53
54
+# SVE mixed sign dot product (indexed)
55
+USDOT_zzxw_s 01000100 10 1 ..... 000110 ..... ..... @rrxr_2 esz=2
56
+SUDOT_zzxw_s 01000100 10 1 ..... 000111 ..... ..... @rrxr_2 esz=2
57
+
58
# SVE2 saturating multiply-add (indexed)
59
SQDMLALB_zzxw_s 01000100 10 1 ..... 0010.0 ..... ..... @rrxr_3a esz=2
60
SQDMLALB_zzxw_d 01000100 11 1 ..... 0010.0 ..... ..... @rrxr_2a esz=3
61
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/translate-sve.c
64
+++ b/target/arm/translate-sve.c
65
@@ -XXX,XX +XXX,XX @@ DO_RRXR(trans_SDOT_zzxw_d, gen_helper_gvec_sdot_idx_h)
66
DO_RRXR(trans_UDOT_zzxw_s, gen_helper_gvec_udot_idx_b)
67
DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
68
69
+static bool trans_SUDOT_zzxw_s(DisasContext *s, arg_rrxr_esz *a)
70
+{
71
+ if (!dc_isar_feature(aa64_sve_i8mm, s)) {
72
+ return false;
73
+ }
74
+ return do_zzxz_ool(s, a, gen_helper_gvec_sudot_idx_b);
75
+}
76
+
77
+static bool trans_USDOT_zzxw_s(DisasContext *s, arg_rrxr_esz *a)
78
+{
79
+ if (!dc_isar_feature(aa64_sve_i8mm, s)) {
80
+ return false;
81
+ }
82
+ return do_zzxz_ool(s, a, gen_helper_gvec_usdot_idx_b);
83
+}
84
+
85
#undef DO_RRXR
86
87
static bool do_sve2_zzz_data(DisasContext *s, int rd, int rn, int rm, int data,
88
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/vec_helper.c
91
+++ b/target/arm/vec_helper.c
92
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
93
94
DO_DOT_IDX(gvec_sdot_idx_b, int32_t, int8_t, int8_t, H4)
95
DO_DOT_IDX(gvec_udot_idx_b, uint32_t, uint8_t, uint8_t, H4)
96
+DO_DOT_IDX(gvec_sudot_idx_b, int32_t, int8_t, uint8_t, H4)
97
+DO_DOT_IDX(gvec_usdot_idx_b, int32_t, uint8_t, int8_t, H4)
98
DO_DOT_IDX(gvec_sdot_idx_h, int64_t, int16_t, int16_t, )
99
DO_DOT_IDX(gvec_udot_idx_h, uint64_t, uint16_t, uint16_t, )
100
74
101
--
75
--
102
2.20.1
76
2.25.1
103
104
1
From: Stephen Long <steplong@quicinc.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
These cpregs control the streaming vector length and whether the
4
full a64 instruction set is allowed while in streaming mode.
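
Illustrative sketch, not part of the patch: SMCR_ELx.LEN sits in bits [3:0]
and SMCR_ELx.FA64 in bit 31, matching the FIELD() definitions added below.
A minimal standalone decoder (names invented here; the real effective
streaming vector length is further constrained by the implementation and by
higher exception levels, which this ignores):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SMCR_LEN_MASK 0xfull          /* bits [3:0] */
    #define SMCR_FA64_BIT (1ull << 31)    /* bit 31 */

    /* Requested streaming vector length in 128-bit quadwords (LEN + 1). */
    static unsigned smcr_requested_vq(uint64_t smcr)
    {
        return (unsigned)(smcr & SMCR_LEN_MASK) + 1;
    }

    static bool smcr_fa64(uint64_t smcr)
    {
        return (smcr & SMCR_FA64_BIT) != 0;
    }

    int main(void)
    {
        uint64_t smcr = SMCR_FA64_BIT | 3;    /* LEN=3 -> 4 quadwords = 512 bits */
        printf("SVL=%u bits, FA64=%d\n",
               smcr_requested_vq(smcr) * 128, smcr_fa64(smcr));
        return 0;
    }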
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-47-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-7-richard.henderson@linaro.org
7
Message-Id: <20200422165503.13511-1-steplong@quicinc.com>
8
[rth: Fix indexing in helpers, expand macro to straight functions.]
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/cpu.h | 10 ++++++
11
target/arm/cpu.h | 8 ++++++--
13
target/arm/helper-sve.h | 3 ++
12
target/arm/helper.c | 41 +++++++++++++++++++++++++++++++++++++++++
14
target/arm/sve.decode | 4 +++
13
2 files changed, 47 insertions(+), 2 deletions(-)
15
target/arm/sve_helper.c | 74 ++++++++++++++++++++++++++++++++++++++
16
target/arm/translate-sve.c | 34 ++++++++++++++++++
17
5 files changed, 125 insertions(+)
18
14
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
24
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
20
float_status standard_fp_status;
21
float_status standard_fp_status_f16;
22
23
- /* ZCR_EL[1-3] */
24
- uint64_t zcr_el[4];
25
+ uint64_t zcr_el[4]; /* ZCR_EL[1-3] */
26
+ uint64_t smcr_el[4]; /* SMCR_EL[1-3] */
27
} vfp;
28
uint64_t exclusive_addr;
29
uint64_t exclusive_val;
30
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
31
FIELD(SVCR, SM, 0, 1)
32
FIELD(SVCR, ZA, 1, 1)
33
34
+/* Fields for SMCR_ELx. */
35
+FIELD(SMCR, LEN, 0, 4)
36
+FIELD(SMCR, FA64, 31, 1)
37
+
38
/* Write a new value to v7m.exception, thus transitioning into or out
39
* of Handler mode; this may result in a change of active stack pointer.
40
*/
41
diff --git a/target/arm/helper.c b/target/arm/helper.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/helper.c
44
+++ b/target/arm/helper.c
45
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
46
*/
47
{ K(3, 0, 1, 2, 0), K(3, 4, 1, 2, 0), K(3, 5, 1, 2, 0),
48
"ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve },
49
+ { K(3, 0, 1, 2, 6), K(3, 4, 1, 2, 6), K(3, 5, 1, 2, 6),
50
+ "SMCR_EL1", "SMCR_EL2", "SMCR_EL12", isar_feature_aa64_sme },
51
52
{ K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0),
53
"TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
54
@@ -XXX,XX +XXX,XX @@ static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
env->svcr = value;
25
}
56
}
26
57
27
+static inline bool isar_feature_aa64_sve_f32mm(const ARMISARegisters *id)
58
+static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
59
+ uint64_t value)
28
+{
60
+{
29
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
61
+ int cur_el = arm_current_el(env);
30
+}
62
+ int old_len = sve_vqm1_for_el(env, cur_el);
63
+ int new_len;
31
+
64
+
32
+static inline bool isar_feature_aa64_sve_f64mm(const ARMISARegisters *id)
65
+ QEMU_BUILD_BUG_ON(ARM_MAX_VQ > R_SMCR_LEN_MASK + 1);
33
+{
66
+ value &= R_SMCR_LEN_MASK | R_SMCR_FA64_MASK;
34
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
67
+ raw_write(env, ri, value);
35
+}
36
+
68
+
37
/*
69
+ /*
38
* Feature tests for "does this exist in either 32-bit or 64-bit?"
70
+ * Note that it is CONSTRAINED UNPREDICTABLE what happens to ZA storage
39
*/
71
+ * when SVL is widened (old values kept, or zeros). Choose to keep the
40
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
72
+ * current values for simplicity. But for QEMU internals, we must still
41
index XXXXXXX..XXXXXXX 100644
73
+ * apply the narrower SVL to the Zregs and Pregs -- see the comment
42
--- a/target/arm/helper-sve.h
74
+ * above aarch64_sve_narrow_vq.
43
+++ b/target/arm/helper-sve.h
75
+ */
44
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_s, TCG_CALL_NO_RWG,
76
+ new_len = sve_vqm1_for_el(env, cur_el);
45
void, ptr, ptr, ptr, ptr, i32)
77
+ if (new_len < old_len) {
46
DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_d, TCG_CALL_NO_RWG,
78
+ aarch64_sve_narrow_vq(env, new_len + 1);
47
void, ptr, ptr, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
51
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/sve.decode
54
+++ b/target/arm/sve.decode
55
@@ -XXX,XX +XXX,XX @@ UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
56
CMLA_zzzz 01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5 ra=%reg_movprfx
57
SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
58
59
+### SVE2 floating point matrix multiply accumulate
60
+
61
+FMMLA 01100100 .. 1 ..... 111001 ..... ..... @rda_rn_rm
62
+
63
### SVE2 Memory Gather Load Group
64
65
# SVE2 64-bit gather non-temporal load
66
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/sve_helper.c
69
+++ b/target/arm/sve_helper.c
70
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_xar_s)(void *vd, void *vn, void *vm, uint32_t desc)
71
d[i] = ror32(n[i] ^ m[i], shr);
72
}
73
}
74
+
75
+void HELPER(fmmla_s)(void *vd, void *vn, void *vm, void *va,
76
+ void *status, uint32_t desc)
77
+{
78
+ intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float32) * 4);
79
+
80
+ for (s = 0; s < opr_sz; ++s) {
81
+ float32 *n = vn + s * sizeof(float32) * 4;
82
+ float32 *m = vm + s * sizeof(float32) * 4;
83
+ float32 *a = va + s * sizeof(float32) * 4;
84
+ float32 *d = vd + s * sizeof(float32) * 4;
85
+ float32 n00 = n[H4(0)], n01 = n[H4(1)];
86
+ float32 n10 = n[H4(2)], n11 = n[H4(3)];
87
+ float32 m00 = m[H4(0)], m01 = m[H4(1)];
88
+ float32 m10 = m[H4(2)], m11 = m[H4(3)];
89
+ float32 p0, p1;
90
+
91
+ /* i = 0, j = 0 */
92
+ p0 = float32_mul(n00, m00, status);
93
+ p1 = float32_mul(n01, m01, status);
94
+ d[H4(0)] = float32_add(a[H4(0)], float32_add(p0, p1, status), status);
95
+
96
+ /* i = 0, j = 1 */
97
+ p0 = float32_mul(n00, m10, status);
98
+ p1 = float32_mul(n01, m11, status);
99
+ d[H4(1)] = float32_add(a[H4(1)], float32_add(p0, p1, status), status);
100
+
101
+ /* i = 1, j = 0 */
102
+ p0 = float32_mul(n10, m00, status);
103
+ p1 = float32_mul(n11, m01, status);
104
+ d[H4(2)] = float32_add(a[H4(2)], float32_add(p0, p1, status), status);
105
+
106
+ /* i = 1, j = 1 */
107
+ p0 = float32_mul(n10, m10, status);
108
+ p1 = float32_mul(n11, m11, status);
109
+ d[H4(3)] = float32_add(a[H4(3)], float32_add(p0, p1, status), status);
110
+ }
79
+ }
111
+}
80
+}
112
+
81
+
113
+void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
82
static const ARMCPRegInfo sme_reginfo[] = {
114
+ void *status, uint32_t desc)
83
{ .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
115
+{
84
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
116
+ intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float64) * 4);
85
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
117
+
86
.access = PL0_RW, .type = ARM_CP_SME,
118
+ for (s = 0; s < opr_sz; ++s) {
87
.fieldoffset = offsetof(CPUARMState, svcr),
119
+ float64 *n = vn + s * sizeof(float64) * 4;
88
.writefn = svcr_write, .raw_writefn = raw_write },
120
+ float64 *m = vm + s * sizeof(float64) * 4;
89
+ { .name = "SMCR_EL1", .state = ARM_CP_STATE_AA64,
121
+ float64 *a = va + s * sizeof(float64) * 4;
90
+ .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 6,
122
+ float64 *d = vd + s * sizeof(float64) * 4;
91
+ .access = PL1_RW, .type = ARM_CP_SME,
123
+ float64 n00 = n[0], n01 = n[1], n10 = n[2], n11 = n[3];
92
+ .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[1]),
124
+ float64 m00 = m[0], m01 = m[1], m10 = m[2], m11 = m[3];
93
+ .writefn = smcr_write, .raw_writefn = raw_write },
125
+ float64 p0, p1;
94
+ { .name = "SMCR_EL2", .state = ARM_CP_STATE_AA64,
126
+
95
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 6,
127
+ /* i = 0, j = 0 */
96
+ .access = PL2_RW, .type = ARM_CP_SME,
128
+ p0 = float64_mul(n00, m00, status);
97
+ .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[2]),
129
+ p1 = float64_mul(n01, m01, status);
98
+ .writefn = smcr_write, .raw_writefn = raw_write },
130
+ d[0] = float64_add(a[0], float64_add(p0, p1, status), status);
99
+ { .name = "SMCR_EL3", .state = ARM_CP_STATE_AA64,
131
+
100
+ .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 6,
132
+ /* i = 0, j = 1 */
101
+ .access = PL3_RW, .type = ARM_CP_SME,
133
+ p0 = float64_mul(n00, m10, status);
102
+ .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
134
+ p1 = float64_mul(n01, m11, status);
103
+ .writefn = smcr_write, .raw_writefn = raw_write },
135
+ d[1] = float64_add(a[1], float64_add(p0, p1, status), status);
104
};
136
+
105
#endif /* TARGET_AARCH64 */
137
+ /* i = 1, j = 0 */
106
138
+ p0 = float64_mul(n10, m00, status);
139
+ p1 = float64_mul(n11, m01, status);
140
+ d[2] = float64_add(a[2], float64_add(p0, p1, status), status);
141
+
142
+ /* i = 1, j = 1 */
143
+ p0 = float64_mul(n10, m10, status);
144
+ p1 = float64_mul(n11, m11, status);
145
+ d[3] = float64_add(a[3], float64_add(p0, p1, status), status);
146
+ }
147
+}
148
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/target/arm/translate-sve.c
151
+++ b/target/arm/translate-sve.c
152
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ_FP(FMINP, fminp)
153
* SVE Integer Multiply-Add (unpredicated)
154
*/
155
156
+static bool trans_FMMLA(DisasContext *s, arg_rrrr_esz *a)
157
+{
158
+ gen_helper_gvec_4_ptr *fn;
159
+
160
+ switch (a->esz) {
161
+ case MO_32:
162
+ if (!dc_isar_feature(aa64_sve_f32mm, s)) {
163
+ return false;
164
+ }
165
+ fn = gen_helper_fmmla_s;
166
+ break;
167
+ case MO_64:
168
+ if (!dc_isar_feature(aa64_sve_f64mm, s)) {
169
+ return false;
170
+ }
171
+ fn = gen_helper_fmmla_d;
172
+ break;
173
+ default:
174
+ return false;
175
+ }
176
+
177
+ if (sve_access_check(s)) {
178
+ unsigned vsz = vec_full_reg_size(s);
179
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
180
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
181
+ vec_full_reg_offset(s, a->rn),
182
+ vec_full_reg_offset(s, a->rm),
183
+ vec_full_reg_offset(s, a->ra),
184
+ status, vsz, vsz, 0, fn);
185
+ tcg_temp_free_ptr(status);
186
+ }
187
+ return true;
188
+}
189
+
190
static bool do_sqdmlal_zzzw(DisasContext *s, arg_rrrr_esz *a,
191
bool sel1, bool sel2)
192
{
193
--
107
--
194
2.20.1
108
2.25.1
195
196
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This completes the section "SVE2 bitwise shift right narrow".
3
Implement the streaming mode identification register, and the
4
two streaming priority registers. For QEMU, they are all RES0.
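
Illustrative sketch for the saturating shift-right-narrow forms handled in
this series: the DO_SQSHRN_* macros below shift right and then clamp into the
narrower signed range. A standalone scalar version of the 16-bit to 8-bit
case (function name invented here):

    #include <stdint.h>
    #include <stdio.h>

    /* Arithmetic shift right, then saturate to int8_t, as DO_SQSHRN_H does. */
    static int8_t sqshrn_h_sketch(int16_t x, unsigned shift)
    {
        int32_t v = x >> shift;
        if (v > INT8_MAX) {
            return INT8_MAX;
        }
        if (v < INT8_MIN) {
            return INT8_MIN;
        }
        return (int8_t)v;
    }

    int main(void)
    {
        printf("%d\n", sqshrn_h_sketch(0x7fff, 4));     /* 2047 saturates to 127 */
        printf("%d\n", sqshrn_h_sketch(INT16_MIN, 8));  /* exactly -128 */
        return 0;
    }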
4
5
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-30-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-8-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/helper-sve.h | 16 ++++++
11
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 4 ++
12
1 file changed, 33 insertions(+)
12
target/arm/sve_helper.c | 24 +++++++++
13
target/arm/translate-sve.c | 105 +++++++++++++++++++++++++++++++++++++
14
4 files changed, 149 insertions(+)
15
13
16
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-sve.h
16
--- a/target/arm/helper.c
19
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
21
DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
return CP_ACCESS_OK;
22
DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
24
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
+
28
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+
36
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+
40
DEF_HELPER_FLAGS_3(sve2_uqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_3(sve2_uqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
42
DEF_HELPER_FLAGS_3(sve2_uqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
43
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/sve.decode
46
+++ b/target/arm/sve.decode
47
@@ -XXX,XX +XXX,XX @@ SHRNB 01000101 .. 1 ..... 00 0100 ..... ..... @rd_rn_tszimm_shr
48
SHRNT 01000101 .. 1 ..... 00 0101 ..... ..... @rd_rn_tszimm_shr
49
RSHRNB 01000101 .. 1 ..... 00 0110 ..... ..... @rd_rn_tszimm_shr
50
RSHRNT 01000101 .. 1 ..... 00 0111 ..... ..... @rd_rn_tszimm_shr
51
+SQSHRNB 01000101 .. 1 ..... 00 1000 ..... ..... @rd_rn_tszimm_shr
52
+SQSHRNT 01000101 .. 1 ..... 00 1001 ..... ..... @rd_rn_tszimm_shr
53
+SQRSHRNB 01000101 .. 1 ..... 00 1010 ..... ..... @rd_rn_tszimm_shr
54
+SQRSHRNT 01000101 .. 1 ..... 00 1011 ..... ..... @rd_rn_tszimm_shr
55
UQSHRNB 01000101 .. 1 ..... 00 1100 ..... ..... @rd_rn_tszimm_shr
56
UQSHRNT 01000101 .. 1 ..... 00 1101 ..... ..... @rd_rn_tszimm_shr
57
UQRSHRNB 01000101 .. 1 ..... 00 1110 ..... ..... @rd_rn_tszimm_shr
58
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/sve_helper.c
61
+++ b/target/arm/sve_helper.c
62
@@ -XXX,XX +XXX,XX @@ DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
63
DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
64
DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
65
66
+#define DO_SQSHRN_H(x, sh) do_sat_bhs(x >> sh, INT8_MIN, INT8_MAX)
67
+#define DO_SQSHRN_S(x, sh) do_sat_bhs(x >> sh, INT16_MIN, INT16_MAX)
68
+#define DO_SQSHRN_D(x, sh) do_sat_bhs(x >> sh, INT32_MIN, INT32_MAX)
69
+
70
+DO_SHRNB(sve2_sqshrnb_h, int16_t, uint8_t, DO_SQSHRN_H)
71
+DO_SHRNB(sve2_sqshrnb_s, int32_t, uint16_t, DO_SQSHRN_S)
72
+DO_SHRNB(sve2_sqshrnb_d, int64_t, uint32_t, DO_SQSHRN_D)
73
+
74
+DO_SHRNT(sve2_sqshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRN_H)
75
+DO_SHRNT(sve2_sqshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRN_S)
76
+DO_SHRNT(sve2_sqshrnt_d, int64_t, uint32_t, , H1_4, DO_SQSHRN_D)
77
+
78
+#define DO_SQRSHRN_H(x, sh) do_sat_bhs(do_srshr(x, sh), INT8_MIN, INT8_MAX)
79
+#define DO_SQRSHRN_S(x, sh) do_sat_bhs(do_srshr(x, sh), INT16_MIN, INT16_MAX)
80
+#define DO_SQRSHRN_D(x, sh) do_sat_bhs(do_srshr(x, sh), INT32_MIN, INT32_MAX)
81
+
82
+DO_SHRNB(sve2_sqrshrnb_h, int16_t, uint8_t, DO_SQRSHRN_H)
83
+DO_SHRNB(sve2_sqrshrnb_s, int32_t, uint16_t, DO_SQRSHRN_S)
84
+DO_SHRNB(sve2_sqrshrnb_d, int64_t, uint32_t, DO_SQRSHRN_D)
85
+
86
+DO_SHRNT(sve2_sqrshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRN_H)
87
+DO_SHRNT(sve2_sqrshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRN_S)
88
+DO_SHRNT(sve2_sqrshrnt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRN_D)
89
+
90
#define DO_UQSHRN_H(x, sh) MIN(x >> sh, UINT8_MAX)
91
#define DO_UQSHRN_S(x, sh) MIN(x >> sh, UINT16_MAX)
92
#define DO_UQSHRN_D(x, sh) MIN(x >> sh, UINT32_MAX)
93
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/translate-sve.c
96
+++ b/target/arm/translate-sve.c
97
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRSHRUNT(DisasContext *s, arg_rri_esz *a)
98
return do_sve2_shr_narrow(s, a, ops);
99
}
20
}
100
21
101
+static void gen_sqshrnb_vec(unsigned vece, TCGv_vec d,
22
+static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
102
+ TCGv_vec n, int64_t shr)
23
+ bool isread)
103
+{
24
+{
104
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
25
+ /* TODO: FEAT_FGT for SMPRI_EL1 but not SMPRIMAP_EL2 */
105
+ int halfbits = 4 << vece;
26
+ if (arm_current_el(env) < 3
106
+ int64_t max = MAKE_64BIT_MASK(0, halfbits - 1);
27
+ && arm_feature(env, ARM_FEATURE_EL3)
107
+ int64_t min = -max - 1;
28
+ && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
108
+
29
+ return CP_ACCESS_TRAP_EL3;
109
+ tcg_gen_sari_vec(vece, n, n, shr);
30
+ }
110
+ tcg_gen_dupi_vec(vece, t, min);
31
+ return CP_ACCESS_OK;
111
+ tcg_gen_smax_vec(vece, n, n, t);
112
+ tcg_gen_dupi_vec(vece, t, max);
113
+ tcg_gen_smin_vec(vece, n, n, t);
114
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
115
+ tcg_gen_and_vec(vece, d, n, t);
116
+ tcg_temp_free_vec(t);
117
+}
32
+}
118
+
33
+
119
+static bool trans_SQSHRNB(DisasContext *s, arg_rri_esz *a)
34
static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
120
+{
35
uint64_t value)
121
+ static const TCGOpcode vec_list[] = {
122
+ INDEX_op_sari_vec, INDEX_op_smax_vec, INDEX_op_smin_vec, 0
123
+ };
124
+ static const GVecGen2i ops[3] = {
125
+ { .fniv = gen_sqshrnb_vec,
126
+ .opt_opc = vec_list,
127
+ .fno = gen_helper_sve2_sqshrnb_h,
128
+ .vece = MO_16 },
129
+ { .fniv = gen_sqshrnb_vec,
130
+ .opt_opc = vec_list,
131
+ .fno = gen_helper_sve2_sqshrnb_s,
132
+ .vece = MO_32 },
133
+ { .fniv = gen_sqshrnb_vec,
134
+ .opt_opc = vec_list,
135
+ .fno = gen_helper_sve2_sqshrnb_d,
136
+ .vece = MO_64 },
137
+ };
138
+ return do_sve2_shr_narrow(s, a, ops);
139
+}
140
+
141
+static void gen_sqshrnt_vec(unsigned vece, TCGv_vec d,
142
+ TCGv_vec n, int64_t shr)
143
+{
144
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
145
+ int halfbits = 4 << vece;
146
+ int64_t max = MAKE_64BIT_MASK(0, halfbits - 1);
147
+ int64_t min = -max - 1;
148
+
149
+ tcg_gen_sari_vec(vece, n, n, shr);
150
+ tcg_gen_dupi_vec(vece, t, min);
151
+ tcg_gen_smax_vec(vece, n, n, t);
152
+ tcg_gen_dupi_vec(vece, t, max);
153
+ tcg_gen_smin_vec(vece, n, n, t);
154
+ tcg_gen_shli_vec(vece, n, n, halfbits);
155
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
156
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
157
+ tcg_temp_free_vec(t);
158
+}
159
+
160
+static bool trans_SQSHRNT(DisasContext *s, arg_rri_esz *a)
161
+{
162
+ static const TCGOpcode vec_list[] = {
163
+ INDEX_op_shli_vec, INDEX_op_sari_vec,
164
+ INDEX_op_smax_vec, INDEX_op_smin_vec, 0
165
+ };
166
+ static const GVecGen2i ops[3] = {
167
+ { .fniv = gen_sqshrnt_vec,
168
+ .opt_opc = vec_list,
169
+ .load_dest = true,
170
+ .fno = gen_helper_sve2_sqshrnt_h,
171
+ .vece = MO_16 },
172
+ { .fniv = gen_sqshrnt_vec,
173
+ .opt_opc = vec_list,
174
+ .load_dest = true,
175
+ .fno = gen_helper_sve2_sqshrnt_s,
176
+ .vece = MO_32 },
177
+ { .fniv = gen_sqshrnt_vec,
178
+ .opt_opc = vec_list,
179
+ .load_dest = true,
180
+ .fno = gen_helper_sve2_sqshrnt_d,
181
+ .vece = MO_64 },
182
+ };
183
+ return do_sve2_shr_narrow(s, a, ops);
184
+}
185
+
186
+static bool trans_SQRSHRNB(DisasContext *s, arg_rri_esz *a)
187
+{
188
+ static const GVecGen2i ops[3] = {
189
+ { .fno = gen_helper_sve2_sqrshrnb_h },
190
+ { .fno = gen_helper_sve2_sqrshrnb_s },
191
+ { .fno = gen_helper_sve2_sqrshrnb_d },
192
+ };
193
+ return do_sve2_shr_narrow(s, a, ops);
194
+}
195
+
196
+static bool trans_SQRSHRNT(DisasContext *s, arg_rri_esz *a)
197
+{
198
+ static const GVecGen2i ops[3] = {
199
+ { .fno = gen_helper_sve2_sqrshrnt_h },
200
+ { .fno = gen_helper_sve2_sqrshrnt_s },
201
+ { .fno = gen_helper_sve2_sqrshrnt_d },
202
+ };
203
+ return do_sve2_shr_narrow(s, a, ops);
204
+}
205
+
206
static void gen_uqshrnb_vec(unsigned vece, TCGv_vec d,
207
TCGv_vec n, int64_t shr)
208
{
36
{
37
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
38
.access = PL3_RW, .type = ARM_CP_SME,
39
.fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
40
.writefn = smcr_write, .raw_writefn = raw_write },
41
+ { .name = "SMIDR_EL1", .state = ARM_CP_STATE_AA64,
42
+ .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 6,
43
+ .access = PL1_R, .accessfn = access_aa64_tid1,
44
+ /*
45
+ * IMPLEMENTOR = 0 (software)
46
+ * REVISION = 0 (implementation defined)
47
+ * SMPS = 0 (no streaming execution priority in QEMU)
48
+ * AFFINITY = 0 (streaming sve mode not shared with other PEs)
49
+ */
50
+ .type = ARM_CP_CONST, .resetvalue = 0, },
51
+ /*
52
+ * Because SMIDR_EL1.SMPS is 0, SMPRI_EL1 and SMPRIMAP_EL2 are RES 0.
53
+ */
54
+ { .name = "SMPRI_EL1", .state = ARM_CP_STATE_AA64,
55
+ .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 4,
56
+ .access = PL1_RW, .accessfn = access_esm,
57
+ .type = ARM_CP_CONST, .resetvalue = 0 },
58
+ { .name = "SMPRIMAP_EL2", .state = ARM_CP_STATE_AA64,
59
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 5,
60
+ .access = PL2_RW, .accessfn = access_esm,
61
+ .type = ARM_CP_CONST, .resetvalue = 0 },
62
};
63
#endif /* TARGET_AARCH64 */
64
209
--
65
--
210
2.20.1
66
2.25.1
211
212
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
For SVE, we potentially have a 4th argument coming from the
3
These are required to determine if various insns
4
movprfx instruction. Currently we do not optimize movprfx,
4
are allowed to issue.
5
so the problem is not visible.
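
Illustrative sketch, not part of the patch: with PSTATE.SM and PSTATE.ZA
cached into the TB flags and copied into DisasContext, the decoder can gate
instructions on them without reading CPU state. The types and checks below
are hypothetical stand-ins, not QEMU's actual decode hooks:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-TB decode context, mirroring the pstate_sm/pstate_za
     * fields added to DisasContext. */
    typedef struct {
        bool pstate_sm;   /* PSTATE.SM cached from TB flags */
        bool pstate_za;   /* PSTATE.ZA cached from TB flags */
    } DecodeCtxSketch;

    /* An insn touching ZA storage is only legal when ZA is enabled. */
    static bool za_insn_allowed(const DecodeCtxSketch *dc)
    {
        return dc->pstate_za;
    }

    /* A non-streaming-only SVE insn would also need PSTATE.SM clear
     * (FA64 handling is ignored in this sketch). */
    static bool nonstreaming_sve_allowed(const DecodeCtxSketch *dc)
    {
        return !dc->pstate_sm;
    }

    int main(void)
    {
        DecodeCtxSketch dc = { .pstate_sm = true, .pstate_za = false };
        printf("ZA insn ok: %d, non-streaming SVE ok: %d\n",
               za_insn_allowed(&dc), nonstreaming_sve_allowed(&dc));
        return 0;
    }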
6
5
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-50-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-9-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 20 +++---
11
target/arm/cpu.h | 2 ++
13
target/arm/sve.decode | 7 ++-
12
target/arm/translate.h | 4 ++++
14
target/arm/translate-a64.c | 15 ++++-
13
target/arm/helper.c | 4 ++++
15
target/arm/translate-neon.c | 10 +--
14
target/arm/translate-a64.c | 2 ++
16
target/arm/translate-sve.c | 13 ++--
15
4 files changed, 12 insertions(+)
17
target/arm/vec_helper.c | 120 ++++++++++++++++++++----------------
18
6 files changed, 109 insertions(+), 76 deletions(-)
19
16
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
19
--- a/target/arm/cpu.h
23
+++ b/target/arm/helper.h
20
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdmlah_d, TCG_CALL_NO_RWG,
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, TCMA, 16, 2)
25
DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_d, TCG_CALL_NO_RWG,
22
FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
26
void, ptr, ptr, ptr, ptr, i32)
23
FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
27
24
FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
28
-DEF_HELPER_FLAGS_4(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
29
-DEF_HELPER_FLAGS_4(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
30
-DEF_HELPER_FLAGS_4(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
31
-DEF_HELPER_FLAGS_4(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
/*
32
+DEF_HELPER_FLAGS_5(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
* Helpers for using the above.
33
+DEF_HELPER_FLAGS_5(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
diff --git a/target/arm/translate.h b/target/arm/translate.h
34
+DEF_HELPER_FLAGS_5(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
36
37
-DEF_HELPER_FLAGS_4(gvec_sdot_idx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
-DEF_HELPER_FLAGS_4(gvec_udot_idx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
-DEF_HELPER_FLAGS_4(gvec_sdot_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
-DEF_HELPER_FLAGS_4(gvec_udot_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(gvec_sdot_idx_b, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_5(gvec_udot_idx_b, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_5(gvec_sdot_idx_h, TCG_CALL_NO_RWG,
46
+ void, ptr, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_5(gvec_udot_idx_h, TCG_CALL_NO_RWG,
48
+ void, ptr, ptr, ptr, ptr, i32)
49
50
DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
51
void, ptr, ptr, ptr, ptr, i32)
52
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
53
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/sve.decode
32
--- a/target/arm/translate.h
55
+++ b/target/arm/sve.decode
33
+++ b/target/arm/translate.h
56
@@ -XXX,XX +XXX,XX @@ UMIN_zzi 00100101 .. 101 011 110 ........ ..... @rdn_i8u
34
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
57
MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
35
bool align_mem;
58
36
/* True if PSTATE.IL is set */
59
# SVE integer dot product (unpredicated)
37
bool pstate_il;
60
-DOT_zzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 ra=%reg_movprfx
38
+ /* True if PSTATE.SM is set. */
61
+DOT_zzzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 \
39
+ bool pstate_sm;
62
+ ra=%reg_movprfx
40
+ /* True if PSTATE.ZA is set. */
63
41
+ bool pstate_za;
64
# SVE integer dot product (indexed)
42
/* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
65
-DOT_zzx 01000100 101 index:2 rm:3 00000 u:1 rn:5 rd:5 \
43
bool mve_no_pred;
66
+DOT_zzxw 01000100 101 index:2 rm:3 00000 u:1 rn:5 rd:5 \
44
/*
67
sz=0 ra=%reg_movprfx
45
diff --git a/target/arm/helper.c b/target/arm/helper.c
68
-DOT_zzx 01000100 111 index:1 rm:4 00000 u:1 rn:5 rd:5 \
46
index XXXXXXX..XXXXXXX 100644
69
+DOT_zzxw 01000100 111 index:1 rm:4 00000 u:1 rn:5 rd:5 \
47
--- a/target/arm/helper.c
70
sz=1 ra=%reg_movprfx
48
+++ b/target/arm/helper.c
71
49
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
72
# SVE floating-point complex add (predicated)
50
}
51
if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
52
DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
53
+ if (FIELD_EX64(env->svcr, SVCR, SM)) {
54
+ DP_TBFLAG_A64(flags, PSTATE_SM, 1);
55
+ }
56
+ DP_TBFLAG_A64(flags, PSTATE_ZA, FIELD_EX64(env->svcr, SVCR, ZA));
57
}
58
59
sctlr = regime_sctlr(env, stage1);
73
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
60
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
74
index XXXXXXX..XXXXXXX 100644
61
index XXXXXXX..XXXXXXX 100644
75
--- a/target/arm/translate-a64.c
62
--- a/target/arm/translate-a64.c
76
+++ b/target/arm/translate-a64.c
63
+++ b/target/arm/translate-a64.c
77
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_qc(DisasContext *s, bool is_q, int rd, int rn,
64
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
78
tcg_temp_free_ptr(qc_ptr);
65
dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
79
}
66
dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
80
67
dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
81
+/* Expand a 4-operand operation using an out-of-line helper. */
68
+ dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
82
+static void gen_gvec_op4_ool(DisasContext *s, bool is_q, int rd, int rn,
69
+ dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
83
+ int rm, int ra, int data, gen_helper_gvec_4 *fn)
70
dc->vec_len = 0;
84
+{
71
dc->vec_stride = 0;
85
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
72
dc->cp_regs = arm_cpu->cp_regs;
86
+ vec_full_reg_offset(s, rn),
87
+ vec_full_reg_offset(s, rm),
88
+ vec_full_reg_offset(s, ra),
89
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
90
+}
91
+
92
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
93
* than the 32 bit equivalent.
94
*/
95
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
96
return;
97
98
case 0x2: /* SDOT / UDOT */
99
- gen_gvec_op3_ool(s, is_q, rd, rn, rm, 0,
100
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0,
101
u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b);
102
return;
103
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
105
switch (16 * u + opcode) {
106
case 0x0e: /* SDOT */
107
case 0x1e: /* UDOT */
108
- gen_gvec_op3_ool(s, is_q, rd, rn, rm, index,
109
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
110
u ? gen_helper_gvec_udot_idx_b
111
: gen_helper_gvec_sdot_idx_b);
112
return;
113
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/translate-neon.c
116
+++ b/target/arm/translate-neon.c
117
@@ -XXX,XX +XXX,XX @@ static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
118
static bool trans_VDOT(DisasContext *s, arg_VDOT *a)
119
{
120
int opr_sz;
121
- gen_helper_gvec_3 *fn_gvec;
122
+ gen_helper_gvec_4 *fn_gvec;
123
124
if (!dc_isar_feature(aa32_dp, s)) {
125
return false;
126
@@ -XXX,XX +XXX,XX @@ static bool trans_VDOT(DisasContext *s, arg_VDOT *a)
127
128
opr_sz = (1 + a->q) * 8;
129
fn_gvec = a->u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
130
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, a->vd),
131
+ tcg_gen_gvec_4_ool(vfp_reg_offset(1, a->vd),
132
vfp_reg_offset(1, a->vn),
133
vfp_reg_offset(1, a->vm),
134
+ vfp_reg_offset(1, a->vd),
135
opr_sz, opr_sz, 0, fn_gvec);
136
return true;
137
}
138
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
139
140
static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
141
{
142
- gen_helper_gvec_3 *fn_gvec;
143
+ gen_helper_gvec_4 *fn_gvec;
144
int opr_sz;
145
TCGv_ptr fpst;
146
147
@@ -XXX,XX +XXX,XX @@ static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
148
fn_gvec = a->u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
149
opr_sz = (1 + a->q) * 8;
150
fpst = fpstatus_ptr(FPST_STD);
151
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, a->vd),
152
+ tcg_gen_gvec_4_ool(vfp_reg_offset(1, a->vd),
153
vfp_reg_offset(1, a->vn),
154
vfp_reg_offset(1, a->rm),
155
+ vfp_reg_offset(1, a->vd),
156
opr_sz, opr_sz, a->index, fn_gvec);
157
tcg_temp_free_ptr(fpst);
158
return true;
159
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/target/arm/translate-sve.c
162
+++ b/target/arm/translate-sve.c
163
@@ -XXX,XX +XXX,XX @@ DO_ZZI(UMIN, umin)
164
165
#undef DO_ZZI
166
167
-static bool trans_DOT_zzz(DisasContext *s, arg_DOT_zzz *a)
168
+static bool trans_DOT_zzzz(DisasContext *s, arg_DOT_zzzz *a)
169
{
170
- static gen_helper_gvec_3 * const fns[2][2] = {
171
+ static gen_helper_gvec_4 * const fns[2][2] = {
172
{ gen_helper_gvec_sdot_b, gen_helper_gvec_sdot_h },
173
{ gen_helper_gvec_udot_b, gen_helper_gvec_udot_h }
174
};
175
176
if (sve_access_check(s)) {
177
- gen_gvec_ool_zzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm, 0);
178
+ gen_gvec_ool_zzzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm, a->ra, 0);
179
}
180
return true;
181
}
182
183
-static bool trans_DOT_zzx(DisasContext *s, arg_DOT_zzx *a)
184
+static bool trans_DOT_zzxw(DisasContext *s, arg_DOT_zzxw *a)
185
{
186
- static gen_helper_gvec_3 * const fns[2][2] = {
187
+ static gen_helper_gvec_4 * const fns[2][2] = {
188
{ gen_helper_gvec_sdot_idx_b, gen_helper_gvec_sdot_idx_h },
189
{ gen_helper_gvec_udot_idx_b, gen_helper_gvec_udot_idx_h }
190
};
191
192
if (sve_access_check(s)) {
193
- gen_gvec_ool_zzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm, a->index);
194
+ gen_gvec_ool_zzzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm,
195
+ a->ra, a->index);
196
}
197
return true;
198
}
199
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/target/arm/vec_helper.c
202
+++ b/target/arm/vec_helper.c
203
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_d)(void *vd, void *vn, void *vm,
204
* All elements are treated equally, no matter where they are.
205
*/
206
207
-void HELPER(gvec_sdot_b)(void *vd, void *vn, void *vm, uint32_t desc)
208
+void HELPER(gvec_sdot_b)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
209
{
210
intptr_t i, opr_sz = simd_oprsz(desc);
211
- int32_t *d = vd;
212
+ int32_t *d = vd, *a = va;
213
int8_t *n = vn, *m = vm;
214
215
for (i = 0; i < opr_sz / 4; ++i) {
216
- d[i] += n[i * 4 + 0] * m[i * 4 + 0]
217
- + n[i * 4 + 1] * m[i * 4 + 1]
218
- + n[i * 4 + 2] * m[i * 4 + 2]
219
- + n[i * 4 + 3] * m[i * 4 + 3];
220
+ d[i] = (a[i] +
221
+ n[i * 4 + 0] * m[i * 4 + 0] +
222
+ n[i * 4 + 1] * m[i * 4 + 1] +
223
+ n[i * 4 + 2] * m[i * 4 + 2] +
224
+ n[i * 4 + 3] * m[i * 4 + 3]);
225
}
226
clear_tail(d, opr_sz, simd_maxsz(desc));
227
}
228
229
-void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, uint32_t desc)
230
+void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
231
{
232
intptr_t i, opr_sz = simd_oprsz(desc);
233
- uint32_t *d = vd;
234
+ uint32_t *d = vd, *a = va;
235
uint8_t *n = vn, *m = vm;
236
237
for (i = 0; i < opr_sz / 4; ++i) {
238
- d[i] += n[i * 4 + 0] * m[i * 4 + 0]
239
- + n[i * 4 + 1] * m[i * 4 + 1]
240
- + n[i * 4 + 2] * m[i * 4 + 2]
241
- + n[i * 4 + 3] * m[i * 4 + 3];
242
+ d[i] = (a[i] +
243
+ n[i * 4 + 0] * m[i * 4 + 0] +
244
+ n[i * 4 + 1] * m[i * 4 + 1] +
245
+ n[i * 4 + 2] * m[i * 4 + 2] +
246
+ n[i * 4 + 3] * m[i * 4 + 3]);
247
}
248
clear_tail(d, opr_sz, simd_maxsz(desc));
249
}
250
251
-void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, uint32_t desc)
252
+void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
253
{
254
intptr_t i, opr_sz = simd_oprsz(desc);
255
- int64_t *d = vd;
256
+ int64_t *d = vd, *a = va;
257
int16_t *n = vn, *m = vm;
258
259
for (i = 0; i < opr_sz / 8; ++i) {
260
- d[i] += (int64_t)n[i * 4 + 0] * m[i * 4 + 0]
261
- + (int64_t)n[i * 4 + 1] * m[i * 4 + 1]
262
- + (int64_t)n[i * 4 + 2] * m[i * 4 + 2]
263
- + (int64_t)n[i * 4 + 3] * m[i * 4 + 3];
264
+ d[i] = (a[i] +
265
+ (int64_t)n[i * 4 + 0] * m[i * 4 + 0] +
266
+ (int64_t)n[i * 4 + 1] * m[i * 4 + 1] +
267
+ (int64_t)n[i * 4 + 2] * m[i * 4 + 2] +
268
+ (int64_t)n[i * 4 + 3] * m[i * 4 + 3]);
269
}
270
clear_tail(d, opr_sz, simd_maxsz(desc));
271
}
272
273
-void HELPER(gvec_udot_h)(void *vd, void *vn, void *vm, uint32_t desc)
274
+void HELPER(gvec_udot_h)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
275
{
276
intptr_t i, opr_sz = simd_oprsz(desc);
277
- uint64_t *d = vd;
278
+ uint64_t *d = vd, *a = va;
279
uint16_t *n = vn, *m = vm;
280
281
for (i = 0; i < opr_sz / 8; ++i) {
282
- d[i] += (uint64_t)n[i * 4 + 0] * m[i * 4 + 0]
283
- + (uint64_t)n[i * 4 + 1] * m[i * 4 + 1]
284
- + (uint64_t)n[i * 4 + 2] * m[i * 4 + 2]
285
- + (uint64_t)n[i * 4 + 3] * m[i * 4 + 3];
286
+ d[i] = (a[i] +
287
+ (uint64_t)n[i * 4 + 0] * m[i * 4 + 0] +
288
+ (uint64_t)n[i * 4 + 1] * m[i * 4 + 1] +
289
+ (uint64_t)n[i * 4 + 2] * m[i * 4 + 2] +
290
+ (uint64_t)n[i * 4 + 3] * m[i * 4 + 3]);
291
}
292
clear_tail(d, opr_sz, simd_maxsz(desc));
293
}
294
295
-void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
296
+void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm,
297
+ void *va, uint32_t desc)
298
{
299
intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
300
intptr_t index = simd_data(desc);
301
- int32_t *d = vd;
302
+ int32_t *d = vd, *a = va;
303
int8_t *n = vn;
304
int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
305
306
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
307
int8_t m3 = m_indexed[i * 4 + 3];
308
309
do {
310
- d[i] += n[i * 4 + 0] * m0
311
- + n[i * 4 + 1] * m1
312
- + n[i * 4 + 2] * m2
313
- + n[i * 4 + 3] * m3;
314
+ d[i] = (a[i] +
315
+ n[i * 4 + 0] * m0 +
316
+ n[i * 4 + 1] * m1 +
317
+ n[i * 4 + 2] * m2 +
318
+ n[i * 4 + 3] * m3);
319
} while (++i < segend);
320
segend = i + 4;
321
} while (i < opr_sz_4);
322
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
323
clear_tail(d, opr_sz, simd_maxsz(desc));
324
}
325
326
-void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
327
+void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm,
328
+ void *va, uint32_t desc)
329
{
330
intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
331
intptr_t index = simd_data(desc);
332
- uint32_t *d = vd;
333
+ uint32_t *d = vd, *a = va;
334
uint8_t *n = vn;
335
uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
336
337
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
338
uint8_t m3 = m_indexed[i * 4 + 3];
339
340
do {
341
- d[i] += n[i * 4 + 0] * m0
342
- + n[i * 4 + 1] * m1
343
- + n[i * 4 + 2] * m2
344
- + n[i * 4 + 3] * m3;
345
+ d[i] = (a[i] +
346
+ n[i * 4 + 0] * m0 +
347
+ n[i * 4 + 1] * m1 +
348
+ n[i * 4 + 2] * m2 +
349
+ n[i * 4 + 3] * m3);
350
} while (++i < segend);
351
segend = i + 4;
352
} while (i < opr_sz_4);
353
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
354
clear_tail(d, opr_sz, simd_maxsz(desc));
355
}
356
357
-void HELPER(gvec_sdot_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
358
+void HELPER(gvec_sdot_idx_h)(void *vd, void *vn, void *vm,
359
+ void *va, uint32_t desc)
360
{
361
intptr_t i, opr_sz = simd_oprsz(desc), opr_sz_8 = opr_sz / 8;
362
intptr_t index = simd_data(desc);
363
- int64_t *d = vd;
364
+ int64_t *d = vd, *a = va;
365
int16_t *n = vn;
366
int16_t *m_indexed = (int16_t *)vm + index * 4;
367
368
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
369
* Process the entire segment all at once, writing back the results
370
* only after we've consumed all of the inputs.
371
*/
372
- for (i = 0; i < opr_sz_8 ; i += 2) {
373
- uint64_t d0, d1;
374
+ for (i = 0; i < opr_sz_8; i += 2) {
375
+ int64_t d0, d1;
376
377
- d0 = n[i * 4 + 0] * (int64_t)m_indexed[i * 4 + 0];
378
+ d0 = a[i + 0];
379
+ d0 += n[i * 4 + 0] * (int64_t)m_indexed[i * 4 + 0];
380
d0 += n[i * 4 + 1] * (int64_t)m_indexed[i * 4 + 1];
381
d0 += n[i * 4 + 2] * (int64_t)m_indexed[i * 4 + 2];
382
d0 += n[i * 4 + 3] * (int64_t)m_indexed[i * 4 + 3];
383
- d1 = n[i * 4 + 4] * (int64_t)m_indexed[i * 4 + 0];
384
+
385
+ d1 = a[i + 1];
386
+ d1 += n[i * 4 + 4] * (int64_t)m_indexed[i * 4 + 0];
387
d1 += n[i * 4 + 5] * (int64_t)m_indexed[i * 4 + 1];
388
d1 += n[i * 4 + 6] * (int64_t)m_indexed[i * 4 + 2];
389
d1 += n[i * 4 + 7] * (int64_t)m_indexed[i * 4 + 3];
390
391
- d[i + 0] += d0;
392
- d[i + 1] += d1;
393
+ d[i + 0] = d0;
394
+ d[i + 1] = d1;
395
}
396
-
397
clear_tail(d, opr_sz, simd_maxsz(desc));
398
}
399
400
-void HELPER(gvec_udot_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
401
+void HELPER(gvec_udot_idx_h)(void *vd, void *vn, void *vm,
402
+ void *va, uint32_t desc)
403
{
404
intptr_t i, opr_sz = simd_oprsz(desc), opr_sz_8 = opr_sz / 8;
405
intptr_t index = simd_data(desc);
406
- uint64_t *d = vd;
407
+ uint64_t *d = vd, *a = va;
408
uint16_t *n = vn;
409
uint16_t *m_indexed = (uint16_t *)vm + index * 4;
410
411
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
412
* Process the entire segment all at once, writing back the results
413
* only after we've consumed all of the inputs.
414
*/
415
- for (i = 0; i < opr_sz_8 ; i += 2) {
416
+ for (i = 0; i < opr_sz_8; i += 2) {
417
uint64_t d0, d1;
418
419
- d0 = n[i * 4 + 0] * (uint64_t)m_indexed[i * 4 + 0];
420
+ d0 = a[i + 0];
421
+ d0 += n[i * 4 + 0] * (uint64_t)m_indexed[i * 4 + 0];
422
d0 += n[i * 4 + 1] * (uint64_t)m_indexed[i * 4 + 1];
423
d0 += n[i * 4 + 2] * (uint64_t)m_indexed[i * 4 + 2];
424
d0 += n[i * 4 + 3] * (uint64_t)m_indexed[i * 4 + 3];
425
- d1 = n[i * 4 + 4] * (uint64_t)m_indexed[i * 4 + 0];
426
+
427
+ d1 = a[i + 1];
428
+ d1 += n[i * 4 + 4] * (uint64_t)m_indexed[i * 4 + 0];
429
d1 += n[i * 4 + 5] * (uint64_t)m_indexed[i * 4 + 1];
430
d1 += n[i * 4 + 6] * (uint64_t)m_indexed[i * 4 + 2];
431
d1 += n[i * 4 + 7] * (uint64_t)m_indexed[i * 4 + 3];
432
433
- d[i + 0] += d0;
434
- d[i + 1] += d1;
435
+ d[i + 0] = d0;
436
+ d[i + 1] = d1;
437
}
438
-
439
clear_tail(d, opr_sz, simd_maxsz(desc));
440
}
441
442
--
73
--
443
2.20.1
74
2.25.1
444
445
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Place this late in the resettable section of the structure,
4
to keep the most common element offsets from being > 64k.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-70-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-10-richard.henderson@linaro.org
9
[PMM: expanded comment on zarray[] format]
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/cpu.h | 5 +++++
12
target/arm/cpu.h | 22 ++++++++++++++++++++++
9
target/arm/sve.decode | 7 +++++++
13
target/arm/machine.c | 34 ++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 38 ++++++++++++++++++++++++++++++++++++++
14
2 files changed, 56 insertions(+)
11
3 files changed, 50 insertions(+)
12
15
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
18
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
20
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
18
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
21
} keys;
19
}
22
20
23
uint64_t scxtnum_el[4];
21
+static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
24
+
25
+ /*
26
+ * SME ZA storage -- 256 x 256 byte array, with bytes in host word order,
27
+ * as we do with vfp.zregs[]. This corresponds to the architectural ZA
28
+ * array, where ZA[N] is in the least-significant bytes of env->zarray[N].
29
+ * When SVL is less than the architectural maximum, the accessible
30
+ * storage is restricted, such that if the SVL is X bytes the guest can
31
+ * see only the bottom X elements of zarray[], and only the least
32
+ * significant X bytes of each element of the array. (In other words,
33
+ * the observable part is always square.)
34
+ *
35
+ * The ZA storage can also be considered as a set of square tiles of
36
+ * elements of different sizes. The mapping from tiles to the ZA array
37
+ * is architecturally defined, such that for tiles of elements of esz
38
+ * bytes, the Nth row (or "horizontal slice") of tile T is in
39
+ * ZA[T + N * esz]. Note that this means that each tile is not contiguous
40
+ * in the ZA storage, because its rows are striped through the ZA array.
41
+ *
42
+ * Because this is so large, keep this toward the end of the reset area,
43
+ * to keep the offsets into the rest of the structure smaller.
44
+ */
45
+ ARMVectorReg zarray[ARM_MAX_VQ * 16];
46
#endif
47
48
#if defined(CONFIG_USER_ONLY)
49
diff --git a/target/arm/machine.c b/target/arm/machine.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/machine.c
52
+++ b/target/arm/machine.c
53
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
54
VMSTATE_END_OF_LIST()
55
}
56
};
57
+
58
+static const VMStateDescription vmstate_vreg = {
59
+ .name = "vreg",
60
+ .version_id = 1,
61
+ .minimum_version_id = 1,
62
+ .fields = (VMStateField[]) {
63
+ VMSTATE_UINT64_ARRAY(d, ARMVectorReg, ARM_MAX_VQ * 2),
64
+ VMSTATE_END_OF_LIST()
65
+ }
66
+};
67
+
68
+static bool za_needed(void *opaque)
22
+{
69
+{
23
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
70
+ ARMCPU *cpu = opaque;
71
+
72
+ /*
73
+ * When ZA storage is disabled, its contents are discarded.
74
+ * It will be zeroed when ZA storage is re-enabled.
75
+ */
76
+ return FIELD_EX64(cpu->env.svcr, SVCR, ZA);
24
+}
77
+}
25
+
78
+
26
static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
79
+static const VMStateDescription vmstate_za = {
27
{
80
+ .name = "cpu/sme",
28
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
81
+ .version_id = 1,
29
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
82
+ .minimum_version_id = 1,
30
index XXXXXXX..XXXXXXX 100644
83
+ .needed = za_needed,
31
--- a/target/arm/sve.decode
84
+ .fields = (VMStateField[]) {
32
+++ b/target/arm/sve.decode
85
+ VMSTATE_STRUCT_ARRAY(env.zarray, ARMCPU, ARM_MAX_VQ * 16, 0,
33
@@ -XXX,XX +XXX,XX @@
86
+ vmstate_vreg, ARMVectorReg),
34
@pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
87
+ VMSTATE_END_OF_LIST()
35
@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
36
&rrr_esz rn=%reg_movprfx
37
+@rdn_rm_e0 ........ .. ...... ...... rm:5 rd:5 \
38
+ &rrr_esz rn=%reg_movprfx esz=0
39
@rdn_sh_i8u ........ esz:2 ...... ...... ..... rd:5 \
40
&rri_esz rn=%reg_movprfx imm=%sh8_i8u
41
@rdn_i8u ........ esz:2 ...... ... imm:8 rd:5 \
42
@@ -XXX,XX +XXX,XX @@ STNT1_zprz 1110010 .. 10 ..... 001 ... ..... ..... \
43
# SVE2 crypto unary operations
44
# AESMC and AESIMC
45
AESMC 01000101 00 10000011100 decrypt:1 00000 rd:5
46
+
47
+# SVE2 crypto destructive binary operations
48
+AESE 01000101 00 10001 0 11100 0 ..... ..... @rdn_rm_e0
49
+AESD 01000101 00 10001 0 11100 1 ..... ..... @rdn_rm_e0
50
+SM4E 01000101 00 10001 1 11100 0 ..... ..... @rdn_rm_e0
51
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/translate-sve.c
54
+++ b/target/arm/translate-sve.c
55
@@ -XXX,XX +XXX,XX @@ static bool trans_AESMC(DisasContext *s, arg_AESMC *a)
56
}
57
return true;
58
}
59
+
60
+static bool do_aese(DisasContext *s, arg_rrr_esz *a, bool decrypt)
61
+{
62
+ if (!dc_isar_feature(aa64_sve2_aes, s)) {
63
+ return false;
64
+ }
88
+ }
65
+ if (sve_access_check(s)) {
89
+};
66
+ gen_gvec_ool_zzz(s, gen_helper_crypto_aese,
90
#endif /* AARCH64 */
67
+ a->rd, a->rn, a->rm, decrypt);
91
68
+ }
92
static bool serror_needed(void *opaque)
69
+ return true;
93
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
70
+}
94
&vmstate_m_security,
71
+
95
#ifdef TARGET_AARCH64
72
+static bool trans_AESE(DisasContext *s, arg_rrr_esz *a)
96
&vmstate_sve,
73
+{
97
+ &vmstate_za,
74
+ return do_aese(s, a, false);
98
#endif
75
+}
99
&vmstate_serror,
76
+
100
&vmstate_irq_line_state,
77
+static bool trans_AESD(DisasContext *s, arg_rrr_esz *a)
78
+{
79
+ return do_aese(s, a, true);
80
+}
81
+
82
+static bool do_sm4(DisasContext *s, arg_rrr_esz *a, gen_helper_gvec_3 *fn)
83
+{
84
+ if (!dc_isar_feature(aa64_sve2_sm4, s)) {
85
+ return false;
86
+ }
87
+ if (sve_access_check(s)) {
88
+ gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, 0);
89
+ }
90
+ return true;
91
+}
92
+
93
+static bool trans_SM4E(DisasContext *s, arg_rrr_esz *a)
94
+{
95
+ return do_sm4(s, a, gen_helper_crypto_sm4e);
96
+}
97
--
101
--
98
2.20.1
102
2.25.1
99
100
diff view generated by jsdifflib
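As a concrete illustration of the ZA tile striping described in the new
cpu.h comment above, here is a minimal sketch; the helper name and its
standalone form are made up for illustration and are not part of the
patch. For tiles of esz-byte elements, horizontal slice N of tile T
lives in zarray[T + N * esz], so the rows of a tile are striped through
the ZA array rather than stored contiguously.

    /* Illustrative only: index into env->zarray[] of the row holding
     * horizontal slice 'row' of tile 'tile', for esz-byte elements,
     * following the ZA[T + N * esz] mapping described above. */
    static inline int za_tile_row_index(int tile, int row, int esz_bytes)
    {
        return tile + row * esz_bytes;
    }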
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In addition, use the same vector generator interface for AdvSIMD.
3
These two instructions are aliases of MSR (immediate).
4
This fixes a bug in which the AdvSIMD insn failed to clear the
4
Use the two helpers to properly implement svcr_write.
5
high bits of the SVE register.
6
5
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-44-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-11-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper-sve.h | 4 ++
11
target/arm/cpu.h | 1 +
13
target/arm/helper.h | 2 +
12
target/arm/helper-sme.h | 21 +++++++++++++
14
target/arm/translate-a64.h | 3 ++
13
target/arm/helper.h | 1 +
15
target/arm/sve.decode | 4 ++
14
target/arm/helper.c | 6 ++--
16
target/arm/sve_helper.c | 39 ++++++++++++++
15
target/arm/sme_helper.c | 61 ++++++++++++++++++++++++++++++++++++++
17
target/arm/translate-a64.c | 25 ++-------
16
target/arm/translate-a64.c | 24 +++++++++++++++
18
target/arm/translate-sve.c | 104 +++++++++++++++++++++++++++++++++++++
17
target/arm/meson.build | 1 +
19
target/arm/vec_helper.c | 12 +++++
18
7 files changed, 112 insertions(+), 3 deletions(-)
20
8 files changed, 172 insertions(+), 21 deletions(-)
19
create mode 100644 target/arm/helper-sme.h
20
create mode 100644 target/arm/sme_helper.c
21
21
22
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper-sve.h
24
--- a/target/arm/cpu.h
25
+++ b/target/arm/helper-sve.h
25
+++ b/target/arm/cpu.h
26
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_histcnt_d, TCG_CALL_NO_RWG,
26
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
27
27
int new_el, bool el0_a64);
28
DEF_HELPER_FLAGS_4(sve2_histseg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
void aarch64_add_sve_properties(Object *obj);
29
29
void aarch64_add_pauth_properties(Object *obj);
30
+DEF_HELPER_FLAGS_4(sve2_xar_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+void arm_reset_sve_state(CPUARMState *env);
31
+DEF_HELPER_FLAGS_4(sve2_xar_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
32
+DEF_HELPER_FLAGS_4(sve2_xar_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
/*
33
+
33
* SVE registers are encoded in KVM's memory in an endianness-invariant format.
34
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
34
diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
35
void, ptr, ptr, ptr, ptr, ptr, i32)
35
new file mode 100644
36
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
36
index XXXXXXX..XXXXXXX
37
--- /dev/null
38
+++ b/target/arm/helper-sme.h
39
@@ -XXX,XX +XXX,XX @@
40
+/*
41
+ * AArch64 SME specific helper definitions
42
+ *
43
+ * Copyright (c) 2022 Linaro, Ltd
44
+ *
45
+ * This library is free software; you can redistribute it and/or
46
+ * modify it under the terms of the GNU Lesser General Public
47
+ * License as published by the Free Software Foundation; either
48
+ * version 2.1 of the License, or (at your option) any later version.
49
+ *
50
+ * This library is distributed in the hope that it will be useful,
51
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
52
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
53
+ * Lesser General Public License for more details.
54
+ *
55
+ * You should have received a copy of the GNU Lesser General Public
56
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
57
+ */
58
+
59
+DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
60
+DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
37
diff --git a/target/arm/helper.h b/target/arm/helper.h
61
diff --git a/target/arm/helper.h b/target/arm/helper.h
38
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.h
63
--- a/target/arm/helper.h
40
+++ b/target/arm/helper.h
64
+++ b/target/arm/helper.h
41
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
65
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
42
DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
43
void, ptr, ptr, ptr, ptr, i32)
44
45
+DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
+
47
#ifdef TARGET_AARCH64
66
#ifdef TARGET_AARCH64
48
#include "helper-a64.h"
67
#include "helper-a64.h"
49
#include "helper-sve.h"
68
#include "helper-sve.h"
50
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
69
+#include "helper-sme.h"
51
index XXXXXXX..XXXXXXX 100644
70
#endif
52
--- a/target/arm/translate-a64.h
71
53
+++ b/target/arm/translate-a64.h
72
#include "helper-mve.h"
54
@@ -XXX,XX +XXX,XX @@ bool disas_sve(DisasContext *, uint32_t);
73
diff --git a/target/arm/helper.c b/target/arm/helper.c
55
74
index XXXXXXX..XXXXXXX 100644
56
void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
75
--- a/target/arm/helper.c
57
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
76
+++ b/target/arm/helper.c
58
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
77
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
59
+ uint32_t rm_ofs, int64_t shift,
78
static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
60
+ uint32_t opr_sz, uint32_t max_sz);
79
uint64_t value)
61
80
{
62
#endif /* TARGET_ARM_TRANSLATE_A64_H */
81
- value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
63
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
82
- /* TODO: Side effects. */
64
index XXXXXXX..XXXXXXX 100644
83
- env->svcr = value;
65
--- a/target/arm/sve.decode
84
+ helper_set_pstate_sm(env, FIELD_EX64(value, SVCR, SM));
66
+++ b/target/arm/sve.decode
85
+ helper_set_pstate_za(env, FIELD_EX64(value, SVCR, ZA));
86
+ arm_rebuild_hflags(env);
87
}
88
89
static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
91
new file mode 100644
92
index XXXXXXX..XXXXXXX
93
--- /dev/null
94
+++ b/target/arm/sme_helper.c
67
@@ -XXX,XX +XXX,XX @@
95
@@ -XXX,XX +XXX,XX @@
68
&rr_dbm rd rn dbm
96
+/*
69
&rrri rd rn rm imm
97
+ * ARM SME Operations
70
&rri_esz rd rn imm esz
98
+ *
71
+&rrri_esz rd rn rm imm esz
99
+ * Copyright (c) 2022 Linaro, Ltd.
72
&rrr_esz rd rn rm esz
100
+ *
73
&rpr_esz rd pg rn esz
101
+ * This library is free software; you can redistribute it and/or
74
&rpr_s rd pg rn s
102
+ * modify it under the terms of the GNU Lesser General Public
75
@@ -XXX,XX +XXX,XX @@ ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
103
+ * License as published by the Free Software Foundation; either
76
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
104
+ * version 2.1 of the License, or (at your option) any later version.
77
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
105
+ *
78
106
+ * This library is distributed in the hope that it will be useful,
79
+XAR 00000100 .. 1 ..... 001 101 rm:5 rd:5 &rrri_esz \
107
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
80
+ rn=%reg_movprfx esz=%tszimm16_esz imm=%tszimm16_shr
108
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
81
+
109
+ * Lesser General Public License for more details.
82
# SVE2 bitwise ternary operations
110
+ *
83
EOR3 00000100 00 1 ..... 001 110 ..... ..... @rdn_ra_rm_e0
111
+ * You should have received a copy of the GNU Lesser General Public
84
BSL 00000100 00 1 ..... 001 111 ..... ..... @rdn_ra_rm_e0
112
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
85
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
113
+ */
86
index XXXXXXX..XXXXXXX 100644
114
+
87
--- a/target/arm/sve_helper.c
115
+#include "qemu/osdep.h"
88
+++ b/target/arm/sve_helper.c
116
+#include "cpu.h"
89
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_histseg)(void *vd, void *vn, void *vm, uint32_t desc)
117
+#include "internals.h"
90
*(uint64_t *)(vd + i + 8) = out1;
118
+#include "exec/helper-proto.h"
91
}
119
+
92
}
120
+/* ResetSVEState */
93
+
121
+void arm_reset_sve_state(CPUARMState *env)
94
+void HELPER(sve2_xar_b)(void *vd, void *vn, void *vm, uint32_t desc)
95
+{
122
+{
96
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
123
+ memset(env->vfp.zregs, 0, sizeof(env->vfp.zregs));
97
+ int shr = simd_data(desc);
124
+ /* Recall that FFR is stored as pregs[16]. */
98
+ int shl = 8 - shr;
125
+ memset(env->vfp.pregs, 0, sizeof(env->vfp.pregs));
99
+ uint64_t mask = dup_const(MO_8, 0xff >> shr);
126
+ vfp_set_fpcr(env, 0x0800009f);
100
+ uint64_t *d = vd, *n = vn, *m = vm;
127
+}
101
+
128
+
102
+ for (i = 0; i < opr_sz; ++i) {
129
+void helper_set_pstate_sm(CPUARMState *env, uint32_t i)
103
+ uint64_t t = n[i] ^ m[i];
130
+{
104
+ d[i] = ((t >> shr) & mask) | ((t << shl) & ~mask);
131
+ if (i == FIELD_EX64(env->svcr, SVCR, SM)) {
132
+ return;
105
+ }
133
+ }
134
+ env->svcr ^= R_SVCR_SM_MASK;
135
+ arm_reset_sve_state(env);
106
+}
136
+}
107
+
137
+
108
+void HELPER(sve2_xar_h)(void *vd, void *vn, void *vm, uint32_t desc)
138
+void helper_set_pstate_za(CPUARMState *env, uint32_t i)
109
+{
139
+{
110
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
140
+ if (i == FIELD_EX64(env->svcr, SVCR, ZA)) {
111
+ int shr = simd_data(desc);
141
+ return;
112
+ int shl = 16 - shr;
113
+ uint64_t mask = dup_const(MO_16, 0xffff >> shr);
114
+ uint64_t *d = vd, *n = vn, *m = vm;
115
+
116
+ for (i = 0; i < opr_sz; ++i) {
117
+ uint64_t t = n[i] ^ m[i];
118
+ d[i] = ((t >> shr) & mask) | ((t << shl) & ~mask);
119
+ }
142
+ }
120
+}
143
+ env->svcr ^= R_SVCR_ZA_MASK;
121
+
144
+
122
+void HELPER(sve2_xar_s)(void *vd, void *vn, void *vm, uint32_t desc)
145
+ /*
123
+{
146
+ * ResetSMEState.
124
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
147
+ *
125
+ int shr = simd_data(desc);
148
+ * SetPSTATE_ZA zeros on enable and disable. We can zero this only
126
+ uint32_t *d = vd, *n = vn, *m = vm;
149
+ * on enable: while disabled, the storage is inaccessible and the
127
+
150
+ * value does not matter. We're not saving the storage in vmstate
128
+ for (i = 0; i < opr_sz; ++i) {
151
+ * when disabled either.
129
+ d[i] = ror32(n[i] ^ m[i], shr);
152
+ */
153
+ if (i) {
154
+ memset(env->zarray, 0, sizeof(env->zarray));
130
+ }
155
+ }
131
+}
156
+}
132
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
157
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
133
index XXXXXXX..XXXXXXX 100644
158
index XXXXXXX..XXXXXXX 100644
134
--- a/target/arm/translate-a64.c
159
--- a/target/arm/translate-a64.c
135
+++ b/target/arm/translate-a64.c
160
+++ b/target/arm/translate-a64.c
136
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
161
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
137
int imm6 = extract32(insn, 10, 6);
162
}
138
int rn = extract32(insn, 5, 5);
163
break;
139
int rd = extract32(insn, 0, 5);
164
140
- TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
165
+ case 0x1b: /* SVCR* */
141
- int pass;
166
+ if (!dc_isar_feature(aa64_sme, s) || crm < 2 || crm > 7) {
142
167
+ goto do_unallocated;
143
if (!dc_isar_feature(aa64_sha3, s)) {
168
+ }
169
+ if (sme_access_check(s)) {
170
+ bool i = crm & 1;
171
+ bool changed = false;
172
+
173
+ if ((crm & 2) && i != s->pstate_sm) {
174
+ gen_helper_set_pstate_sm(cpu_env, tcg_constant_i32(i));
175
+ changed = true;
176
+ }
177
+ if ((crm & 4) && i != s->pstate_za) {
178
+ gen_helper_set_pstate_za(cpu_env, tcg_constant_i32(i));
179
+ changed = true;
180
+ }
181
+ if (changed) {
182
+ gen_rebuild_hflags(s);
183
+ } else {
184
+ s->base.is_jmp = DISAS_NEXT;
185
+ }
186
+ }
187
+ break;
188
+
189
default:
190
do_unallocated:
144
unallocated_encoding(s);
191
unallocated_encoding(s);
145
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
192
diff --git a/target/arm/meson.build b/target/arm/meson.build
146
return;
193
index XXXXXXX..XXXXXXX 100644
147
}
194
--- a/target/arm/meson.build
148
195
+++ b/target/arm/meson.build
149
- tcg_op1 = tcg_temp_new_i64();
196
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
150
- tcg_op2 = tcg_temp_new_i64();
197
'mte_helper.c',
151
- tcg_res[0] = tcg_temp_new_i64();
198
'pauth_helper.c',
152
- tcg_res[1] = tcg_temp_new_i64();
199
'sve_helper.c',
153
-
200
+ 'sme_helper.c',
154
- for (pass = 0; pass < 2; pass++) {
201
'translate-a64.c',
155
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
202
'translate-sve.c',
156
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
203
))
157
-
158
- tcg_gen_xor_i64(tcg_res[pass], tcg_op1, tcg_op2);
159
- tcg_gen_rotri_i64(tcg_res[pass], tcg_res[pass], imm6);
160
- }
161
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
162
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
163
-
164
- tcg_temp_free_i64(tcg_op1);
165
- tcg_temp_free_i64(tcg_op2);
166
- tcg_temp_free_i64(tcg_res[0]);
167
- tcg_temp_free_i64(tcg_res[1]);
168
+ gen_gvec_xar(MO_64, vec_full_reg_offset(s, rd),
169
+ vec_full_reg_offset(s, rn),
170
+ vec_full_reg_offset(s, rm), imm6, 16,
171
+ vec_full_reg_size(s));
172
}
173
174
/* Crypto three-reg imm2
175
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
176
index XXXXXXX..XXXXXXX 100644
177
--- a/target/arm/translate-sve.c
178
+++ b/target/arm/translate-sve.c
179
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
180
return do_zzz_fn(s, a, tcg_gen_gvec_andc);
181
}
182
183
+static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
184
+{
185
+ TCGv_i64 t = tcg_temp_new_i64();
186
+ uint64_t mask = dup_const(MO_8, 0xff >> sh);
187
+
188
+ tcg_gen_xor_i64(t, n, m);
189
+ tcg_gen_shri_i64(d, t, sh);
190
+ tcg_gen_shli_i64(t, t, 8 - sh);
191
+ tcg_gen_andi_i64(d, d, mask);
192
+ tcg_gen_andi_i64(t, t, ~mask);
193
+ tcg_gen_or_i64(d, d, t);
194
+ tcg_temp_free_i64(t);
195
+}
196
+
197
+static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
198
+{
199
+ TCGv_i64 t = tcg_temp_new_i64();
200
+ uint64_t mask = dup_const(MO_16, 0xffff >> sh);
201
+
202
+ tcg_gen_xor_i64(t, n, m);
203
+ tcg_gen_shri_i64(d, t, sh);
204
+ tcg_gen_shli_i64(t, t, 16 - sh);
205
+ tcg_gen_andi_i64(d, d, mask);
206
+ tcg_gen_andi_i64(t, t, ~mask);
207
+ tcg_gen_or_i64(d, d, t);
208
+ tcg_temp_free_i64(t);
209
+}
210
+
211
+static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
212
+{
213
+ tcg_gen_xor_i32(d, n, m);
214
+ tcg_gen_rotri_i32(d, d, sh);
215
+}
216
+
217
+static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
218
+{
219
+ tcg_gen_xor_i64(d, n, m);
220
+ tcg_gen_rotri_i64(d, d, sh);
221
+}
222
+
223
+static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
224
+ TCGv_vec m, int64_t sh)
225
+{
226
+ tcg_gen_xor_vec(vece, d, n, m);
227
+ tcg_gen_rotri_vec(vece, d, d, sh);
228
+}
229
+
230
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
231
+ uint32_t rm_ofs, int64_t shift,
232
+ uint32_t opr_sz, uint32_t max_sz)
233
+{
234
+ static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
235
+ static const GVecGen3i ops[4] = {
236
+ { .fni8 = gen_xar8_i64,
237
+ .fniv = gen_xar_vec,
238
+ .fno = gen_helper_sve2_xar_b,
239
+ .opt_opc = vecop,
240
+ .vece = MO_8 },
241
+ { .fni8 = gen_xar16_i64,
242
+ .fniv = gen_xar_vec,
243
+ .fno = gen_helper_sve2_xar_h,
244
+ .opt_opc = vecop,
245
+ .vece = MO_16 },
246
+ { .fni4 = gen_xar_i32,
247
+ .fniv = gen_xar_vec,
248
+ .fno = gen_helper_sve2_xar_s,
249
+ .opt_opc = vecop,
250
+ .vece = MO_32 },
251
+ { .fni8 = gen_xar_i64,
252
+ .fniv = gen_xar_vec,
253
+ .fno = gen_helper_gvec_xar_d,
254
+ .opt_opc = vecop,
255
+ .vece = MO_64 }
256
+ };
257
+ int esize = 8 << vece;
258
+
259
+ /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
260
+ tcg_debug_assert(shift >= 0);
261
+ tcg_debug_assert(shift <= esize);
262
+ shift &= esize - 1;
263
+
264
+ if (shift == 0) {
265
+ /* xar with no rotate devolves to xor. */
266
+ tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
267
+ } else {
268
+ tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
269
+ shift, &ops[vece]);
270
+ }
271
+}
272
+
273
+static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
274
+{
275
+ if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
276
+ return false;
277
+ }
278
+ if (sve_access_check(s)) {
279
+ unsigned vsz = vec_full_reg_size(s);
280
+ gen_gvec_xar(a->esz, vec_full_reg_offset(s, a->rd),
281
+ vec_full_reg_offset(s, a->rn),
282
+ vec_full_reg_offset(s, a->rm), a->imm, vsz, vsz);
283
+ }
284
+ return true;
285
+}
286
+
287
static bool do_sve2_zzzz_fn(DisasContext *s, arg_rrrr_esz *a, GVecGen4Fn *fn)
288
{
289
if (!dc_isar_feature(aa64_sve2, s)) {
290
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/vec_helper.c
293
+++ b/target/arm/vec_helper.c
294
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_umulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
295
}
296
clear_tail(d, opr_sz, simd_maxsz(desc));
297
}
298
+
299
+void HELPER(gvec_xar_d)(void *vd, void *vn, void *vm, uint32_t desc)
300
+{
301
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
302
+ int shr = simd_data(desc);
303
+ uint64_t *d = vd, *n = vn, *m = vm;
304
+
305
+ for (i = 0; i < opr_sz; ++i) {
306
+ d[i] = ror64(n[i] ^ m[i], shr);
307
+ }
308
+ clear_tail(d, opr_sz * 8, simd_maxsz(desc));
309
+}
310
--
204
--
311
2.20.1
205
2.25.1
312
313
diff view generated by jsdifflib
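For reference, the XAR operation wired up above reduces to an
exclusive-or of the two operands followed by a rotate right of each
element by the immediate. A minimal standalone sketch of the 64-bit
case follows; the function name is made up here, while the in-tree code
uses ror64() and the gvec expanders shown in the diff.

    #include <stdint.h>

    /* Illustrative reference model of 64-bit XAR: d = ROR64(n ^ m, shift). */
    static inline uint64_t xar64_ref(uint64_t n, uint64_t m, unsigned shift)
    {
        uint64_t t = n ^ m;
        shift &= 63;
        return shift ? (t >> shift) | (t << (64 - shift)) : t;
    }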
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Currently only used by FMUL, but will shortly be used more.
3
Keep all of the error messages together. This does mean that
4
when setting many sve length properties, we'll only generate
5
one error, but we only really need one.
4
6
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-52-richard.henderson@linaro.org
9
Message-id: 20220620175235.60881-12-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/sve.decode | 14 ++++++++++----
12
target/arm/cpu64.c | 15 +++++++--------
11
1 file changed, 10 insertions(+), 4 deletions(-)
13
1 file changed, 7 insertions(+), 8 deletions(-)
12
14
13
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
15
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/sve.decode
17
--- a/target/arm/cpu64.c
16
+++ b/target/arm/sve.decode
18
+++ b/target/arm/cpu64.c
17
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
18
&rri_esz rd rn imm esz
20
"using only sve<N> properties.\n");
19
&rrri_esz rd rn rm imm esz
21
} else {
20
&rrr_esz rd rn rm esz
22
error_setg(errp, "cannot enable sve%d", vq * 128);
21
+&rrx_esz rd rn rm index esz
23
- error_append_hint(errp, "This CPU does not support "
22
&rpr_esz rd pg rn esz
24
- "the vector length %d-bits.\n", vq * 128);
23
&rpr_s rd pg rn s
25
+ if (vq_supported) {
24
&rprr_s rd pg rn rm s
26
+ error_append_hint(errp, "This CPU does not support "
25
@@ -XXX,XX +XXX,XX @@
27
+ "the vector length %d-bits.\n", vq * 128);
26
@rpri_scatter_store ....... msz:2 .. imm:5 ... pg:3 rn:5 rd:5 \
28
+ } else {
27
&rpri_scatter_store
29
+ error_append_hint(errp, "SVE not supported by KVM "
28
30
+ "on this host\n");
29
+# Two registers and a scalar by N-bit index
31
+ }
30
+@rrx_3 ........ .. . .. rm:3 ...... rn:5 rd:5 \
32
}
31
+ &rrx_esz index=%index3_22_19
33
return;
32
+@rrx_2 ........ .. . index:2 rm:3 ...... rn:5 rd:5 &rrx_esz
34
} else {
33
+@rrx_1 ........ .. . index:1 rm:4 ...... rn:5 rd:5 &rrx_esz
35
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
34
+
36
return;
35
###########################################################################
37
}
36
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
38
37
39
- if (value && kvm_enabled() && !kvm_arm_sve_supported()) {
38
@@ -XXX,XX +XXX,XX @@ FMLA_zzxz 01100100 111 index:1 rm:4 00000 sub:1 rn:5 rd:5 \
40
- error_setg(errp, "cannot enable %s", name);
39
### SVE FP Multiply Indexed Group
41
- error_append_hint(errp, "SVE not supported by KVM on this host\n");
40
42
- return;
41
# SVE floating-point multiply (indexed)
43
- }
42
-FMUL_zzx 01100100 0.1 .. rm:3 001000 rn:5 rd:5 \
44
-
43
- index=%index3_22_19 esz=1
45
cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
44
-FMUL_zzx 01100100 101 index:2 rm:3 001000 rn:5 rd:5 esz=2
46
cpu->sve_vq_init |= 1 << (vq - 1);
45
-FMUL_zzx 01100100 111 index:1 rm:4 001000 rn:5 rd:5 esz=3
47
}
46
+FMUL_zzx 01100100 0. 1 ..... 001000 ..... ..... @rrx_3 esz=1
47
+FMUL_zzx 01100100 10 1 ..... 001000 ..... ..... @rrx_2 esz=2
48
+FMUL_zzx 01100100 11 1 ..... 001000 ..... ..... @rrx_1 esz=3
49
50
### SVE FP Fast Reduction Group
51
52
--
48
--
53
2.20.1
49
2.25.1
54
55
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Will be used for SVE2 ISA subset enablement.
3
Pull the three sve_vq_* values into a structure.
4
This will be reused for SME.
4
5
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-2-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-13-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/cpu.h | 16 ++++++++++++++++
11
target/arm/cpu.h | 29 ++++++++++++++---------------
12
target/arm/helper.c | 3 +--
12
target/arm/cpu64.c | 22 +++++++++++-----------
13
target/arm/kvm64.c | 21 +++++++++++++++------
13
target/arm/helper.c | 2 +-
14
3 files changed, 32 insertions(+), 8 deletions(-)
14
target/arm/kvm64.c | 2 +-
15
4 files changed, 27 insertions(+), 28 deletions(-)
15
16
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
21
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
21
uint64_t id_aa64mmfr2;
22
22
uint64_t id_aa64dfr0;
23
typedef struct ARMISARegisters ARMISARegisters;
23
uint64_t id_aa64dfr1;
24
24
+ uint64_t id_aa64zfr0;
25
+/*
25
} isar;
26
+ * In map, each set bit is a supported vector length of (bit-number + 1) * 16
26
uint64_t midr;
27
+ * bytes, i.e. each bit number + 1 is the vector length in quadwords.
27
uint32_t revidr;
28
+ *
28
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
29
+ * While processing properties during initialization, corresponding init bits
29
FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)
30
+ * are set for bits in sve_vq_map that have been set by properties.
30
FIELD(ID_AA64DFR0, MTPMU, 48, 4)
31
+ *
31
32
+ * Bits set in supported represent valid vector lengths for the CPU type.
32
+FIELD(ID_AA64ZFR0, SVEVER, 0, 4)
33
+ */
33
+FIELD(ID_AA64ZFR0, AES, 4, 4)
34
+typedef struct {
34
+FIELD(ID_AA64ZFR0, BITPERM, 16, 4)
35
+ uint32_t map, init, supported;
35
+FIELD(ID_AA64ZFR0, BFLOAT16, 20, 4)
36
+} ARMVQMap;
36
+FIELD(ID_AA64ZFR0, SHA3, 32, 4)
37
+FIELD(ID_AA64ZFR0, SM4, 40, 4)
38
+FIELD(ID_AA64ZFR0, I8MM, 44, 4)
39
+FIELD(ID_AA64ZFR0, F32MM, 52, 4)
40
+FIELD(ID_AA64ZFR0, F64MM, 56, 4)
41
+
37
+
42
FIELD(ID_DFR0, COPDBG, 0, 4)
38
/**
43
FIELD(ID_DFR0, COPSDBG, 4, 4)
39
* ARMCPU:
44
FIELD(ID_DFR0, MMAPDBG, 8, 4)
40
* @env: #CPUARMState
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
41
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
46
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
42
uint32_t sve_default_vq;
43
#endif
44
45
- /*
46
- * In sve_vq_map each set bit is a supported vector length of
47
- * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
48
- * length in quadwords.
49
- *
50
- * While processing properties during initialization, corresponding
51
- * sve_vq_init bits are set for bits in sve_vq_map that have been
52
- * set by properties.
53
- *
54
- * Bits set in sve_vq_supported represent valid vector lengths for
55
- * the CPU type.
56
- */
57
- uint32_t sve_vq_map;
58
- uint32_t sve_vq_init;
59
- uint32_t sve_vq_supported;
60
+ ARMVQMap sve_vq;
61
62
/* Generic timer counter frequency, in Hz */
63
uint64_t gt_cntfrq_hz;
64
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/cpu64.c
67
+++ b/target/arm/cpu64.c
68
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
69
* any of the above. Finally, if SVE is not disabled, then at least one
70
* vector length must be enabled.
71
*/
72
- uint32_t vq_map = cpu->sve_vq_map;
73
- uint32_t vq_init = cpu->sve_vq_init;
74
+ uint32_t vq_map = cpu->sve_vq.map;
75
+ uint32_t vq_init = cpu->sve_vq.init;
76
uint32_t vq_supported;
77
uint32_t vq_mask = 0;
78
uint32_t tmp, vq, max_vq = 0;
79
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
80
*/
81
if (kvm_enabled()) {
82
if (kvm_arm_sve_supported()) {
83
- cpu->sve_vq_supported = kvm_arm_sve_get_vls(CPU(cpu));
84
- vq_supported = cpu->sve_vq_supported;
85
+ cpu->sve_vq.supported = kvm_arm_sve_get_vls(CPU(cpu));
86
+ vq_supported = cpu->sve_vq.supported;
87
} else {
88
assert(!cpu_isar_feature(aa64_sve, cpu));
89
vq_supported = 0;
90
}
91
} else {
92
- vq_supported = cpu->sve_vq_supported;
93
+ vq_supported = cpu->sve_vq.supported;
94
}
95
96
/*
97
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
98
99
/* From now on sve_max_vq is the actual maximum supported length. */
100
cpu->sve_max_vq = max_vq;
101
- cpu->sve_vq_map = vq_map;
102
+ cpu->sve_vq.map = vq_map;
47
}
103
}
48
104
49
+static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
105
static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
50
+{
106
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
51
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
107
if (!cpu_isar_feature(aa64_sve, cpu)) {
52
+}
108
value = false;
53
+
109
} else {
54
/*
110
- value = extract32(cpu->sve_vq_map, vq - 1, 1);
55
* Feature tests for "does this exist in either 32-bit or 64-bit?"
111
+ value = extract32(cpu->sve_vq.map, vq - 1, 1);
56
*/
112
}
113
visit_type_bool(v, name, &value, errp);
114
}
115
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
116
return;
117
}
118
119
- cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
120
- cpu->sve_vq_init |= 1 << (vq - 1);
121
+ cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
122
+ cpu->sve_vq.init |= 1 << (vq - 1);
123
}
124
125
static bool cpu_arm_get_sve(Object *obj, Error **errp)
126
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
127
cpu->dcz_blocksize = 7; /* 512 bytes */
128
#endif
129
130
- cpu->sve_vq_supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
131
+ cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
132
133
aarch64_add_pauth_properties(obj);
134
aarch64_add_sve_properties(obj);
135
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
136
137
/* The A64FX supports only 128, 256 and 512 bit vector lengths */
138
aarch64_add_sve_properties(obj);
139
- cpu->sve_vq_supported = (1 << 0) /* 128bit */
140
+ cpu->sve_vq.supported = (1 << 0) /* 128bit */
141
| (1 << 1) /* 256bit */
142
| (1 << 3); /* 512bit */
143
57
diff --git a/target/arm/helper.c b/target/arm/helper.c
144
diff --git a/target/arm/helper.c b/target/arm/helper.c
58
index XXXXXXX..XXXXXXX 100644
145
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/helper.c
146
--- a/target/arm/helper.c
60
+++ b/target/arm/helper.c
147
+++ b/target/arm/helper.c
61
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
148
@@ -XXX,XX +XXX,XX @@ uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
62
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 4,
149
len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
63
.access = PL1_R, .type = ARM_CP_CONST,
150
}
64
.accessfn = access_aa64_tid3,
151
65
- /* At present, only SVEver == 0 is defined anyway. */
152
- len = 31 - clz32(cpu->sve_vq_map & MAKE_64BIT_MASK(0, len + 1));
66
- .resetvalue = 0 },
153
+ len = 31 - clz32(cpu->sve_vq.map & MAKE_64BIT_MASK(0, len + 1));
67
+ .resetvalue = cpu->isar.id_aa64zfr0 },
154
return len;
68
{ .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
155
}
69
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5,
156
70
.access = PL1_R, .type = ARM_CP_CONST,
71
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
157
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
72
index XXXXXXX..XXXXXXX 100644
158
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/kvm64.c
159
--- a/target/arm/kvm64.c
74
+++ b/target/arm/kvm64.c
160
+++ b/target/arm/kvm64.c
75
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
161
@@ -XXX,XX +XXX,XX @@ uint32_t kvm_arm_sve_get_vls(CPUState *cs)
76
162
static int kvm_arm_sve_set_vls(CPUState *cs)
77
sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
163
{
78
164
ARMCPU *cpu = ARM_CPU(cs);
79
- kvm_arm_destroy_scratch_host_vcpu(fdarray);
165
- uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq_map };
80
-
166
+ uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq.map };
81
- if (err < 0) {
167
struct kvm_one_reg reg = {
82
- return false;
168
.id = KVM_REG_ARM64_SVE_VLS,
83
- }
169
.addr = (uint64_t)&vls[0],
84
-
85
/* Add feature bits that can't appear until after VCPU init. */
86
if (sve_supported) {
87
t = ahcf->isar.id_aa64pfr0;
88
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
89
ahcf->isar.id_aa64pfr0 = t;
90
+
91
+ /*
92
+ * Before v5.1, KVM did not support SVE and did not expose
93
+ * ID_AA64ZFR0_EL1 even as RAZ. After v5.1, KVM still does
94
+ * not expose the register to "user" requests like this
95
+ * unless the host supports SVE.
96
+ */
97
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
98
+ ARM64_SYS_REG(3, 0, 0, 4, 4));
99
+ }
100
+
101
+ kvm_arm_destroy_scratch_host_vcpu(fdarray);
102
+
103
+ if (err < 0) {
104
+ return false;
105
}
106
107
/*
108
--
170
--
109
2.20.1
171
2.25.1
110
111
diff view generated by jsdifflib
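The ARMVQMap bitmap introduced above keeps the existing sve_vq_map
encoding: bit number (vq - 1) being set means a vector length of vq
quadwords, i.e. vq * 16 bytes or vq * 128 bits. A small illustrative
helper, not part of the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: does this vq bitmap include a vector length of
     * vq quadwords (vq * 16 bytes)? */
    static inline bool vq_map_has(uint32_t map, unsigned vq)
    {
        return (map >> (vq - 1)) & 1;
    }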
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Used by FMLA and DOT, but will shortly be used more.
3
Rename from cpu_arm_{get,set}_sve_vq, and take the
4
Split FMLA from FMLS to avoid an extra sub field;
4
ARMVQMap as the opaque parameter.
5
similarly for SDOT from UDOT.
6
5
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-53-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-14-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/sve.decode | 29 +++++++++++++++++++----------
11
target/arm/cpu64.c | 29 +++++++++++++++--------------
13
target/arm/translate-sve.c | 38 ++++++++++++++++++++++++++++----------
12
1 file changed, 15 insertions(+), 14 deletions(-)
14
2 files changed, 47 insertions(+), 20 deletions(-)
15
13
16
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
14
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve.decode
16
--- a/target/arm/cpu64.c
19
+++ b/target/arm/sve.decode
17
+++ b/target/arm/cpu64.c
20
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
21
&rprr_s rd pg rn rm s
22
&rprr_esz rd pg rn rm esz
23
&rrrr_esz rd ra rn rm esz
24
+&rrxr_esz rd rn rm ra index esz
25
&rprrr_esz rd pg rn rm ra esz
26
&rpri_esz rd pg rn imm esz
27
&ptrue rd esz pat s
28
@@ -XXX,XX +XXX,XX @@
29
@rrx_2 ........ .. . index:2 rm:3 ...... rn:5 rd:5 &rrx_esz
30
@rrx_1 ........ .. . index:1 rm:4 ...... rn:5 rd:5 &rrx_esz
31
32
+# Three registers and a scalar by N-bit index
33
+@rrxr_3 ........ .. . .. rm:3 ...... rn:5 rd:5 \
34
+ &rrxr_esz ra=%reg_movprfx index=%index3_22_19
35
+@rrxr_2 ........ .. . index:2 rm:3 ...... rn:5 rd:5 \
36
+ &rrxr_esz ra=%reg_movprfx
37
+@rrxr_1 ........ .. . index:1 rm:4 ...... rn:5 rd:5 \
38
+ &rrxr_esz ra=%reg_movprfx
39
+
40
###########################################################################
41
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
42
43
@@ -XXX,XX +XXX,XX @@ DOT_zzzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 \
44
ra=%reg_movprfx
45
46
# SVE integer dot product (indexed)
47
-DOT_zzxw 01000100 101 index:2 rm:3 00000 u:1 rn:5 rd:5 \
48
- sz=0 ra=%reg_movprfx
49
-DOT_zzxw 01000100 111 index:1 rm:4 00000 u:1 rn:5 rd:5 \
50
- sz=1 ra=%reg_movprfx
51
+SDOT_zzxw_s 01000100 10 1 ..... 000000 ..... ..... @rrxr_2 esz=2
52
+SDOT_zzxw_d 01000100 11 1 ..... 000000 ..... ..... @rrxr_1 esz=3
53
+UDOT_zzxw_s 01000100 10 1 ..... 000001 ..... ..... @rrxr_2 esz=2
54
+UDOT_zzxw_d 01000100 11 1 ..... 000001 ..... ..... @rrxr_1 esz=3
55
56
# SVE floating-point complex add (predicated)
57
FCADD 01100100 esz:2 00000 rot:1 100 pg:3 rm:5 rd:5 \
58
@@ -XXX,XX +XXX,XX @@ FCMLA_zzxz 01100100 11 1 index:1 rm:4 0001 rot:2 rn:5 rd:5 \
59
### SVE FP Multiply-Add Indexed Group
60
61
# SVE floating-point multiply-add (indexed)
62
-FMLA_zzxz 01100100 0.1 .. rm:3 00000 sub:1 rn:5 rd:5 \
63
- ra=%reg_movprfx index=%index3_22_19 esz=1
64
-FMLA_zzxz 01100100 101 index:2 rm:3 00000 sub:1 rn:5 rd:5 \
65
- ra=%reg_movprfx esz=2
66
-FMLA_zzxz 01100100 111 index:1 rm:4 00000 sub:1 rn:5 rd:5 \
67
- ra=%reg_movprfx esz=3
68
+FMLA_zzxz 01100100 0. 1 ..... 000000 ..... ..... @rrxr_3 esz=1
69
+FMLA_zzxz 01100100 10 1 ..... 000000 ..... ..... @rrxr_2 esz=2
70
+FMLA_zzxz 01100100 11 1 ..... 000000 ..... ..... @rrxr_1 esz=3
71
+FMLS_zzxz 01100100 0. 1 ..... 000001 ..... ..... @rrxr_3 esz=1
72
+FMLS_zzxz 01100100 10 1 ..... 000001 ..... ..... @rrxr_2 esz=2
73
+FMLS_zzxz 01100100 11 1 ..... 000001 ..... ..... @rrxr_1 esz=3
74
75
### SVE FP Multiply Indexed Group
76
77
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/translate-sve.c
80
+++ b/target/arm/translate-sve.c
81
@@ -XXX,XX +XXX,XX @@ static bool trans_DOT_zzzz(DisasContext *s, arg_DOT_zzzz *a)
82
return true;
83
}
19
}
84
20
85
-static bool trans_DOT_zzxw(DisasContext *s, arg_DOT_zzxw *a)
21
/*
86
+static bool do_zzxz_ool(DisasContext *s, arg_rrxr_esz *a,
22
- * Note that cpu_arm_get/set_sve_vq cannot use the simpler
87
+ gen_helper_gvec_4 *fn)
23
- * object_property_add_bool interface because they make use
24
- * of the contents of "name" to determine which bit on which
25
- * to operate.
26
+ * Note that cpu_arm_{get,set}_vq cannot use the simpler
27
+ * object_property_add_bool interface because they make use of the
28
+ * contents of "name" to determine which bit on which to operate.
29
*/
30
-static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
31
- void *opaque, Error **errp)
32
+static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
33
+ void *opaque, Error **errp)
88
{
34
{
89
- static gen_helper_gvec_4 * const fns[2][2] = {
35
ARMCPU *cpu = ARM_CPU(obj);
90
- { gen_helper_gvec_sdot_idx_b, gen_helper_gvec_sdot_idx_h },
36
+ ARMVQMap *vq_map = opaque;
91
- { gen_helper_gvec_udot_idx_b, gen_helper_gvec_udot_idx_h }
37
uint32_t vq = atoi(&name[3]) / 128;
92
- };
38
bool value;
93
-
39
94
+ if (fn == NULL) {
40
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
95
+ return false;
41
if (!cpu_isar_feature(aa64_sve, cpu)) {
96
+ }
42
value = false;
97
if (sve_access_check(s)) {
43
} else {
98
- gen_gvec_ool_zzzz(s, fns[a->u][a->sz], a->rd, a->rn, a->rm,
44
- value = extract32(cpu->sve_vq.map, vq - 1, 1);
99
- a->ra, a->index);
45
+ value = extract32(vq_map->map, vq - 1, 1);
100
+ gen_gvec_ool_zzzz(s, fn, a->rd, a->rn, a->rm, a->ra, a->index);
101
}
46
}
102
return true;
47
visit_type_bool(v, name, &value, errp);
103
}
48
}
104
49
105
+#define DO_RRXR(NAME, FUNC) \
50
-static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
106
+ static bool NAME(DisasContext *s, arg_rrxr_esz *a) \
51
- void *opaque, Error **errp)
107
+ { return do_zzxz_ool(s, a, FUNC); }
52
+static void cpu_arm_set_vq(Object *obj, Visitor *v, const char *name,
108
+
53
+ void *opaque, Error **errp)
109
+DO_RRXR(trans_SDOT_zzxw_s, gen_helper_gvec_sdot_idx_b)
110
+DO_RRXR(trans_SDOT_zzxw_d, gen_helper_gvec_sdot_idx_h)
111
+DO_RRXR(trans_UDOT_zzxw_s, gen_helper_gvec_udot_idx_b)
112
+DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
113
+
114
+#undef DO_RRXR
115
116
/*
117
*** SVE Floating Point Multiply-Add Indexed Group
118
*/
119
120
-static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
121
+static bool do_FMLA_zzxz(DisasContext *s, arg_rrxr_esz *a, bool sub)
122
{
54
{
123
static gen_helper_gvec_4_ptr * const fns[3] = {
55
- ARMCPU *cpu = ARM_CPU(obj);
124
gen_helper_gvec_fmla_idx_h,
56
+ ARMVQMap *vq_map = opaque;
125
@@ -XXX,XX +XXX,XX @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
57
uint32_t vq = atoi(&name[3]) / 128;
126
vec_full_reg_offset(s, a->rn),
58
bool value;
127
vec_full_reg_offset(s, a->rm),
59
128
vec_full_reg_offset(s, a->ra),
60
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
129
- status, vsz, vsz, (a->index << 1) | a->sub,
61
return;
130
+ status, vsz, vsz, (a->index << 1) | sub,
131
fns[a->esz - 1]);
132
tcg_temp_free_ptr(status);
133
}
62
}
134
return true;
63
64
- cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
65
- cpu->sve_vq.init |= 1 << (vq - 1);
66
+ vq_map->map = deposit32(vq_map->map, vq - 1, 1, value);
67
+ vq_map->init |= 1 << (vq - 1);
135
}
68
}
136
69
137
+static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
70
static bool cpu_arm_get_sve(Object *obj, Error **errp)
138
+{
71
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
139
+ return do_FMLA_zzxz(s, a, false);
72
140
+}
73
void aarch64_add_sve_properties(Object *obj)
141
+
74
{
142
+static bool trans_FMLS_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
75
+ ARMCPU *cpu = ARM_CPU(obj);
143
+{
76
uint32_t vq;
144
+ return do_FMLA_zzxz(s, a, true);
77
145
+}
78
object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve);
146
+
79
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
147
/*
80
for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
148
*** SVE Floating Point Multiply Indexed Group
81
char name[8];
149
*/
82
sprintf(name, "sve%d", vq * 128);
83
- object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
84
- cpu_arm_set_sve_vq, NULL, NULL);
85
+ object_property_add(obj, name, "bool", cpu_arm_get_vq,
86
+ cpu_arm_set_vq, NULL, &cpu->sve_vq);
87
}
88
89
#ifdef CONFIG_USER_ONLY
150
--
90
--
151
2.20.1
91
2.25.1
152
153
diff view generated by jsdifflib
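Note how the property accessors above recover the vector length from
the property name itself: the properties are registered as "sve<N>"
with N in bits, so atoi(&name[3]) parses the numeric suffix and
dividing by 128 yields the vq. A standalone sketch of that parsing,
separate from the in-tree QOM plumbing:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of the name-to-vq mapping used by the sve<N> properties:
     * "sve128" -> vq 1, "sve256" -> vq 2, ... (bits / 128 = quadwords). */
    int main(void)
    {
        const char *name = "sve256";
        unsigned vq = atoi(&name[3]) / 128;
        printf("%s -> vq %u (%u bytes)\n", name, vq, vq * 16);
        return 0;
    }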
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Rename tlb_flush_page_bits_locked() -> tlb_flush_range_locked(), and
3
Rename from cpu_arm_{get,set}_sve_default_vec_len,
4
have callers pass a length argument (currently TARGET_PAGE_SIZE) via
4
and take the pointer to default_vq from opaque.
5
the TLBFlushPageBitsByMMUIdxData structure.
6
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20220620175235.60881-15-richard.henderson@linaro.org
9
Message-id: 20210509151618.2331764-3-f4bug@amsat.org
10
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
11
[PMD: Split from bigger patch]
12
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
10
---
16
accel/tcg/cputlb.c | 48 +++++++++++++++++++++++++++++++---------------
11
target/arm/cpu64.c | 27 ++++++++++++++-------------
17
1 file changed, 33 insertions(+), 15 deletions(-)
12
1 file changed, 14 insertions(+), 13 deletions(-)
18
13
19
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
14
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/accel/tcg/cputlb.c
16
--- a/target/arm/cpu64.c
22
+++ b/accel/tcg/cputlb.c
17
+++ b/target/arm/cpu64.c
23
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
18
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
24
tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
19
25
}
20
#ifdef CONFIG_USER_ONLY
26
21
/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
27
-static void tlb_flush_page_bits_locked(CPUArchState *env, int midx,
22
-static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
28
- target_ulong page, unsigned bits)
23
- const char *name, void *opaque,
29
+static void tlb_flush_range_locked(CPUArchState *env, int midx,
24
- Error **errp)
30
+ target_ulong addr, target_ulong len,
25
+static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
31
+ unsigned bits)
26
+ const char *name, void *opaque,
27
+ Error **errp)
32
{
28
{
33
CPUTLBDesc *d = &env_tlb(env)->d[midx];
29
- ARMCPU *cpu = ARM_CPU(obj);
34
CPUTLBDescFast *f = &env_tlb(env)->f[midx];
30
+ uint32_t *ptr_default_vq = opaque;
35
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_locked(CPUArchState *env, int midx,
31
int32_t default_len, default_vq, remainder;
36
* If @bits is smaller than the tlb size, there may be multiple entries
32
37
* within the TLB; otherwise all addresses that match under @mask hit
33
if (!visit_type_int32(v, name, &default_len, errp)) {
38
* the same TLB entry.
34
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
39
- *
35
40
* TODO: Perhaps allow bits to be a few bits less than the size.
36
/* Undocumented, but the kernel allows -1 to indicate "maximum". */
41
* For now, just flush the entire TLB.
37
if (default_len == -1) {
42
+ *
38
- cpu->sve_default_vq = ARM_MAX_VQ;
43
+ * If @len is larger than the tlb size, then it will take longer to
39
+ *ptr_default_vq = ARM_MAX_VQ;
44
+ * test all of the entries in the TLB than it will to flush it all.
45
*/
46
- if (mask < f->mask) {
47
+ if (mask < f->mask || len > f->mask) {
48
tlb_debug("forcing full flush midx %d ("
49
- TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
50
- midx, page, mask);
51
+ TARGET_FMT_lx "/" TARGET_FMT_lx "+" TARGET_FMT_lx ")\n",
52
+ midx, addr, mask, len);
53
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
54
return;
40
return;
55
}
41
}
56
42
57
- /* Check if we need to flush due to large pages. */
43
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
58
- if ((page & d->large_page_mask) == d->large_page_addr) {
59
+ /*
60
+ * Check if we need to flush due to large pages.
61
+ * Because large_page_mask contains all 1's from the msb,
62
+ * we only need to test the end of the range.
63
+ */
64
+ if (((addr + len - 1) & d->large_page_mask) == d->large_page_addr) {
65
tlb_debug("forcing full flush midx %d ("
66
TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
67
midx, d->large_page_addr, d->large_page_mask);
68
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_locked(CPUArchState *env, int midx,
69
return;
44
return;
70
}
45
}
71
46
72
- if (tlb_flush_entry_mask_locked(tlb_entry(env, midx, page), page, mask)) {
47
- cpu->sve_default_vq = default_vq;
73
- tlb_n_used_entries_dec(env, midx);
48
+ *ptr_default_vq = default_vq;
74
+ for (target_ulong i = 0; i < len; i += TARGET_PAGE_SIZE) {
75
+ target_ulong page = addr + i;
76
+ CPUTLBEntry *entry = tlb_entry(env, midx, page);
77
+
78
+ if (tlb_flush_entry_mask_locked(entry, page, mask)) {
79
+ tlb_n_used_entries_dec(env, midx);
80
+ }
81
+ tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
82
}
83
- tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
84
}
49
}
85
50
86
typedef struct {
51
-static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
87
target_ulong addr;
52
- const char *name, void *opaque,
88
+ target_ulong len;
53
- Error **errp)
89
uint16_t idxmap;
54
+static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
90
uint16_t bits;
55
+ const char *name, void *opaque,
91
} TLBFlushPageBitsByMMUIdxData;
56
+ Error **errp)
92
@@ -XXX,XX +XXX,XX @@ tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
57
{
93
58
- ARMCPU *cpu = ARM_CPU(obj);
94
assert_cpu_is_self(cpu);
59
- int32_t value = cpu->sve_default_vq * 16;
95
60
+ uint32_t *ptr_default_vq = opaque;
96
- tlb_debug("page addr:" TARGET_FMT_lx "/%u mmu_map:0x%x\n",
61
+ int32_t value = *ptr_default_vq * 16;
97
- d.addr, d.bits, d.idxmap);
62
98
+ tlb_debug("range:" TARGET_FMT_lx "/%u+" TARGET_FMT_lx " mmu_map:0x%x\n",
63
visit_type_int32(v, name, &value, errp);
99
+ d.addr, d.bits, d.len, d.idxmap);
100
101
qemu_spin_lock(&env_tlb(env)->c.lock);
102
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
103
if ((d.idxmap >> mmu_idx) & 1) {
104
- tlb_flush_page_bits_locked(env, mmu_idx, d.addr, d.bits);
105
+ tlb_flush_range_locked(env, mmu_idx, d.addr, d.len, d.bits);
106
}
107
}
108
qemu_spin_unlock(&env_tlb(env)->c.lock);
109
110
- tb_flush_jmp_cache(cpu, d.addr);
111
+ for (target_ulong i = 0; i < d.len; i += TARGET_PAGE_SIZE) {
112
+ tb_flush_jmp_cache(cpu, d.addr + i);
113
+ }
114
}
64
}
115
65
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
116
static bool encode_pbm_to_runon(run_on_cpu_data *out,
66
#ifdef CONFIG_USER_ONLY
117
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
67
/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
118
68
object_property_add(obj, "sve-default-vector-length", "int32",
119
/* This should already be page aligned */
69
- cpu_arm_get_sve_default_vec_len,
120
d.addr = addr & TARGET_PAGE_MASK;
70
- cpu_arm_set_sve_default_vec_len, NULL, NULL);
121
+ d.len = TARGET_PAGE_SIZE;
71
+ cpu_arm_get_default_vec_len,
122
d.idxmap = idxmap;
72
+ cpu_arm_set_default_vec_len, NULL,
123
d.bits = bits;
73
+ &cpu->sve_default_vq);
124
74
#endif
125
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
75
}
126
127
/* This should already be page aligned */
128
d.addr = addr & TARGET_PAGE_MASK;
129
+ d.len = TARGET_PAGE_SIZE;
130
d.idxmap = idxmap;
131
d.bits = bits;
132
133
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
134
135
/* This should already be page aligned */
136
d.addr = addr & TARGET_PAGE_MASK;
137
+ d.len = TARGET_PAGE_SIZE;
138
d.idxmap = idxmap;
139
d.bits = bits;
140
76
141
--
77
--
142
2.20.1
78
2.25.1
143
144
diff view generated by jsdifflib
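The key change in the cputlb patch above is that the flush now covers a
byte range [addr, addr + len) rather than a single page, walking it one
TARGET_PAGE_SIZE step at a time (with the large-page and full-flush
fallbacks shown in the diff). A standalone sketch of that per-page walk,
with made-up names and a fixed page size standing in for the real ones:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 0x1000u   /* stand-in for TARGET_PAGE_SIZE */

    /* Visit every page in [addr, addr + len) in page-sized steps. */
    static void flush_range(uint64_t addr, uint64_t len)
    {
        for (uint64_t i = 0; i < len; i += PAGE_SIZE) {
            printf("flush page 0x%" PRIx64 "\n", addr + i);
        }
    }

    int main(void)
    {
        flush_range(0x40000000, 3 * PAGE_SIZE);
        return 0;
    }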
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Drop the aa32-only inline fallbacks,
4
and just use a couple of ifdefs.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-18-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-16-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/cpu.h | 5 +++
11
target/arm/cpu.h | 6 ------
9
target/arm/helper-sve.h | 15 ++++++++
12
target/arm/internals.h | 3 +++
10
target/arm/sve.decode | 6 ++++
13
target/arm/cpu.c | 2 ++
11
target/arm/sve_helper.c | 73 ++++++++++++++++++++++++++++++++++++++
14
3 files changed, 5 insertions(+), 6 deletions(-)
12
target/arm/translate-sve.c | 36 +++++++++++++++++++
13
5 files changed, 135 insertions(+)
14
15
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
18
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
20
@@ -XXX,XX +XXX,XX @@ typedef struct {
20
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
21
21
}
22
#ifdef TARGET_AARCH64
22
23
# define ARM_MAX_VQ 16
23
+static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
24
-void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
24
+{
25
-void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
25
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
26
-void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
26
+}
27
#else
27
+
28
# define ARM_MAX_VQ 1
28
/*
29
-static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
29
* Feature tests for "does this exist in either 32-bit or 64-bit?"
30
-static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { }
30
*/
31
-static inline void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp) { }
31
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
32
#endif
33
34
typedef struct ARMVectorReg {
35
diff --git a/target/arm/internals.h b/target/arm/internals.h
32
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/helper-sve.h
37
--- a/target/arm/internals.h
34
+++ b/target/arm/helper-sve.h
38
+++ b/target/arm/internals.h
35
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_eoril_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
@@ -XXX,XX +XXX,XX @@ int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg);
36
DEF_HELPER_FLAGS_4(sve2_eoril_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
37
DEF_HELPER_FLAGS_4(sve2_eoril_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
38
DEF_HELPER_FLAGS_4(sve2_eoril_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
39
+
43
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
40
+DEF_HELPER_FLAGS_4(sve2_bext_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
41
+DEF_HELPER_FLAGS_4(sve2_bext_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
+void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
42
+DEF_HELPER_FLAGS_4(sve2_bext_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
#endif
43
+DEF_HELPER_FLAGS_4(sve2_bext_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
47
44
+
48
#ifdef CONFIG_USER_ONLY
45
+DEF_HELPER_FLAGS_4(sve2_bdep_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
+DEF_HELPER_FLAGS_4(sve2_bdep_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(sve2_bdep_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_4(sve2_bdep_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
+
50
+DEF_HELPER_FLAGS_4(sve2_bgrp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_4(sve2_bgrp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_4(sve2_bgrp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_4(sve2_bgrp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
54
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
55
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/sve.decode
51
--- a/target/arm/cpu.c
57
+++ b/target/arm/sve.decode
52
+++ b/target/arm/cpu.c
58
@@ -XXX,XX +XXX,XX @@ USHLLT 01000101 .. 0 ..... 1010 11 ..... ..... @rd_rn_tszimm_shl
53
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
59
60
EORBT 01000101 .. 0 ..... 10010 0 ..... ..... @rd_rn_rm
61
EORTB 01000101 .. 0 ..... 10010 1 ..... ..... @rd_rn_rm
62
+
63
+## SVE2 bitwise permute
64
+
65
+BEXT 01000101 .. 0 ..... 1011 00 ..... ..... @rd_rn_rm
66
+BDEP 01000101 .. 0 ..... 1011 01 ..... ..... @rd_rn_rm
67
+BGRP 01000101 .. 0 ..... 1011 10 ..... ..... @rd_rn_rm
68
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/sve_helper.c
71
+++ b/target/arm/sve_helper.c
72
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
73
74
#undef DO_ZZZ_NTB
75
76
+#define DO_BITPERM(NAME, TYPE, OP) \
77
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
78
+{ \
79
+ intptr_t i, opr_sz = simd_oprsz(desc); \
80
+ for (i = 0; i < opr_sz; i += sizeof(TYPE)) { \
81
+ TYPE nn = *(TYPE *)(vn + i); \
82
+ TYPE mm = *(TYPE *)(vm + i); \
83
+ *(TYPE *)(vd + i) = OP(nn, mm, sizeof(TYPE) * 8); \
84
+ } \
85
+}
86
+
87
+static uint64_t bitextract(uint64_t data, uint64_t mask, int n)
88
+{
89
+ uint64_t res = 0;
90
+ int db, rb = 0;
91
+
92
+ for (db = 0; db < n; ++db) {
93
+ if ((mask >> db) & 1) {
94
+ res |= ((data >> db) & 1) << rb;
95
+ ++rb;
96
+ }
97
+ }
98
+ return res;
99
+}
100
+
101
+DO_BITPERM(sve2_bext_b, uint8_t, bitextract)
102
+DO_BITPERM(sve2_bext_h, uint16_t, bitextract)
103
+DO_BITPERM(sve2_bext_s, uint32_t, bitextract)
104
+DO_BITPERM(sve2_bext_d, uint64_t, bitextract)
105
+
106
+static uint64_t bitdeposit(uint64_t data, uint64_t mask, int n)
107
+{
108
+ uint64_t res = 0;
109
+ int rb, db = 0;
110
+
111
+ for (rb = 0; rb < n; ++rb) {
112
+ if ((mask >> rb) & 1) {
113
+ res |= ((data >> db) & 1) << rb;
114
+ ++db;
115
+ }
116
+ }
117
+ return res;
118
+}
119
+
120
+DO_BITPERM(sve2_bdep_b, uint8_t, bitdeposit)
121
+DO_BITPERM(sve2_bdep_h, uint16_t, bitdeposit)
122
+DO_BITPERM(sve2_bdep_s, uint32_t, bitdeposit)
123
+DO_BITPERM(sve2_bdep_d, uint64_t, bitdeposit)
124
+
125
+static uint64_t bitgroup(uint64_t data, uint64_t mask, int n)
126
+{
127
+ uint64_t resm = 0, resu = 0;
128
+ int db, rbm = 0, rbu = 0;
129
+
130
+ for (db = 0; db < n; ++db) {
131
+ uint64_t val = (data >> db) & 1;
132
+ if ((mask >> db) & 1) {
133
+ resm |= val << rbm++;
134
+ } else {
135
+ resu |= val << rbu++;
136
+ }
137
+ }
138
+
139
+ return resm | (resu << rbm);
140
+}
141
+
142
+DO_BITPERM(sve2_bgrp_b, uint8_t, bitgroup)
143
+DO_BITPERM(sve2_bgrp_h, uint16_t, bitgroup)
144
+DO_BITPERM(sve2_bgrp_s, uint32_t, bitgroup)
145
+DO_BITPERM(sve2_bgrp_d, uint64_t, bitgroup)
146
+
147
+#undef DO_BITPERM
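/*
 * Worked illustration of the three helpers above, using an arbitrary
 * 8-bit element with data = 0b10110110 and mask = 0b11110000:
 *   bitextract (BEXT): data bits selected by mask are packed into the
 *                      low end                          -> 0b00001011
 *   bitdeposit (BDEP): the low data bits are scattered into the mask
 *                      positions                        -> 0b01100000
 *   bitgroup   (BGRP): selected bits are packed at the bottom, the
 *                      remaining bits above them        -> 0b01101011
 */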
148
+
149
#define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
150
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
151
{ \
152
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate-sve.c
155
+++ b/target/arm/translate-sve.c
156
@@ -XXX,XX +XXX,XX @@ static bool trans_USHLLT(DisasContext *s, arg_rri_esz *a)
157
{
54
{
158
return do_sve2_shll_tb(s, a, true, true);
55
Error *local_err = NULL;
159
}
56
160
+
57
+#ifdef TARGET_AARCH64
161
+static bool trans_BEXT(DisasContext *s, arg_rrr_esz *a)
58
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
162
+{
59
arm_cpu_sve_finalize(cpu, &local_err);
163
+ static gen_helper_gvec_3 * const fns[4] = {
60
if (local_err != NULL) {
164
+ gen_helper_sve2_bext_b, gen_helper_sve2_bext_h,
61
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
165
+ gen_helper_sve2_bext_s, gen_helper_sve2_bext_d,
62
return;
166
+ };
63
}
167
+ if (!dc_isar_feature(aa64_sve2_bitperm, s)) {
64
}
168
+ return false;
65
+#endif
169
+ }
66
170
+ return do_sve2_zzw_ool(s, a, fns[a->esz], 0);
67
if (kvm_enabled()) {
171
+}
68
kvm_arm_steal_time_finalize(cpu, &local_err);
172
+
173
+static bool trans_BDEP(DisasContext *s, arg_rrr_esz *a)
174
+{
175
+ static gen_helper_gvec_3 * const fns[4] = {
176
+ gen_helper_sve2_bdep_b, gen_helper_sve2_bdep_h,
177
+ gen_helper_sve2_bdep_s, gen_helper_sve2_bdep_d,
178
+ };
179
+ if (!dc_isar_feature(aa64_sve2_bitperm, s)) {
180
+ return false;
181
+ }
182
+ return do_sve2_zzw_ool(s, a, fns[a->esz], 0);
183
+}
184
+
185
+static bool trans_BGRP(DisasContext *s, arg_rrr_esz *a)
186
+{
187
+ static gen_helper_gvec_3 * const fns[4] = {
188
+ gen_helper_sve2_bgrp_b, gen_helper_sve2_bgrp_h,
189
+ gen_helper_sve2_bgrp_s, gen_helper_sve2_bgrp_d,
190
+ };
191
+ if (!dc_isar_feature(aa64_sve2_bitperm, s)) {
192
+ return false;
193
+ }
194
+ return do_sve2_zzw_ool(s, a, fns[a->esz], 0);
195
+}
196
--
69
--
197
2.20.1
70
2.25.1
198
199
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
These functions are not used outside cpu64.c,
4
so make them static.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-15-richard.henderson@linaro.org
8
Message-id: 20220620175235.60881-17-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/cpu.h | 10 ++++++++++
11
target/arm/cpu.h | 3 ---
9
target/arm/helper-sve.h | 1 +
12
target/arm/cpu64.c | 4 ++--
10
target/arm/sve.decode | 2 ++
13
2 files changed, 2 insertions(+), 5 deletions(-)
11
target/arm/translate-sve.c | 22 ++++++++++++++++++++++
12
target/arm/vec_helper.c | 24 ++++++++++++++++++++++++
13
5 files changed, 59 insertions(+)
14
14
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
19
@@ -XXX,XX +XXX,XX @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
20
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
20
void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
21
void aarch64_sve_change_el(CPUARMState *env, int old_el,
22
int new_el, bool el0_a64);
23
-void aarch64_add_sve_properties(Object *obj);
24
-void aarch64_add_pauth_properties(Object *obj);
25
void arm_reset_sve_state(CPUARMState *env);
26
27
/*
28
@@ -XXX,XX +XXX,XX @@ static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { }
29
static inline void aarch64_sve_change_el(CPUARMState *env, int o,
30
int n, bool a)
31
{ }
32
-static inline void aarch64_add_sve_properties(Object *obj) { }
33
#endif
34
35
void aarch64_sync_32_to_64(CPUARMState *env);
36
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/cpu64.c
39
+++ b/target/arm/cpu64.c
40
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
21
}
41
}
22
23
+static inline bool isar_feature_aa64_sve2_aes(const ARMISARegisters *id)
24
+{
25
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) != 0;
26
+}
27
+
28
+static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
29
+{
30
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
31
+}
32
+
33
/*
34
* Feature tests for "does this exist in either 32-bit or 64-bit?"
35
*/
36
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/helper-sve.h
39
+++ b/target/arm/helper-sve.h
40
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_umull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
43
DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_4(sve2_pmull_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/sve.decode
48
+++ b/target/arm/sve.decode
49
@@ -XXX,XX +XXX,XX @@ USUBWT 01000101 .. 0 ..... 010 111 ..... ..... @rd_rn_rm
50
51
SQDMULLB_zzz 01000101 .. 0 ..... 011 000 ..... ..... @rd_rn_rm
52
SQDMULLT_zzz 01000101 .. 0 ..... 011 001 ..... ..... @rd_rn_rm
53
+PMULLB 01000101 .. 0 ..... 011 010 ..... ..... @rd_rn_rm
54
+PMULLT 01000101 .. 0 ..... 011 011 ..... ..... @rd_rn_rm
55
SMULLB_zzz 01000101 .. 0 ..... 011 100 ..... ..... @rd_rn_rm
56
SMULLT_zzz 01000101 .. 0 ..... 011 101 ..... ..... @rd_rn_rm
57
UMULLB_zzz 01000101 .. 0 ..... 011 110 ..... ..... @rd_rn_rm
58
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-sve.c
61
+++ b/target/arm/translate-sve.c
62
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
63
DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
64
DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
65
66
+static bool do_trans_pmull(DisasContext *s, arg_rrr_esz *a, bool sel)
67
+{
68
+ static gen_helper_gvec_3 * const fns[4] = {
69
+ gen_helper_gvec_pmull_q, gen_helper_sve2_pmull_h,
70
+ NULL, gen_helper_sve2_pmull_d,
71
+ };
72
+ if (a->esz == 0 && !dc_isar_feature(aa64_sve2_pmull128, s)) {
73
+ return false;
74
+ }
75
+ return do_sve2_zzw_ool(s, a, fns[a->esz], sel);
76
+}
77
+
78
+static bool trans_PMULLB(DisasContext *s, arg_rrr_esz *a)
79
+{
80
+ return do_trans_pmull(s, a, false);
81
+}
82
+
83
+static bool trans_PMULLT(DisasContext *s, arg_rrr_esz *a)
84
+{
85
+ return do_trans_pmull(s, a, true);
86
+}
87
+
88
#define DO_SVE2_ZZZ_WTB(NAME, name, SEL2) \
89
static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
90
{ \
91
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/vec_helper.c
94
+++ b/target/arm/vec_helper.c
95
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_pmull_h)(void *vd, void *vn, void *vm, uint32_t desc)
96
d[i] = pmull_h(nn, mm);
97
}
98
}
99
+
100
+static uint64_t pmull_d(uint64_t op1, uint64_t op2)
101
+{
102
+ uint64_t result = 0;
103
+ int i;
104
+
105
+ for (i = 0; i < 32; ++i) {
106
+ uint64_t mask = -((op1 >> i) & 1);
107
+ result ^= (op2 << i) & mask;
108
+ }
109
+ return result;
110
+}
111
+
112
+void HELPER(sve2_pmull_d)(void *vd, void *vn, void *vm, uint32_t desc)
113
+{
114
+ intptr_t sel = H4(simd_data(desc));
115
+ intptr_t i, opr_sz = simd_oprsz(desc);
116
+ uint32_t *n = vn, *m = vm;
117
+ uint64_t *d = vd;
118
+
119
+ for (i = 0; i < opr_sz / 8; ++i) {
120
+ d[i] = pmull_d(n[2 * i + sel], m[2 * i + sel]);
121
+ }
122
+}
123
#endif
42
#endif
124
43
125
#define DO_CMP0(NAME, TYPE, OP) \
44
-void aarch64_add_sve_properties(Object *obj)
45
+static void aarch64_add_sve_properties(Object *obj)
46
{
47
ARMCPU *cpu = ARM_CPU(obj);
48
uint32_t vq;
49
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_pauth_property =
50
static Property arm_cpu_pauth_impdef_property =
51
DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);
52
53
-void aarch64_add_pauth_properties(Object *obj)
54
+static void aarch64_add_pauth_properties(Object *obj)
55
{
56
ARMCPU *cpu = ARM_CPU(obj);
57
126
--
58
--
127
2.20.1
59
2.25.1
128
129
1
From: Stephen Long <steplong@quicinc.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Mirror the properties for SVE. The main difference is
4
that any arbitrary set of powers of 2 may be supported,
5
and not the stricter constraints that apply to SVE.
6
7
Include a property to control FEAT_SME_FA64, as failing
8
to restrict the runtime to the proper subset of insns
9
could be a major point for bugs.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
13
Message-id: 20220620175235.60881-18-richard.henderson@linaro.org
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-72-richard.henderson@linaro.org
7
Message-Id: <20200428144352.9275-1-steplong@quicinc.com>
8
[rth: rearrange the macros a little and rebase]
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
15
---
12
target/arm/helper-sve.h | 10 +++++
16
docs/system/arm/cpu-features.rst | 56 +++++++++++++++
13
target/arm/sve.decode | 5 +++
17
target/arm/cpu.h | 2 +
14
target/arm/sve_helper.c | 90 ++++++++++++++++++++++++++++++--------
18
target/arm/internals.h | 1 +
15
target/arm/translate-sve.c | 33 ++++++++++++++
19
target/arm/cpu.c | 14 +++-
16
4 files changed, 119 insertions(+), 19 deletions(-)
20
target/arm/cpu64.c | 114 +++++++++++++++++++++++++++++--
21
5 files changed, 180 insertions(+), 7 deletions(-)
17
22
18
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
23
diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
19
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-sve.h
25
--- a/docs/system/arm/cpu-features.rst
21
+++ b/target/arm/helper-sve.h
26
+++ b/docs/system/arm/cpu-features.rst
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
23
DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
lengths is to explicitly enable each desired length. Therefore only
24
DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
examples (1), (4), and (6) exhibit recommended uses of the properties.
25
30
26
+DEF_HELPER_FLAGS_5(sve2_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
+SME CPU Property Examples
27
+DEF_HELPER_FLAGS_5(sve2_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
+-------------------------
28
+DEF_HELPER_FLAGS_5(sve2_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
+
29
+DEF_HELPER_FLAGS_5(sve2_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
34
+ 1) Disable SME::
30
+
35
+
31
+DEF_HELPER_FLAGS_4(sve2_tbx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+ $ qemu-system-aarch64 -M virt -cpu max,sme=off
32
+DEF_HELPER_FLAGS_4(sve2_tbx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+
33
+DEF_HELPER_FLAGS_4(sve2_tbx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+ 2) Implicitly enable all vector lengths for the ``max`` CPU type::
34
+DEF_HELPER_FLAGS_4(sve2_tbx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+
35
+
40
+ $ qemu-system-aarch64 -M virt -cpu max
36
DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
+
37
DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
42
+ 3) Only enable the 256-bit vector length::
38
DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
43
+
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
44
+ $ qemu-system-aarch64 -M virt -cpu max,sme256=on
40
index XXXXXXX..XXXXXXX 100644
45
+
41
--- a/target/arm/sve.decode
46
+ 4) Enable the 256-bit and 1024-bit vector lengths::
42
+++ b/target/arm/sve.decode
47
+
43
@@ -XXX,XX +XXX,XX @@ TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
48
+ $ qemu-system-aarch64 -M virt -cpu max,sme256=on,sme1024=on
44
# SVE unpack vector elements
49
+
45
UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
50
+ 5) Disable the 512-bit vector length. This results in all the other
46
51
+ lengths supported by ``max`` defaulting to enabled
47
+# SVE2 Table Lookup (three sources)
52
+ (128, 256, 1024 and 2048)::
48
+
53
+
49
+TBL_sve2 00000101 .. 1 ..... 001010 ..... ..... @rd_rn_rm
54
+ $ qemu-system-aarch64 -M virt -cpu max,sme512=off
50
+TBX 00000101 .. 1 ..... 001011 ..... ..... @rd_rn_rm
55
+
51
+
56
SVE User-mode Default Vector Length Property
52
### SVE Permute - Predicates Group
57
--------------------------------------------
53
58
54
# SVE permute predicate elements
59
@@ -XXX,XX +XXX,XX @@ length supported by QEMU is 256.
55
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
60
56
index XXXXXXX..XXXXXXX 100644
61
If this property is set to ``-1`` then the default vector length
57
--- a/target/arm/sve_helper.c
62
is set to the maximum possible length.
58
+++ b/target/arm/sve_helper.c
63
+
59
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
64
+SME CPU Properties
60
}
65
+==================
66
+
67
+The SME CPU properties are much like the SVE properties: ``sme`` is
68
+used to enable or disable the entire SME feature, and ``sme<N>`` is
69
+used to enable or disable specific vector lengths. Finally,
70
+``sme_fa64`` is used to enable or disable ``FEAT_SME_FA64``, which
71
+allows execution of the "full a64" instruction set while Streaming
72
+SVE mode is enabled.
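A minimal illustration of combining these switches on the command line
(the property names are the ones added by this patch; the invocation
itself is only an example)::

  $ qemu-system-aarch64 -M virt -cpu max,sme=on,sme_fa64=off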
73
+
74
+SME is not supported by KVM at this time.
75
+
76
+At least one vector length must be enabled when ``sme`` is enabled,
77
+and all vector lengths must be powers of 2. The maximum vector
78
+length supported by QEMU is 2048 bits. Otherwise, there are no
79
+additional constraints on the set of vector lengths supported by SME.
80
+
81
+SME User-mode Default Vector Length Property
82
+--------------------------------------------
83
+
84
+For qemu-aarch64, the cpu property ``sme-default-vector-length=N`` is
85
+defined to mirror the Linux kernel parameter file
86
+``/proc/sys/abi/sme_default_vector_length``. The default length, ``N``,
87
+is in units of bytes and must be between 16 and 8192.
88
+If not specified, the default vector length is 32.
89
+
90
+As with ``sve-default-vector-length``, if the default length is larger
91
+than the maximum vector length enabled, the actual vector length will
92
+be reduced. If this property is set to ``-1`` then the default vector
93
+length is set to the maximum possible length.
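For instance, a 64-byte (512-bit) default streaming vector length can be
requested when launching a user-mode binary (the program name below is
just a placeholder)::

  $ qemu-aarch64 -cpu max,sme-default-vector-length=64 ./a.out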
94
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
95
index XXXXXXX..XXXXXXX 100644
96
--- a/target/arm/cpu.h
97
+++ b/target/arm/cpu.h
98
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
99
#ifdef CONFIG_USER_ONLY
100
/* Used to set the default vector length at process start. */
101
uint32_t sve_default_vq;
102
+ uint32_t sme_default_vq;
103
#endif
104
105
ARMVQMap sve_vq;
106
+ ARMVQMap sme_vq;
107
108
/* Generic timer counter frequency, in Hz */
109
uint64_t gt_cntfrq_hz;
110
diff --git a/target/arm/internals.h b/target/arm/internals.h
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/internals.h
113
+++ b/target/arm/internals.h
114
@@ -XXX,XX +XXX,XX @@ int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
115
int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
116
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
117
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
118
+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp);
119
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
120
void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
121
#endif
122
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
123
index XXXXXXX..XXXXXXX 100644
124
--- a/target/arm/cpu.c
125
+++ b/target/arm/cpu.c
126
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
127
#ifdef CONFIG_USER_ONLY
128
# ifdef TARGET_AARCH64
129
/*
130
- * The linux kernel defaults to 512-bit vectors, when sve is supported.
131
- * See documentation for /proc/sys/abi/sve_default_vector_length, and
132
- * our corresponding sve-default-vector-length cpu property.
133
+ * The linux kernel defaults to 512-bit for SVE, and 256-bit for SME.
134
+ * These values were chosen to fit within the default signal frame.
135
+ * See documentation for /proc/sys/abi/{sve,sme}_default_vector_length,
136
+ * and our corresponding cpu property.
137
*/
138
cpu->sve_default_vq = 4;
139
+ cpu->sme_default_vq = 2;
140
# endif
141
#else
142
/* Our inbound IRQ and FIQ lines */
143
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
144
return;
145
}
146
147
+ arm_cpu_sme_finalize(cpu, &local_err);
148
+ if (local_err != NULL) {
149
+ error_propagate(errp, local_err);
150
+ return;
151
+ }
152
+
153
arm_cpu_pauth_finalize(cpu, &local_err);
154
if (local_err != NULL) {
155
error_propagate(errp, local_err);
156
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/cpu64.c
159
+++ b/target/arm/cpu64.c
160
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
161
ARMCPU *cpu = ARM_CPU(obj);
162
ARMVQMap *vq_map = opaque;
163
uint32_t vq = atoi(&name[3]) / 128;
164
+ bool sve = vq_map == &cpu->sve_vq;
165
bool value;
166
167
- /* All vector lengths are disabled when SVE is off. */
168
- if (!cpu_isar_feature(aa64_sve, cpu)) {
169
+ /* All vector lengths are disabled when feature is off. */
170
+ if (sve
171
+ ? !cpu_isar_feature(aa64_sve, cpu)
172
+ : !cpu_isar_feature(aa64_sme, cpu)) {
173
value = false;
174
} else {
175
value = extract32(vq_map->map, vq - 1, 1);
176
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
177
cpu->isar.id_aa64pfr0 = t;
61
}
178
}
62
179
63
-#define DO_TBL(NAME, TYPE, H) \
180
+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp)
64
-void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
181
+{
65
-{ \
182
+ uint32_t vq_map = cpu->sme_vq.map;
66
- intptr_t i, opr_sz = simd_oprsz(desc); \
183
+ uint32_t vq_init = cpu->sme_vq.init;
67
- uintptr_t elem = opr_sz / sizeof(TYPE); \
184
+ uint32_t vq_supported = cpu->sme_vq.supported;
68
- TYPE *d = vd, *n = vn, *m = vm; \
185
+ uint32_t vq;
69
- ARMVectorReg tmp; \
186
+
70
- if (unlikely(vd == vn)) { \
187
+ if (vq_map == 0) {
71
- n = memcpy(&tmp, vn, opr_sz); \
188
+ if (!cpu_isar_feature(aa64_sme, cpu)) {
72
- } \
189
+ cpu->isar.id_aa64smfr0 = 0;
73
- for (i = 0; i < elem; i++) { \
190
+ return;
74
- TYPE j = m[H(i)]; \
191
+ }
75
- d[H(i)] = j < elem ? n[H(j)] : 0; \
192
+
76
- } \
193
+ /* TODO: KVM will require limitations via SMCR_EL2. */
77
+typedef void tb_impl_fn(void *, void *, void *, void *, uintptr_t, bool);
194
+ vq_map = vq_supported & ~vq_init;
78
+
195
+
79
+static inline void do_tbl1(void *vd, void *vn, void *vm, uint32_t desc,
196
+ if (vq_map == 0) {
80
+ bool is_tbx, tb_impl_fn *fn)
197
+ vq = ctz32(vq_supported) + 1;
81
+{
198
+ error_setg(errp, "cannot disable sme%d", vq * 128);
82
+ ARMVectorReg scratch;
199
+ error_append_hint(errp, "All SME vector lengths are disabled.\n");
83
+ uintptr_t oprsz = simd_oprsz(desc);
200
+ error_append_hint(errp, "With SME enabled, at least one "
84
+
201
+ "vector length must be enabled.\n");
85
+ if (unlikely(vd == vn)) {
202
+ return;
86
+ vn = memcpy(&scratch, vn, oprsz);
203
+ }
204
+ } else {
205
+ if (!cpu_isar_feature(aa64_sme, cpu)) {
206
+ vq = 32 - clz32(vq_map);
207
+ error_setg(errp, "cannot enable sme%d", vq * 128);
208
+ error_append_hint(errp, "SME must be enabled to enable "
209
+ "vector lengths.\n");
210
+ error_append_hint(errp, "Add sme=on to the CPU property list.\n");
211
+ return;
212
+ }
213
+ /* TODO: KVM will require limitations via SMCR_EL2. */
87
+ }
214
+ }
88
+
215
+
89
+ fn(vd, vn, NULL, vm, oprsz, is_tbx);
216
+ cpu->sme_vq.map = vq_map;
217
+}
218
+
219
+static bool cpu_arm_get_sme(Object *obj, Error **errp)
220
+{
221
+ ARMCPU *cpu = ARM_CPU(obj);
222
+ return cpu_isar_feature(aa64_sme, cpu);
223
+}
224
+
225
+static void cpu_arm_set_sme(Object *obj, bool value, Error **errp)
226
+{
227
+ ARMCPU *cpu = ARM_CPU(obj);
228
+ uint64_t t;
229
+
230
+ t = cpu->isar.id_aa64pfr1;
231
+ t = FIELD_DP64(t, ID_AA64PFR1, SME, value);
232
+ cpu->isar.id_aa64pfr1 = t;
233
+}
234
+
235
+static bool cpu_arm_get_sme_fa64(Object *obj, Error **errp)
236
+{
237
+ ARMCPU *cpu = ARM_CPU(obj);
238
+ return cpu_isar_feature(aa64_sme, cpu) &&
239
+ cpu_isar_feature(aa64_sme_fa64, cpu);
240
+}
241
+
242
+static void cpu_arm_set_sme_fa64(Object *obj, bool value, Error **errp)
243
+{
244
+ ARMCPU *cpu = ARM_CPU(obj);
245
+ uint64_t t;
246
+
247
+ t = cpu->isar.id_aa64smfr0;
248
+ t = FIELD_DP64(t, ID_AA64SMFR0, FA64, value);
249
+ cpu->isar.id_aa64smfr0 = t;
250
+}
251
+
252
#ifdef CONFIG_USER_ONLY
253
-/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
254
+/* Mirror linux /proc/sys/abi/{sve,sme}_default_vector_length. */
255
static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
256
const char *name, void *opaque,
257
Error **errp)
258
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
259
* and is the maximum architectural width of ZCR_ELx.LEN.
260
*/
261
if (remainder || default_vq < 1 || default_vq > 512) {
262
- error_setg(errp, "cannot set sve-default-vector-length");
263
+ ARMCPU *cpu = ARM_CPU(obj);
264
+ const char *which =
265
+ (ptr_default_vq == &cpu->sve_default_vq ? "sve" : "sme");
266
+
267
+ error_setg(errp, "cannot set %s-default-vector-length", which);
268
if (remainder) {
269
error_append_hint(errp, "Vector length not a multiple of 16\n");
270
} else if (default_vq < 1) {
271
@@ -XXX,XX +XXX,XX @@ static void aarch64_add_sve_properties(Object *obj)
272
#endif
90
}
273
}
91
274
92
-DO_TBL(sve_tbl_b, uint8_t, H1)
275
+static void aarch64_add_sme_properties(Object *obj)
93
-DO_TBL(sve_tbl_h, uint16_t, H2)
276
+{
94
-DO_TBL(sve_tbl_s, uint32_t, H4)
277
+ ARMCPU *cpu = ARM_CPU(obj);
95
-DO_TBL(sve_tbl_d, uint64_t, )
278
+ uint32_t vq;
96
+static inline void do_tbl2(void *vd, void *vn0, void *vn1, void *vm,
279
+
97
+ uint32_t desc, bool is_tbx, tb_impl_fn *fn)
280
+ object_property_add_bool(obj, "sme", cpu_arm_get_sme, cpu_arm_set_sme);
98
+{
281
+ object_property_add_bool(obj, "sme_fa64", cpu_arm_get_sme_fa64,
99
+ ARMVectorReg scratch;
282
+ cpu_arm_set_sme_fa64);
100
+ uintptr_t oprsz = simd_oprsz(desc);
283
+
101
284
+ for (vq = 1; vq <= ARM_MAX_VQ; vq <<= 1) {
102
-#undef TBL
285
+ char name[8];
103
+ if (unlikely(vd == vn0)) {
286
+ sprintf(name, "sme%d", vq * 128);
104
+ vn0 = memcpy(&scratch, vn0, oprsz);
287
+ object_property_add(obj, name, "bool", cpu_arm_get_vq,
105
+ if (vd == vn1) {
288
+ cpu_arm_set_vq, NULL, &cpu->sme_vq);
106
+ vn1 = vn0;
107
+ }
108
+ } else if (unlikely(vd == vn1)) {
109
+ vn1 = memcpy(&scratch, vn1, oprsz);
110
+ }
289
+ }
111
+
290
+
112
+ fn(vd, vn0, vn1, vm, oprsz, is_tbx);
291
+#ifdef CONFIG_USER_ONLY
113
+}
292
+ /* Mirror linux /proc/sys/abi/sme_default_vector_length. */
114
+
293
+ object_property_add(obj, "sme-default-vector-length", "int32",
115
+#define DO_TB(SUFF, TYPE, H) \
294
+ cpu_arm_get_default_vec_len,
116
+static inline void do_tb_##SUFF(void *vd, void *vt0, void *vt1, \
295
+ cpu_arm_set_default_vec_len, NULL,
117
+ void *vm, uintptr_t oprsz, bool is_tbx) \
296
+ &cpu->sme_default_vq);
118
+{ \
297
+#endif
119
+ TYPE *d = vd, *tbl0 = vt0, *tbl1 = vt1, *indexes = vm; \
298
+}
120
+ uintptr_t i, nelem = oprsz / sizeof(TYPE); \
299
+
121
+ for (i = 0; i < nelem; ++i) { \
300
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
122
+ TYPE index = indexes[H1(i)], val = 0; \
123
+ if (index < nelem) { \
124
+ val = tbl0[H(index)]; \
125
+ } else { \
126
+ index -= nelem; \
127
+ if (tbl1 && index < nelem) { \
128
+ val = tbl1[H(index)]; \
129
+ } else if (is_tbx) { \
130
+ continue; \
131
+ } \
132
+ } \
133
+ d[H(i)] = val; \
134
+ } \
135
+} \
136
+void HELPER(sve_tbl_##SUFF)(void *vd, void *vn, void *vm, uint32_t desc) \
137
+{ \
138
+ do_tbl1(vd, vn, vm, desc, false, do_tb_##SUFF); \
139
+} \
140
+void HELPER(sve2_tbl_##SUFF)(void *vd, void *vn0, void *vn1, \
141
+ void *vm, uint32_t desc) \
142
+{ \
143
+ do_tbl2(vd, vn0, vn1, vm, desc, false, do_tb_##SUFF); \
144
+} \
145
+void HELPER(sve2_tbx_##SUFF)(void *vd, void *vn, void *vm, uint32_t desc) \
146
+{ \
147
+ do_tbl1(vd, vn, vm, desc, true, do_tb_##SUFF); \
148
+}
149
+
150
+DO_TB(b, uint8_t, H1)
151
+DO_TB(h, uint16_t, H2)
152
+DO_TB(s, uint32_t, H4)
153
+DO_TB(d, uint64_t, )
154
+
155
+#undef DO_TB
156
157
#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
158
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
159
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/target/arm/translate-sve.c
162
+++ b/target/arm/translate-sve.c
163
@@ -XXX,XX +XXX,XX @@ static bool trans_TBL(DisasContext *s, arg_rrr_esz *a)
164
return true;
165
}
166
167
+static bool trans_TBL_sve2(DisasContext *s, arg_rrr_esz *a)
168
+{
169
+ static gen_helper_gvec_4 * const fns[4] = {
170
+ gen_helper_sve2_tbl_b, gen_helper_sve2_tbl_h,
171
+ gen_helper_sve2_tbl_s, gen_helper_sve2_tbl_d
172
+ };
173
+
174
+ if (!dc_isar_feature(aa64_sve2, s)) {
175
+ return false;
176
+ }
177
+ if (sve_access_check(s)) {
178
+ gen_gvec_ool_zzzz(s, fns[a->esz], a->rd, a->rn,
179
+ (a->rn + 1) % 32, a->rm, 0);
180
+ }
181
+ return true;
182
+}
183
+
184
+static bool trans_TBX(DisasContext *s, arg_rrr_esz *a)
185
+{
186
+ static gen_helper_gvec_3 * const fns[4] = {
187
+ gen_helper_sve2_tbx_b, gen_helper_sve2_tbx_h,
188
+ gen_helper_sve2_tbx_s, gen_helper_sve2_tbx_d
189
+ };
190
+
191
+ if (!dc_isar_feature(aa64_sve2, s)) {
192
+ return false;
193
+ }
194
+ if (sve_access_check(s)) {
195
+ gen_gvec_ool_zzz(s, fns[a->esz], a->rd, a->rn, a->rm, 0);
196
+ }
197
+ return true;
198
+}
199
+
200
static bool trans_UNPK(DisasContext *s, arg_UNPK *a)
201
{
301
{
202
static gen_helper_gvec_2 * const fns[4][2] = {
302
int arch_val = 0, impdef_val = 0;
303
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
304
#endif
305
306
cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
307
+ cpu->sme_vq.supported = SVE_VQ_POW2_MAP;
308
309
aarch64_add_pauth_properties(obj);
310
aarch64_add_sve_properties(obj);
311
+ aarch64_add_sme_properties(obj);
312
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
313
cpu_max_set_sve_max_vq, NULL, NULL);
314
qdev_property_add_static(DEVICE(obj), &arm_cpu_lpa2_property);
203
--
315
--
204
2.20.1
316
2.25.1
205
206
1
From: Rebecca Cran <rebecca@nuviainc.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI
3
When Streaming SVE mode is enabled, the size is taken from
4
maintenance instructions that apply to a range of input addresses.
4
SMCR_ELx instead of ZCR_ELx. The format is shared, but the
5
set of vector lengths is not. Further, Streaming SVE does
6
not require any particular length to be supported.
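As a worked example of that selection rule (the numbers are purely
illustrative): if SMCR_EL1.LEN is 3, requesting at most a 512-bit vector,
but the CPU only implements 256-bit and 1024-bit streaming vector lengths,
the effective streaming vector length becomes 256 bits.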
5
7
6
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
8
Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
to the new function.
8
Message-id: 20210512182337.18563-2-rebecca@nuviainc.com
10
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20220620175235.60881-19-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
target/arm/cpu.h | 5 +
16
target/arm/cpu.h | 9 +++++++--
12
target/arm/helper.c | 281 ++++++++++++++++++++++++++++++++++++++++++++
17
target/arm/helper.c | 32 +++++++++++++++++++++++++-------
13
2 files changed, 286 insertions(+)
18
2 files changed, 32 insertions(+), 9 deletions(-)
14
19
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
22
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pauth_arch(const ARMISARegisters *id)
24
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int cur_el);
20
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) != 0;
25
int sme_exception_el(CPUARMState *env, int cur_el);
21
}
26
22
27
/**
23
+static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
28
- * sve_vqm1_for_el:
24
+{
29
+ * sve_vqm1_for_el_sm:
25
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
30
* @env: CPUARMState
26
+}
31
* @el: exception level
32
+ * @sm: streaming mode
33
*
34
- * Compute the current SVE vector length for @el, in units of
35
+ * Compute the current vector length for @el & @sm, in units of
36
* Quadwords Minus 1 -- the same scale used for ZCR_ELx.LEN.
37
+ * If @sm, compute for SVL, otherwise NVL.
38
*/
39
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm);
27
+
40
+
28
static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
41
+/* Likewise, but using @sm = PSTATE.SM. */
29
{
42
uint32_t sve_vqm1_for_el(CPUARMState *env, int el);
30
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
43
44
static inline bool is_a64(CPUARMState *env)
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/helper.c
47
--- a/target/arm/helper.c
34
+++ b/target/arm/helper.c
48
+++ b/target/arm/helper.c
35
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
49
@@ -XXX,XX +XXX,XX @@ int sme_exception_el(CPUARMState *env, int el)
36
ARMMMUIdxBit_SE3, bits);
50
/*
37
}
51
* Given that SVE is enabled, return the vector length for EL.
38
52
*/
39
+#ifdef TARGET_AARCH64
53
-uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
40
+static uint64_t tlbi_aa64_range_get_length(CPUARMState *env,
54
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm)
41
+ uint64_t value)
55
{
42
+{
56
ARMCPU *cpu = env_archcpu(env);
43
+ unsigned int page_shift;
57
- uint32_t len = cpu->sve_max_vq - 1;
44
+ unsigned int page_size_granule;
58
+ uint64_t *cr = env->vfp.zcr_el;
45
+ uint64_t num;
59
+ uint32_t map = cpu->sve_vq.map;
46
+ uint64_t scale;
60
+ uint32_t len = ARM_MAX_VQ - 1;
47
+ uint64_t exponent;
48
+ uint64_t length;
49
+
61
+
50
+ num = extract64(value, 39, 4);
62
+ if (sm) {
51
+ scale = extract64(value, 44, 2);
63
+ cr = env->vfp.smcr_el;
52
+ page_size_granule = extract64(value, 46, 2);
64
+ map = cpu->sme_vq.map;
53
+
65
+ }
54
+ page_shift = page_size_granule * 2 + 12;
66
55
+
67
if (el <= 1 && !el_is_in_host(env, el)) {
56
+ if (page_size_granule == 0) {
68
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
57
+ qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n",
69
+ len = MIN(len, 0xf & (uint32_t)cr[1]);
58
+ page_size_granule);
70
}
59
+ return 0;
71
if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
72
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
73
+ len = MIN(len, 0xf & (uint32_t)cr[2]);
74
}
75
if (arm_feature(env, ARM_FEATURE_EL3)) {
76
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
77
+ len = MIN(len, 0xf & (uint32_t)cr[3]);
78
}
79
80
- len = 31 - clz32(cpu->sve_vq.map & MAKE_64BIT_MASK(0, len + 1));
81
- return len;
82
+ map &= MAKE_64BIT_MASK(0, len + 1);
83
+ if (map != 0) {
84
+ return 31 - clz32(map);
60
+ }
85
+ }
61
+
86
+
62
+ exponent = (5 * scale) + 1;
87
+ /* Bit 0 is always set for Normal SVE -- not so for Streaming SVE. */
63
+ length = (num + 1) << (exponent + page_shift);
88
+ assert(sm);
64
+
89
+ return ctz32(cpu->sme_vq.map);
65
+ return length;
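/*
 * Example of the encoding decoded above: an operand with SCALE = 1 and
 * NUM = 3 requests (3 + 1) << (5 * 1 + 1) = 256 consecutive translation
 * granules to be invalidated by a single TLBI RVA* instruction.
 */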
66
+}
90
+}
67
+
91
+
68
+static uint64_t tlbi_aa64_range_get_base(CPUARMState *env, uint64_t value,
92
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
69
+ bool two_ranges)
70
+{
93
+{
71
+ /* TODO: ARMv8.7 FEAT_LPA2 */
94
+ return sve_vqm1_for_el_sm(env, el, FIELD_EX64(env->svcr, SVCR, SM));
72
+ uint64_t pageaddr;
95
}
73
+
96
74
+ if (two_ranges) {
97
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
75
+ pageaddr = sextract64(value, 0, 37) << TARGET_PAGE_BITS;
76
+ } else {
77
+ pageaddr = extract64(value, 0, 37) << TARGET_PAGE_BITS;
78
+ }
79
+
80
+ return pageaddr;
81
+}
82
+
83
+static void do_rvae_write(CPUARMState *env, uint64_t value,
84
+ int idxmap, bool synced)
85
+{
86
+ ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap);
87
+ bool two_ranges = regime_has_2_ranges(one_idx);
88
+ uint64_t baseaddr, length;
89
+ int bits;
90
+
91
+ baseaddr = tlbi_aa64_range_get_base(env, value, two_ranges);
92
+ length = tlbi_aa64_range_get_length(env, value);
93
+ bits = tlbbits_for_regime(env, one_idx, baseaddr);
94
+
95
+ if (synced) {
96
+ tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env),
97
+ baseaddr,
98
+ length,
99
+ idxmap,
100
+ bits);
101
+ } else {
102
+ tlb_flush_range_by_mmuidx(env_cpu(env), baseaddr,
103
+ length, idxmap, bits);
104
+ }
105
+}
106
+
107
+static void tlbi_aa64_rvae1_write(CPUARMState *env,
108
+ const ARMCPRegInfo *ri,
109
+ uint64_t value)
110
+{
111
+ /*
112
+ * Invalidate by VA range, EL1&0.
113
+ * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
114
+ * since we don't support flush-for-specific-ASID-only or
115
+ * flush-last-level-only.
116
+ */
117
+
118
+ do_rvae_write(env, value, vae1_tlbmask(env),
119
+ tlb_force_broadcast(env));
120
+}
121
+
122
+static void tlbi_aa64_rvae1is_write(CPUARMState *env,
123
+ const ARMCPRegInfo *ri,
124
+ uint64_t value)
125
+{
126
+ /*
127
+ * Invalidate by VA range, Inner/Outer Shareable EL1&0.
128
+ * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
129
+ * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
130
+ * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
131
+ * shareable specific flushes.
132
+ */
133
+
134
+ do_rvae_write(env, value, vae1_tlbmask(env), true);
135
+}
136
+
137
+static int vae2_tlbmask(CPUARMState *env)
138
+{
139
+ return (arm_is_secure_below_el3(env)
140
+ ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
141
+}
142
+
143
+static void tlbi_aa64_rvae2_write(CPUARMState *env,
144
+ const ARMCPRegInfo *ri,
145
+ uint64_t value)
146
+{
147
+ /*
148
+ * Invalidate by VA range, EL2.
149
+ * Currently handles all of RVAE2 and RVALE2,
150
+ * since we don't support flush-for-specific-ASID-only or
151
+ * flush-last-level-only.
152
+ */
153
+
154
+ do_rvae_write(env, value, vae2_tlbmask(env),
155
+ tlb_force_broadcast(env));
156
+
157
+
158
+}
159
+
160
+static void tlbi_aa64_rvae2is_write(CPUARMState *env,
161
+ const ARMCPRegInfo *ri,
162
+ uint64_t value)
163
+{
164
+ /*
165
+ * Invalidate by VA range, Inner/Outer Shareable, EL2.
166
+ * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS,
167
+ * since we don't support flush-for-specific-ASID-only,
168
+ * flush-last-level-only or inner/outer shareable specific flushes.
169
+ */
170
+
171
+ do_rvae_write(env, value, vae2_tlbmask(env), true);
172
+
173
+}
174
+
175
+static void tlbi_aa64_rvae3_write(CPUARMState *env,
176
+ const ARMCPRegInfo *ri,
177
+ uint64_t value)
178
+{
179
+ /*
180
+ * Invalidate by VA range, EL3.
181
+ * Currently handles all of RVAE3 and RVALE3,
182
+ * since we don't support flush-for-specific-ASID-only or
183
+ * flush-last-level-only.
184
+ */
185
+
186
+ do_rvae_write(env, value, ARMMMUIdxBit_SE3,
187
+ tlb_force_broadcast(env));
188
+}
189
+
190
+static void tlbi_aa64_rvae3is_write(CPUARMState *env,
191
+ const ARMCPRegInfo *ri,
192
+ uint64_t value)
193
+{
194
+ /*
195
+ * Invalidate by VA range, EL3, Inner/Outer Shareable.
196
+ * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS,
197
+ * since we don't support flush-for-specific-ASID-only,
198
+ * flush-last-level-only or inner/outer specific flushes.
199
+ */
200
+
201
+ do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
202
+}
203
+#endif
204
+
205
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
206
bool isread)
207
{
208
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
209
REGINFO_SENTINEL
210
};
211
212
+static const ARMCPRegInfo tlbirange_reginfo[] = {
213
+ { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
214
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
215
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
216
+ .writefn = tlbi_aa64_rvae1is_write },
217
+ { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
218
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
219
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
220
+ .writefn = tlbi_aa64_rvae1is_write },
221
+ { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
222
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
223
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
224
+ .writefn = tlbi_aa64_rvae1is_write },
225
+ { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
226
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
227
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
228
+ .writefn = tlbi_aa64_rvae1is_write },
229
+ { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
230
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
231
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
232
+ .writefn = tlbi_aa64_rvae1is_write },
233
+ { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
234
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
235
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
236
+ .writefn = tlbi_aa64_rvae1is_write },
237
+ { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
238
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
239
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
240
+ .writefn = tlbi_aa64_rvae1is_write },
241
+ { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
242
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
243
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
244
+ .writefn = tlbi_aa64_rvae1is_write },
245
+ { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
246
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
247
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
248
+ .writefn = tlbi_aa64_rvae1_write },
249
+ { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
250
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
251
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
252
+ .writefn = tlbi_aa64_rvae1_write },
253
+ { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
254
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
255
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
256
+ .writefn = tlbi_aa64_rvae1_write },
257
+ { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
258
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
259
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
260
+ .writefn = tlbi_aa64_rvae1_write },
261
+ { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
262
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
263
+ .access = PL2_W, .type = ARM_CP_NOP },
264
+ { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
265
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
266
+ .access = PL2_W, .type = ARM_CP_NOP },
267
+ { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
268
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
269
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
270
+ .writefn = tlbi_aa64_rvae2is_write },
271
+ { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
272
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
273
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
274
+ .writefn = tlbi_aa64_rvae2is_write },
275
+ { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
276
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
277
+ .access = PL2_W, .type = ARM_CP_NOP },
278
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
279
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
280
+ .access = PL2_W, .type = ARM_CP_NOP },
281
+ { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
282
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
283
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
284
+ .writefn = tlbi_aa64_rvae2is_write },
285
+ { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
286
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
287
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
288
+ .writefn = tlbi_aa64_rvae2is_write },
289
+ { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
290
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
291
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
292
+ .writefn = tlbi_aa64_rvae2_write },
293
+ { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
294
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
295
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
296
+ .writefn = tlbi_aa64_rvae2_write },
297
+ { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
298
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
299
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
300
+ .writefn = tlbi_aa64_rvae3is_write },
301
+ { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
302
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
303
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
304
+ .writefn = tlbi_aa64_rvae3is_write },
305
+ { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
306
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
307
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
308
+ .writefn = tlbi_aa64_rvae3is_write },
309
+ { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
310
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
311
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
312
+ .writefn = tlbi_aa64_rvae3is_write },
313
+ { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
314
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
315
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
316
+ .writefn = tlbi_aa64_rvae3_write },
317
+ { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
318
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
319
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
320
+ .writefn = tlbi_aa64_rvae3_write },
321
+ REGINFO_SENTINEL
322
+};
323
+
324
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
325
{
326
Error *err = NULL;
327
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
328
if (cpu_isar_feature(aa64_rndr, cpu)) {
329
define_arm_cp_regs(cpu, rndr_reginfo);
330
}
331
+ if (cpu_isar_feature(aa64_tlbirange, cpu)) {
332
+ define_arm_cp_regs(cpu, tlbirange_reginfo);
333
+ }
334
#ifndef CONFIG_USER_ONLY
335
/* Data Cache clean instructions up to PoP */
336
if (cpu_isar_feature(aa64_dcpop, cpu)) {
337
--
98
--
338
2.20.1
99
2.25.1
339
340
1
From: Rebecca Cran <rebecca@nuviainc.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI
3
We need SVL separate from VL for RDSVL et al, as well as
4
maintenance instructions that extend to the Outer Shareable domain.
4
ZA storage loads and stores, which do not require PSTATE.SM.
5
5
6
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210512182337.18563-3-rebecca@nuviainc.com
8
Message-id: 20220620175235.60881-20-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/cpu.h | 5 +++++
11
target/arm/cpu.h | 12 ++++++++++++
12
target/arm/helper.c | 43 +++++++++++++++++++++++++++++++++++++++++++
12
target/arm/translate.h | 1 +
13
2 files changed, 48 insertions(+)
13
target/arm/helper.c | 8 +++++++-
14
target/arm/translate-a64.c | 1 +
15
4 files changed, 21 insertions(+), 1 deletion(-)
14
16
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
20
return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
22
FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
23
FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
24
FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
25
+FIELD(TBFLAG_A64, SVL, 24, 4)
26
27
/*
28
* Helpers for using the above.
29
@@ -XXX,XX +XXX,XX @@ static inline int sve_vq(CPUARMState *env)
30
return EX_TBFLAG_A64(env->hflags, VL) + 1;
21
}
31
}
22
32
23
+static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
33
+/**
34
+ * sme_vq
35
+ * @env: the cpu context
36
+ *
37
+ * Return the SVL cached within env->hflags, in units of quadwords.
38
+ */
39
+static inline int sme_vq(CPUARMState *env)
24
+{
40
+{
25
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
41
+ return EX_TBFLAG_A64(env->hflags, SVL) + 1;
26
+}
42
+}
27
+
43
+
28
static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
44
static inline bool bswap_code(bool sctlr_b)
29
{
45
{
30
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
46
#ifdef CONFIG_USER_ONLY
47
diff --git a/target/arm/translate.h b/target/arm/translate.h
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate.h
50
+++ b/target/arm/translate.h
51
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
52
int sve_excp_el; /* SVE exception EL or 0 if enabled */
53
int sme_excp_el; /* SME exception EL or 0 if enabled */
54
int vl; /* current vector length in bytes */
55
+ int svl; /* current streaming vector length in bytes */
56
bool vfp_enabled; /* FP enabled via FPSCR.EN */
57
int vec_len;
58
int vec_stride;
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
59
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/helper.c
61
--- a/target/arm/helper.c
34
+++ b/target/arm/helper.c
62
+++ b/target/arm/helper.c
35
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
63
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
36
REGINFO_SENTINEL
64
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
37
};
65
}
38
66
if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
39
+static const ARMCPRegInfo tlbios_reginfo[] = {
67
- DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
40
+ { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
68
+ int sme_el = sme_exception_el(env, el);
41
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
42
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
43
+ .writefn = tlbi_aa64_vmalle1is_write },
44
+ { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
45
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
46
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
47
+ .writefn = tlbi_aa64_vmalle1is_write },
48
+ { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
49
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
50
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
51
+ .writefn = tlbi_aa64_alle2is_write },
52
+ { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
53
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
54
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
55
+ .writefn = tlbi_aa64_alle1is_write },
56
+ { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
57
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
58
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
59
+ .writefn = tlbi_aa64_alle1is_write },
60
+ { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
61
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
62
+ .access = PL2_W, .type = ARM_CP_NOP },
63
+ { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
64
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
65
+ .access = PL2_W, .type = ARM_CP_NOP },
66
+ { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
67
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
68
+ .access = PL2_W, .type = ARM_CP_NOP },
69
+ { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
70
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
71
+ .access = PL2_W, .type = ARM_CP_NOP },
72
+ { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
73
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
74
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
75
+ .writefn = tlbi_aa64_alle3is_write },
76
+ REGINFO_SENTINEL
77
+};
78
+
69
+
79
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
70
+ DP_TBFLAG_A64(flags, SMEEXC_EL, sme_el);
80
{
71
+ if (sme_el == 0) {
81
Error *err = NULL;
72
+ /* Similarly, do not compute SVL if SME is disabled. */
82
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
73
+ DP_TBFLAG_A64(flags, SVL, sve_vqm1_for_el_sm(env, el, true));
83
if (cpu_isar_feature(aa64_tlbirange, cpu)) {
74
+ }
84
define_arm_cp_regs(cpu, tlbirange_reginfo);
75
if (FIELD_EX64(env->svcr, SVCR, SM)) {
85
}
76
DP_TBFLAG_A64(flags, PSTATE_SM, 1);
86
+ if (cpu_isar_feature(aa64_tlbios, cpu)) {
77
}
87
+ define_arm_cp_regs(cpu, tlbios_reginfo);
78
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
88
+ }
79
index XXXXXXX..XXXXXXX 100644
89
#ifndef CONFIG_USER_ONLY
80
--- a/target/arm/translate-a64.c
90
/* Data Cache clean instructions up to PoP */
81
+++ b/target/arm/translate-a64.c
91
if (cpu_isar_feature(aa64_dcpop, cpu)) {
82
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
83
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
84
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
85
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
86
+ dc->svl = (EX_TBFLAG_A64(tb_flags, SVL) + 1) * 16;
87
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
88
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
89
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
92
--
90
--
93
2.20.1
91
2.25.1
94
95
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We will need these functions in translate-sme.c.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525010358.152808-64-richard.henderson@linaro.org
7
Message-id: 20220620175235.60881-21-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/helper-sve.h | 10 ++++
10
target/arm/translate-a64.h | 38 ++++++++++++++++++++++++++++++++++++++
9
target/arm/sve.decode | 9 ++++
11
target/arm/translate-sve.c | 36 ------------------------------------
10
target/arm/sve_helper.c | 99 ++++++++++++++++++++++++++++++++++++++
12
2 files changed, 38 insertions(+), 36 deletions(-)
11
target/arm/translate-sve.c | 17 +++++++
12
4 files changed, 135 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/target/arm/translate-a64.h
17
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/translate-a64.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_idx_h, TCG_CALL_NO_RWG,
18
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
19
void, ptr, ptr, ptr, ptr, i32)
19
return s->vl;
20
DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_idx_s, TCG_CALL_NO_RWG,
20
}
21
void, ptr, ptr, ptr, ptr, i32)
21
22
+
22
+/*
23
+DEF_HELPER_FLAGS_5(sve2_cdot_zzzz_s, TCG_CALL_NO_RWG,
23
+ * Return the offset info CPUARMState of the predicate vector register Pn.
24
+ void, ptr, ptr, ptr, ptr, i32)
24
+ * Note for this purpose, FFR is P16.
25
+DEF_HELPER_FLAGS_5(sve2_cdot_zzzz_d, TCG_CALL_NO_RWG,
25
+ */
26
+ void, ptr, ptr, ptr, ptr, i32)
26
+static inline int pred_full_reg_offset(DisasContext *s, int regno)
27
+
28
+DEF_HELPER_FLAGS_5(sve2_cdot_idx_s, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve2_cdot_idx_d, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/sve.decode
35
+++ b/target/arm/sve.decode
36
@@ -XXX,XX +XXX,XX @@ MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
37
DOT_zzzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 \
38
ra=%reg_movprfx
39
40
+# SVE2 complex dot product (vectors)
41
+CDOT_zzzz 01000100 esz:2 0 rm:5 0001 rot:2 rn:5 rd:5 ra=%reg_movprfx
42
+
43
#### SVE Multiply - Indexed
44
45
# SVE integer dot product (indexed)
46
@@ -XXX,XX +XXX,XX @@ SQDMLSLB_zzxw_d 01000100 11 1 ..... 0011.0 ..... ..... @rrxr_2a esz=3
47
SQDMLSLT_zzxw_s 01000100 10 1 ..... 0011.1 ..... ..... @rrxr_3a esz=2
48
SQDMLSLT_zzxw_d 01000100 11 1 ..... 0011.1 ..... ..... @rrxr_2a esz=3
49
50
+# SVE2 complex integer dot product (indexed)
51
+CDOT_zzxw_s 01000100 10 1 index:2 rm:3 0100 rot:2 rn:5 rd:5 \
52
+ ra=%reg_movprfx
53
+CDOT_zzxw_d 01000100 11 1 index:1 rm:4 0100 rot:2 rn:5 rd:5 \
54
+ ra=%reg_movprfx
55
+
56
# SVE2 complex integer multiply-add (indexed)
57
CMLA_zzxz_h 01000100 10 1 index:2 rm:3 0110 rot:2 rn:5 rd:5 \
58
ra=%reg_movprfx
59
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/sve_helper.c
62
+++ b/target/arm/sve_helper.c
63
@@ -XXX,XX +XXX,XX @@ DO_CMLA_IDX_FUNC(sve2_sqrdcmlah_idx_s, int32_t, H4, DO_SQRDMLAH_S)
64
#undef DO_SQRDMLAH_S
65
#undef DO_SQRDMLAH_D
66
67
+/* Note N and M are 4 elements bundled into one unit. */
68
+static int32_t do_cdot_s(uint32_t n, uint32_t m, int32_t a,
69
+ int sel_a, int sel_b, int sub_i)
70
+{
27
+{
71
+ for (int i = 0; i <= 1; i++) {
28
+ return offsetof(CPUARMState, vfp.pregs[regno]);
72
+ int32_t elt1_r = (int8_t)(n >> (16 * i));
73
+ int32_t elt1_i = (int8_t)(n >> (16 * i + 8));
74
+ int32_t elt2_a = (int8_t)(m >> (16 * i + 8 * sel_a));
75
+ int32_t elt2_b = (int8_t)(m >> (16 * i + 8 * sel_b));
76
+
77
+ a += elt1_r * elt2_a + elt1_i * elt2_b * sub_i;
78
+ }
79
+ return a;
80
+}
29
+}
81
+
30
+
82
+static int64_t do_cdot_d(uint64_t n, uint64_t m, int64_t a,
31
+/* Return the byte size of the whole predicate register, VL / 64. */
83
+ int sel_a, int sel_b, int sub_i)
32
+static inline int pred_full_reg_size(DisasContext *s)
84
+{
33
+{
85
+ for (int i = 0; i <= 1; i++) {
34
+ return s->vl >> 3;
86
+ int64_t elt1_r = (int16_t)(n >> (32 * i + 0));
87
+ int64_t elt1_i = (int16_t)(n >> (32 * i + 16));
88
+ int64_t elt2_a = (int16_t)(m >> (32 * i + 16 * sel_a));
89
+ int64_t elt2_b = (int16_t)(m >> (32 * i + 16 * sel_b));
90
+
91
+ a += elt1_r * elt2_a + elt1_i * elt2_b * sub_i;
92
+ }
93
+ return a;
94
+}
35
+}
95
+
36
+
96
+void HELPER(sve2_cdot_zzzz_s)(void *vd, void *vn, void *vm,
37
+/*
97
+ void *va, uint32_t desc)
38
+ * Round up the size of a register to a size allowed by
39
+ * the tcg vector infrastructure. Any operation which uses this
40
+ * size may assume that the bits above pred_full_reg_size are zero,
41
+ * and must leave them the same way.
42
+ *
43
+ * Note that this is not needed for the vector registers as they
44
+ * are always properly sized for tcg vectors.
45
+ */
46
+static inline int size_for_gvec(int size)
98
+{
47
+{
99
+ int opr_sz = simd_oprsz(desc);
48
+ if (size <= 8) {
100
+ int rot = simd_data(desc);
49
+ return 8;
101
+ int sel_a = rot & 1;
50
+ } else {
102
+ int sel_b = sel_a ^ 1;
51
+ return QEMU_ALIGN_UP(size, 16);
103
+ int sub_i = (rot == 0 || rot == 3 ? -1 : 1);
104
+ uint32_t *d = vd, *n = vn, *m = vm, *a = va;
105
+
106
+ for (int e = 0; e < opr_sz / 4; e++) {
107
+ d[e] = do_cdot_s(n[e], m[e], a[e], sel_a, sel_b, sub_i);
108
+ }
52
+ }
109
+}
53
+}
110
+
54
+
111
+void HELPER(sve2_cdot_zzzz_d)(void *vd, void *vn, void *vm,
55
+static inline int pred_gvec_reg_size(DisasContext *s)
112
+ void *va, uint32_t desc)
113
+{
56
+{
114
+ int opr_sz = simd_oprsz(desc);
57
+ return size_for_gvec(pred_full_reg_size(s));
115
+ int rot = simd_data(desc);
116
+ int sel_a = rot & 1;
117
+ int sel_b = sel_a ^ 1;
118
+ int sub_i = (rot == 0 || rot == 3 ? -1 : 1);
119
+ uint64_t *d = vd, *n = vn, *m = vm, *a = va;
120
+
121
+ for (int e = 0; e < opr_sz / 8; e++) {
122
+ d[e] = do_cdot_d(n[e], m[e], a[e], sel_a, sel_b, sub_i);
123
+ }
124
+}
58
+}
125
+
59
+
126
+void HELPER(sve2_cdot_idx_s)(void *vd, void *vn, void *vm,
60
bool disas_sve(DisasContext *, uint32_t);
127
+ void *va, uint32_t desc)
61
128
+{
62
void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
129
+ int opr_sz = simd_oprsz(desc);
130
+ int rot = extract32(desc, SIMD_DATA_SHIFT, 2);
131
+ int idx = H4(extract32(desc, SIMD_DATA_SHIFT + 2, 2));
132
+ int sel_a = rot & 1;
133
+ int sel_b = sel_a ^ 1;
134
+ int sub_i = (rot == 0 || rot == 3 ? -1 : 1);
135
+ uint32_t *d = vd, *n = vn, *m = vm, *a = va;
136
+
137
+ for (int seg = 0; seg < opr_sz / 4; seg += 4) {
138
+ uint32_t seg_m = m[seg + idx];
139
+ for (int e = 0; e < 4; e++) {
140
+ d[seg + e] = do_cdot_s(n[seg + e], seg_m, a[seg + e],
141
+ sel_a, sel_b, sub_i);
142
+ }
143
+ }
144
+}
145
+
146
+void HELPER(sve2_cdot_idx_d)(void *vd, void *vn, void *vm,
147
+ void *va, uint32_t desc)
148
+{
149
+ int seg, opr_sz = simd_oprsz(desc);
150
+ int rot = extract32(desc, SIMD_DATA_SHIFT, 2);
151
+ int idx = extract32(desc, SIMD_DATA_SHIFT + 2, 2);
152
+ int sel_a = rot & 1;
153
+ int sel_b = sel_a ^ 1;
154
+ int sub_i = (rot == 0 || rot == 3 ? -1 : 1);
155
+ uint64_t *d = vd, *n = vn, *m = vm, *a = va;
156
+
157
+ for (seg = 0; seg < opr_sz / 8; seg += 2) {
158
+ uint64_t seg_m = m[seg + idx];
159
+ for (int e = 0; e < 2; e++) {
160
+ d[seg + e] = do_cdot_d(n[seg + e], seg_m, a[seg + e],
161
+ sel_a, sel_b, sub_i);
162
+ }
163
+ }
164
+}
165
+
166
#define DO_ZZXZ(NAME, TYPE, H, OP) \
167
void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
168
{ \
169
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
63
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
170
index XXXXXXX..XXXXXXX 100644
64
index XXXXXXX..XXXXXXX 100644
171
--- a/target/arm/translate-sve.c
65
--- a/target/arm/translate-sve.c
172
+++ b/target/arm/translate-sve.c
66
+++ b/target/arm/translate-sve.c
173
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRXR_ROT(CMLA_zzxz_s, gen_helper_sve2_cmla_idx_s)
67
@@ -XXX,XX +XXX,XX @@ static inline int msz_dtype(DisasContext *s, int msz)
174
DO_SVE2_RRXR_ROT(SQRDCMLAH_zzxz_h, gen_helper_sve2_sqrdcmlah_idx_h)
68
* Implement all of the translator functions referenced by the decoder.
175
DO_SVE2_RRXR_ROT(SQRDCMLAH_zzxz_s, gen_helper_sve2_sqrdcmlah_idx_s)
69
*/
176
70
177
+DO_SVE2_RRXR_ROT(CDOT_zzxw_s, gen_helper_sve2_cdot_idx_s)
71
-/* Return the offset info CPUARMState of the predicate vector register Pn.
178
+DO_SVE2_RRXR_ROT(CDOT_zzxw_d, gen_helper_sve2_cdot_idx_d)
72
- * Note for this purpose, FFR is P16.
179
+
73
- */
180
#undef DO_SVE2_RRXR_ROT
74
-static inline int pred_full_reg_offset(DisasContext *s, int regno)
181
75
-{
182
/*
76
- return offsetof(CPUARMState, vfp.pregs[regno]);
183
@@ -XXX,XX +XXX,XX @@ static bool trans_CMLA_zzzz(DisasContext *s, arg_CMLA_zzzz *a)
77
-}
184
return true;
78
-
185
}
79
-/* Return the byte size of the whole predicate register, VL / 64. */
186
80
-static inline int pred_full_reg_size(DisasContext *s)
187
+static bool trans_CDOT_zzzz(DisasContext *s, arg_CMLA_zzzz *a)
81
-{
188
+{
82
- return s->vl >> 3;
189
+ if (!dc_isar_feature(aa64_sve2, s) || a->esz < MO_32) {
83
-}
190
+ return false;
84
-
191
+ }
85
-/* Round up the size of a register to a size allowed by
192
+ if (sve_access_check(s)) {
86
- * the tcg vector infrastructure. Any operation which uses this
193
+ gen_helper_gvec_4 *fn = (a->esz == MO_32
87
- * size may assume that the bits above pred_full_reg_size are zero,
194
+ ? gen_helper_sve2_cdot_zzzz_s
88
- * and must leave them the same way.
195
+ : gen_helper_sve2_cdot_zzzz_d);
89
- *
196
+ gen_gvec_ool_zzzz(s, fn, a->rd, a->rn, a->rm, a->ra, a->rot);
90
- * Note that this is not needed for the vector registers as they
197
+ }
91
- * are always properly sized for tcg vectors.
198
+ return true;
92
- */
199
+}
93
-static int size_for_gvec(int size)
200
+
94
-{
201
static bool trans_SQRDCMLAH_zzzz(DisasContext *s, arg_SQRDCMLAH_zzzz *a)
95
- if (size <= 8) {
202
{
96
- return 8;
203
static gen_helper_gvec_4 * const fns[] = {
97
- } else {
98
- return QEMU_ALIGN_UP(size, 16);
99
- }
100
-}
101
-
102
-static int pred_gvec_reg_size(DisasContext *s)
103
-{
104
- return size_for_gvec(pred_full_reg_size(s));
105
-}
106
-
107
/* Invoke an out-of-line helper on 2 Zregs. */
108
static bool gen_gvec_ool_zz(DisasContext *s, gen_helper_gvec_2 *fn,
109
int rd, int rn, int data)
204
--
110
--
205
2.20.1
111
2.25.1
206
207
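As an aside (not part of either patch series): the rot encoding used by the do_cdot_* helpers above can be explored in isolation. The following standalone sketch mirrors the 32-bit accumulation step from the diff, with made-up input values; it is an illustration for experimentation, not QEMU code.

#include <stdint.h>
#include <stdio.h>

/*
 * Standalone copy of the 32-bit CDOT accumulation step shown above, so
 * the rot -> (sel_a, sel_b, sub_i) mapping can be tried out outside of
 * QEMU.  n and m each carry two complex int8 pairs; following the
 * naming in do_cdot_s(), the low byte of each 16-bit half is the real
 * part and the high byte the imaginary part.
 */
static int32_t cdot_s_step(uint32_t n, uint32_t m, int32_t a, int rot)
{
    int sel_a = rot & 1;
    int sel_b = sel_a ^ 1;
    int sub_i = (rot == 0 || rot == 3) ? -1 : 1;

    for (int i = 0; i <= 1; i++) {
        int32_t elt1_r = (int8_t)(n >> (16 * i));
        int32_t elt1_i = (int8_t)(n >> (16 * i + 8));
        int32_t elt2_a = (int8_t)(m >> (16 * i + 8 * sel_a));
        int32_t elt2_b = (int8_t)(m >> (16 * i + 8 * sel_b));

        a += elt1_r * elt2_a + elt1_i * elt2_b * sub_i;
    }
    return a;
}

int main(void)
{
    uint32_t n = 0x04030201;    /* pairs (1 + 2i) and (3 + 4i) */
    uint32_t m = 0x08070605;    /* pairs (5 + 6i) and (7 + 8i) */

    for (int rot = 0; rot < 4; rot++) {
        printf("rot=%d -> %d\n", rot, cdot_s_step(n, m, 0, rot));
    }
    return 0;
}

Printing the result for all four rot values makes it easy to see how each rotation swaps which byte of the multiplier is used and whether the imaginary product is added or subtracted.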
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We're about to add more variations on this theme.
3
Move the code from hw/arm/virt.c that is supposed
4
to handle v7 into the one function.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-65-richard.henderson@linaro.org
7
Reported-by: He Zhe <zhe.he@windriver.com>
8
Message-id: 20220619001541.131672-2-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/vec_helper.c | 82 ++++++++++-------------------------------
12
hw/arm/virt.c | 10 +---------
11
1 file changed, 20 insertions(+), 62 deletions(-)
13
target/arm/ptw.c | 24 ++++++++++++++++--------
14
2 files changed, 17 insertions(+), 17 deletions(-)
12
15
13
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
16
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/vec_helper.c
18
--- a/hw/arm/virt.c
16
+++ b/target/arm/vec_helper.c
19
+++ b/hw/arm/virt.c
17
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmulh_idx_d)(void *vd, void *vn, void *vm, uint32_t desc)
20
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
18
/* Integer 8 and 16-bit dot-product.
21
cpuobj = object_new(possible_cpus->cpus[0].type);
19
*
22
armcpu = ARM_CPU(cpuobj);
20
* Note that for the loops herein, host endianness does not matter
23
21
- * with respect to the ordering of data within the 64-bit lanes.
24
- if (object_property_get_bool(cpuobj, "aarch64", NULL)) {
22
+ * with respect to the ordering of data within the quad-width lanes.
25
- pa_bits = arm_pamax(armcpu);
23
* All elements are treated equally, no matter where they are.
26
- } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)) {
24
*/
27
- /* v7 with LPAE */
25
28
- pa_bits = 40;
26
-void HELPER(gvec_sdot_b)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
29
- } else {
27
-{
30
- /* Anything else */
28
- intptr_t i, opr_sz = simd_oprsz(desc);
31
- pa_bits = 32;
29
- int32_t *d = vd, *a = va;
32
- }
30
- int8_t *n = vn, *m = vm;
33
+ pa_bits = arm_pamax(armcpu);
31
-
34
32
- for (i = 0; i < opr_sz / 4; ++i) {
35
object_unref(cpuobj);
33
- d[i] = (a[i] +
36
34
- n[i * 4 + 0] * m[i * 4 + 0] +
37
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
35
- n[i * 4 + 1] * m[i * 4 + 1] +
38
index XXXXXXX..XXXXXXX 100644
36
- n[i * 4 + 2] * m[i * 4 + 2] +
39
--- a/target/arm/ptw.c
37
- n[i * 4 + 3] * m[i * 4 + 3]);
40
+++ b/target/arm/ptw.c
38
- }
41
@@ -XXX,XX +XXX,XX @@ static const uint8_t pamax_map[] = {
39
- clear_tail(d, opr_sz, simd_maxsz(desc));
42
/* The cpu-specific constant value of PAMax; also used by hw/arm/virt. */
40
+#define DO_DOT(NAME, TYPED, TYPEN, TYPEM) \
43
unsigned int arm_pamax(ARMCPU *cpu)
41
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
44
{
42
+{ \
45
- unsigned int parange =
43
+ intptr_t i, opr_sz = simd_oprsz(desc); \
46
- FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
44
+ TYPED *d = vd, *a = va; \
47
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
45
+ TYPEN *n = vn; \
48
+ unsigned int parange =
46
+ TYPEM *m = vm; \
49
+ FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
47
+ for (i = 0; i < opr_sz / sizeof(TYPED); ++i) { \
50
48
+ d[i] = (a[i] + \
51
- /*
49
+ (TYPED)n[i * 4 + 0] * m[i * 4 + 0] + \
52
- * id_aa64mmfr0 is a read-only register so values outside of the
50
+ (TYPED)n[i * 4 + 1] * m[i * 4 + 1] + \
53
- * supported mappings can be considered an implementation error.
51
+ (TYPED)n[i * 4 + 2] * m[i * 4 + 2] + \
54
- */
52
+ (TYPED)n[i * 4 + 3] * m[i * 4 + 3]); \
55
- assert(parange < ARRAY_SIZE(pamax_map));
53
+ } \
56
- return pamax_map[parange];
54
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
57
+ /*
58
+ * id_aa64mmfr0 is a read-only register so values outside of the
59
+ * supported mappings can be considered an implementation error.
60
+ */
61
+ assert(parange < ARRAY_SIZE(pamax_map));
62
+ return pamax_map[parange];
63
+ }
64
+ if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
65
+ /* v7 with LPAE */
66
+ return 40;
67
+ }
68
+ /* Anything else */
69
+ return 32;
55
}
70
}
56
71
57
-void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
72
/*
58
-{
59
- intptr_t i, opr_sz = simd_oprsz(desc);
60
- uint32_t *d = vd, *a = va;
61
- uint8_t *n = vn, *m = vm;
62
-
63
- for (i = 0; i < opr_sz / 4; ++i) {
64
- d[i] = (a[i] +
65
- n[i * 4 + 0] * m[i * 4 + 0] +
66
- n[i * 4 + 1] * m[i * 4 + 1] +
67
- n[i * 4 + 2] * m[i * 4 + 2] +
68
- n[i * 4 + 3] * m[i * 4 + 3]);
69
- }
70
- clear_tail(d, opr_sz, simd_maxsz(desc));
71
-}
72
-
73
-void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
74
-{
75
- intptr_t i, opr_sz = simd_oprsz(desc);
76
- int64_t *d = vd, *a = va;
77
- int16_t *n = vn, *m = vm;
78
-
79
- for (i = 0; i < opr_sz / 8; ++i) {
80
- d[i] = (a[i] +
81
- (int64_t)n[i * 4 + 0] * m[i * 4 + 0] +
82
- (int64_t)n[i * 4 + 1] * m[i * 4 + 1] +
83
- (int64_t)n[i * 4 + 2] * m[i * 4 + 2] +
84
- (int64_t)n[i * 4 + 3] * m[i * 4 + 3]);
85
- }
86
- clear_tail(d, opr_sz, simd_maxsz(desc));
87
-}
88
-
89
-void HELPER(gvec_udot_h)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
90
-{
91
- intptr_t i, opr_sz = simd_oprsz(desc);
92
- uint64_t *d = vd, *a = va;
93
- uint16_t *n = vn, *m = vm;
94
-
95
- for (i = 0; i < opr_sz / 8; ++i) {
96
- d[i] = (a[i] +
97
- (uint64_t)n[i * 4 + 0] * m[i * 4 + 0] +
98
- (uint64_t)n[i * 4 + 1] * m[i * 4 + 1] +
99
- (uint64_t)n[i * 4 + 2] * m[i * 4 + 2] +
100
- (uint64_t)n[i * 4 + 3] * m[i * 4 + 3]);
101
- }
102
- clear_tail(d, opr_sz, simd_maxsz(desc));
103
-}
104
+DO_DOT(gvec_sdot_b, int32_t, int8_t, int8_t)
105
+DO_DOT(gvec_udot_b, uint32_t, uint8_t, uint8_t)
106
+DO_DOT(gvec_sdot_h, int64_t, int16_t, int16_t)
107
+DO_DOT(gvec_udot_h, uint64_t, uint16_t, uint16_t)
108
109
void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm,
110
void *va, uint32_t desc)
111
--
73
--
112
2.20.1
74
2.25.1
113
114
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Using g_memdup is a bit more compact than g_new + memcpy.
3
In machvirt_init we create a cpu but do not fully initialize it.
4
Thus the propagation of V7VE to LPAE has not been done, and we
5
compute the wrong value for some v7 cpus, e.g. cortex-a15.
4
6
7
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1078
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reported-by: He Zhe <zhe.he@windriver.com>
7
Message-id: 20210509151618.2331764-2-f4bug@amsat.org
10
Message-id: 20220619001541.131672-3-richard.henderson@linaro.org
8
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
9
[PMD: Split from bigger patch]
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
13
---
14
accel/tcg/cputlb.c | 15 ++++-----------
14
target/arm/ptw.c | 8 +++++++-
15
1 file changed, 4 insertions(+), 11 deletions(-)
15
1 file changed, 7 insertions(+), 1 deletion(-)
16
16
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
19
--- a/target/arm/ptw.c
20
+++ b/accel/tcg/cputlb.c
20
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
21
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
22
} else if (encode_pbm_to_runon(&runon, d)) {
22
assert(parange < ARRAY_SIZE(pamax_map));
23
async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
23
return pamax_map[parange];
24
} else {
25
- TLBFlushPageBitsByMMUIdxData *p
26
- = g_new(TLBFlushPageBitsByMMUIdxData, 1);
27
-
28
/* Otherwise allocate a structure, freed by the worker. */
29
- *p = d;
30
+ TLBFlushPageBitsByMMUIdxData *p = g_memdup(&d, sizeof(d));
31
async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_2,
32
RUN_ON_CPU_HOST_PTR(p));
33
}
24
}
34
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
25
- if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
35
flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
26
+
36
} else {
27
+ /*
37
CPUState *dst_cpu;
28
+ * In machvirt_init, we call arm_pamax on a cpu that is not fully
38
- TLBFlushPageBitsByMMUIdxData *p;
29
+ * initialized, so we can't rely on the propagation done in realize.
39
30
+ */
40
/* Allocate a separate data block for each destination cpu. */
31
+ if (arm_feature(&cpu->env, ARM_FEATURE_LPAE) ||
41
CPU_FOREACH(dst_cpu) {
32
+ arm_feature(&cpu->env, ARM_FEATURE_V7VE)) {
42
if (dst_cpu != src_cpu) {
33
/* v7 with LPAE */
43
- p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
34
return 40;
44
- *p = d;
45
+ TLBFlushPageBitsByMMUIdxData *p = g_memdup(&d, sizeof(d));
46
async_run_on_cpu(dst_cpu,
47
tlb_flush_page_bits_by_mmuidx_async_2,
48
RUN_ON_CPU_HOST_PTR(p));
49
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
50
/* Allocate a separate data block for each destination cpu. */
51
CPU_FOREACH(dst_cpu) {
52
if (dst_cpu != src_cpu) {
53
- p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
54
- *p = d;
55
+ p = g_memdup(&d, sizeof(d));
56
async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
57
RUN_ON_CPU_HOST_PTR(p));
58
}
59
}
60
61
- p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
62
- *p = d;
63
+ p = g_memdup(&d, sizeof(d));
64
async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
65
RUN_ON_CPU_HOST_PTR(p));
66
}
35
}
67
--
36
--
68
2.20.1
37
2.25.1
69
70
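For readers who have not met g_memdup(): a minimal standalone sketch of the two allocation styles the first commit message above contrasts. The structure name and field values here are invented purely for illustration.

#include <glib.h>

typedef struct {
    guint64 addr;
    guint64 len;
    guint16 idxmap;
    guint16 bits;
} FlushData;

int main(void)
{
    FlushData d = { .addr = 0x1000, .len = 0x2000, .idxmap = 3, .bits = 48 };

    /* Old style: allocate, then copy by assignment. */
    FlushData *p1 = g_new(FlushData, 1);
    *p1 = d;

    /* New style: g_memdup() allocates and copies in one call. */
    FlushData *p2 = g_memdup(&d, sizeof(d));

    g_assert(p1->addr == p2->addr && p1->bits == p2->bits);
    g_free(p1);
    g_free(p2);
    return 0;
}

Note that newer GLib releases deprecate g_memdup() in favour of g_memdup2(), but at the time of this series g_memdup() was the natural choice.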
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Rename the structure to match the rename of tlb_flush_range_locked.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210509151618.2331764-4-f4bug@amsat.org
8
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
9
[PMD: Split from bigger patch]
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
accel/tcg/cputlb.c | 24 ++++++++++++------------
15
1 file changed, 12 insertions(+), 12 deletions(-)
16
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
20
+++ b/accel/tcg/cputlb.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct {
22
target_ulong len;
23
uint16_t idxmap;
24
uint16_t bits;
25
-} TLBFlushPageBitsByMMUIdxData;
26
+} TLBFlushRangeData;
27
28
static void
29
tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
30
- TLBFlushPageBitsByMMUIdxData d)
31
+ TLBFlushRangeData d)
32
{
33
CPUArchState *env = cpu->env_ptr;
34
int mmu_idx;
35
@@ -XXX,XX +XXX,XX @@ tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
36
}
37
38
static bool encode_pbm_to_runon(run_on_cpu_data *out,
39
- TLBFlushPageBitsByMMUIdxData d)
40
+ TLBFlushRangeData d)
41
{
42
/* We need 6 bits to hold to hold @bits up to 63. */
43
if (d.idxmap <= MAKE_64BIT_MASK(0, TARGET_PAGE_BITS - 6)) {
44
@@ -XXX,XX +XXX,XX @@ static bool encode_pbm_to_runon(run_on_cpu_data *out,
45
return false;
46
}
47
48
-static TLBFlushPageBitsByMMUIdxData
49
+static TLBFlushRangeData
50
decode_runon_to_pbm(run_on_cpu_data data)
51
{
52
target_ulong addr_map_bits = (target_ulong) data.target_ptr;
53
- return (TLBFlushPageBitsByMMUIdxData){
54
+ return (TLBFlushRangeData){
55
.addr = addr_map_bits & TARGET_PAGE_MASK,
56
.idxmap = (addr_map_bits & ~TARGET_PAGE_MASK) >> 6,
57
.bits = addr_map_bits & 0x3f
58
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_by_mmuidx_async_1(CPUState *cpu,
59
static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
60
run_on_cpu_data data)
61
{
62
- TLBFlushPageBitsByMMUIdxData *d = data.host_ptr;
63
+ TLBFlushRangeData *d = data.host_ptr;
64
tlb_flush_page_bits_by_mmuidx_async_0(cpu, *d);
65
g_free(d);
66
}
67
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
68
void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
69
uint16_t idxmap, unsigned bits)
70
{
71
- TLBFlushPageBitsByMMUIdxData d;
72
+ TLBFlushRangeData d;
73
run_on_cpu_data runon;
74
75
/* If all bits are significant, this devolves to tlb_flush_page. */
76
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
77
async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
78
} else {
79
/* Otherwise allocate a structure, freed by the worker. */
80
- TLBFlushPageBitsByMMUIdxData *p = g_memdup(&d, sizeof(d));
81
+ TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
82
async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_2,
83
RUN_ON_CPU_HOST_PTR(p));
84
}
85
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
86
uint16_t idxmap,
87
unsigned bits)
88
{
89
- TLBFlushPageBitsByMMUIdxData d;
90
+ TLBFlushRangeData d;
91
run_on_cpu_data runon;
92
93
/* If all bits are significant, this devolves to tlb_flush_page. */
94
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
95
/* Allocate a separate data block for each destination cpu. */
96
CPU_FOREACH(dst_cpu) {
97
if (dst_cpu != src_cpu) {
98
- TLBFlushPageBitsByMMUIdxData *p = g_memdup(&d, sizeof(d));
99
+ TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
100
async_run_on_cpu(dst_cpu,
101
tlb_flush_page_bits_by_mmuidx_async_2,
102
RUN_ON_CPU_HOST_PTR(p));
103
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
104
uint16_t idxmap,
105
unsigned bits)
106
{
107
- TLBFlushPageBitsByMMUIdxData d;
108
+ TLBFlushRangeData d;
109
run_on_cpu_data runon;
110
111
/* If all bits are significant, this devolves to tlb_flush_page. */
112
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
113
runon);
114
} else {
115
CPUState *dst_cpu;
116
- TLBFlushPageBitsByMMUIdxData *p;
117
+ TLBFlushRangeData *p;
118
119
/* Allocate a separate data block for each destination cpu. */
120
CPU_FOREACH(dst_cpu) {
121
--
122
2.20.1
123
124
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Forward tlb_flush_page_bits_by_mmuidx to tlb_flush_range_by_mmuidx
4
passing TARGET_PAGE_SIZE.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210509151618.2331764-5-f4bug@amsat.org
9
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
10
[PMD: Split from bigger patch]
11
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/exec/exec-all.h | 19 +++++++++++++++++++
16
accel/tcg/cputlb.c | 20 +++++++++++++++-----
17
2 files changed, 34 insertions(+), 5 deletions(-)
18
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/exec-all.h
22
+++ b/include/exec/exec-all.h
23
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
24
void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
25
(CPUState *cpu, target_ulong addr, uint16_t idxmap, unsigned bits);
26
27
+/**
28
+ * tlb_flush_range_by_mmuidx
29
+ * @cpu: CPU whose TLB should be flushed
30
+ * @addr: virtual address of the start of the range to be flushed
31
+ * @len: length of range to be flushed
32
+ * @idxmap: bitmap of mmu indexes to flush
33
+ * @bits: number of significant bits in address
34
+ *
35
+ * For each mmuidx in @idxmap, flush all pages within [@addr,@addr+@len),
36
+ * comparing only the low @bits worth of each virtual page.
37
+ */
38
+void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
39
+ target_ulong len, uint16_t idxmap,
40
+ unsigned bits);
41
/**
42
* tlb_set_page_with_attrs:
43
* @cpu: CPU to add this TLB entry for
44
@@ -XXX,XX +XXX,XX @@ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
45
uint16_t idxmap, unsigned bits)
46
{
47
}
48
+static inline void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
49
+ target_ulong len, uint16_t idxmap,
50
+ unsigned bits)
51
+{
52
+}
53
#endif
54
/**
55
* probe_access:
56
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/accel/tcg/cputlb.c
59
+++ b/accel/tcg/cputlb.c
60
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
61
g_free(d);
62
}
63
64
-void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
65
- uint16_t idxmap, unsigned bits)
66
+void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
67
+ target_ulong len, uint16_t idxmap,
68
+ unsigned bits)
69
{
70
TLBFlushRangeData d;
71
72
- /* If all bits are significant, this devolves to tlb_flush_page. */
73
- if (bits >= TARGET_LONG_BITS) {
74
+ /*
75
+ * If all bits are significant, and len is small,
76
+ * this devolves to tlb_flush_page.
77
+ */
78
+ if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
79
tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
80
return;
81
}
82
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
83
84
/* This should already be page aligned */
85
d.addr = addr & TARGET_PAGE_MASK;
86
- d.len = TARGET_PAGE_SIZE;
87
+ d.len = len;
88
d.idxmap = idxmap;
89
d.bits = bits;
90
91
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
92
}
93
}
94
95
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
96
+ uint16_t idxmap, unsigned bits)
97
+{
98
+ tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
99
+}
100
+
101
void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
102
target_ulong addr,
103
uint16_t idxmap,
104
--
105
2.20.1
106
107
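To make the new doc comment above concrete, here is a self-contained sketch of the matching rule it describes: flush all pages within [addr, addr + len), comparing only the low bits of each virtual page address. The page size, function name and addresses are chosen for illustration; this is not the QEMU implementation.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1ull << PAGE_BITS)
#define PAGE_MASK (~(PAGE_SIZE - 1))

/*
 * Illustrative predicate: would a TLB entry for @entry_page be flushed
 * by a range flush of [addr, addr + len), when only the low @bits of
 * each virtual page address are compared?
 */
static bool entry_in_flush_range(uint64_t entry_page, uint64_t addr,
                                 uint64_t len, unsigned bits)
{
    uint64_t mask = (bits >= 64) ? ~(uint64_t)0 : ((uint64_t)1 << bits) - 1;

    for (uint64_t page = addr & PAGE_MASK; page < addr + len;
         page += PAGE_SIZE) {
        if ((entry_page & mask) == (page & mask)) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    /* A three-page flush starting at 0x40000000, comparing 32 bits. */
    uint64_t addr = 0x40000000, len = 3 * PAGE_SIZE;

    printf("%d\n", entry_in_flush_range(0x40001000ull, addr, len, 32));
    /* High bits differ but are ignored, so this entry matches too. */
    printf("%d\n", entry_in_flush_range(0xffff000040001000ull, addr, len, 32));
    /* One page past the end of the range: not flushed. */
    printf("%d\n", entry_in_flush_range(0x40003000ull, addr, len, 32));
    return 0;
}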
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Forward tlb_flush_page_bits_by_mmuidx_all_cpus to
4
tlb_flush_range_by_mmuidx_all_cpus passing TARGET_PAGE_SIZE.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210509151618.2331764-6-f4bug@amsat.org
9
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
10
[PMD: Split from bigger patch]
11
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/exec/exec-all.h | 13 +++++++++++++
16
accel/tcg/cputlb.c | 24 +++++++++++++++++-------
17
2 files changed, 30 insertions(+), 7 deletions(-)
18
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/exec-all.h
22
+++ b/include/exec/exec-all.h
23
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
24
void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
25
target_ulong len, uint16_t idxmap,
26
unsigned bits);
27
+
28
+/* Similarly, with broadcast and syncing. */
29
+void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
30
+ target_ulong len, uint16_t idxmap,
31
+ unsigned bits);
32
+
33
/**
34
* tlb_set_page_with_attrs:
35
* @cpu: CPU to add this TLB entry for
36
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
37
unsigned bits)
38
{
39
}
40
+static inline void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu,
41
+ target_ulong addr,
42
+ target_ulong len,
43
+ uint16_t idxmap,
44
+ unsigned bits)
45
+{
46
+}
47
#endif
48
/**
49
* probe_access:
50
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/accel/tcg/cputlb.c
53
+++ b/accel/tcg/cputlb.c
54
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
55
tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
56
}
57
58
-void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
59
- target_ulong addr,
60
- uint16_t idxmap,
61
- unsigned bits)
62
+void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
63
+ target_ulong addr, target_ulong len,
64
+ uint16_t idxmap, unsigned bits)
65
{
66
TLBFlushRangeData d;
67
CPUState *dst_cpu;
68
69
- /* If all bits are significant, this devolves to tlb_flush_page. */
70
- if (bits >= TARGET_LONG_BITS) {
71
+ /*
72
+ * If all bits are significant, and len is small,
73
+ * this devolves to tlb_flush_page.
74
+ */
75
+ if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
76
tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
77
return;
78
}
79
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
80
81
/* This should already be page aligned */
82
d.addr = addr & TARGET_PAGE_MASK;
83
- d.len = TARGET_PAGE_SIZE;
84
+ d.len = len;
85
d.idxmap = idxmap;
86
d.bits = bits;
87
88
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
89
tlb_flush_page_bits_by_mmuidx_async_0(src_cpu, d);
90
}
91
92
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
93
+ target_ulong addr,
94
+ uint16_t idxmap, unsigned bits)
95
+{
96
+ tlb_flush_range_by_mmuidx_all_cpus(src_cpu, addr, TARGET_PAGE_SIZE,
97
+ idxmap, bits);
98
+}
99
+
100
void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
101
target_ulong addr,
102
uint16_t idxmap,
103
--
104
2.20.1
105
106
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Forward tlb_flush_page_bits_by_mmuidx_all_cpus_synced to
4
tlb_flush_range_by_mmuidx_all_cpus_synced passing TARGET_PAGE_SIZE.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210509151618.2331764-7-f4bug@amsat.org
9
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
10
[PMD: Split from bigger patch]
11
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/exec/exec-all.h | 12 ++++++++++++
16
accel/tcg/cputlb.c | 27 ++++++++++++++++++++-------
17
2 files changed, 32 insertions(+), 7 deletions(-)
18
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/exec-all.h
22
+++ b/include/exec/exec-all.h
23
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
24
void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
25
target_ulong len, uint16_t idxmap,
26
unsigned bits);
27
+void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
28
+ target_ulong addr,
29
+ target_ulong len,
30
+ uint16_t idxmap,
31
+ unsigned bits);
32
33
/**
34
* tlb_set_page_with_attrs:
35
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu,
36
unsigned bits)
37
{
38
}
39
+static inline void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
40
+ target_ulong addr,
41
+ target_long len,
42
+ uint16_t idxmap,
43
+ unsigned bits)
44
+{
45
+}
46
#endif
47
/**
48
* probe_access:
49
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/accel/tcg/cputlb.c
52
+++ b/accel/tcg/cputlb.c
53
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
54
idxmap, bits);
55
}
56
57
-void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
58
- target_ulong addr,
59
- uint16_t idxmap,
60
- unsigned bits)
61
+void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
62
+ target_ulong addr,
63
+ target_ulong len,
64
+ uint16_t idxmap,
65
+ unsigned bits)
66
{
67
TLBFlushRangeData d, *p;
68
CPUState *dst_cpu;
69
70
- /* If all bits are significant, this devolves to tlb_flush_page. */
71
- if (bits >= TARGET_LONG_BITS) {
72
+ /*
73
+ * If all bits are significant, and len is small,
74
+ * this devolves to tlb_flush_page.
75
+ */
76
+ if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
77
tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
78
return;
79
}
80
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
81
82
/* This should already be page aligned */
83
d.addr = addr & TARGET_PAGE_MASK;
84
- d.len = TARGET_PAGE_SIZE;
85
+ d.len = len;
86
d.idxmap = idxmap;
87
d.bits = bits;
88
89
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
90
RUN_ON_CPU_HOST_PTR(p));
91
}
92
93
+void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
94
+ target_ulong addr,
95
+ uint16_t idxmap,
96
+ unsigned bits)
97
+{
98
+ tlb_flush_range_by_mmuidx_all_cpus_synced(src_cpu, addr, TARGET_PAGE_SIZE,
99
+ idxmap, bits);
100
+}
101
+
102
/* update the TLBs so that writes to code in the virtual page 'addr'
103
can be detected */
104
void tlb_protect_code(ram_addr_t ram_addr)
105
--
106
2.20.1
107
108
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Rename to match tlb_flush_range_locked.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210509151618.2331764-8-f4bug@amsat.org
8
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
9
[PMD: Split from bigger patch]
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
accel/tcg/cputlb.c | 11 +++++------
15
1 file changed, 5 insertions(+), 6 deletions(-)
16
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
20
+++ b/accel/tcg/cputlb.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct {
22
uint16_t bits;
23
} TLBFlushRangeData;
24
25
-static void
26
-tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
27
- TLBFlushRangeData d)
28
+static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
29
+ TLBFlushRangeData d)
30
{
31
CPUArchState *env = cpu->env_ptr;
32
int mmu_idx;
33
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
34
run_on_cpu_data data)
35
{
36
TLBFlushRangeData *d = data.host_ptr;
37
- tlb_flush_page_bits_by_mmuidx_async_0(cpu, *d);
38
+ tlb_flush_range_by_mmuidx_async_0(cpu, *d);
39
g_free(d);
40
}
41
42
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
43
d.bits = bits;
44
45
if (qemu_cpu_is_self(cpu)) {
46
- tlb_flush_page_bits_by_mmuidx_async_0(cpu, d);
47
+ tlb_flush_range_by_mmuidx_async_0(cpu, d);
48
} else {
49
/* Otherwise allocate a structure, freed by the worker. */
50
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
51
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
52
}
53
}
54
55
- tlb_flush_page_bits_by_mmuidx_async_0(src_cpu, d);
56
+ tlb_flush_range_by_mmuidx_async_0(src_cpu, d);
57
}
58
59
void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
60
--
61
2.20.1
62
63
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Rename to match tlb_flush_range_locked.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210509151618.2331764-9-f4bug@amsat.org
8
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
9
[PMD: Split from bigger patch]
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
accel/tcg/cputlb.c | 12 ++++++------
15
1 file changed, 6 insertions(+), 6 deletions(-)
16
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
20
+++ b/accel/tcg/cputlb.c
21
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
22
}
23
}
24
25
-static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
26
- run_on_cpu_data data)
27
+static void tlb_flush_range_by_mmuidx_async_1(CPUState *cpu,
28
+ run_on_cpu_data data)
29
{
30
TLBFlushRangeData *d = data.host_ptr;
31
tlb_flush_range_by_mmuidx_async_0(cpu, *d);
32
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
33
} else {
34
/* Otherwise allocate a structure, freed by the worker. */
35
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
36
- async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_2,
37
+ async_run_on_cpu(cpu, tlb_flush_range_by_mmuidx_async_1,
38
RUN_ON_CPU_HOST_PTR(p));
39
}
40
}
41
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
42
if (dst_cpu != src_cpu) {
43
TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
44
async_run_on_cpu(dst_cpu,
45
- tlb_flush_page_bits_by_mmuidx_async_2,
46
+ tlb_flush_range_by_mmuidx_async_1,
47
RUN_ON_CPU_HOST_PTR(p));
48
}
49
}
50
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
51
CPU_FOREACH(dst_cpu) {
52
if (dst_cpu != src_cpu) {
53
p = g_memdup(&d, sizeof(d));
54
- async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
55
+ async_run_on_cpu(dst_cpu, tlb_flush_range_by_mmuidx_async_1,
56
RUN_ON_CPU_HOST_PTR(p));
57
}
58
}
59
60
p = g_memdup(&d, sizeof(d));
61
- async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
62
+ async_safe_run_on_cpu(src_cpu, tlb_flush_range_by_mmuidx_async_1,
63
RUN_ON_CPU_HOST_PTR(p));
64
}
65
66
--
67
2.20.1
68
69
Deleted patch
1
From: Rebecca Cran <rebecca@nuviainc.com>
2
1
3
Indicate support for FEAT_TLBIOS and FEAT_TLBIRANGE by setting
4
ID_AA64ISAR0.TLB to 2 for the max AARCH64 CPU type.
5
6
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210512182337.18563-4-rebecca@nuviainc.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/cpu64.c | 1 +
12
1 file changed, 1 insertion(+)
13
14
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu64.c
17
+++ b/target/arm/cpu64.c
18
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
19
t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
20
t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
21
t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
22
+ t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
23
t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
24
cpu->isar.id_aa64isar0 = t;
25
26
--
27
2.20.1
28
29
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
When selecting an ARM target on Debian unstable, we get:
4
5
Compiling C++ object libcommon.fa.p/disas_libvixl_vixl_utils.cc.o
6
FAILED: libcommon.fa.p/disas_libvixl_vixl_utils.cc.o
7
c++ -Ilibcommon.fa.p -I. -I.. [...] -o libcommon.fa.p/disas_libvixl_vixl_utils.cc.o -c ../disas/libvixl/vixl/utils.cc
8
In file included from /home/philmd/qemu/disas/libvixl/vixl/utils.h:30,
9
from ../disas/libvixl/vixl/utils.cc:27:
10
/usr/include/string.h:36:43: error: missing binary operator before token "("
11
36 | #if defined __cplusplus && (__GNUC_PREREQ (4, 4) \
12
| ^
13
/usr/include/string.h:53:62: error: missing binary operator before token "("
14
53 | #if defined __USE_MISC || defined __USE_XOPEN || __GLIBC_USE (ISOC2X)
15
| ^
16
/usr/include/string.h:165:21: error: missing binary operator before token "("
17
165 | || __GLIBC_USE (LIB_EXT2) || __GLIBC_USE (ISOC2X))
18
| ^
19
/usr/include/string.h:174:43: error: missing binary operator before token "("
20
174 | #if defined __USE_XOPEN2K8 || __GLIBC_USE (LIB_EXT2) || __GLIBC_USE (ISOC2X)
21
| ^
22
/usr/include/string.h:492:19: error: missing binary operator before token "("
23
492 | #if __GNUC_PREREQ (3,4)
24
| ^
25
26
Relevant information from the host:
27
28
$ lsb_release -d
29
Description: Debian GNU/Linux 11 (bullseye)
30
$ gcc --version
31
gcc (Debian 10.2.1-6) 10.2.1 20210110
32
$ dpkg -S /usr/include/string.h
33
libc6-dev: /usr/include/string.h
34
$ apt-cache show libc6-dev
35
Package: libc6-dev
36
Version: 2.31-11
37
38
Partially cherry-pick vixl commit 78973f258039f6e96 [*]:
39
40
Refactor VIXL to use `extern` block when including C header
41
that do not have a C++ counterpart.
42
43
which is similar to commit 875df03b221 ('osdep: protect qemu/osdep.h
44
with extern "C"').
45
46
[*] https://git.linaro.org/arm/vixl.git/commit/?id=78973f258039f6e96
47
48
Buglink: https://bugs.launchpad.net/qemu/+bug/1914870
49
Suggested-by: Thomas Huth <thuth@redhat.com>
50
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
51
Reviewed-by: Thomas Huth <thuth@redhat.com>
52
Message-id: 20210516171023.510778-1-f4bug@amsat.org
53
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
54
---
55
disas/libvixl/vixl/code-buffer.h | 2 +-
56
disas/libvixl/vixl/globals.h | 16 +++++++++-------
57
disas/libvixl/vixl/invalset.h | 2 +-
58
disas/libvixl/vixl/platform.h | 2 ++
59
disas/libvixl/vixl/utils.h | 2 +-
60
disas/libvixl/vixl/utils.cc | 2 +-
61
6 files changed, 15 insertions(+), 11 deletions(-)
62
63
diff --git a/disas/libvixl/vixl/code-buffer.h b/disas/libvixl/vixl/code-buffer.h
64
index XXXXXXX..XXXXXXX 100644
65
--- a/disas/libvixl/vixl/code-buffer.h
66
+++ b/disas/libvixl/vixl/code-buffer.h
67
@@ -XXX,XX +XXX,XX @@
68
#ifndef VIXL_CODE_BUFFER_H
69
#define VIXL_CODE_BUFFER_H
70
71
-#include <string.h>
72
+#include <cstring>
73
#include "vixl/globals.h"
74
75
namespace vixl {
76
diff --git a/disas/libvixl/vixl/globals.h b/disas/libvixl/vixl/globals.h
77
index XXXXXXX..XXXXXXX 100644
78
--- a/disas/libvixl/vixl/globals.h
79
+++ b/disas/libvixl/vixl/globals.h
80
@@ -XXX,XX +XXX,XX @@
81
#define __STDC_FORMAT_MACROS
82
#endif
83
84
-#include <stdint.h>
85
+extern "C" {
86
#include <inttypes.h>
87
-
88
-#include <assert.h>
89
-#include <stdarg.h>
90
-#include <stdio.h>
91
#include <stdint.h>
92
-#include <stdlib.h>
93
-#include <stddef.h>
94
+}
95
+
96
+#include <cassert>
97
+#include <cstdarg>
98
+#include <cstddef>
99
+#include <cstdio>
100
+#include <cstdlib>
101
+
102
#include "vixl/platform.h"
103
104
105
diff --git a/disas/libvixl/vixl/invalset.h b/disas/libvixl/vixl/invalset.h
106
index XXXXXXX..XXXXXXX 100644
107
--- a/disas/libvixl/vixl/invalset.h
108
+++ b/disas/libvixl/vixl/invalset.h
109
@@ -XXX,XX +XXX,XX @@
110
#ifndef VIXL_INVALSET_H_
111
#define VIXL_INVALSET_H_
112
113
-#include <string.h>
114
+#include <cstring>
115
116
#include <algorithm>
117
#include <vector>
118
diff --git a/disas/libvixl/vixl/platform.h b/disas/libvixl/vixl/platform.h
119
index XXXXXXX..XXXXXXX 100644
120
--- a/disas/libvixl/vixl/platform.h
121
+++ b/disas/libvixl/vixl/platform.h
122
@@ -XXX,XX +XXX,XX @@
123
#define PLATFORM_H
124
125
// Define platform specific functionalities.
126
+extern "C" {
127
#include <signal.h>
128
+}
129
130
namespace vixl {
131
inline void HostBreakpoint() { raise(SIGINT); }
132
diff --git a/disas/libvixl/vixl/utils.h b/disas/libvixl/vixl/utils.h
133
index XXXXXXX..XXXXXXX 100644
134
--- a/disas/libvixl/vixl/utils.h
135
+++ b/disas/libvixl/vixl/utils.h
136
@@ -XXX,XX +XXX,XX @@
137
#ifndef VIXL_UTILS_H
138
#define VIXL_UTILS_H
139
140
-#include <string.h>
141
#include <cmath>
142
+#include <cstring>
143
#include "vixl/globals.h"
144
#include "vixl/compiler-intrinsics.h"
145
146
diff --git a/disas/libvixl/vixl/utils.cc b/disas/libvixl/vixl/utils.cc
147
index XXXXXXX..XXXXXXX 100644
148
--- a/disas/libvixl/vixl/utils.cc
149
+++ b/disas/libvixl/vixl/utils.cc
150
@@ -XXX,XX +XXX,XX @@
151
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
152
153
#include "vixl/utils.h"
154
-#include <stdio.h>
155
+#include <cstdio>
156
157
namespace vixl {
158
159
--
160
2.20.1
161
162
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
For MUL, we can rely on generic support. For SMULH and UMULH,
4
create some trivial helpers. For PMUL, back in a21bb78e5817,
5
we organized helper_gvec_pmul_b in preparation for this use.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 10 ++++
13
target/arm/sve.decode | 10 ++++
14
target/arm/translate-sve.c | 50 ++++++++++++++++++++
15
target/arm/vec_helper.c | 96 ++++++++++++++++++++++++++++++++++++++
16
4 files changed, 166 insertions(+)
17
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(gvec_cgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_3(gvec_cge0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_3(gvec_cge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_4(gvec_smulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(gvec_smulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(gvec_smulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(gvec_smulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_4(gvec_umulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_umulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(gvec_umulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(gvec_umulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+
36
DEF_HELPER_FLAGS_4(gvec_sshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
DEF_HELPER_FLAGS_4(gvec_sshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
DEF_HELPER_FLAGS_4(gvec_ushl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve.decode
42
+++ b/target/arm/sve.decode
43
@@ -XXX,XX +XXX,XX @@ ST1_zprz 1110010 .. 00 ..... 100 ... ..... ..... \
44
@rprr_scatter_store xs=0 esz=3 scale=0
45
ST1_zprz 1110010 .. 00 ..... 110 ... ..... ..... \
46
@rprr_scatter_store xs=1 esz=3 scale=0
47
+
48
+#### SVE2 Support
49
+
50
+### SVE2 Integer Multiply - Unpredicated
51
+
52
+# SVE2 integer multiply vectors (unpredicated)
53
+MUL_zzz 00000100 .. 1 ..... 0110 00 ..... ..... @rd_rn_rm
54
+SMULH_zzz 00000100 .. 1 ..... 0110 10 ..... ..... @rd_rn_rm
55
+UMULH_zzz 00000100 .. 1 ..... 0110 11 ..... ..... @rd_rn_rm
56
+PMUL_zzz 00000100 00 1 ..... 0110 01 ..... ..... @rd_rn_rm_e0
57
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate-sve.c
60
+++ b/target/arm/translate-sve.c
61
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVPRFX_z(DisasContext *s, arg_rpr_esz *a)
62
{
63
return do_movz_zpz(s, a->rd, a->rn, a->pg, a->esz, false);
64
}
65
+
66
+/*
67
+ * SVE2 Integer Multiply - Unpredicated
68
+ */
69
+
70
+static bool trans_MUL_zzz(DisasContext *s, arg_rrr_esz *a)
71
+{
72
+ if (!dc_isar_feature(aa64_sve2, s)) {
73
+ return false;
74
+ }
75
+ if (sve_access_check(s)) {
76
+ gen_gvec_fn_zzz(s, tcg_gen_gvec_mul, a->esz, a->rd, a->rn, a->rm);
77
+ }
78
+ return true;
79
+}
80
+
81
+static bool do_sve2_zzz_ool(DisasContext *s, arg_rrr_esz *a,
82
+ gen_helper_gvec_3 *fn)
83
+{
84
+ if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
85
+ return false;
86
+ }
87
+ if (sve_access_check(s)) {
88
+ gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, 0);
89
+ }
90
+ return true;
91
+}
92
+
93
+static bool trans_SMULH_zzz(DisasContext *s, arg_rrr_esz *a)
94
+{
95
+ static gen_helper_gvec_3 * const fns[4] = {
96
+ gen_helper_gvec_smulh_b, gen_helper_gvec_smulh_h,
97
+ gen_helper_gvec_smulh_s, gen_helper_gvec_smulh_d,
98
+ };
99
+ return do_sve2_zzz_ool(s, a, fns[a->esz]);
100
+}
101
+
102
+static bool trans_UMULH_zzz(DisasContext *s, arg_rrr_esz *a)
103
+{
104
+ static gen_helper_gvec_3 * const fns[4] = {
105
+ gen_helper_gvec_umulh_b, gen_helper_gvec_umulh_h,
106
+ gen_helper_gvec_umulh_s, gen_helper_gvec_umulh_d,
107
+ };
108
+ return do_sve2_zzz_ool(s, a, fns[a->esz]);
109
+}
110
+
111
+static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
112
+{
113
+ return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
114
+}
115
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/vec_helper.c
118
+++ b/target/arm/vec_helper.c
119
@@ -XXX,XX +XXX,XX @@ void HELPER(simd_tblx)(void *vd, void *vm, void *venv, uint32_t desc)
120
clear_tail(vd, oprsz, simd_maxsz(desc));
121
}
122
#endif
123
+
124
+/*
125
+ * NxN -> N highpart multiply
126
+ *
127
+ * TODO: expose this as a generic vector operation.
128
+ */
129
+
130
+void HELPER(gvec_smulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
131
+{
132
+ intptr_t i, opr_sz = simd_oprsz(desc);
133
+ int8_t *d = vd, *n = vn, *m = vm;
134
+
135
+ for (i = 0; i < opr_sz; ++i) {
136
+ d[i] = ((int32_t)n[i] * m[i]) >> 8;
137
+ }
138
+ clear_tail(d, opr_sz, simd_maxsz(desc));
139
+}
140
+
141
+void HELPER(gvec_smulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
142
+{
143
+ intptr_t i, opr_sz = simd_oprsz(desc);
144
+ int16_t *d = vd, *n = vn, *m = vm;
145
+
146
+ for (i = 0; i < opr_sz / 2; ++i) {
147
+ d[i] = ((int32_t)n[i] * m[i]) >> 16;
148
+ }
149
+ clear_tail(d, opr_sz, simd_maxsz(desc));
150
+}
151
+
152
+void HELPER(gvec_smulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
153
+{
154
+ intptr_t i, opr_sz = simd_oprsz(desc);
155
+ int32_t *d = vd, *n = vn, *m = vm;
156
+
157
+ for (i = 0; i < opr_sz / 4; ++i) {
158
+ d[i] = ((int64_t)n[i] * m[i]) >> 32;
159
+ }
160
+ clear_tail(d, opr_sz, simd_maxsz(desc));
161
+}
162
+
163
+void HELPER(gvec_smulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
164
+{
165
+ intptr_t i, opr_sz = simd_oprsz(desc);
166
+ uint64_t *d = vd, *n = vn, *m = vm;
167
+ uint64_t discard;
168
+
169
+ for (i = 0; i < opr_sz / 8; ++i) {
170
+ muls64(&discard, &d[i], n[i], m[i]);
171
+ }
172
+ clear_tail(d, opr_sz, simd_maxsz(desc));
173
+}
174
+
175
+void HELPER(gvec_umulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
176
+{
177
+ intptr_t i, opr_sz = simd_oprsz(desc);
178
+ uint8_t *d = vd, *n = vn, *m = vm;
179
+
180
+ for (i = 0; i < opr_sz; ++i) {
181
+ d[i] = ((uint32_t)n[i] * m[i]) >> 8;
182
+ }
183
+ clear_tail(d, opr_sz, simd_maxsz(desc));
184
+}
185
+
186
+void HELPER(gvec_umulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
187
+{
188
+ intptr_t i, opr_sz = simd_oprsz(desc);
189
+ uint16_t *d = vd, *n = vn, *m = vm;
190
+
191
+ for (i = 0; i < opr_sz / 2; ++i) {
192
+ d[i] = ((uint32_t)n[i] * m[i]) >> 16;
193
+ }
194
+ clear_tail(d, opr_sz, simd_maxsz(desc));
195
+}
196
+
197
+void HELPER(gvec_umulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
198
+{
199
+ intptr_t i, opr_sz = simd_oprsz(desc);
200
+ uint32_t *d = vd, *n = vn, *m = vm;
201
+
202
+ for (i = 0; i < opr_sz / 4; ++i) {
203
+ d[i] = ((uint64_t)n[i] * m[i]) >> 32;
204
+ }
205
+ clear_tail(d, opr_sz, simd_maxsz(desc));
206
+}
207
+
208
+void HELPER(gvec_umulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
209
+{
210
+ intptr_t i, opr_sz = simd_oprsz(desc);
211
+ uint64_t *d = vd, *n = vn, *m = vm;
212
+ uint64_t discard;
213
+
214
+ for (i = 0; i < opr_sz / 8; ++i) {
215
+ mulu64(&discard, &d[i], n[i], m[i]);
216
+ }
217
+ clear_tail(d, opr_sz, simd_maxsz(desc));
218
+}
219
--
220
2.20.1
221
222
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 14 ++++++++++++
9
target/arm/sve.decode | 5 +++++
10
target/arm/sve_helper.c | 44 ++++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 39 +++++++++++++++++++++++++++++++++
12
4 files changed, 102 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_umulh_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve_umulh_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_h, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_s, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_d, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_h, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_s, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_d, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+
36
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
37
void, ptr, ptr, ptr, ptr, i32)
38
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve.decode
42
+++ b/target/arm/sve.decode
43
@@ -XXX,XX +XXX,XX @@ MUL_zzz 00000100 .. 1 ..... 0110 00 ..... ..... @rd_rn_rm
44
SMULH_zzz 00000100 .. 1 ..... 0110 10 ..... ..... @rd_rn_rm
45
UMULH_zzz 00000100 .. 1 ..... 0110 11 ..... ..... @rd_rn_rm
46
PMUL_zzz 00000100 00 1 ..... 0110 01 ..... ..... @rd_rn_rm_e0
47
+
48
+### SVE2 Integer - Predicated
49
+
50
+SADALP_zpzz 01000100 .. 000 100 101 ... ..... ..... @rdm_pg_rn
51
+UADALP_zpzz 01000100 .. 000 101 101 ... ..... ..... @rdm_pg_rn
52
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/sve_helper.c
55
+++ b/target/arm/sve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_asr_zpzz_d, int64_t, DO_ASR)
57
DO_ZPZZ_D(sve_lsr_zpzz_d, uint64_t, DO_LSR)
58
DO_ZPZZ_D(sve_lsl_zpzz_d, uint64_t, DO_LSL)
59
60
+static inline uint16_t do_sadalp_h(int16_t n, int16_t m)
61
+{
62
+ int8_t n1 = n, n2 = n >> 8;
63
+ return m + n1 + n2;
64
+}
65
+
66
+static inline uint32_t do_sadalp_s(int32_t n, int32_t m)
67
+{
68
+ int16_t n1 = n, n2 = n >> 16;
69
+ return m + n1 + n2;
70
+}
71
+
72
+static inline uint64_t do_sadalp_d(int64_t n, int64_t m)
73
+{
74
+ int32_t n1 = n, n2 = n >> 32;
75
+ return m + n1 + n2;
76
+}
77
+
78
+DO_ZPZZ(sve2_sadalp_zpzz_h, int16_t, H1_2, do_sadalp_h)
79
+DO_ZPZZ(sve2_sadalp_zpzz_s, int32_t, H1_4, do_sadalp_s)
80
+DO_ZPZZ_D(sve2_sadalp_zpzz_d, int64_t, do_sadalp_d)
81
+
82
+static inline uint16_t do_uadalp_h(uint16_t n, uint16_t m)
83
+{
84
+ uint8_t n1 = n, n2 = n >> 8;
85
+ return m + n1 + n2;
86
+}
87
+
88
+static inline uint32_t do_uadalp_s(uint32_t n, uint32_t m)
89
+{
90
+ uint16_t n1 = n, n2 = n >> 16;
91
+ return m + n1 + n2;
92
+}
93
+
94
+static inline uint64_t do_uadalp_d(uint64_t n, uint64_t m)
95
+{
96
+ uint32_t n1 = n, n2 = n >> 32;
97
+ return m + n1 + n2;
98
+}
99
+
100
+DO_ZPZZ(sve2_uadalp_zpzz_h, uint16_t, H1_2, do_uadalp_h)
101
+DO_ZPZZ(sve2_uadalp_zpzz_s, uint32_t, H1_4, do_uadalp_s)
102
+DO_ZPZZ_D(sve2_uadalp_zpzz_d, uint64_t, do_uadalp_d)
103
+
104
#undef DO_ZPZZ
105
#undef DO_ZPZZ_D
106
107
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate-sve.c
110
+++ b/target/arm/translate-sve.c
111
@@ -XXX,XX +XXX,XX @@ static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
112
{
113
return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
114
}
115
+
116
+/*
117
+ * SVE2 Integer - Predicated
118
+ */
119
+
120
+static bool do_sve2_zpzz_ool(DisasContext *s, arg_rprr_esz *a,
121
+ gen_helper_gvec_4 *fn)
122
+{
123
+ if (!dc_isar_feature(aa64_sve2, s)) {
124
+ return false;
125
+ }
126
+ return do_zpzz_ool(s, a, fn);
127
+}
128
+
129
+static bool trans_SADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
130
+{
131
+ static gen_helper_gvec_4 * const fns[3] = {
132
+ gen_helper_sve2_sadalp_zpzz_h,
133
+ gen_helper_sve2_sadalp_zpzz_s,
134
+ gen_helper_sve2_sadalp_zpzz_d,
135
+ };
136
+ if (a->esz == 0) {
137
+ return false;
138
+ }
139
+ return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
140
+}
141
+
142
+static bool trans_UADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
143
+{
144
+ static gen_helper_gvec_4 * const fns[3] = {
145
+ gen_helper_sve2_uadalp_zpzz_h,
146
+ gen_helper_sve2_uadalp_zpzz_s,
147
+ gen_helper_sve2_uadalp_zpzz_d,
148
+ };
149
+ if (a->esz == 0) {
150
+ return false;
151
+ }
152
+ return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
153
+}
154
--
155
2.20.1
156
157
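The *ADALP helpers above fold the two adjacent narrow elements of the source into the overlapping wide accumulator element. A minimal standalone sketch of the signed 32-bit case; the function name and the worked values are illustrative, not part of the patch:

    #include <stdint.h>
    #include <stdio.h>

    /* Split a 32-bit input into its two signed 16-bit halves, widen them,
     * and add both into the 32-bit accumulator, as the do_sadalp_s step does. */
    static uint32_t sadalp_s(int32_t n, int32_t m)
    {
        int16_t lo = n, hi = n >> 16;
        return m + lo + hi;
    }

    int main(void)
    {
        int32_t n = (7 << 16) | 0xfffd;            /* halves: 7 (top), -3 (bottom) */
        printf("%d\n", (int32_t)sadalp_s(n, 100)); /* 100 + (-3) + 7 = 104 */
        return 0;
    }
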
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-5-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 13 +++++++++++
9
target/arm/sve.decode | 7 ++++++
10
target/arm/sve_helper.c | 21 +++++++++++++++++
11
target/arm/translate-sve.c | 47 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 88 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve2_sqabs_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve2_sqabs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_sqabs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_sqabs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_4(sve2_sqneg_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_sqneg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve2_sqneg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_sqneg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve2_urecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve2_ursqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
35
DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
36
37
DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
38
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/sve.decode
41
+++ b/target/arm/sve.decode
42
@@ -XXX,XX +XXX,XX @@ PMUL_zzz 00000100 00 1 ..... 0110 01 ..... ..... @rd_rn_rm_e0
43
44
SADALP_zpzz 01000100 .. 000 100 101 ... ..... ..... @rdm_pg_rn
45
UADALP_zpzz 01000100 .. 000 101 101 ... ..... ..... @rdm_pg_rn
46
+
47
+### SVE2 integer unary operations (predicated)
48
+
49
+URECPE 01000100 .. 000 000 101 ... ..... ..... @rd_pg_rn
50
+URSQRTE 01000100 .. 000 001 101 ... ..... ..... @rd_pg_rn
51
+SQABS 01000100 .. 001 000 101 ... ..... ..... @rd_pg_rn
52
+SQNEG 01000100 .. 001 001 101 ... ..... ..... @rd_pg_rn
53
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/sve_helper.c
56
+++ b/target/arm/sve_helper.c
57
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
58
DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
59
DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
60
61
+#define DO_SQABS(X) \
62
+ ({ __typeof(X) x_ = (X), min_ = 1ull << (sizeof(X) * 8 - 1); \
63
+ x_ >= 0 ? x_ : x_ == min_ ? -min_ - 1 : -x_; })
64
+
65
+DO_ZPZ(sve2_sqabs_b, int8_t, H1, DO_SQABS)
66
+DO_ZPZ(sve2_sqabs_h, int16_t, H1_2, DO_SQABS)
67
+DO_ZPZ(sve2_sqabs_s, int32_t, H1_4, DO_SQABS)
68
+DO_ZPZ_D(sve2_sqabs_d, int64_t, DO_SQABS)
69
+
70
+#define DO_SQNEG(X) \
71
+ ({ __typeof(X) x_ = (X), min_ = 1ull << (sizeof(X) * 8 - 1); \
72
+ x_ == min_ ? -min_ - 1 : -x_; })
73
+
74
+DO_ZPZ(sve2_sqneg_b, uint8_t, H1, DO_SQNEG)
75
+DO_ZPZ(sve2_sqneg_h, uint16_t, H1_2, DO_SQNEG)
76
+DO_ZPZ(sve2_sqneg_s, uint32_t, H1_4, DO_SQNEG)
77
+DO_ZPZ_D(sve2_sqneg_d, uint64_t, DO_SQNEG)
78
+
79
+DO_ZPZ(sve2_urecpe_s, uint32_t, H1_4, helper_recpe_u32)
80
+DO_ZPZ(sve2_ursqrte_s, uint32_t, H1_4, helper_rsqrte_u32)
81
+
82
/* Three-operand expander, unpredicated, in which the third operand is "wide".
83
*/
84
#define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
85
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/arm/translate-sve.c
88
+++ b/target/arm/translate-sve.c
89
@@ -XXX,XX +XXX,XX @@ static bool trans_UADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
90
}
91
return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
92
}
93
+
94
+/*
95
+ * SVE2 integer unary operations (predicated)
96
+ */
97
+
98
+static bool do_sve2_zpz_ool(DisasContext *s, arg_rpr_esz *a,
99
+ gen_helper_gvec_3 *fn)
100
+{
101
+ if (!dc_isar_feature(aa64_sve2, s)) {
102
+ return false;
103
+ }
104
+ return do_zpz_ool(s, a, fn);
105
+}
106
+
107
+static bool trans_URECPE(DisasContext *s, arg_rpr_esz *a)
108
+{
109
+ if (a->esz != 2) {
110
+ return false;
111
+ }
112
+ return do_sve2_zpz_ool(s, a, gen_helper_sve2_urecpe_s);
113
+}
114
+
115
+static bool trans_URSQRTE(DisasContext *s, arg_rpr_esz *a)
116
+{
117
+ if (a->esz != 2) {
118
+ return false;
119
+ }
120
+ return do_sve2_zpz_ool(s, a, gen_helper_sve2_ursqrte_s);
121
+}
122
+
123
+static bool trans_SQABS(DisasContext *s, arg_rpr_esz *a)
124
+{
125
+ static gen_helper_gvec_3 * const fns[4] = {
126
+ gen_helper_sve2_sqabs_b, gen_helper_sve2_sqabs_h,
127
+ gen_helper_sve2_sqabs_s, gen_helper_sve2_sqabs_d,
128
+ };
129
+ return do_sve2_zpz_ool(s, a, fns[a->esz]);
130
+}
131
+
132
+static bool trans_SQNEG(DisasContext *s, arg_rpr_esz *a)
133
+{
134
+ static gen_helper_gvec_3 * const fns[4] = {
135
+ gen_helper_sve2_sqneg_b, gen_helper_sve2_sqneg_h,
136
+ gen_helper_sve2_sqneg_s, gen_helper_sve2_sqneg_d,
137
+ };
138
+ return do_sve2_zpz_ool(s, a, fns[a->esz]);
139
+}
140
--
141
2.20.1
142
143
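DO_SQABS and DO_SQNEG above have only one operand value that can overflow: the most negative integer, which saturates to the maximum. A sketch of the same idea as a plain function for the byte case (illustrative only; the patch expresses it as a statement-expression macro so one definition covers every element type):

    #include <stdint.h>
    #include <assert.h>

    /* Saturating absolute value: INT8_MIN has no positive counterpart,
     * so it clamps to INT8_MAX instead of wrapping back to itself. */
    static int8_t sqabs8(int8_t x)
    {
        return x >= 0 ? x : (x == INT8_MIN ? INT8_MAX : -x);
    }

    int main(void)
    {
        assert(sqabs8(-5) == 5);
        assert(sqabs8(INT8_MIN) == INT8_MAX);    /* -128 -> 127, not -128 */
        return 0;
    }
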
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525010358.152808-7-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 54 +++++++++++++++++++++++
9
target/arm/sve.decode | 17 ++++++++
10
target/arm/sve_helper.c | 87 ++++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 18 ++++++++
12
4 files changed, 176 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_b, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_b, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_h, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_b, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_h, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_s, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_d, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_b, TCG_CALL_NO_RWG,
50
+ void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_h, TCG_CALL_NO_RWG,
52
+ void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_s, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_d, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, i32)
57
+
58
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_b, TCG_CALL_NO_RWG,
59
+ void, ptr, ptr, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_h, TCG_CALL_NO_RWG,
61
+ void, ptr, ptr, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_s, TCG_CALL_NO_RWG,
63
+ void, ptr, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_d, TCG_CALL_NO_RWG,
65
+ void, ptr, ptr, ptr, ptr, i32)
66
+
67
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_b, TCG_CALL_NO_RWG,
68
+ void, ptr, ptr, ptr, ptr, i32)
69
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_h, TCG_CALL_NO_RWG,
70
+ void, ptr, ptr, ptr, ptr, i32)
71
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_s, TCG_CALL_NO_RWG,
72
+ void, ptr, ptr, ptr, ptr, i32)
73
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_d, TCG_CALL_NO_RWG,
74
+ void, ptr, ptr, ptr, ptr, i32)
75
+
76
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
77
void, ptr, ptr, ptr, ptr, i32)
78
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
79
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/sve.decode
82
+++ b/target/arm/sve.decode
83
@@ -XXX,XX +XXX,XX @@ URECPE 01000100 .. 000 000 101 ... ..... ..... @rd_pg_rn
84
URSQRTE 01000100 .. 000 001 101 ... ..... ..... @rd_pg_rn
85
SQABS 01000100 .. 001 000 101 ... ..... ..... @rd_pg_rn
86
SQNEG 01000100 .. 001 001 101 ... ..... ..... @rd_pg_rn
87
+
88
+### SVE2 saturating/rounding bitwise shift left (predicated)
89
+
90
+SRSHL 01000100 .. 000 010 100 ... ..... ..... @rdn_pg_rm
91
+URSHL 01000100 .. 000 011 100 ... ..... ..... @rdn_pg_rm
92
+SRSHL 01000100 .. 000 110 100 ... ..... ..... @rdm_pg_rn # SRSHLR
93
+URSHL 01000100 .. 000 111 100 ... ..... ..... @rdm_pg_rn # URSHLR
94
+
95
+SQSHL 01000100 .. 001 000 100 ... ..... ..... @rdn_pg_rm
96
+UQSHL 01000100 .. 001 001 100 ... ..... ..... @rdn_pg_rm
97
+SQSHL 01000100 .. 001 100 100 ... ..... ..... @rdm_pg_rn # SQSHLR
98
+UQSHL 01000100 .. 001 101 100 ... ..... ..... @rdm_pg_rn # UQSHLR
99
+
100
+SQRSHL 01000100 .. 001 010 100 ... ..... ..... @rdn_pg_rm
101
+UQRSHL 01000100 .. 001 011 100 ... ..... ..... @rdn_pg_rm
102
+SQRSHL 01000100 .. 001 110 100 ... ..... ..... @rdm_pg_rn # SQRSHLR
103
+UQRSHL 01000100 .. 001 111 100 ... ..... ..... @rdm_pg_rn # UQRSHLR
104
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/arm/sve_helper.c
107
+++ b/target/arm/sve_helper.c
108
@@ -XXX,XX +XXX,XX @@
109
#include "tcg/tcg-gvec-desc.h"
110
#include "fpu/softfloat.h"
111
#include "tcg/tcg.h"
112
+#include "vec_internal.h"
113
114
115
/* Note that vector data is stored in host-endian 64-bit chunks,
116
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ(sve2_uadalp_zpzz_h, uint16_t, H1_2, do_uadalp_h)
117
DO_ZPZZ(sve2_uadalp_zpzz_s, uint32_t, H1_4, do_uadalp_s)
118
DO_ZPZZ_D(sve2_uadalp_zpzz_d, uint64_t, do_uadalp_d)
119
120
+#define do_srshl_b(n, m) do_sqrshl_bhs(n, m, 8, true, NULL)
121
+#define do_srshl_h(n, m) do_sqrshl_bhs(n, m, 16, true, NULL)
122
+#define do_srshl_s(n, m) do_sqrshl_bhs(n, m, 32, true, NULL)
123
+#define do_srshl_d(n, m) do_sqrshl_d(n, m, true, NULL)
124
+
125
+DO_ZPZZ(sve2_srshl_zpzz_b, int8_t, H1, do_srshl_b)
126
+DO_ZPZZ(sve2_srshl_zpzz_h, int16_t, H1_2, do_srshl_h)
127
+DO_ZPZZ(sve2_srshl_zpzz_s, int32_t, H1_4, do_srshl_s)
128
+DO_ZPZZ_D(sve2_srshl_zpzz_d, int64_t, do_srshl_d)
129
+
130
+#define do_urshl_b(n, m) do_uqrshl_bhs(n, (int8_t)m, 8, true, NULL)
131
+#define do_urshl_h(n, m) do_uqrshl_bhs(n, (int16_t)m, 16, true, NULL)
132
+#define do_urshl_s(n, m) do_uqrshl_bhs(n, m, 32, true, NULL)
133
+#define do_urshl_d(n, m) do_uqrshl_d(n, m, true, NULL)
134
+
135
+DO_ZPZZ(sve2_urshl_zpzz_b, uint8_t, H1, do_urshl_b)
136
+DO_ZPZZ(sve2_urshl_zpzz_h, uint16_t, H1_2, do_urshl_h)
137
+DO_ZPZZ(sve2_urshl_zpzz_s, uint32_t, H1_4, do_urshl_s)
138
+DO_ZPZZ_D(sve2_urshl_zpzz_d, uint64_t, do_urshl_d)
139
+
140
+/*
141
+ * Unlike the NEON and AdvSIMD versions, there is no QC bit to set.
142
+ * We pass in a pointer to a dummy saturation field to trigger
143
+ * the saturating arithmetic but discard the information about
144
+ * whether it has occurred.
145
+ */
146
+#define do_sqshl_b(n, m) \
147
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 8, false, &discard); })
148
+#define do_sqshl_h(n, m) \
149
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 16, false, &discard); })
150
+#define do_sqshl_s(n, m) \
151
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 32, false, &discard); })
152
+#define do_sqshl_d(n, m) \
153
+ ({ uint32_t discard; do_sqrshl_d(n, m, false, &discard); })
154
+
155
+DO_ZPZZ(sve2_sqshl_zpzz_b, int8_t, H1_2, do_sqshl_b)
156
+DO_ZPZZ(sve2_sqshl_zpzz_h, int16_t, H1_2, do_sqshl_h)
157
+DO_ZPZZ(sve2_sqshl_zpzz_s, int32_t, H1_4, do_sqshl_s)
158
+DO_ZPZZ_D(sve2_sqshl_zpzz_d, int64_t, do_sqshl_d)
159
+
160
+#define do_uqshl_b(n, m) \
161
+ ({ uint32_t discard; do_uqrshl_bhs(n, (int8_t)m, 8, false, &discard); })
162
+#define do_uqshl_h(n, m) \
163
+ ({ uint32_t discard; do_uqrshl_bhs(n, (int16_t)m, 16, false, &discard); })
164
+#define do_uqshl_s(n, m) \
165
+ ({ uint32_t discard; do_uqrshl_bhs(n, m, 32, false, &discard); })
166
+#define do_uqshl_d(n, m) \
167
+ ({ uint32_t discard; do_uqrshl_d(n, m, false, &discard); })
168
+
169
+DO_ZPZZ(sve2_uqshl_zpzz_b, uint8_t, H1_2, do_uqshl_b)
170
+DO_ZPZZ(sve2_uqshl_zpzz_h, uint16_t, H1_2, do_uqshl_h)
171
+DO_ZPZZ(sve2_uqshl_zpzz_s, uint32_t, H1_4, do_uqshl_s)
172
+DO_ZPZZ_D(sve2_uqshl_zpzz_d, uint64_t, do_uqshl_d)
173
+
174
+#define do_sqrshl_b(n, m) \
175
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 8, true, &discard); })
176
+#define do_sqrshl_h(n, m) \
177
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 16, true, &discard); })
178
+#define do_sqrshl_s(n, m) \
179
+ ({ uint32_t discard; do_sqrshl_bhs(n, m, 32, true, &discard); })
180
+#define do_sqrshl_d(n, m) \
181
+ ({ uint32_t discard; do_sqrshl_d(n, m, true, &discard); })
182
+
183
+DO_ZPZZ(sve2_sqrshl_zpzz_b, int8_t, H1_2, do_sqrshl_b)
184
+DO_ZPZZ(sve2_sqrshl_zpzz_h, int16_t, H1_2, do_sqrshl_h)
185
+DO_ZPZZ(sve2_sqrshl_zpzz_s, int32_t, H1_4, do_sqrshl_s)
186
+DO_ZPZZ_D(sve2_sqrshl_zpzz_d, int64_t, do_sqrshl_d)
187
+
188
+#undef do_sqrshl_d
189
+
190
+#define do_uqrshl_b(n, m) \
191
+ ({ uint32_t discard; do_uqrshl_bhs(n, (int8_t)m, 8, true, &discard); })
192
+#define do_uqrshl_h(n, m) \
193
+ ({ uint32_t discard; do_uqrshl_bhs(n, (int16_t)m, 16, true, &discard); })
194
+#define do_uqrshl_s(n, m) \
195
+ ({ uint32_t discard; do_uqrshl_bhs(n, m, 32, true, &discard); })
196
+#define do_uqrshl_d(n, m) \
197
+ ({ uint32_t discard; do_uqrshl_d(n, m, true, &discard); })
198
+
199
+DO_ZPZZ(sve2_uqrshl_zpzz_b, uint8_t, H1_2, do_uqrshl_b)
200
+DO_ZPZZ(sve2_uqrshl_zpzz_h, uint16_t, H1_2, do_uqrshl_h)
201
+DO_ZPZZ(sve2_uqrshl_zpzz_s, uint32_t, H1_4, do_uqrshl_s)
202
+DO_ZPZZ_D(sve2_uqrshl_zpzz_d, uint64_t, do_uqrshl_d)
203
+
204
+#undef do_uqrshl_d
205
+
206
#undef DO_ZPZZ
207
#undef DO_ZPZZ_D
208
209
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
210
index XXXXXXX..XXXXXXX 100644
211
--- a/target/arm/translate-sve.c
212
+++ b/target/arm/translate-sve.c
213
@@ -XXX,XX +XXX,XX @@ static bool trans_SQNEG(DisasContext *s, arg_rpr_esz *a)
214
};
215
return do_sve2_zpz_ool(s, a, fns[a->esz]);
216
}
217
+
218
+#define DO_SVE2_ZPZZ(NAME, name) \
219
+static bool trans_##NAME(DisasContext *s, arg_rprr_esz *a) \
220
+{ \
221
+ static gen_helper_gvec_4 * const fns[4] = { \
222
+ gen_helper_sve2_##name##_zpzz_b, gen_helper_sve2_##name##_zpzz_h, \
223
+ gen_helper_sve2_##name##_zpzz_s, gen_helper_sve2_##name##_zpzz_d, \
224
+ }; \
225
+ return do_sve2_zpzz_ool(s, a, fns[a->esz]); \
226
+}
227
+
228
+DO_SVE2_ZPZZ(SQSHL, sqshl)
229
+DO_SVE2_ZPZZ(SQRSHL, sqrshl)
230
+DO_SVE2_ZPZZ(SRSHL, srshl)
231
+
232
+DO_SVE2_ZPZZ(UQSHL, uqshl)
233
+DO_SVE2_ZPZZ(UQRSHL, uqrshl)
234
+DO_SVE2_ZPZZ(URSHL, urshl)
235
--
236
2.20.1
237
238
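All of the predicated shift helpers above funnel into the shared do_sqrshl_bhs()/do_uqrshl_bhs() routines that the patch pulls in via vec_internal.h, with the rounding flag and a (possibly discarded) saturation pointer selecting the exact behaviour. The sketch below is a much simplified stand-in for the byte case, only to show how one routine can serve SQSHL, SRSHL and SQRSHL; it is not the QEMU implementation:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified signed shift-by-register for 8-bit elements: negative shift
     * counts shift right (optionally rounding), positive counts shift left
     * and saturate if the result no longer fits in 8 bits. */
    static int32_t sqrshl8(int32_t n, int32_t shift, bool round, uint32_t *sat)
    {
        if (shift < 0) {
            shift = shift < -8 ? -8 : shift;
            if (round) {
                n += 1 << (-shift - 1);
            }
            return n >> -shift;
        }
        if (shift > 7) {
            shift = 7;                    /* enough to saturate any non-zero n */
        }
        int32_t v = n * (1 << shift);
        if (v != (int8_t)v) {
            *sat = 1;                     /* record saturation; callers may ignore it */
            return n < 0 ? INT8_MIN : INT8_MAX;
        }
        return v;
    }

    int main(void)
    {
        uint32_t discard;
        printf("%d\n", sqrshl8(100, 2, false, &discard));  /* SQSHL: saturates to 127 */
        printf("%d\n", sqrshl8(7, -1, true, &discard));    /* SRSHL: rounds to 4 */
        return 0;
    }
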
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-8-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 54 ++++++++++++++++++++++++++++++++++++++
9
target/arm/sve.decode | 11 ++++++++
10
target/arm/sve_helper.c | 39 +++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 8 ++++++
12
4 files changed, 112 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_b, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_b, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_h, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_b, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_h, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_s, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_d, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_b, TCG_CALL_NO_RWG,
50
+ void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_h, TCG_CALL_NO_RWG,
52
+ void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_s, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_d, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, i32)
57
+
58
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_b, TCG_CALL_NO_RWG,
59
+ void, ptr, ptr, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_h, TCG_CALL_NO_RWG,
61
+ void, ptr, ptr, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_s, TCG_CALL_NO_RWG,
63
+ void, ptr, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_d, TCG_CALL_NO_RWG,
65
+ void, ptr, ptr, ptr, ptr, i32)
66
+
67
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_b, TCG_CALL_NO_RWG,
68
+ void, ptr, ptr, ptr, ptr, i32)
69
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_h, TCG_CALL_NO_RWG,
70
+ void, ptr, ptr, ptr, ptr, i32)
71
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_s, TCG_CALL_NO_RWG,
72
+ void, ptr, ptr, ptr, ptr, i32)
73
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_d, TCG_CALL_NO_RWG,
74
+ void, ptr, ptr, ptr, ptr, i32)
75
+
76
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
77
void, ptr, ptr, ptr, ptr, i32)
78
DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
79
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/sve.decode
82
+++ b/target/arm/sve.decode
83
@@ -XXX,XX +XXX,XX @@ SQRSHL 01000100 .. 001 010 100 ... ..... ..... @rdn_pg_rm
84
UQRSHL 01000100 .. 001 011 100 ... ..... ..... @rdn_pg_rm
85
SQRSHL 01000100 .. 001 110 100 ... ..... ..... @rdm_pg_rn # SQRSHLR
86
UQRSHL 01000100 .. 001 111 100 ... ..... ..... @rdm_pg_rn # UQRSHLR
87
+
88
+### SVE2 integer halving add/subtract (predicated)
89
+
90
+SHADD 01000100 .. 010 000 100 ... ..... ..... @rdn_pg_rm
91
+UHADD 01000100 .. 010 001 100 ... ..... ..... @rdn_pg_rm
92
+SHSUB 01000100 .. 010 010 100 ... ..... ..... @rdn_pg_rm
93
+UHSUB 01000100 .. 010 011 100 ... ..... ..... @rdn_pg_rm
94
+SRHADD 01000100 .. 010 100 100 ... ..... ..... @rdn_pg_rm
95
+URHADD 01000100 .. 010 101 100 ... ..... ..... @rdn_pg_rm
96
+SHSUB 01000100 .. 010 110 100 ... ..... ..... @rdm_pg_rn # SHSUBR
97
+UHSUB 01000100 .. 010 111 100 ... ..... ..... @rdm_pg_rn # UHSUBR
98
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/sve_helper.c
101
+++ b/target/arm/sve_helper.c
102
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve2_uqrshl_zpzz_d, uint64_t, do_uqrshl_d)
103
104
#undef do_uqrshl_d
105
106
+#define DO_HADD_BHS(n, m) (((int64_t)n + m) >> 1)
107
+#define DO_HADD_D(n, m) ((n >> 1) + (m >> 1) + (n & m & 1))
108
+
109
+DO_ZPZZ(sve2_shadd_zpzz_b, int8_t, H1, DO_HADD_BHS)
110
+DO_ZPZZ(sve2_shadd_zpzz_h, int16_t, H1_2, DO_HADD_BHS)
111
+DO_ZPZZ(sve2_shadd_zpzz_s, int32_t, H1_4, DO_HADD_BHS)
112
+DO_ZPZZ_D(sve2_shadd_zpzz_d, int64_t, DO_HADD_D)
113
+
114
+DO_ZPZZ(sve2_uhadd_zpzz_b, uint8_t, H1, DO_HADD_BHS)
115
+DO_ZPZZ(sve2_uhadd_zpzz_h, uint16_t, H1_2, DO_HADD_BHS)
116
+DO_ZPZZ(sve2_uhadd_zpzz_s, uint32_t, H1_4, DO_HADD_BHS)
117
+DO_ZPZZ_D(sve2_uhadd_zpzz_d, uint64_t, DO_HADD_D)
118
+
119
+#define DO_RHADD_BHS(n, m) (((int64_t)n + m + 1) >> 1)
120
+#define DO_RHADD_D(n, m) ((n >> 1) + (m >> 1) + ((n | m) & 1))
121
+
122
+DO_ZPZZ(sve2_srhadd_zpzz_b, int8_t, H1, DO_RHADD_BHS)
123
+DO_ZPZZ(sve2_srhadd_zpzz_h, int16_t, H1_2, DO_RHADD_BHS)
124
+DO_ZPZZ(sve2_srhadd_zpzz_s, int32_t, H1_4, DO_RHADD_BHS)
125
+DO_ZPZZ_D(sve2_srhadd_zpzz_d, int64_t, DO_RHADD_D)
126
+
127
+DO_ZPZZ(sve2_urhadd_zpzz_b, uint8_t, H1, DO_RHADD_BHS)
128
+DO_ZPZZ(sve2_urhadd_zpzz_h, uint16_t, H1_2, DO_RHADD_BHS)
129
+DO_ZPZZ(sve2_urhadd_zpzz_s, uint32_t, H1_4, DO_RHADD_BHS)
130
+DO_ZPZZ_D(sve2_urhadd_zpzz_d, uint64_t, DO_RHADD_D)
131
+
132
+#define DO_HSUB_BHS(n, m) (((int64_t)n - m) >> 1)
133
+#define DO_HSUB_D(n, m) ((n >> 1) - (m >> 1) - (~n & m & 1))
134
+
135
+DO_ZPZZ(sve2_shsub_zpzz_b, int8_t, H1, DO_HSUB_BHS)
136
+DO_ZPZZ(sve2_shsub_zpzz_h, int16_t, H1_2, DO_HSUB_BHS)
137
+DO_ZPZZ(sve2_shsub_zpzz_s, int32_t, H1_4, DO_HSUB_BHS)
138
+DO_ZPZZ_D(sve2_shsub_zpzz_d, int64_t, DO_HSUB_D)
139
+
140
+DO_ZPZZ(sve2_uhsub_zpzz_b, uint8_t, H1, DO_HSUB_BHS)
141
+DO_ZPZZ(sve2_uhsub_zpzz_h, uint16_t, H1_2, DO_HSUB_BHS)
142
+DO_ZPZZ(sve2_uhsub_zpzz_s, uint32_t, H1_4, DO_HSUB_BHS)
143
+DO_ZPZZ_D(sve2_uhsub_zpzz_d, uint64_t, DO_HSUB_D)
144
+
145
#undef DO_ZPZZ
146
#undef DO_ZPZZ_D
147
148
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/target/arm/translate-sve.c
151
+++ b/target/arm/translate-sve.c
152
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ(SRSHL, srshl)
153
DO_SVE2_ZPZZ(UQSHL, uqshl)
154
DO_SVE2_ZPZZ(UQRSHL, uqrshl)
155
DO_SVE2_ZPZZ(URSHL, urshl)
156
+
157
+DO_SVE2_ZPZZ(SHADD, shadd)
158
+DO_SVE2_ZPZZ(SRHADD, srhadd)
159
+DO_SVE2_ZPZZ(SHSUB, shsub)
160
+
161
+DO_SVE2_ZPZZ(UHADD, uhadd)
162
+DO_SVE2_ZPZZ(URHADD, urhadd)
163
+DO_SVE2_ZPZZ(UHSUB, uhsub)
164
--
165
2.20.1
166
167
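For the byte, halfword and word cases the halving operations above simply widen to int64_t and shift, but the 64-bit case has nothing wider to widen into, hence the (n >> 1) + (m >> 1) + (n & m & 1) form of DO_HADD_D (and the (n | m) & 1 rounding term in DO_RHADD_D). A small self-check of that identity at 8 bits, exhaustive over all inputs (illustrative only):

    #include <stdint.h>
    #include <assert.h>

    int main(void)
    {
        for (int n = -128; n < 128; n++) {
            for (int m = -128; m < 128; m++) {
                int8_t a = n, b = m;
                /* halves-plus-carry form used when no wider type exists */
                int8_t tricky = (a >> 1) + (b >> 1) + (a & b & 1);
                /* straightforward widened form: floor((n + m) / 2) */
                int8_t widened = ((int16_t)a + b) >> 1;
                assert(tricky == widened);
            }
        }
        return 0;
    }
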
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-9-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 45 ++++++++++++++++++++++
9
target/arm/sve.decode | 8 ++++
10
target/arm/sve_helper.c | 76 ++++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 6 +++
12
4 files changed, 135 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_b, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_b, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_h, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_b, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_h, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_s, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_d, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_b, TCG_CALL_NO_RWG,
50
+ void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_h, TCG_CALL_NO_RWG,
52
+ void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_s, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_d, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, i32)
57
+
58
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_b, TCG_CALL_NO_RWG,
59
+ void, ptr, ptr, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_h, TCG_CALL_NO_RWG,
61
+ void, ptr, ptr, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_s, TCG_CALL_NO_RWG,
63
+ void, ptr, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_d, TCG_CALL_NO_RWG,
65
+ void, ptr, ptr, ptr, ptr, i32)
66
+
67
DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
68
void, ptr, ptr, ptr, ptr, i32)
69
DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
70
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/sve.decode
73
+++ b/target/arm/sve.decode
74
@@ -XXX,XX +XXX,XX @@ SRHADD 01000100 .. 010 100 100 ... ..... ..... @rdn_pg_rm
75
URHADD 01000100 .. 010 101 100 ... ..... ..... @rdn_pg_rm
76
SHSUB 01000100 .. 010 110 100 ... ..... ..... @rdm_pg_rn # SHSUBR
77
UHSUB 01000100 .. 010 111 100 ... ..... ..... @rdm_pg_rn # UHSUBR
78
+
79
+### SVE2 integer pairwise arithmetic
80
+
81
+ADDP 01000100 .. 010 001 101 ... ..... ..... @rdn_pg_rm
82
+SMAXP 01000100 .. 010 100 101 ... ..... ..... @rdn_pg_rm
83
+UMAXP 01000100 .. 010 101 101 ... ..... ..... @rdn_pg_rm
84
+SMINP 01000100 .. 010 110 101 ... ..... ..... @rdn_pg_rm
85
+UMINP 01000100 .. 010 111 101 ... ..... ..... @rdn_pg_rm
86
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/arm/sve_helper.c
89
+++ b/target/arm/sve_helper.c
90
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve2_uhsub_zpzz_d, uint64_t, DO_HSUB_D)
91
#undef DO_ZPZZ
92
#undef DO_ZPZZ_D
93
94
+/*
95
+ * Three operand expander, operating on element pairs.
96
+ * If the slot I is even, the elements are from VN {I, I+1}.
97
+ * If the slot I is odd, the elements are from VM {I-1, I}.
98
+ * Load all of the input elements in each pair before overwriting output.
99
+ */
100
+#define DO_ZPZZ_PAIR(NAME, TYPE, H, OP) \
101
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
102
+{ \
103
+ intptr_t i, opr_sz = simd_oprsz(desc); \
104
+ for (i = 0; i < opr_sz; ) { \
105
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
106
+ do { \
107
+ TYPE n0 = *(TYPE *)(vn + H(i)); \
108
+ TYPE m0 = *(TYPE *)(vm + H(i)); \
109
+ TYPE n1 = *(TYPE *)(vn + H(i + sizeof(TYPE))); \
110
+ TYPE m1 = *(TYPE *)(vm + H(i + sizeof(TYPE))); \
111
+ if (pg & 1) { \
112
+ *(TYPE *)(vd + H(i)) = OP(n0, n1); \
113
+ } \
114
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
115
+ if (pg & 1) { \
116
+ *(TYPE *)(vd + H(i)) = OP(m0, m1); \
117
+ } \
118
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
119
+ } while (i & 15); \
120
+ } \
121
+}
122
+
123
+/* Similarly, specialized for 64-bit operands. */
124
+#define DO_ZPZZ_PAIR_D(NAME, TYPE, OP) \
125
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
126
+{ \
127
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
128
+ TYPE *d = vd, *n = vn, *m = vm; \
129
+ uint8_t *pg = vg; \
130
+ for (i = 0; i < opr_sz; i += 2) { \
131
+ TYPE n0 = n[i], n1 = n[i + 1]; \
132
+ TYPE m0 = m[i], m1 = m[i + 1]; \
133
+ if (pg[H1(i)] & 1) { \
134
+ d[i] = OP(n0, n1); \
135
+ } \
136
+ if (pg[H1(i + 1)] & 1) { \
137
+ d[i + 1] = OP(m0, m1); \
138
+ } \
139
+ } \
140
+}
141
+
142
+DO_ZPZZ_PAIR(sve2_addp_zpzz_b, uint8_t, H1, DO_ADD)
143
+DO_ZPZZ_PAIR(sve2_addp_zpzz_h, uint16_t, H1_2, DO_ADD)
144
+DO_ZPZZ_PAIR(sve2_addp_zpzz_s, uint32_t, H1_4, DO_ADD)
145
+DO_ZPZZ_PAIR_D(sve2_addp_zpzz_d, uint64_t, DO_ADD)
146
+
147
+DO_ZPZZ_PAIR(sve2_umaxp_zpzz_b, uint8_t, H1, DO_MAX)
148
+DO_ZPZZ_PAIR(sve2_umaxp_zpzz_h, uint16_t, H1_2, DO_MAX)
149
+DO_ZPZZ_PAIR(sve2_umaxp_zpzz_s, uint32_t, H1_4, DO_MAX)
150
+DO_ZPZZ_PAIR_D(sve2_umaxp_zpzz_d, uint64_t, DO_MAX)
151
+
152
+DO_ZPZZ_PAIR(sve2_uminp_zpzz_b, uint8_t, H1, DO_MIN)
153
+DO_ZPZZ_PAIR(sve2_uminp_zpzz_h, uint16_t, H1_2, DO_MIN)
154
+DO_ZPZZ_PAIR(sve2_uminp_zpzz_s, uint32_t, H1_4, DO_MIN)
155
+DO_ZPZZ_PAIR_D(sve2_uminp_zpzz_d, uint64_t, DO_MIN)
156
+
157
+DO_ZPZZ_PAIR(sve2_smaxp_zpzz_b, int8_t, H1, DO_MAX)
158
+DO_ZPZZ_PAIR(sve2_smaxp_zpzz_h, int16_t, H1_2, DO_MAX)
159
+DO_ZPZZ_PAIR(sve2_smaxp_zpzz_s, int32_t, H1_4, DO_MAX)
160
+DO_ZPZZ_PAIR_D(sve2_smaxp_zpzz_d, int64_t, DO_MAX)
161
+
162
+DO_ZPZZ_PAIR(sve2_sminp_zpzz_b, int8_t, H1, DO_MIN)
163
+DO_ZPZZ_PAIR(sve2_sminp_zpzz_h, int16_t, H1_2, DO_MIN)
164
+DO_ZPZZ_PAIR(sve2_sminp_zpzz_s, int32_t, H1_4, DO_MIN)
165
+DO_ZPZZ_PAIR_D(sve2_sminp_zpzz_d, int64_t, DO_MIN)
166
+
167
+#undef DO_ZPZZ_PAIR
168
+#undef DO_ZPZZ_PAIR_D
169
+
170
/* Three-operand expander, controlled by a predicate, in which the
171
* third operand is "wide". That is, for D = N op M, the same 64-bit
172
* value of M is used with all of the narrower values of N.
173
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
174
index XXXXXXX..XXXXXXX 100644
175
--- a/target/arm/translate-sve.c
176
+++ b/target/arm/translate-sve.c
177
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ(SHSUB, shsub)
178
DO_SVE2_ZPZZ(UHADD, uhadd)
179
DO_SVE2_ZPZZ(URHADD, urhadd)
180
DO_SVE2_ZPZZ(UHSUB, uhsub)
181
+
182
+DO_SVE2_ZPZZ(ADDP, addp)
183
+DO_SVE2_ZPZZ(SMAXP, smaxp)
184
+DO_SVE2_ZPZZ(UMAXP, umaxp)
185
+DO_SVE2_ZPZZ(SMINP, sminp)
186
+DO_SVE2_ZPZZ(UMINP, uminp)
187
--
188
2.20.1
189
190
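The DO_ZPZZ_PAIR expansion above is easiest to see with an all-true predicate: even-numbered result elements come from a pair in Zn, odd-numbered ones from the pair at the same position in Zm, and both elements of each pair are loaded before anything is stored so that the destination may alias either source. A scalar model for 8-bit ADDP (illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    static void addp_b(int8_t *d, const int8_t *n, const int8_t *m, int elems)
    {
        for (int i = 0; i < elems; i += 2) {
            int8_t n0 = n[i], n1 = n[i + 1];   /* read the whole pair first ... */
            int8_t m0 = m[i], m1 = m[i + 1];
            d[i]     = n0 + n1;                /* ... then overwrite d */
            d[i + 1] = m0 + m1;
        }
    }

    int main(void)
    {
        int8_t n[4] = {1, 2, 3, 4};
        int8_t m[4] = {10, 20, 30, 40};
        addp_b(n, n, m, 4);                    /* destination aliases n on purpose */
        printf("%d %d %d %d\n", n[0], n[1], n[2], n[3]);   /* 3 30 7 70 */
        return 0;
    }
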
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-10-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 54 +++++++++++
9
target/arm/sve.decode | 11 +++
10
target/arm/sve_helper.c | 194 ++++++++++++++++++++++++++-----------
11
target/arm/translate-sve.c | 7 ++
12
4 files changed, 210 insertions(+), 56 deletions(-)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_s, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_d, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_b, TCG_CALL_NO_RWG,
23
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_b, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_h, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_b, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_h, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_s, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_d, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_b, TCG_CALL_NO_RWG,
50
+ void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_h, TCG_CALL_NO_RWG,
52
+ void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_s, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_d, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, i32)
57
+
58
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_b, TCG_CALL_NO_RWG,
59
+ void, ptr, ptr, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_h, TCG_CALL_NO_RWG,
61
+ void, ptr, ptr, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_s, TCG_CALL_NO_RWG,
63
+ void, ptr, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_d, TCG_CALL_NO_RWG,
65
+ void, ptr, ptr, ptr, ptr, i32)
66
+
67
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_b, TCG_CALL_NO_RWG,
68
+ void, ptr, ptr, ptr, ptr, i32)
69
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_h, TCG_CALL_NO_RWG,
70
+ void, ptr, ptr, ptr, ptr, i32)
71
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_s, TCG_CALL_NO_RWG,
72
+ void, ptr, ptr, ptr, ptr, i32)
73
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_d, TCG_CALL_NO_RWG,
74
+ void, ptr, ptr, ptr, ptr, i32)
75
+
76
DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
77
void, ptr, ptr, ptr, ptr, i32)
78
DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
79
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/sve.decode
82
+++ b/target/arm/sve.decode
83
@@ -XXX,XX +XXX,XX @@ SMAXP 01000100 .. 010 100 101 ... ..... ..... @rdn_pg_rm
84
UMAXP 01000100 .. 010 101 101 ... ..... ..... @rdn_pg_rm
85
SMINP 01000100 .. 010 110 101 ... ..... ..... @rdn_pg_rm
86
UMINP 01000100 .. 010 111 101 ... ..... ..... @rdn_pg_rm
87
+
88
+### SVE2 saturating add/subtract (predicated)
89
+
90
+SQADD_zpzz 01000100 .. 011 000 100 ... ..... ..... @rdn_pg_rm
91
+UQADD_zpzz 01000100 .. 011 001 100 ... ..... ..... @rdn_pg_rm
92
+SQSUB_zpzz 01000100 .. 011 010 100 ... ..... ..... @rdn_pg_rm
93
+UQSUB_zpzz 01000100 .. 011 011 100 ... ..... ..... @rdn_pg_rm
94
+SUQADD 01000100 .. 011 100 100 ... ..... ..... @rdn_pg_rm
95
+USQADD 01000100 .. 011 101 100 ... ..... ..... @rdn_pg_rm
96
+SQSUB_zpzz 01000100 .. 011 110 100 ... ..... ..... @rdm_pg_rn # SQSUBR
97
+UQSUB_zpzz 01000100 .. 011 111 100 ... ..... ..... @rdm_pg_rn # UQSUBR
98
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/sve_helper.c
101
+++ b/target/arm/sve_helper.c
102
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ(sve2_uhsub_zpzz_h, uint16_t, H1_2, DO_HSUB_BHS)
103
DO_ZPZZ(sve2_uhsub_zpzz_s, uint32_t, H1_4, DO_HSUB_BHS)
104
DO_ZPZZ_D(sve2_uhsub_zpzz_d, uint64_t, DO_HSUB_D)
105
106
+static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max)
107
+{
108
+ return val >= max ? max : val <= min ? min : val;
109
+}
110
+
111
+#define DO_SQADD_B(n, m) do_sat_bhs((int64_t)n + m, INT8_MIN, INT8_MAX)
112
+#define DO_SQADD_H(n, m) do_sat_bhs((int64_t)n + m, INT16_MIN, INT16_MAX)
113
+#define DO_SQADD_S(n, m) do_sat_bhs((int64_t)n + m, INT32_MIN, INT32_MAX)
114
+
115
+static inline int64_t do_sqadd_d(int64_t n, int64_t m)
116
+{
117
+ int64_t r = n + m;
118
+ if (((r ^ n) & ~(n ^ m)) < 0) {
119
+ /* Signed overflow. */
120
+ return r < 0 ? INT64_MAX : INT64_MIN;
121
+ }
122
+ return r;
123
+}
124
+
125
+DO_ZPZZ(sve2_sqadd_zpzz_b, int8_t, H1, DO_SQADD_B)
126
+DO_ZPZZ(sve2_sqadd_zpzz_h, int16_t, H1_2, DO_SQADD_H)
127
+DO_ZPZZ(sve2_sqadd_zpzz_s, int32_t, H1_4, DO_SQADD_S)
128
+DO_ZPZZ_D(sve2_sqadd_zpzz_d, int64_t, do_sqadd_d)
129
+
130
+#define DO_UQADD_B(n, m) do_sat_bhs((int64_t)n + m, 0, UINT8_MAX)
131
+#define DO_UQADD_H(n, m) do_sat_bhs((int64_t)n + m, 0, UINT16_MAX)
132
+#define DO_UQADD_S(n, m) do_sat_bhs((int64_t)n + m, 0, UINT32_MAX)
133
+
134
+static inline uint64_t do_uqadd_d(uint64_t n, uint64_t m)
135
+{
136
+ uint64_t r = n + m;
137
+ return r < n ? UINT64_MAX : r;
138
+}
139
+
140
+DO_ZPZZ(sve2_uqadd_zpzz_b, uint8_t, H1, DO_UQADD_B)
141
+DO_ZPZZ(sve2_uqadd_zpzz_h, uint16_t, H1_2, DO_UQADD_H)
142
+DO_ZPZZ(sve2_uqadd_zpzz_s, uint32_t, H1_4, DO_UQADD_S)
143
+DO_ZPZZ_D(sve2_uqadd_zpzz_d, uint64_t, do_uqadd_d)
144
+
145
+#define DO_SQSUB_B(n, m) do_sat_bhs((int64_t)n - m, INT8_MIN, INT8_MAX)
146
+#define DO_SQSUB_H(n, m) do_sat_bhs((int64_t)n - m, INT16_MIN, INT16_MAX)
147
+#define DO_SQSUB_S(n, m) do_sat_bhs((int64_t)n - m, INT32_MIN, INT32_MAX)
148
+
149
+static inline int64_t do_sqsub_d(int64_t n, int64_t m)
150
+{
151
+ int64_t r = n - m;
152
+ if (((r ^ n) & (n ^ m)) < 0) {
153
+ /* Signed overflow. */
154
+ return r < 0 ? INT64_MAX : INT64_MIN;
155
+ }
156
+ return r;
157
+}
158
+
159
+DO_ZPZZ(sve2_sqsub_zpzz_b, int8_t, H1, DO_SQSUB_B)
160
+DO_ZPZZ(sve2_sqsub_zpzz_h, int16_t, H1_2, DO_SQSUB_H)
161
+DO_ZPZZ(sve2_sqsub_zpzz_s, int32_t, H1_4, DO_SQSUB_S)
162
+DO_ZPZZ_D(sve2_sqsub_zpzz_d, int64_t, do_sqsub_d)
163
+
164
+#define DO_UQSUB_B(n, m) do_sat_bhs((int64_t)n - m, 0, UINT8_MAX)
165
+#define DO_UQSUB_H(n, m) do_sat_bhs((int64_t)n - m, 0, UINT16_MAX)
166
+#define DO_UQSUB_S(n, m) do_sat_bhs((int64_t)n - m, 0, UINT32_MAX)
167
+
168
+static inline uint64_t do_uqsub_d(uint64_t n, uint64_t m)
169
+{
170
+ return n > m ? n - m : 0;
171
+}
172
+
173
+DO_ZPZZ(sve2_uqsub_zpzz_b, uint8_t, H1, DO_UQSUB_B)
174
+DO_ZPZZ(sve2_uqsub_zpzz_h, uint16_t, H1_2, DO_UQSUB_H)
175
+DO_ZPZZ(sve2_uqsub_zpzz_s, uint32_t, H1_4, DO_UQSUB_S)
176
+DO_ZPZZ_D(sve2_uqsub_zpzz_d, uint64_t, do_uqsub_d)
177
+
178
+#define DO_SUQADD_B(n, m) \
179
+ do_sat_bhs((int64_t)(int8_t)n + m, INT8_MIN, INT8_MAX)
180
+#define DO_SUQADD_H(n, m) \
181
+ do_sat_bhs((int64_t)(int16_t)n + m, INT16_MIN, INT16_MAX)
182
+#define DO_SUQADD_S(n, m) \
183
+ do_sat_bhs((int64_t)(int32_t)n + m, INT32_MIN, INT32_MAX)
184
+
185
+static inline int64_t do_suqadd_d(int64_t n, uint64_t m)
186
+{
187
+ uint64_t r = n + m;
188
+
189
+ if (n < 0) {
190
+ /* Note that m - abs(n) cannot underflow. */
191
+ if (r > INT64_MAX) {
192
+ /* Result is either very large positive or negative. */
193
+ if (m > -n) {
194
+ /* m > abs(n), so r is a very large positive. */
195
+ return INT64_MAX;
196
+ }
197
+ /* Result is negative. */
198
+ }
199
+ } else {
200
+ /* Both inputs are positive: check for overflow. */
201
+ if (r < m || r > INT64_MAX) {
202
+ return INT64_MAX;
203
+ }
204
+ }
205
+ return r;
206
+}
207
+
208
+DO_ZPZZ(sve2_suqadd_zpzz_b, uint8_t, H1, DO_SUQADD_B)
209
+DO_ZPZZ(sve2_suqadd_zpzz_h, uint16_t, H1_2, DO_SUQADD_H)
210
+DO_ZPZZ(sve2_suqadd_zpzz_s, uint32_t, H1_4, DO_SUQADD_S)
211
+DO_ZPZZ_D(sve2_suqadd_zpzz_d, uint64_t, do_suqadd_d)
212
+
213
+#define DO_USQADD_B(n, m) \
214
+ do_sat_bhs((int64_t)n + (int8_t)m, 0, UINT8_MAX)
215
+#define DO_USQADD_H(n, m) \
216
+ do_sat_bhs((int64_t)n + (int16_t)m, 0, UINT16_MAX)
217
+#define DO_USQADD_S(n, m) \
218
+ do_sat_bhs((int64_t)n + (int32_t)m, 0, UINT32_MAX)
219
+
220
+static inline uint64_t do_usqadd_d(uint64_t n, int64_t m)
221
+{
222
+ uint64_t r = n + m;
223
+
224
+ if (m < 0) {
225
+ return n < -m ? 0 : r;
226
+ }
227
+ return r < n ? UINT64_MAX : r;
228
+}
229
+
230
+DO_ZPZZ(sve2_usqadd_zpzz_b, uint8_t, H1, DO_USQADD_B)
231
+DO_ZPZZ(sve2_usqadd_zpzz_h, uint16_t, H1_2, DO_USQADD_H)
232
+DO_ZPZZ(sve2_usqadd_zpzz_s, uint32_t, H1_4, DO_USQADD_S)
233
+DO_ZPZZ_D(sve2_usqadd_zpzz_d, uint64_t, do_usqadd_d)
234
+
235
#undef DO_ZPZZ
236
#undef DO_ZPZZ_D
237
238
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
239
intptr_t i, oprsz = simd_oprsz(desc);
240
241
for (i = 0; i < oprsz; i += sizeof(int8_t)) {
242
- int r = *(int8_t *)(a + i) + b;
243
- if (r > INT8_MAX) {
244
- r = INT8_MAX;
245
- } else if (r < INT8_MIN) {
246
- r = INT8_MIN;
247
- }
248
- *(int8_t *)(d + i) = r;
249
+ *(int8_t *)(d + i) = DO_SQADD_B(b, *(int8_t *)(a + i));
250
}
251
}
252
253
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
254
intptr_t i, oprsz = simd_oprsz(desc);
255
256
for (i = 0; i < oprsz; i += sizeof(int16_t)) {
257
- int r = *(int16_t *)(a + i) + b;
258
- if (r > INT16_MAX) {
259
- r = INT16_MAX;
260
- } else if (r < INT16_MIN) {
261
- r = INT16_MIN;
262
- }
263
- *(int16_t *)(d + i) = r;
264
+ *(int16_t *)(d + i) = DO_SQADD_H(b, *(int16_t *)(a + i));
265
}
266
}
267
268
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
269
intptr_t i, oprsz = simd_oprsz(desc);
270
271
for (i = 0; i < oprsz; i += sizeof(int32_t)) {
272
- int64_t r = *(int32_t *)(a + i) + b;
273
- if (r > INT32_MAX) {
274
- r = INT32_MAX;
275
- } else if (r < INT32_MIN) {
276
- r = INT32_MIN;
277
- }
278
- *(int32_t *)(d + i) = r;
279
+ *(int32_t *)(d + i) = DO_SQADD_S(b, *(int32_t *)(a + i));
280
}
281
}
282
283
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sqaddi_d)(void *d, void *a, int64_t b, uint32_t desc)
284
intptr_t i, oprsz = simd_oprsz(desc);
285
286
for (i = 0; i < oprsz; i += sizeof(int64_t)) {
287
- int64_t ai = *(int64_t *)(a + i);
288
- int64_t r = ai + b;
289
- if (((r ^ ai) & ~(ai ^ b)) < 0) {
290
- /* Signed overflow. */
291
- r = (r < 0 ? INT64_MAX : INT64_MIN);
292
- }
293
- *(int64_t *)(d + i) = r;
294
+ *(int64_t *)(d + i) = do_sqadd_d(b, *(int64_t *)(a + i));
295
}
296
}
297
298
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
299
intptr_t i, oprsz = simd_oprsz(desc);
300
301
for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
302
- int r = *(uint8_t *)(a + i) + b;
303
- if (r > UINT8_MAX) {
304
- r = UINT8_MAX;
305
- } else if (r < 0) {
306
- r = 0;
307
- }
308
- *(uint8_t *)(d + i) = r;
309
+ *(uint8_t *)(d + i) = DO_UQADD_B(b, *(uint8_t *)(a + i));
310
}
311
}
312
313
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
314
intptr_t i, oprsz = simd_oprsz(desc);
315
316
for (i = 0; i < oprsz; i += sizeof(uint16_t)) {
317
- int r = *(uint16_t *)(a + i) + b;
318
- if (r > UINT16_MAX) {
319
- r = UINT16_MAX;
320
- } else if (r < 0) {
321
- r = 0;
322
- }
323
- *(uint16_t *)(d + i) = r;
324
+ *(uint16_t *)(d + i) = DO_UQADD_H(b, *(uint16_t *)(a + i));
325
}
326
}
327
328
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
329
intptr_t i, oprsz = simd_oprsz(desc);
330
331
for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
332
- int64_t r = *(uint32_t *)(a + i) + b;
333
- if (r > UINT32_MAX) {
334
- r = UINT32_MAX;
335
- } else if (r < 0) {
336
- r = 0;
337
- }
338
- *(uint32_t *)(d + i) = r;
339
+ *(uint32_t *)(d + i) = DO_UQADD_S(b, *(uint32_t *)(a + i));
340
}
341
}
342
343
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqaddi_d)(void *d, void *a, uint64_t b, uint32_t desc)
344
intptr_t i, oprsz = simd_oprsz(desc);
345
346
for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
347
- uint64_t r = *(uint64_t *)(a + i) + b;
348
- if (r < b) {
349
- r = UINT64_MAX;
350
- }
351
- *(uint64_t *)(d + i) = r;
352
+ *(uint64_t *)(d + i) = do_uqadd_d(b, *(uint64_t *)(a + i));
353
}
354
}
355
356
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqsubi_d)(void *d, void *a, uint64_t b, uint32_t desc)
357
intptr_t i, oprsz = simd_oprsz(desc);
358
359
for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
360
- uint64_t ai = *(uint64_t *)(a + i);
361
- *(uint64_t *)(d + i) = (ai < b ? 0 : ai - b);
362
+ *(uint64_t *)(d + i) = do_uqsub_d(*(uint64_t *)(a + i), b);
363
}
364
}
365
366
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
367
index XXXXXXX..XXXXXXX 100644
368
--- a/target/arm/translate-sve.c
369
+++ b/target/arm/translate-sve.c
370
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ(SMAXP, smaxp)
371
DO_SVE2_ZPZZ(UMAXP, umaxp)
372
DO_SVE2_ZPZZ(SMINP, sminp)
373
DO_SVE2_ZPZZ(UMINP, uminp)
374
+
375
+DO_SVE2_ZPZZ(SQADD_zpzz, sqadd)
376
+DO_SVE2_ZPZZ(UQADD_zpzz, uqadd)
377
+DO_SVE2_ZPZZ(SQSUB_zpzz, sqsub)
378
+DO_SVE2_ZPZZ(UQSUB_zpzz, uqsub)
379
+DO_SVE2_ZPZZ(SUQADD, suqadd)
380
+DO_SVE2_ZPZZ(USQADD, usqadd)
381
--
382
2.20.1
383
384
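For element sizes below 64 bits the helpers above just widen to int64_t and clamp with do_sat_bhs(); the 64-bit cases cannot widen, so do_sqadd_d() and do_sqsub_d() detect overflow purely from the sign bits: it occurred iff the inputs' signs allow it and the result's sign flipped, which is what ((r ^ n) & ~(n ^ m)) < 0 tests. An exhaustive 8-bit check of that rule against the widened computation (illustrative only):

    #include <stdint.h>
    #include <assert.h>

    static int8_t sqadd8(int8_t n, int8_t m)
    {
        int8_t r = n + m;                       /* wrapping add */
        if ((int8_t)((r ^ n) & ~(n ^ m)) < 0) { /* same-sign inputs, result sign flipped */
            return r < 0 ? INT8_MAX : INT8_MIN;
        }
        return r;
    }

    int main(void)
    {
        for (int n = -128; n < 128; n++) {
            for (int m = -128; m < 128; m++) {
                int wide = n + m;
                int8_t expect = wide > INT8_MAX ? INT8_MAX
                              : wide < INT8_MIN ? INT8_MIN : wide;
                assert(sqadd8(n, m) == expect);
            }
        }
        return 0;
    }
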
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-11-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 24 ++++++++++++++++++++
9
target/arm/sve.decode | 19 ++++++++++++++++
10
target/arm/sve_helper.c | 43 +++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 46 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 132 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve_ftmad_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve2_saddl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve2_saddl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_saddl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_4(sve2_ssubl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve2_ssubl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_ssubl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_4(sve2_sabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_sabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(sve2_sabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_4(sve2_uaddl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve2_uaddl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(sve2_uaddl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+
38
+DEF_HELPER_FLAGS_4(sve2_usubl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(sve2_usubl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(sve2_usubl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(sve2_uabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_4(sve2_uabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_4(sve2_uabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
+
46
DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
47
DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
48
DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
49
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/sve.decode
52
+++ b/target/arm/sve.decode
53
@@ -XXX,XX +XXX,XX @@ SUQADD 01000100 .. 011 100 100 ... ..... ..... @rdn_pg_rm
54
USQADD 01000100 .. 011 101 100 ... ..... ..... @rdn_pg_rm
55
SQSUB_zpzz 01000100 .. 011 110 100 ... ..... ..... @rdm_pg_rn # SQSUBR
56
UQSUB_zpzz 01000100 .. 011 111 100 ... ..... ..... @rdm_pg_rn # UQSUBR
57
+
58
+#### SVE2 Widening Integer Arithmetic
59
+
60
+## SVE2 integer add/subtract long
61
+
62
+SADDLB 01000101 .. 0 ..... 00 0000 ..... ..... @rd_rn_rm
63
+SADDLT 01000101 .. 0 ..... 00 0001 ..... ..... @rd_rn_rm
64
+UADDLB 01000101 .. 0 ..... 00 0010 ..... ..... @rd_rn_rm
65
+UADDLT 01000101 .. 0 ..... 00 0011 ..... ..... @rd_rn_rm
66
+
67
+SSUBLB 01000101 .. 0 ..... 00 0100 ..... ..... @rd_rn_rm
68
+SSUBLT 01000101 .. 0 ..... 00 0101 ..... ..... @rd_rn_rm
69
+USUBLB 01000101 .. 0 ..... 00 0110 ..... ..... @rd_rn_rm
70
+USUBLT 01000101 .. 0 ..... 00 0111 ..... ..... @rd_rn_rm
71
+
72
+SABDLB 01000101 .. 0 ..... 00 1100 ..... ..... @rd_rn_rm
73
+SABDLT 01000101 .. 0 ..... 00 1101 ..... ..... @rd_rn_rm
74
+UABDLB 01000101 .. 0 ..... 00 1110 ..... ..... @rd_rn_rm
75
+UABDLT 01000101 .. 0 ..... 00 1111 ..... ..... @rd_rn_rm
76
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/sve_helper.c
79
+++ b/target/arm/sve_helper.c
80
@@ -XXX,XX +XXX,XX @@ DO_ZZW(sve_lsl_zzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
81
#undef DO_ZPZ
82
#undef DO_ZPZ_D
83
84
+/*
85
+ * Three-operand expander, unpredicated, in which the two inputs are
86
+ * selected from the top or bottom half of the wide column.
87
+ */
88
+#define DO_ZZZ_TB(NAME, TYPEW, TYPEN, HW, HN, OP) \
89
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
90
+{ \
91
+ intptr_t i, opr_sz = simd_oprsz(desc); \
92
+ int sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
93
+ int sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPEN); \
94
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
95
+ TYPEW nn = *(TYPEN *)(vn + HN(i + sel1)); \
96
+ TYPEW mm = *(TYPEN *)(vm + HN(i + sel2)); \
97
+ *(TYPEW *)(vd + HW(i)) = OP(nn, mm); \
98
+ } \
99
+}
100
+
101
+DO_ZZZ_TB(sve2_saddl_h, int16_t, int8_t, H1_2, H1, DO_ADD)
102
+DO_ZZZ_TB(sve2_saddl_s, int32_t, int16_t, H1_4, H1_2, DO_ADD)
103
+DO_ZZZ_TB(sve2_saddl_d, int64_t, int32_t, , H1_4, DO_ADD)
104
+
105
+DO_ZZZ_TB(sve2_ssubl_h, int16_t, int8_t, H1_2, H1, DO_SUB)
106
+DO_ZZZ_TB(sve2_ssubl_s, int32_t, int16_t, H1_4, H1_2, DO_SUB)
107
+DO_ZZZ_TB(sve2_ssubl_d, int64_t, int32_t, , H1_4, DO_SUB)
108
+
109
+DO_ZZZ_TB(sve2_sabdl_h, int16_t, int8_t, H1_2, H1, DO_ABD)
110
+DO_ZZZ_TB(sve2_sabdl_s, int32_t, int16_t, H1_4, H1_2, DO_ABD)
111
+DO_ZZZ_TB(sve2_sabdl_d, int64_t, int32_t, , H1_4, DO_ABD)
112
+
113
+DO_ZZZ_TB(sve2_uaddl_h, uint16_t, uint8_t, H1_2, H1, DO_ADD)
114
+DO_ZZZ_TB(sve2_uaddl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ADD)
115
+DO_ZZZ_TB(sve2_uaddl_d, uint64_t, uint32_t, , H1_4, DO_ADD)
116
+
117
+DO_ZZZ_TB(sve2_usubl_h, uint16_t, uint8_t, H1_2, H1, DO_SUB)
118
+DO_ZZZ_TB(sve2_usubl_s, uint32_t, uint16_t, H1_4, H1_2, DO_SUB)
119
+DO_ZZZ_TB(sve2_usubl_d, uint64_t, uint32_t, , H1_4, DO_SUB)
120
+
121
+DO_ZZZ_TB(sve2_uabdl_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
122
+DO_ZZZ_TB(sve2_uabdl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
123
+DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
124
+
125
+#undef DO_ZZZ_TB
126
+
127
/* Two-operand reduction expander, controlled by a predicate.
128
* The difference between TYPERED and TYPERET has to do with
129
* sign-extension. E.g. for SMAX, TYPERED must be signed,
130
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
131
index XXXXXXX..XXXXXXX 100644
132
--- a/target/arm/translate-sve.c
133
+++ b/target/arm/translate-sve.c
134
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ(SQSUB_zpzz, sqsub)
135
DO_SVE2_ZPZZ(UQSUB_zpzz, uqsub)
136
DO_SVE2_ZPZZ(SUQADD, suqadd)
137
DO_SVE2_ZPZZ(USQADD, usqadd)
138
+
139
+/*
140
+ * SVE2 Widening Integer Arithmetic
141
+ */
142
+
143
+static bool do_sve2_zzw_ool(DisasContext *s, arg_rrr_esz *a,
144
+ gen_helper_gvec_3 *fn, int data)
145
+{
146
+ if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
147
+ return false;
148
+ }
149
+ if (sve_access_check(s)) {
150
+ unsigned vsz = vec_full_reg_size(s);
151
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
152
+ vec_full_reg_offset(s, a->rn),
153
+ vec_full_reg_offset(s, a->rm),
154
+ vsz, vsz, data, fn);
155
+ }
156
+ return true;
157
+}
158
+
159
+#define DO_SVE2_ZZZ_TB(NAME, name, SEL1, SEL2) \
160
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
161
+{ \
162
+ static gen_helper_gvec_3 * const fns[4] = { \
163
+ NULL, gen_helper_sve2_##name##_h, \
164
+ gen_helper_sve2_##name##_s, gen_helper_sve2_##name##_d, \
165
+ }; \
166
+ return do_sve2_zzw_ool(s, a, fns[a->esz], (SEL2 << 1) | SEL1); \
167
+}
168
+
169
+DO_SVE2_ZZZ_TB(SADDLB, saddl, false, false)
170
+DO_SVE2_ZZZ_TB(SSUBLB, ssubl, false, false)
171
+DO_SVE2_ZZZ_TB(SABDLB, sabdl, false, false)
172
+
173
+DO_SVE2_ZZZ_TB(UADDLB, uaddl, false, false)
174
+DO_SVE2_ZZZ_TB(USUBLB, usubl, false, false)
175
+DO_SVE2_ZZZ_TB(UABDLB, uabdl, false, false)
176
+
177
+DO_SVE2_ZZZ_TB(SADDLT, saddl, true, true)
178
+DO_SVE2_ZZZ_TB(SSUBLT, ssubl, true, true)
179
+DO_SVE2_ZZZ_TB(SABDLT, sabdl, true, true)
180
+
181
+DO_SVE2_ZZZ_TB(UADDLT, uaddl, true, true)
182
+DO_SVE2_ZZZ_TB(USUBLT, usubl, true, true)
183
+DO_SVE2_ZZZ_TB(UABDLT, uabdl, true, true)
184
--
185
2.20.1
186
187
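DO_ZZZ_TB above walks the vector one wide element at a time; the two selector bits packed into desc say whether each source contributes the bottom (even) or the top (odd) narrow element of that wide column, which is all that distinguishes the *LB from the *LT forms. A little-endian scalar model of SADDLB/SADDLT at the 16-from-8-bit size (illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    static void saddl_h(int16_t *d, const int8_t *n, const int8_t *m,
                        int wide_elems, int sel1, int sel2)
    {
        for (int i = 0; i < wide_elems; i++) {
            /* sel = 0 picks the bottom half of the column, sel = 1 the top */
            d[i] = (int16_t)n[2 * i + sel1] + m[2 * i + sel2];
        }
    }

    int main(void)
    {
        int8_t n[4] = {1, 2, 3, 4};            /* columns: {1,2} and {3,4} */
        int8_t m[4] = {10, 20, 30, 40};
        int16_t d[2];

        saddl_h(d, n, m, 2, 0, 0);             /* SADDLB: 1+10, 3+30 */
        printf("%d %d\n", d[0], d[1]);         /* 11 33 */

        saddl_h(d, n, m, 2, 1, 1);             /* SADDLT: 2+20, 4+40 */
        printf("%d %d\n", d[0], d[1]);         /* 22 44 */
        return 0;
    }
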
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 6 ++++++
9
target/arm/translate-sve.c | 4 ++++
10
2 files changed, 10 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ SABDLB 01000101 .. 0 ..... 00 1100 ..... ..... @rd_rn_rm
17
SABDLT 01000101 .. 0 ..... 00 1101 ..... ..... @rd_rn_rm
18
UABDLB 01000101 .. 0 ..... 00 1110 ..... ..... @rd_rn_rm
19
UABDLT 01000101 .. 0 ..... 00 1111 ..... ..... @rd_rn_rm
20
+
21
+## SVE2 integer add/subtract interleaved long
22
+
23
+SADDLBT 01000101 .. 0 ..... 1000 00 ..... ..... @rd_rn_rm
24
+SSUBLBT 01000101 .. 0 ..... 1000 10 ..... ..... @rd_rn_rm
25
+SSUBLTB 01000101 .. 0 ..... 1000 11 ..... ..... @rd_rn_rm
26
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-sve.c
29
+++ b/target/arm/translate-sve.c
30
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_TB(SABDLT, sabdl, true, true)
31
DO_SVE2_ZZZ_TB(UADDLT, uaddl, true, true)
32
DO_SVE2_ZZZ_TB(USUBLT, usubl, true, true)
33
DO_SVE2_ZZZ_TB(UABDLT, uabdl, true, true)
34
+
35
+DO_SVE2_ZZZ_TB(SADDLBT, saddl, false, true)
36
+DO_SVE2_ZZZ_TB(SSUBLBT, ssubl, false, true)
37
+DO_SVE2_ZZZ_TB(SSUBLTB, ssubl, true, false)
38
--
39
2.20.1
40
41
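The interleaved forms above need no new helper code: they are the same saddl/ssubl expansions invoked with mixed selectors, the bottom narrow element of Zn paired with the top narrow element of Zm, or the reverse. A one-column sketch (illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    static int16_t widen_addsub(const int8_t *n, const int8_t *m,
                                int col, int sel1, int sel2, int sub)
    {
        int16_t a = n[2 * col + sel1];          /* narrow element from Zn */
        int16_t b = m[2 * col + sel2];          /* narrow element from Zm */
        return sub ? a - b : a + b;
    }

    int main(void)
    {
        int8_t n[2] = {1, 2};                   /* column: bottom 1, top 2 */
        int8_t m[2] = {10, 20};
        printf("%d\n", widen_addsub(n, m, 0, 0, 1, 0));  /* SADDLBT: 1 + 20 */
        printf("%d\n", widen_addsub(n, m, 0, 0, 1, 1));  /* SSUBLBT: 1 - 20 */
        printf("%d\n", widen_addsub(n, m, 0, 1, 0, 1));  /* SSUBLTB: 2 - 10 */
        return 0;
    }
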
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-13-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 16 ++++++++++++++++
9
target/arm/sve.decode | 12 ++++++++++++
10
target/arm/sve_helper.c | 30 ++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 20 ++++++++++++++++++++
12
4 files changed, 78 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_uabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve2_uabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve2_uabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve2_saddw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve2_saddw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_saddw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_4(sve2_ssubw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve2_ssubw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_ssubw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_4(sve2_uaddw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_uaddw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(sve2_uaddw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_4(sve2_usubw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve2_usubw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(sve2_usubw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+
38
DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
39
DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
40
DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
41
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/sve.decode
44
+++ b/target/arm/sve.decode
45
@@ -XXX,XX +XXX,XX @@ UABDLT 01000101 .. 0 ..... 00 1111 ..... ..... @rd_rn_rm
46
SADDLBT 01000101 .. 0 ..... 1000 00 ..... ..... @rd_rn_rm
47
SSUBLBT 01000101 .. 0 ..... 1000 10 ..... ..... @rd_rn_rm
48
SSUBLTB 01000101 .. 0 ..... 1000 11 ..... ..... @rd_rn_rm
49
+
50
+## SVE2 integer add/subtract wide
51
+
52
+SADDWB 01000101 .. 0 ..... 010 000 ..... ..... @rd_rn_rm
53
+SADDWT 01000101 .. 0 ..... 010 001 ..... ..... @rd_rn_rm
54
+UADDWB 01000101 .. 0 ..... 010 010 ..... ..... @rd_rn_rm
55
+UADDWT 01000101 .. 0 ..... 010 011 ..... ..... @rd_rn_rm
56
+
57
+SSUBWB 01000101 .. 0 ..... 010 100 ..... ..... @rd_rn_rm
58
+SSUBWT 01000101 .. 0 ..... 010 101 ..... ..... @rd_rn_rm
59
+USUBWB 01000101 .. 0 ..... 010 110 ..... ..... @rd_rn_rm
60
+USUBWT 01000101 .. 0 ..... 010 111 ..... ..... @rd_rn_rm
61
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/sve_helper.c
64
+++ b/target/arm/sve_helper.c
65
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
66
67
#undef DO_ZZZ_TB
68
69
+#define DO_ZZZ_WTB(NAME, TYPEW, TYPEN, HW, HN, OP) \
70
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
71
+{ \
72
+ intptr_t i, opr_sz = simd_oprsz(desc); \
73
+ int sel2 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
74
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
75
+ TYPEW nn = *(TYPEW *)(vn + HW(i)); \
76
+ TYPEW mm = *(TYPEN *)(vm + HN(i + sel2)); \
77
+ *(TYPEW *)(vd + HW(i)) = OP(nn, mm); \
78
+ } \
79
+}
80
+
81
+DO_ZZZ_WTB(sve2_saddw_h, int16_t, int8_t, H1_2, H1, DO_ADD)
82
+DO_ZZZ_WTB(sve2_saddw_s, int32_t, int16_t, H1_4, H1_2, DO_ADD)
83
+DO_ZZZ_WTB(sve2_saddw_d, int64_t, int32_t, , H1_4, DO_ADD)
84
+
85
+DO_ZZZ_WTB(sve2_ssubw_h, int16_t, int8_t, H1_2, H1, DO_SUB)
86
+DO_ZZZ_WTB(sve2_ssubw_s, int32_t, int16_t, H1_4, H1_2, DO_SUB)
87
+DO_ZZZ_WTB(sve2_ssubw_d, int64_t, int32_t, , H1_4, DO_SUB)
88
+
89
+DO_ZZZ_WTB(sve2_uaddw_h, uint16_t, uint8_t, H1_2, H1, DO_ADD)
90
+DO_ZZZ_WTB(sve2_uaddw_s, uint32_t, uint16_t, H1_4, H1_2, DO_ADD)
91
+DO_ZZZ_WTB(sve2_uaddw_d, uint64_t, uint32_t, , H1_4, DO_ADD)
92
+
93
+DO_ZZZ_WTB(sve2_usubw_h, uint16_t, uint8_t, H1_2, H1, DO_SUB)
94
+DO_ZZZ_WTB(sve2_usubw_s, uint32_t, uint16_t, H1_4, H1_2, DO_SUB)
95
+DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
96
+
97
+#undef DO_ZZZ_WTB
98
+
99
/* Two-operand reduction expander, controlled by a predicate.
100
* The difference between TYPERED and TYPERET has to do with
101
* sign-extension. E.g. for SMAX, TYPERED must be signed,
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_TB(UABDLT, uabdl, true, true)
107
DO_SVE2_ZZZ_TB(SADDLBT, saddl, false, true)
108
DO_SVE2_ZZZ_TB(SSUBLBT, ssubl, false, true)
109
DO_SVE2_ZZZ_TB(SSUBLTB, ssubl, true, false)
110
+
111
+#define DO_SVE2_ZZZ_WTB(NAME, name, SEL2) \
112
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
113
+{ \
114
+ static gen_helper_gvec_3 * const fns[4] = { \
115
+ NULL, gen_helper_sve2_##name##_h, \
116
+ gen_helper_sve2_##name##_s, gen_helper_sve2_##name##_d, \
117
+ }; \
118
+ return do_sve2_zzw_ool(s, a, fns[a->esz], SEL2); \
119
+}
120
+
121
+DO_SVE2_ZZZ_WTB(SADDWB, saddw, false)
122
+DO_SVE2_ZZZ_WTB(SADDWT, saddw, true)
123
+DO_SVE2_ZZZ_WTB(SSUBWB, ssubw, false)
124
+DO_SVE2_ZZZ_WTB(SSUBWT, ssubw, true)
125
+
126
+DO_SVE2_ZZZ_WTB(UADDWB, uaddw, false)
127
+DO_SVE2_ZZZ_WTB(UADDWT, uaddw, true)
128
+DO_SVE2_ZZZ_WTB(USUBWB, usubw, false)
129
+DO_SVE2_ZZZ_WTB(USUBWT, usubw, true)
130
--
131
2.20.1
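
For reference, the "wide" forms above keep the first operand at the full
element width and only widen the second: each output lane is a wide Zn
element combined with the selected (bottom or top) narrow Zm element,
which is what DO_ZZZ_WTB expands to. A minimal scalar sketch of
sve2_saddw_h, using the hypothetical name saddw_ref() and 'sel' standing
in for the bottom/top bit passed via simd_data() (not part of the patch):

#include <stdint.h>

static void saddw_ref(int16_t *zd, const int16_t *zn, const int8_t *zm,
                      int lanes, int sel /* 0 = SADDWB, 1 = SADDWT */)
{
    for (int i = 0; i < lanes; ++i) {
        /* wide element of Zn plus the selected narrow element of Zm */
        zd[i] = zn[i] + (int16_t)zm[2 * i + sel];
    }
}
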
132
133
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Exclude PMULL from this category for the moment.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-14-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper-sve.h | 15 +++++++++++++++
11
target/arm/sve.decode | 9 +++++++++
12
target/arm/sve_helper.c | 31 +++++++++++++++++++++++++++++++
13
target/arm/translate-sve.c | 9 +++++++++
14
4 files changed, 64 insertions(+)
15
16
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-sve.h
19
+++ b/target/arm/helper-sve.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_stdd_le_zd_mte, TCG_CALL_NO_WG,
21
DEF_HELPER_FLAGS_6(sve_stdd_be_zd_mte, TCG_CALL_NO_WG,
22
void, env, ptr, ptr, ptr, tl, i32)
23
24
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_h, TCG_CALL_NO_RWG,
25
+ void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+
39
DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/sve.decode
43
+++ b/target/arm/sve.decode
44
@@ -XXX,XX +XXX,XX @@ SSUBWB 01000101 .. 0 ..... 010 100 ..... ..... @rd_rn_rm
45
SSUBWT 01000101 .. 0 ..... 010 101 ..... ..... @rd_rn_rm
46
USUBWB 01000101 .. 0 ..... 010 110 ..... ..... @rd_rn_rm
47
USUBWT 01000101 .. 0 ..... 010 111 ..... ..... @rd_rn_rm
48
+
49
+## SVE2 integer multiply long
50
+
51
+SQDMULLB_zzz 01000101 .. 0 ..... 011 000 ..... ..... @rd_rn_rm
52
+SQDMULLT_zzz 01000101 .. 0 ..... 011 001 ..... ..... @rd_rn_rm
53
+SMULLB_zzz 01000101 .. 0 ..... 011 100 ..... ..... @rd_rn_rm
54
+SMULLT_zzz 01000101 .. 0 ..... 011 101 ..... ..... @rd_rn_rm
55
+UMULLB_zzz 01000101 .. 0 ..... 011 110 ..... ..... @rd_rn_rm
56
+UMULLT_zzz 01000101 .. 0 ..... 011 111 ..... ..... @rd_rn_rm
57
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/sve_helper.c
60
+++ b/target/arm/sve_helper.c
61
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_TB(sve2_uabdl_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
62
DO_ZZZ_TB(sve2_uabdl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
63
DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
64
65
+DO_ZZZ_TB(sve2_smull_zzz_h, int16_t, int8_t, H1_2, H1, DO_MUL)
66
+DO_ZZZ_TB(sve2_smull_zzz_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
67
+DO_ZZZ_TB(sve2_smull_zzz_d, int64_t, int32_t, , H1_4, DO_MUL)
68
+
69
+DO_ZZZ_TB(sve2_umull_zzz_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
70
+DO_ZZZ_TB(sve2_umull_zzz_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
71
+DO_ZZZ_TB(sve2_umull_zzz_d, uint64_t, uint32_t, , H1_4, DO_MUL)
72
+
73
+/* Note that the multiply cannot overflow, but the doubling can. */
74
+static inline int16_t do_sqdmull_h(int16_t n, int16_t m)
75
+{
76
+ int16_t val = n * m;
77
+ return DO_SQADD_H(val, val);
78
+}
79
+
80
+static inline int32_t do_sqdmull_s(int32_t n, int32_t m)
81
+{
82
+ int32_t val = n * m;
83
+ return DO_SQADD_S(val, val);
84
+}
85
+
86
+static inline int64_t do_sqdmull_d(int64_t n, int64_t m)
87
+{
88
+ int64_t val = n * m;
89
+ return do_sqadd_d(val, val);
90
+}
91
+
92
+DO_ZZZ_TB(sve2_sqdmull_zzz_h, int16_t, int8_t, H1_2, H1, do_sqdmull_h)
93
+DO_ZZZ_TB(sve2_sqdmull_zzz_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
94
+DO_ZZZ_TB(sve2_sqdmull_zzz_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
95
+
96
#undef DO_ZZZ_TB
97
98
#define DO_ZZZ_WTB(NAME, TYPEW, TYPEN, HW, HN, OP) \
99
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/translate-sve.c
102
+++ b/target/arm/translate-sve.c
103
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_TB(SADDLBT, saddl, false, true)
104
DO_SVE2_ZZZ_TB(SSUBLBT, ssubl, false, true)
105
DO_SVE2_ZZZ_TB(SSUBLTB, ssubl, true, false)
106
107
+DO_SVE2_ZZZ_TB(SQDMULLB_zzz, sqdmull_zzz, false, false)
108
+DO_SVE2_ZZZ_TB(SQDMULLT_zzz, sqdmull_zzz, true, true)
109
+
110
+DO_SVE2_ZZZ_TB(SMULLB_zzz, smull_zzz, false, false)
111
+DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
112
+
113
+DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
114
+DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
115
+
116
#define DO_SVE2_ZZZ_WTB(NAME, name, SEL2) \
117
static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
118
{ \
119
--
120
2.20.1
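
The saturating add in do_sqdmull_* above matters only for the doubling
step: the widened product of two narrow elements always fits, but twice
the extreme product does not. A small self-contained check of that
corner case, with sqadd16() as a hypothetical stand-in for DO_SQADD_H
(not part of the patch):

#include <stdint.h>
#include <stdio.h>

static int16_t sqadd16(int16_t a, int16_t b)
{
    int32_t r = (int32_t)a + b;
    if (r > INT16_MAX) {
        r = INT16_MAX;
    } else if (r < INT16_MIN) {
        r = INT16_MIN;
    }
    return (int16_t)r;
}

int main(void)
{
    int16_t n = -128, m = -128;          /* widened int8_t extremes */
    int16_t prod = n * m;                /* 16384: cannot overflow */
    printf("%d\n", sqadd16(prod, prod)); /* 32768 saturates to 32767 */
    return 0;
}
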
121
122
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-16-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 8 ++
9
target/arm/sve.decode | 8 ++
10
target/arm/sve_helper.c | 22 +++++
11
target/arm/translate-sve.c | 159 +++++++++++++++++++++++++++++++++++++
12
4 files changed, 197 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
20
DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(sve2_pmull_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_3(sve2_sshll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve2_sshll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve2_sshll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_3(sve2_ushll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve2_ushll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve2_ushll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/sve.decode
33
+++ b/target/arm/sve.decode
34
@@ -XXX,XX +XXX,XX @@ SMULLB_zzz 01000101 .. 0 ..... 011 100 ..... ..... @rd_rn_rm
35
SMULLT_zzz 01000101 .. 0 ..... 011 101 ..... ..... @rd_rn_rm
36
UMULLB_zzz 01000101 .. 0 ..... 011 110 ..... ..... @rd_rn_rm
37
UMULLT_zzz 01000101 .. 0 ..... 011 111 ..... ..... @rd_rn_rm
38
+
39
+## SVE2 bitwise shift left long
40
+
41
+# Note bit23 == 0 is handled by esz > 0 in do_sve2_shll_tb.
42
+SSHLLB 01000101 .. 0 ..... 1010 00 ..... ..... @rd_rn_tszimm_shl
43
+SSHLLT 01000101 .. 0 ..... 1010 01 ..... ..... @rd_rn_tszimm_shl
44
+USHLLB 01000101 .. 0 ..... 1010 10 ..... ..... @rd_rn_tszimm_shl
45
+USHLLT 01000101 .. 0 ..... 1010 11 ..... ..... @rd_rn_tszimm_shl
46
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve_helper.c
49
+++ b/target/arm/sve_helper.c
50
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
51
52
#undef DO_ZZZ_WTB
53
54
+#define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
55
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
56
+{ \
57
+ intptr_t i, opr_sz = simd_oprsz(desc); \
58
+ intptr_t sel = (simd_data(desc) & 1) * sizeof(TYPEN); \
59
+ int shift = simd_data(desc) >> 1; \
60
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
61
+ TYPEW nn = *(TYPEN *)(vn + HN(i + sel)); \
62
+ *(TYPEW *)(vd + HW(i)) = nn << shift; \
63
+ } \
64
+}
65
+
66
+DO_ZZI_SHLL(sve2_sshll_h, int16_t, int8_t, H1_2, H1)
67
+DO_ZZI_SHLL(sve2_sshll_s, int32_t, int16_t, H1_4, H1_2)
68
+DO_ZZI_SHLL(sve2_sshll_d, int64_t, int32_t, , H1_4)
69
+
70
+DO_ZZI_SHLL(sve2_ushll_h, uint16_t, uint8_t, H1_2, H1)
71
+DO_ZZI_SHLL(sve2_ushll_s, uint32_t, uint16_t, H1_4, H1_2)
72
+DO_ZZI_SHLL(sve2_ushll_d, uint64_t, uint32_t, , H1_4)
73
+
74
+#undef DO_ZZI_SHLL
75
+
76
/* Two-operand reduction expander, controlled by a predicate.
77
* The difference between TYPERED and TYPERET has to do with
78
* sign-extension. E.g. for SMAX, TYPERED must be signed,
79
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-sve.c
82
+++ b/target/arm/translate-sve.c
83
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_WTB(UADDWB, uaddw, false)
84
DO_SVE2_ZZZ_WTB(UADDWT, uaddw, true)
85
DO_SVE2_ZZZ_WTB(USUBWB, usubw, false)
86
DO_SVE2_ZZZ_WTB(USUBWT, usubw, true)
87
+
88
+static void gen_sshll_vec(unsigned vece, TCGv_vec d, TCGv_vec n, int64_t imm)
89
+{
90
+ int top = imm & 1;
91
+ int shl = imm >> 1;
92
+ int halfbits = 4 << vece;
93
+
94
+ if (top) {
95
+ if (shl == halfbits) {
96
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
97
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(halfbits, halfbits));
98
+ tcg_gen_and_vec(vece, d, n, t);
99
+ tcg_temp_free_vec(t);
100
+ } else {
101
+ tcg_gen_sari_vec(vece, d, n, halfbits);
102
+ tcg_gen_shli_vec(vece, d, d, shl);
103
+ }
104
+ } else {
105
+ tcg_gen_shli_vec(vece, d, n, halfbits);
106
+ tcg_gen_sari_vec(vece, d, d, halfbits - shl);
107
+ }
108
+}
109
+
110
+static void gen_ushll_i64(unsigned vece, TCGv_i64 d, TCGv_i64 n, int imm)
111
+{
112
+ int halfbits = 4 << vece;
113
+ int top = imm & 1;
114
+ int shl = (imm >> 1);
115
+ int shift;
116
+ uint64_t mask;
117
+
118
+ mask = MAKE_64BIT_MASK(0, halfbits);
119
+ mask <<= shl;
120
+ mask = dup_const(vece, mask);
121
+
122
+ shift = shl - top * halfbits;
123
+ if (shift < 0) {
124
+ tcg_gen_shri_i64(d, n, -shift);
125
+ } else {
126
+ tcg_gen_shli_i64(d, n, shift);
127
+ }
128
+ tcg_gen_andi_i64(d, d, mask);
129
+}
130
+
131
+static void gen_ushll16_i64(TCGv_i64 d, TCGv_i64 n, int64_t imm)
132
+{
133
+ gen_ushll_i64(MO_16, d, n, imm);
134
+}
135
+
136
+static void gen_ushll32_i64(TCGv_i64 d, TCGv_i64 n, int64_t imm)
137
+{
138
+ gen_ushll_i64(MO_32, d, n, imm);
139
+}
140
+
141
+static void gen_ushll64_i64(TCGv_i64 d, TCGv_i64 n, int64_t imm)
142
+{
143
+ gen_ushll_i64(MO_64, d, n, imm);
144
+}
145
+
146
+static void gen_ushll_vec(unsigned vece, TCGv_vec d, TCGv_vec n, int64_t imm)
147
+{
148
+ int halfbits = 4 << vece;
149
+ int top = imm & 1;
150
+ int shl = imm >> 1;
151
+
152
+ if (top) {
153
+ if (shl == halfbits) {
154
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
155
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(halfbits, halfbits));
156
+ tcg_gen_and_vec(vece, d, n, t);
157
+ tcg_temp_free_vec(t);
158
+ } else {
159
+ tcg_gen_shri_vec(vece, d, n, halfbits);
160
+ tcg_gen_shli_vec(vece, d, d, shl);
161
+ }
162
+ } else {
163
+ if (shl == 0) {
164
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
165
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
166
+ tcg_gen_and_vec(vece, d, n, t);
167
+ tcg_temp_free_vec(t);
168
+ } else {
169
+ tcg_gen_shli_vec(vece, d, n, halfbits);
170
+ tcg_gen_shri_vec(vece, d, d, halfbits - shl);
171
+ }
172
+ }
173
+}
174
+
175
+static bool do_sve2_shll_tb(DisasContext *s, arg_rri_esz *a,
176
+ bool sel, bool uns)
177
+{
178
+ static const TCGOpcode sshll_list[] = {
179
+ INDEX_op_shli_vec, INDEX_op_sari_vec, 0
180
+ };
181
+ static const TCGOpcode ushll_list[] = {
182
+ INDEX_op_shli_vec, INDEX_op_shri_vec, 0
183
+ };
184
+ static const GVecGen2i ops[2][3] = {
185
+ { { .fniv = gen_sshll_vec,
186
+ .opt_opc = sshll_list,
187
+ .fno = gen_helper_sve2_sshll_h,
188
+ .vece = MO_16 },
189
+ { .fniv = gen_sshll_vec,
190
+ .opt_opc = sshll_list,
191
+ .fno = gen_helper_sve2_sshll_s,
192
+ .vece = MO_32 },
193
+ { .fniv = gen_sshll_vec,
194
+ .opt_opc = sshll_list,
195
+ .fno = gen_helper_sve2_sshll_d,
196
+ .vece = MO_64 } },
197
+ { { .fni8 = gen_ushll16_i64,
198
+ .fniv = gen_ushll_vec,
199
+ .opt_opc = ushll_list,
200
+ .fno = gen_helper_sve2_ushll_h,
201
+ .vece = MO_16 },
202
+ { .fni8 = gen_ushll32_i64,
203
+ .fniv = gen_ushll_vec,
204
+ .opt_opc = ushll_list,
205
+ .fno = gen_helper_sve2_ushll_s,
206
+ .vece = MO_32 },
207
+ { .fni8 = gen_ushll64_i64,
208
+ .fniv = gen_ushll_vec,
209
+ .opt_opc = ushll_list,
210
+ .fno = gen_helper_sve2_ushll_d,
211
+ .vece = MO_64 } },
212
+ };
213
+
214
+ if (a->esz < 0 || a->esz > 2 || !dc_isar_feature(aa64_sve2, s)) {
215
+ return false;
216
+ }
217
+ if (sve_access_check(s)) {
218
+ unsigned vsz = vec_full_reg_size(s);
219
+ tcg_gen_gvec_2i(vec_full_reg_offset(s, a->rd),
220
+ vec_full_reg_offset(s, a->rn),
221
+ vsz, vsz, (a->imm << 1) | sel,
222
+ &ops[uns][a->esz]);
223
+ }
224
+ return true;
225
+}
226
+
227
+static bool trans_SSHLLB(DisasContext *s, arg_rri_esz *a)
228
+{
229
+ return do_sve2_shll_tb(s, a, false, false);
230
+}
231
+
232
+static bool trans_SSHLLT(DisasContext *s, arg_rri_esz *a)
233
+{
234
+ return do_sve2_shll_tb(s, a, true, false);
235
+}
236
+
237
+static bool trans_USHLLB(DisasContext *s, arg_rri_esz *a)
238
+{
239
+ return do_sve2_shll_tb(s, a, false, true);
240
+}
241
+
242
+static bool trans_USHLLT(DisasContext *s, arg_rri_esz *a)
243
+{
244
+ return do_sve2_shll_tb(s, a, true, true);
245
+}
246
--
247
2.20.1
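
The vector code above is an optimised form of a simple per-lane
operation: pick the bottom (even) or top (odd) narrow element, sign- or
zero-extend it to twice the width, then shift left by the immediate. A
minimal scalar sketch of USHLLB/USHLLT on 8-bit sources, using the
hypothetical name ushll_ref() (not part of the patch):

#include <stdint.h>

static uint16_t ushll_ref(const uint8_t *zn, int i, int shift, int top)
{
    /* zero-extend the selected narrow element of Zn, then shift left */
    return (uint16_t)((uint16_t)zn[2 * i + top] << shift);
}
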
248
249
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-17-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 5 +++++
9
target/arm/sve.decode | 5 +++++
10
target/arm/sve_helper.c | 20 ++++++++++++++++++++
11
target/arm/translate-sve.c | 19 +++++++++++++++++++
12
4 files changed, 49 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_sshll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve2_ushll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve2_ushll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_3(sve2_ushll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_4(sve2_eoril_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_eoril_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_eoril_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_eoril_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve.decode
30
+++ b/target/arm/sve.decode
31
@@ -XXX,XX +XXX,XX @@ SSHLLB 01000101 .. 0 ..... 1010 00 ..... ..... @rd_rn_tszimm_shl
32
SSHLLT 01000101 .. 0 ..... 1010 01 ..... ..... @rd_rn_tszimm_shl
33
USHLLB 01000101 .. 0 ..... 1010 10 ..... ..... @rd_rn_tszimm_shl
34
USHLLT 01000101 .. 0 ..... 1010 11 ..... ..... @rd_rn_tszimm_shl
35
+
36
+## SVE2 bitwise exclusive-or interleaved
37
+
38
+EORBT 01000101 .. 0 ..... 10010 0 ..... ..... @rd_rn_rm
39
+EORTB 01000101 .. 0 ..... 10010 1 ..... ..... @rd_rn_rm
40
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/sve_helper.c
43
+++ b/target/arm/sve_helper.c
44
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
45
46
#undef DO_ZZZ_WTB
47
48
+#define DO_ZZZ_NTB(NAME, TYPE, H, OP) \
49
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
50
+{ \
51
+ intptr_t i, opr_sz = simd_oprsz(desc); \
52
+ intptr_t sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPE); \
53
+ intptr_t sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPE); \
54
+ for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) { \
55
+ TYPE nn = *(TYPE *)(vn + H(i + sel1)); \
56
+ TYPE mm = *(TYPE *)(vm + H(i + sel2)); \
57
+ *(TYPE *)(vd + H(i + sel1)) = OP(nn, mm); \
58
+ } \
59
+}
60
+
61
+DO_ZZZ_NTB(sve2_eoril_b, uint8_t, H1, DO_EOR)
62
+DO_ZZZ_NTB(sve2_eoril_h, uint16_t, H1_2, DO_EOR)
63
+DO_ZZZ_NTB(sve2_eoril_s, uint32_t, H1_4, DO_EOR)
64
+DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
65
+
66
+#undef DO_ZZZ_NTB
67
+
68
#define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
69
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
70
{ \
71
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate-sve.c
74
+++ b/target/arm/translate-sve.c
75
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
76
DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
77
DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
78
79
+static bool do_eor_tb(DisasContext *s, arg_rrr_esz *a, bool sel1)
80
+{
81
+ static gen_helper_gvec_3 * const fns[4] = {
82
+ gen_helper_sve2_eoril_b, gen_helper_sve2_eoril_h,
83
+ gen_helper_sve2_eoril_s, gen_helper_sve2_eoril_d,
84
+ };
85
+ return do_sve2_zzw_ool(s, a, fns[a->esz], (!sel1 << 1) | sel1);
86
+}
87
+
88
+static bool trans_EORBT(DisasContext *s, arg_rrr_esz *a)
89
+{
90
+ return do_eor_tb(s, a, false);
91
+}
92
+
93
+static bool trans_EORTB(DisasContext *s, arg_rrr_esz *a)
94
+{
95
+ return do_eor_tb(s, a, true);
96
+}
97
+
98
static bool do_trans_pmull(DisasContext *s, arg_rrr_esz *a, bool sel)
99
{
100
static gen_helper_gvec_3 * const fns[4] = {
101
--
102
2.20.1
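
EORBT and EORTB only write half of the destination lanes: EORBT XORs the
bottom (even) elements of Zn with the top (odd) elements of Zm and stores
the results into the bottom elements of Zd, leaving the top elements of
Zd unchanged; EORTB is the mirror image. A minimal scalar sketch of
EORBT for 8-bit elements, using the hypothetical name eorbt_ref()
(not part of the patch):

#include <stdint.h>

static void eorbt_ref(uint8_t *zd, const uint8_t *zn, const uint8_t *zm,
                      int pairs)
{
    for (int i = 0; i < pairs; ++i) {
        /* Zn bottom XOR Zm top -> Zd bottom; Zd top is untouched */
        zd[2 * i] = zn[2 * i] ^ zm[2 * i + 1];
    }
}
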
103
104
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 10 +++++++++
9
target/arm/sve.decode | 9 ++++++++
10
target/arm/sve_helper.c | 42 ++++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 31 ++++++++++++++++++++++++++++
12
4 files changed, 92 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_bgrp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve2_bgrp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve2_bgrp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(sve2_bgrp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_4(sve2_cadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_cadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_cadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_cadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+
28
+DEF_HELPER_FLAGS_4(sve2_sqcadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve2_sqcadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_sqcadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_sqcadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/sve.decode
35
+++ b/target/arm/sve.decode
36
@@ -XXX,XX +XXX,XX @@ EORTB 01000101 .. 0 ..... 10010 1 ..... ..... @rd_rn_rm
37
BEXT 01000101 .. 0 ..... 1011 00 ..... ..... @rd_rn_rm
38
BDEP 01000101 .. 0 ..... 1011 01 ..... ..... @rd_rn_rm
39
BGRP 01000101 .. 0 ..... 1011 10 ..... ..... @rd_rn_rm
40
+
41
+#### SVE2 Accumulate
42
+
43
+## SVE2 complex integer add
44
+
45
+CADD_rot90 01000101 .. 00000 0 11011 0 ..... ..... @rdn_rm
46
+CADD_rot270 01000101 .. 00000 0 11011 1 ..... ..... @rdn_rm
47
+SQCADD_rot90 01000101 .. 00000 1 11011 0 ..... ..... @rdn_rm
48
+SQCADD_rot270 01000101 .. 00000 1 11011 1 ..... ..... @rdn_rm
49
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/sve_helper.c
52
+++ b/target/arm/sve_helper.c
53
@@ -XXX,XX +XXX,XX @@ DO_BITPERM(sve2_bgrp_d, uint64_t, bitgroup)
54
55
#undef DO_BITPERM
56
57
+#define DO_CADD(NAME, TYPE, H, ADD_OP, SUB_OP) \
58
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
59
+{ \
60
+ intptr_t i, opr_sz = simd_oprsz(desc); \
61
+ int sub_r = simd_data(desc); \
62
+ if (sub_r) { \
63
+ for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) { \
64
+ TYPE acc_r = *(TYPE *)(vn + H(i)); \
65
+ TYPE acc_i = *(TYPE *)(vn + H(i + sizeof(TYPE))); \
66
+ TYPE el2_r = *(TYPE *)(vm + H(i)); \
67
+ TYPE el2_i = *(TYPE *)(vm + H(i + sizeof(TYPE))); \
68
+ acc_r = ADD_OP(acc_r, el2_i); \
69
+ acc_i = SUB_OP(acc_i, el2_r); \
70
+ *(TYPE *)(vd + H(i)) = acc_r; \
71
+ *(TYPE *)(vd + H(i + sizeof(TYPE))) = acc_i; \
72
+ } \
73
+ } else { \
74
+ for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) { \
75
+ TYPE acc_r = *(TYPE *)(vn + H(i)); \
76
+ TYPE acc_i = *(TYPE *)(vn + H(i + sizeof(TYPE))); \
77
+ TYPE el2_r = *(TYPE *)(vm + H(i)); \
78
+ TYPE el2_i = *(TYPE *)(vm + H(i + sizeof(TYPE))); \
79
+ acc_r = SUB_OP(acc_r, el2_i); \
80
+ acc_i = ADD_OP(acc_i, el2_r); \
81
+ *(TYPE *)(vd + H(i)) = acc_r; \
82
+ *(TYPE *)(vd + H(i + sizeof(TYPE))) = acc_i; \
83
+ } \
84
+ } \
85
+}
86
+
87
+DO_CADD(sve2_cadd_b, int8_t, H1, DO_ADD, DO_SUB)
88
+DO_CADD(sve2_cadd_h, int16_t, H1_2, DO_ADD, DO_SUB)
89
+DO_CADD(sve2_cadd_s, int32_t, H1_4, DO_ADD, DO_SUB)
90
+DO_CADD(sve2_cadd_d, int64_t, , DO_ADD, DO_SUB)
91
+
92
+DO_CADD(sve2_sqcadd_b, int8_t, H1, DO_SQADD_B, DO_SQSUB_B)
93
+DO_CADD(sve2_sqcadd_h, int16_t, H1_2, DO_SQADD_H, DO_SQSUB_H)
94
+DO_CADD(sve2_sqcadd_s, int32_t, H1_4, DO_SQADD_S, DO_SQSUB_S)
95
+DO_CADD(sve2_sqcadd_d, int64_t, , do_sqadd_d, do_sqsub_d)
96
+
97
+#undef DO_CADD
98
+
99
#define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
100
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
101
{ \
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ static bool trans_BGRP(DisasContext *s, arg_rrr_esz *a)
107
}
108
return do_sve2_zzw_ool(s, a, fns[a->esz], 0);
109
}
110
+
111
+static bool do_cadd(DisasContext *s, arg_rrr_esz *a, bool sq, bool rot)
112
+{
113
+ static gen_helper_gvec_3 * const fns[2][4] = {
114
+ { gen_helper_sve2_cadd_b, gen_helper_sve2_cadd_h,
115
+ gen_helper_sve2_cadd_s, gen_helper_sve2_cadd_d },
116
+ { gen_helper_sve2_sqcadd_b, gen_helper_sve2_sqcadd_h,
117
+ gen_helper_sve2_sqcadd_s, gen_helper_sve2_sqcadd_d },
118
+ };
119
+ return do_sve2_zzw_ool(s, a, fns[sq][a->esz], rot);
120
+}
121
+
122
+static bool trans_CADD_rot90(DisasContext *s, arg_rrr_esz *a)
123
+{
124
+ return do_cadd(s, a, false, false);
125
+}
126
+
127
+static bool trans_CADD_rot270(DisasContext *s, arg_rrr_esz *a)
128
+{
129
+ return do_cadd(s, a, false, true);
130
+}
131
+
132
+static bool trans_SQCADD_rot90(DisasContext *s, arg_rrr_esz *a)
133
+{
134
+ return do_cadd(s, a, true, false);
135
+}
136
+
137
+static bool trans_SQCADD_rot270(DisasContext *s, arg_rrr_esz *a)
138
+{
139
+ return do_cadd(s, a, true, true);
140
+}
141
--
142
2.20.1
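
Viewed as complex numbers held in (real, imaginary) element pairs, CADD
with rotate 90 adds the second operand multiplied by i, which is exactly
the add/subtract swap in DO_CADD: (ar + ai*i) + i*(br + bi*i)
= (ar - bi) + (ai + br)*i; rotate 270 applies the opposite rotation. A
minimal sketch for one int8_t pair, using the hypothetical name
cadd90_ref() (not part of the patch):

#include <stdint.h>

static void cadd90_ref(int8_t d[2], const int8_t n[2], const int8_t m[2])
{
    d[0] = (int8_t)(n[0] - m[1]);   /* real:      ar - bi */
    d[1] = (int8_t)(n[1] + m[0]);   /* imaginary: ai + br */
}
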
143
144
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-20-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 14 ++++++++++
9
target/arm/sve.decode | 12 +++++++++
10
target/arm/sve_helper.c | 23 ++++++++++++++++
11
target/arm/translate-sve.c | 55 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 104 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_sqcadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve2_sqcadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve2_sqcadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(sve2_sqcadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_sabal_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_sabal_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_sabal_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_5(sve2_uabal_h, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve2_uabal_s, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@
41
&rpr_s rd pg rn s
42
&rprr_s rd pg rn rm s
43
&rprr_esz rd pg rn rm esz
44
+&rrrr_esz rd ra rn rm esz
45
&rprrr_esz rd pg rn rm ra esz
46
&rpri_esz rd pg rn imm esz
47
&ptrue rd esz pat s
48
@@ -XXX,XX +XXX,XX @@
49
@rdn_i8s ........ esz:2 ...... ... imm:s8 rd:5 \
50
&rri_esz rn=%reg_movprfx
51
52
+# Four operand, vector element size
53
+@rda_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 \
54
+ &rrrr_esz ra=%reg_movprfx
55
+
56
# Three operand with "memory" size, aka immediate left shift
57
@rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
58
59
@@ -XXX,XX +XXX,XX @@ CADD_rot90 01000101 .. 00000 0 11011 0 ..... ..... @rdn_rm
60
CADD_rot270 01000101 .. 00000 0 11011 1 ..... ..... @rdn_rm
61
SQCADD_rot90 01000101 .. 00000 1 11011 0 ..... ..... @rdn_rm
62
SQCADD_rot270 01000101 .. 00000 1 11011 1 ..... ..... @rdn_rm
63
+
64
+## SVE2 integer absolute difference and accumulate long
65
+
66
+SABALB 01000101 .. 0 ..... 1100 00 ..... ..... @rda_rn_rm
67
+SABALT 01000101 .. 0 ..... 1100 01 ..... ..... @rda_rn_rm
68
+UABALB 01000101 .. 0 ..... 1100 10 ..... ..... @rda_rn_rm
69
+UABALT 01000101 .. 0 ..... 1100 11 ..... ..... @rda_rn_rm
70
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/sve_helper.c
73
+++ b/target/arm/sve_helper.c
74
@@ -XXX,XX +XXX,XX @@ DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
75
76
#undef DO_ZZZ_NTB
77
78
+#define DO_ZZZW_ACC(NAME, TYPEW, TYPEN, HW, HN, OP) \
79
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
80
+{ \
81
+ intptr_t i, opr_sz = simd_oprsz(desc); \
82
+ intptr_t sel1 = simd_data(desc) * sizeof(TYPEN); \
83
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
84
+ TYPEW nn = *(TYPEN *)(vn + HN(i + sel1)); \
85
+ TYPEW mm = *(TYPEN *)(vm + HN(i + sel1)); \
86
+ TYPEW aa = *(TYPEW *)(va + HW(i)); \
87
+ *(TYPEW *)(vd + HW(i)) = OP(nn, mm) + aa; \
88
+ } \
89
+}
90
+
91
+DO_ZZZW_ACC(sve2_sabal_h, int16_t, int8_t, H1_2, H1, DO_ABD)
92
+DO_ZZZW_ACC(sve2_sabal_s, int32_t, int16_t, H1_4, H1_2, DO_ABD)
93
+DO_ZZZW_ACC(sve2_sabal_d, int64_t, int32_t, , H1_4, DO_ABD)
94
+
95
+DO_ZZZW_ACC(sve2_uabal_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
96
+DO_ZZZW_ACC(sve2_uabal_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
97
+DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
98
+
99
+#undef DO_ZZZW_ACC
100
+
101
#define DO_BITPERM(NAME, TYPE, OP) \
102
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
103
{ \
104
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/arm/translate-sve.c
107
+++ b/target/arm/translate-sve.c
108
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_ool_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
109
vsz, vsz, data, fn);
110
}
111
112
+/* Invoke an out-of-line helper on 4 Zregs. */
113
+static void gen_gvec_ool_zzzz(DisasContext *s, gen_helper_gvec_4 *fn,
114
+ int rd, int rn, int rm, int ra, int data)
115
+{
116
+ unsigned vsz = vec_full_reg_size(s);
117
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
118
+ vec_full_reg_offset(s, rn),
119
+ vec_full_reg_offset(s, rm),
120
+ vec_full_reg_offset(s, ra),
121
+ vsz, vsz, data, fn);
122
+}
123
+
124
/* Invoke an out-of-line helper on 2 Zregs and a predicate. */
125
static void gen_gvec_ool_zzp(DisasContext *s, gen_helper_gvec_3 *fn,
126
int rd, int rn, int pg, int data)
127
@@ -XXX,XX +XXX,XX @@ static bool trans_SQCADD_rot270(DisasContext *s, arg_rrr_esz *a)
128
{
129
return do_cadd(s, a, true, true);
130
}
131
+
132
+static bool do_sve2_zzzz_ool(DisasContext *s, arg_rrrr_esz *a,
133
+ gen_helper_gvec_4 *fn, int data)
134
+{
135
+ if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
136
+ return false;
137
+ }
138
+ if (sve_access_check(s)) {
139
+ gen_gvec_ool_zzzz(s, fn, a->rd, a->rn, a->rm, a->ra, data);
140
+ }
141
+ return true;
142
+}
143
+
144
+static bool do_abal(DisasContext *s, arg_rrrr_esz *a, bool uns, bool sel)
145
+{
146
+ static gen_helper_gvec_4 * const fns[2][4] = {
147
+ { NULL, gen_helper_sve2_sabal_h,
148
+ gen_helper_sve2_sabal_s, gen_helper_sve2_sabal_d },
149
+ { NULL, gen_helper_sve2_uabal_h,
150
+ gen_helper_sve2_uabal_s, gen_helper_sve2_uabal_d },
151
+ };
152
+ return do_sve2_zzzz_ool(s, a, fns[uns][a->esz], sel);
153
+}
154
+
155
+static bool trans_SABALB(DisasContext *s, arg_rrrr_esz *a)
156
+{
157
+ return do_abal(s, a, false, false);
158
+}
159
+
160
+static bool trans_SABALT(DisasContext *s, arg_rrrr_esz *a)
161
+{
162
+ return do_abal(s, a, false, true);
163
+}
164
+
165
+static bool trans_UABALB(DisasContext *s, arg_rrrr_esz *a)
166
+{
167
+ return do_abal(s, a, true, false);
168
+}
169
+
170
+static bool trans_UABALT(DisasContext *s, arg_rrrr_esz *a)
171
+{
172
+ return do_abal(s, a, true, true);
173
+}
174
--
175
2.20.1
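
Each output lane of SABALB/SABALT is the running accumulator plus the
absolute difference of two narrow source elements, widened, as in
DO_ZZZW_ACC above. A minimal scalar sketch of one SABALB lane (int8_t
sources, int16_t accumulator), using the hypothetical name sabalb_ref()
(not part of the patch):

#include <stdint.h>

static int16_t sabalb_ref(int16_t acc, int8_t n, int8_t m)
{
    int16_t diff = (int16_t)n - (int16_t)m;
    return acc + (diff < 0 ? -diff : diff);   /* acc + |n - m| */
}
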
176
177
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-21-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 3 +++
9
target/arm/sve.decode | 6 ++++++
10
target/arm/sve_helper.c | 34 ++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 23 +++++++++++++++++++++++
12
4 files changed, 66 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_uabal_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_adcl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_adcl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/sve.decode
28
+++ b/target/arm/sve.decode
29
@@ -XXX,XX +XXX,XX @@ SABALB 01000101 .. 0 ..... 1100 00 ..... ..... @rda_rn_rm
30
SABALT 01000101 .. 0 ..... 1100 01 ..... ..... @rda_rn_rm
31
UABALB 01000101 .. 0 ..... 1100 10 ..... ..... @rda_rn_rm
32
UABALT 01000101 .. 0 ..... 1100 11 ..... ..... @rda_rn_rm
33
+
34
+## SVE2 integer add/subtract long with carry
35
+
36
+# ADC and SBC decoded via size in helper dispatch.
37
+ADCLB 01000101 .. 0 ..... 11010 0 ..... ..... @rda_rn_rm
38
+ADCLT 01000101 .. 0 ..... 11010 1 ..... ..... @rda_rn_rm
39
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve_helper.c
42
+++ b/target/arm/sve_helper.c
43
@@ -XXX,XX +XXX,XX @@ DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
44
45
#undef DO_ZZZW_ACC
46
47
+void HELPER(sve2_adcl_s)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
48
+{
49
+ intptr_t i, opr_sz = simd_oprsz(desc);
50
+ int sel = H4(extract32(desc, SIMD_DATA_SHIFT, 1));
51
+ uint32_t inv = -extract32(desc, SIMD_DATA_SHIFT + 1, 1);
52
+ uint32_t *a = va, *n = vn;
53
+ uint64_t *d = vd, *m = vm;
54
+
55
+ for (i = 0; i < opr_sz / 8; ++i) {
56
+ uint32_t e1 = a[2 * i + H4(0)];
57
+ uint32_t e2 = n[2 * i + sel] ^ inv;
58
+ uint64_t c = extract64(m[i], 32, 1);
59
+ /* Compute and store the entire 33-bit result at once. */
60
+ d[i] = c + e1 + e2;
61
+ }
62
+}
63
+
64
+void HELPER(sve2_adcl_d)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
65
+{
66
+ intptr_t i, opr_sz = simd_oprsz(desc);
67
+ int sel = extract32(desc, SIMD_DATA_SHIFT, 1);
68
+ uint64_t inv = -(uint64_t)extract32(desc, SIMD_DATA_SHIFT + 1, 1);
69
+ uint64_t *d = vd, *a = va, *n = vn, *m = vm;
70
+
71
+ for (i = 0; i < opr_sz / 8; i += 2) {
72
+ Int128 e1 = int128_make64(a[i]);
73
+ Int128 e2 = int128_make64(n[i + sel] ^ inv);
74
+ Int128 c = int128_make64(m[i + 1] & 1);
75
+ Int128 r = int128_add(int128_add(e1, e2), c);
76
+ d[i + 0] = int128_getlo(r);
77
+ d[i + 1] = int128_gethi(r);
78
+ }
79
+}
80
+
81
#define DO_BITPERM(NAME, TYPE, OP) \
82
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
83
{ \
84
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/target/arm/translate-sve.c
87
+++ b/target/arm/translate-sve.c
88
@@ -XXX,XX +XXX,XX @@ static bool trans_UABALT(DisasContext *s, arg_rrrr_esz *a)
89
{
90
return do_abal(s, a, true, true);
91
}
92
+
93
+static bool do_adcl(DisasContext *s, arg_rrrr_esz *a, bool sel)
94
+{
95
+ static gen_helper_gvec_4 * const fns[2] = {
96
+ gen_helper_sve2_adcl_s,
97
+ gen_helper_sve2_adcl_d,
98
+ };
99
+ /*
100
+ * Note that in this case the ESZ field encodes both size and sign.
101
+ * Split out 'subtract' into bit 1 of the data field for the helper.
102
+ */
103
+ return do_sve2_zzzz_ool(s, a, fns[a->esz & 1], (a->esz & 2) | sel);
104
+}
105
+
106
+static bool trans_ADCLB(DisasContext *s, arg_rrrr_esz *a)
107
+{
108
+ return do_adcl(s, a, false);
109
+}
110
+
111
+static bool trans_ADCLT(DisasContext *s, arg_rrrr_esz *a)
112
+{
113
+ return do_adcl(s, a, true);
114
+}
115
--
116
2.20.1
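
The "entire 33-bit result" comment above is the key trick: two 32-bit
values plus a carry bit can never exceed 33 bits, so a single 64-bit
addition yields the 32-bit sum in the low half and the carry-out in bit
32, which is where the helper keeps the carry for the next ADCLB/ADCLT
in the chain. A small self-contained demonstration of that property
(not part of the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t e1 = 0xffffffffu, e2 = 1, carry_in = 0;
    uint64_t r = (uint64_t)carry_in + e1 + e2;

    /* prints "sum = 0x00000000, carry out = 1" */
    printf("sum = 0x%08x, carry out = %u\n",
           (unsigned)(uint32_t)r, (unsigned)(r >> 32));
    return 0;
}
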
117
118
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-22-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 8 ++++++++
9
target/arm/translate-sve.c | 34 ++++++++++++++++++++++++++++++++++
10
2 files changed, 42 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ UABALT 01000101 .. 0 ..... 1100 11 ..... ..... @rda_rn_rm
17
# ADC and SBC decoded via size in helper dispatch.
18
ADCLB 01000101 .. 0 ..... 11010 0 ..... ..... @rda_rn_rm
19
ADCLT 01000101 .. 0 ..... 11010 1 ..... ..... @rda_rn_rm
20
+
21
+## SVE2 bitwise shift right and accumulate
22
+
23
+# TODO: Use @rda and %reg_movprfx here.
24
+SSRA 01000101 .. 0 ..... 1110 00 ..... ..... @rd_rn_tszimm_shr
25
+USRA 01000101 .. 0 ..... 1110 01 ..... ..... @rd_rn_tszimm_shr
26
+SRSRA 01000101 .. 0 ..... 1110 10 ..... ..... @rd_rn_tszimm_shr
27
+URSRA 01000101 .. 0 ..... 1110 11 ..... ..... @rd_rn_tszimm_shr
28
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-sve.c
31
+++ b/target/arm/translate-sve.c
32
@@ -XXX,XX +XXX,XX @@ static bool trans_ADCLT(DisasContext *s, arg_rrrr_esz *a)
33
{
34
return do_adcl(s, a, true);
35
}
36
+
37
+static bool do_sve2_fn2i(DisasContext *s, arg_rri_esz *a, GVecGen2iFn *fn)
38
+{
39
+ if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
40
+ return false;
41
+ }
42
+ if (sve_access_check(s)) {
43
+ unsigned vsz = vec_full_reg_size(s);
44
+ unsigned rd_ofs = vec_full_reg_offset(s, a->rd);
45
+ unsigned rn_ofs = vec_full_reg_offset(s, a->rn);
46
+ fn(a->esz, rd_ofs, rn_ofs, a->imm, vsz, vsz);
47
+ }
48
+ return true;
49
+}
50
+
51
+static bool trans_SSRA(DisasContext *s, arg_rri_esz *a)
52
+{
53
+ return do_sve2_fn2i(s, a, gen_gvec_ssra);
54
+}
55
+
56
+static bool trans_USRA(DisasContext *s, arg_rri_esz *a)
57
+{
58
+ return do_sve2_fn2i(s, a, gen_gvec_usra);
59
+}
60
+
61
+static bool trans_SRSRA(DisasContext *s, arg_rri_esz *a)
62
+{
63
+ return do_sve2_fn2i(s, a, gen_gvec_srsra);
64
+}
65
+
66
+static bool trans_URSRA(DisasContext *s, arg_rri_esz *a)
67
+{
68
+ return do_sve2_fn2i(s, a, gen_gvec_ursra);
69
+}
70
--
71
2.20.1
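
The four translators above all reduce to the shared gvec expanders: per
element, shift the source right by the immediate (arithmetic for SSRA,
logical for USRA, with a rounding addend for the RS forms) and
accumulate into the destination. A minimal scalar sketch for 16-bit
lanes, using the hypothetical names ssra_ref() and usra_ref()
(not part of the patch):

#include <stdint.h>

static int16_t ssra_ref(int16_t d, int16_t n, int shift)
{
    return (int16_t)(d + (n >> shift));            /* arithmetic shift */
}

static uint16_t usra_ref(uint16_t d, uint16_t n, int shift)
{
    return (uint16_t)(d + (uint16_t)(n >> shift)); /* logical shift */
}
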
72
73
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-23-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 5 +++++
9
target/arm/translate-sve.c | 10 ++++++++++
10
2 files changed, 15 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ SSRA 01000101 .. 0 ..... 1110 00 ..... ..... @rd_rn_tszimm_shr
17
USRA 01000101 .. 0 ..... 1110 01 ..... ..... @rd_rn_tszimm_shr
18
SRSRA 01000101 .. 0 ..... 1110 10 ..... ..... @rd_rn_tszimm_shr
19
URSRA 01000101 .. 0 ..... 1110 11 ..... ..... @rd_rn_tszimm_shr
20
+
21
+## SVE2 bitwise shift and insert
22
+
23
+SRI 01000101 .. 0 ..... 11110 0 ..... ..... @rd_rn_tszimm_shr
24
+SLI 01000101 .. 0 ..... 11110 1 ..... ..... @rd_rn_tszimm_shl
25
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/translate-sve.c
28
+++ b/target/arm/translate-sve.c
29
@@ -XXX,XX +XXX,XX @@ static bool trans_URSRA(DisasContext *s, arg_rri_esz *a)
30
{
31
return do_sve2_fn2i(s, a, gen_gvec_ursra);
32
}
33
+
34
+static bool trans_SRI(DisasContext *s, arg_rri_esz *a)
35
+{
36
+ return do_sve2_fn2i(s, a, gen_gvec_sri);
37
+}
38
+
39
+static bool trans_SLI(DisasContext *s, arg_rri_esz *a)
40
+{
41
+ return do_sve2_fn2i(s, a, gen_gvec_sli);
42
+}
43
--
44
2.20.1
45
46
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-24-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 6 ++++++
9
target/arm/translate-sve.c | 21 +++++++++++++++++++++
10
2 files changed, 27 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ URSRA 01000101 .. 0 ..... 1110 11 ..... ..... @rd_rn_tszimm_shr
17
18
SRI 01000101 .. 0 ..... 11110 0 ..... ..... @rd_rn_tszimm_shr
19
SLI 01000101 .. 0 ..... 11110 1 ..... ..... @rd_rn_tszimm_shl
20
+
21
+## SVE2 integer absolute difference and accumulate
22
+
23
+# TODO: Use @rda and %reg_movprfx here.
24
+SABA 01000101 .. 0 ..... 11111 0 ..... ..... @rd_rn_rm
25
+UABA 01000101 .. 0 ..... 11111 1 ..... ..... @rd_rn_rm
26
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-sve.c
29
+++ b/target/arm/translate-sve.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_SLI(DisasContext *s, arg_rri_esz *a)
31
{
32
return do_sve2_fn2i(s, a, gen_gvec_sli);
33
}
34
+
35
+static bool do_sve2_fn_zzz(DisasContext *s, arg_rrr_esz *a, GVecGen3Fn *fn)
36
+{
37
+ if (!dc_isar_feature(aa64_sve2, s)) {
38
+ return false;
39
+ }
40
+ if (sve_access_check(s)) {
41
+ gen_gvec_fn_zzz(s, fn, a->esz, a->rd, a->rn, a->rm);
42
+ }
43
+ return true;
44
+}
45
+
46
+static bool trans_SABA(DisasContext *s, arg_rrr_esz *a)
47
+{
48
+ return do_sve2_fn_zzz(s, a, gen_gvec_saba);
49
+}
50
+
51
+static bool trans_UABA(DisasContext *s, arg_rrr_esz *a)
52
+{
53
+ return do_sve2_fn_zzz(s, a, gen_gvec_uaba);
54
+}
55
--
56
2.20.1
57
58
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-25-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 24 ++++
9
target/arm/sve.decode | 12 ++
10
target/arm/sve_helper.c | 56 +++++++++
11
target/arm/translate-sve.c | 238 +++++++++++++++++++++++++++++++++++++
12
4 files changed, 330 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
19
20
DEF_HELPER_FLAGS_5(sve2_adcl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_5(sve2_adcl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
42
+
43
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve.decode
49
+++ b/target/arm/sve.decode
50
@@ -XXX,XX +XXX,XX @@ SLI 01000101 .. 0 ..... 11110 1 ..... ..... @rd_rn_tszimm_shl
51
# TODO: Use @rda and %reg_movprfx here.
52
SABA 01000101 .. 0 ..... 11111 0 ..... ..... @rd_rn_rm
53
UABA 01000101 .. 0 ..... 11111 1 ..... ..... @rd_rn_rm
54
+
55
+#### SVE2 Narrowing
56
+
57
+## SVE2 saturating extract narrow
58
+
59
+# Bits 23, 18-16 are zero, limited in the translator via esz < 3 & imm == 0.
60
+SQXTNB 01000101 .. 1 ..... 010 000 ..... ..... @rd_rn_tszimm_shl
61
+SQXTNT 01000101 .. 1 ..... 010 001 ..... ..... @rd_rn_tszimm_shl
62
+UQXTNB 01000101 .. 1 ..... 010 010 ..... ..... @rd_rn_tszimm_shl
63
+UQXTNT 01000101 .. 1 ..... 010 011 ..... ..... @rd_rn_tszimm_shl
64
+SQXTUNB 01000101 .. 1 ..... 010 100 ..... ..... @rd_rn_tszimm_shl
65
+SQXTUNT 01000101 .. 1 ..... 010 101 ..... ..... @rd_rn_tszimm_shl
66
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/sve_helper.c
69
+++ b/target/arm/sve_helper.c
70
@@ -XXX,XX +XXX,XX @@ DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
71
72
#undef DO_ZZZW_ACC
73
74
+#define DO_XTNB(NAME, TYPE, OP) \
75
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
76
+{ \
77
+ intptr_t i, opr_sz = simd_oprsz(desc); \
78
+ for (i = 0; i < opr_sz; i += sizeof(TYPE)) { \
79
+ TYPE nn = *(TYPE *)(vn + i); \
80
+ nn = OP(nn) & MAKE_64BIT_MASK(0, sizeof(TYPE) * 4); \
81
+ *(TYPE *)(vd + i) = nn; \
82
+ } \
83
+}
84
+
85
+#define DO_XTNT(NAME, TYPE, TYPEN, H, OP) \
86
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
87
+{ \
88
+ intptr_t i, opr_sz = simd_oprsz(desc), odd = H(sizeof(TYPEN)); \
89
+ for (i = 0; i < opr_sz; i += sizeof(TYPE)) { \
90
+ TYPE nn = *(TYPE *)(vn + i); \
91
+ *(TYPEN *)(vd + i + odd) = OP(nn); \
92
+ } \
93
+}
94
+
95
+#define DO_SQXTN_H(n) do_sat_bhs(n, INT8_MIN, INT8_MAX)
96
+#define DO_SQXTN_S(n) do_sat_bhs(n, INT16_MIN, INT16_MAX)
97
+#define DO_SQXTN_D(n) do_sat_bhs(n, INT32_MIN, INT32_MAX)
98
+
99
+DO_XTNB(sve2_sqxtnb_h, int16_t, DO_SQXTN_H)
100
+DO_XTNB(sve2_sqxtnb_s, int32_t, DO_SQXTN_S)
101
+DO_XTNB(sve2_sqxtnb_d, int64_t, DO_SQXTN_D)
102
+
103
+DO_XTNT(sve2_sqxtnt_h, int16_t, int8_t, H1, DO_SQXTN_H)
104
+DO_XTNT(sve2_sqxtnt_s, int32_t, int16_t, H1_2, DO_SQXTN_S)
105
+DO_XTNT(sve2_sqxtnt_d, int64_t, int32_t, H1_4, DO_SQXTN_D)
106
+
107
+#define DO_UQXTN_H(n) do_sat_bhs(n, 0, UINT8_MAX)
108
+#define DO_UQXTN_S(n) do_sat_bhs(n, 0, UINT16_MAX)
109
+#define DO_UQXTN_D(n) do_sat_bhs(n, 0, UINT32_MAX)
110
+
111
+DO_XTNB(sve2_uqxtnb_h, uint16_t, DO_UQXTN_H)
112
+DO_XTNB(sve2_uqxtnb_s, uint32_t, DO_UQXTN_S)
113
+DO_XTNB(sve2_uqxtnb_d, uint64_t, DO_UQXTN_D)
114
+
115
+DO_XTNT(sve2_uqxtnt_h, uint16_t, uint8_t, H1, DO_UQXTN_H)
116
+DO_XTNT(sve2_uqxtnt_s, uint32_t, uint16_t, H1_2, DO_UQXTN_S)
117
+DO_XTNT(sve2_uqxtnt_d, uint64_t, uint32_t, H1_4, DO_UQXTN_D)
118
+
119
+DO_XTNB(sve2_sqxtunb_h, int16_t, DO_UQXTN_H)
120
+DO_XTNB(sve2_sqxtunb_s, int32_t, DO_UQXTN_S)
121
+DO_XTNB(sve2_sqxtunb_d, int64_t, DO_UQXTN_D)
122
+
123
+DO_XTNT(sve2_sqxtunt_h, int16_t, int8_t, H1, DO_UQXTN_H)
124
+DO_XTNT(sve2_sqxtunt_s, int32_t, int16_t, H1_2, DO_UQXTN_S)
125
+DO_XTNT(sve2_sqxtunt_d, int64_t, int32_t, H1_4, DO_UQXTN_D)
126
+
127
+#undef DO_XTNB
128
+#undef DO_XTNT
129
+
130
void HELPER(sve2_adcl_s)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
131
{
132
intptr_t i, opr_sz = simd_oprsz(desc);
133
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/target/arm/translate-sve.c
136
+++ b/target/arm/translate-sve.c
137
@@ -XXX,XX +XXX,XX @@ static bool trans_UABA(DisasContext *s, arg_rrr_esz *a)
138
{
139
return do_sve2_fn_zzz(s, a, gen_gvec_uaba);
140
}
141
+
142
+static bool do_sve2_narrow_extract(DisasContext *s, arg_rri_esz *a,
143
+ const GVecGen2 ops[3])
144
+{
145
+ if (a->esz < 0 || a->esz > MO_32 || a->imm != 0 ||
146
+ !dc_isar_feature(aa64_sve2, s)) {
147
+ return false;
148
+ }
149
+ if (sve_access_check(s)) {
150
+ unsigned vsz = vec_full_reg_size(s);
151
+ tcg_gen_gvec_2(vec_full_reg_offset(s, a->rd),
152
+ vec_full_reg_offset(s, a->rn),
153
+ vsz, vsz, &ops[a->esz]);
154
+ }
155
+ return true;
156
+}
157
+
158
+static const TCGOpcode sqxtn_list[] = {
159
+ INDEX_op_shli_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
160
+};
161
+
162
+static void gen_sqxtnb_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
163
+{
164
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
165
+ int halfbits = 4 << vece;
166
+ int64_t mask = (1ull << halfbits) - 1;
167
+ int64_t min = -1ull << (halfbits - 1);
168
+ int64_t max = -min - 1;
169
+
170
+ tcg_gen_dupi_vec(vece, t, min);
171
+ tcg_gen_smax_vec(vece, d, n, t);
172
+ tcg_gen_dupi_vec(vece, t, max);
173
+ tcg_gen_smin_vec(vece, d, d, t);
174
+ tcg_gen_dupi_vec(vece, t, mask);
175
+ tcg_gen_and_vec(vece, d, d, t);
176
+ tcg_temp_free_vec(t);
177
+}
178
+
179
+static bool trans_SQXTNB(DisasContext *s, arg_rri_esz *a)
180
+{
181
+ static const GVecGen2 ops[3] = {
182
+ { .fniv = gen_sqxtnb_vec,
183
+ .opt_opc = sqxtn_list,
184
+ .fno = gen_helper_sve2_sqxtnb_h,
185
+ .vece = MO_16 },
186
+ { .fniv = gen_sqxtnb_vec,
187
+ .opt_opc = sqxtn_list,
188
+ .fno = gen_helper_sve2_sqxtnb_s,
189
+ .vece = MO_32 },
190
+ { .fniv = gen_sqxtnb_vec,
191
+ .opt_opc = sqxtn_list,
192
+ .fno = gen_helper_sve2_sqxtnb_d,
193
+ .vece = MO_64 },
194
+ };
195
+ return do_sve2_narrow_extract(s, a, ops);
196
+}
197
+
198
+static void gen_sqxtnt_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
199
+{
200
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
201
+ int halfbits = 4 << vece;
202
+ int64_t mask = (1ull << halfbits) - 1;
203
+ int64_t min = -1ull << (halfbits - 1);
204
+ int64_t max = -min - 1;
205
+
206
+ tcg_gen_dupi_vec(vece, t, min);
207
+ tcg_gen_smax_vec(vece, n, n, t);
208
+ tcg_gen_dupi_vec(vece, t, max);
209
+ tcg_gen_smin_vec(vece, n, n, t);
210
+ tcg_gen_shli_vec(vece, n, n, halfbits);
211
+ tcg_gen_dupi_vec(vece, t, mask);
212
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
213
+ tcg_temp_free_vec(t);
214
+}
215
+
216
+static bool trans_SQXTNT(DisasContext *s, arg_rri_esz *a)
217
+{
218
+ static const GVecGen2 ops[3] = {
219
+ { .fniv = gen_sqxtnt_vec,
220
+ .opt_opc = sqxtn_list,
221
+ .load_dest = true,
222
+ .fno = gen_helper_sve2_sqxtnt_h,
223
+ .vece = MO_16 },
224
+ { .fniv = gen_sqxtnt_vec,
225
+ .opt_opc = sqxtn_list,
226
+ .load_dest = true,
227
+ .fno = gen_helper_sve2_sqxtnt_s,
228
+ .vece = MO_32 },
229
+ { .fniv = gen_sqxtnt_vec,
230
+ .opt_opc = sqxtn_list,
231
+ .load_dest = true,
232
+ .fno = gen_helper_sve2_sqxtnt_d,
233
+ .vece = MO_64 },
234
+ };
235
+ return do_sve2_narrow_extract(s, a, ops);
236
+}
237
+
238
+static const TCGOpcode uqxtn_list[] = {
239
+ INDEX_op_shli_vec, INDEX_op_umin_vec, 0
240
+};
241
+
242
+static void gen_uqxtnb_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
243
+{
244
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
245
+ int halfbits = 4 << vece;
246
+ int64_t max = (1ull << halfbits) - 1;
247
+
248
+ tcg_gen_dupi_vec(vece, t, max);
249
+ tcg_gen_umin_vec(vece, d, n, t);
250
+ tcg_temp_free_vec(t);
251
+}
252
+
253
+static bool trans_UQXTNB(DisasContext *s, arg_rri_esz *a)
254
+{
255
+ static const GVecGen2 ops[3] = {
256
+ { .fniv = gen_uqxtnb_vec,
257
+ .opt_opc = uqxtn_list,
258
+ .fno = gen_helper_sve2_uqxtnb_h,
259
+ .vece = MO_16 },
260
+ { .fniv = gen_uqxtnb_vec,
261
+ .opt_opc = uqxtn_list,
262
+ .fno = gen_helper_sve2_uqxtnb_s,
263
+ .vece = MO_32 },
264
+ { .fniv = gen_uqxtnb_vec,
265
+ .opt_opc = uqxtn_list,
266
+ .fno = gen_helper_sve2_uqxtnb_d,
267
+ .vece = MO_64 },
268
+ };
269
+ return do_sve2_narrow_extract(s, a, ops);
270
+}
271
+
272
+static void gen_uqxtnt_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
273
+{
274
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
275
+ int halfbits = 4 << vece;
276
+ int64_t max = (1ull << halfbits) - 1;
277
+
278
+ tcg_gen_dupi_vec(vece, t, max);
279
+ tcg_gen_umin_vec(vece, n, n, t);
280
+ tcg_gen_shli_vec(vece, n, n, halfbits);
281
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
282
+ tcg_temp_free_vec(t);
283
+}
284
+
285
+static bool trans_UQXTNT(DisasContext *s, arg_rri_esz *a)
286
+{
287
+ static const GVecGen2 ops[3] = {
288
+ { .fniv = gen_uqxtnt_vec,
289
+ .opt_opc = uqxtn_list,
290
+ .load_dest = true,
291
+ .fno = gen_helper_sve2_uqxtnt_h,
292
+ .vece = MO_16 },
293
+ { .fniv = gen_uqxtnt_vec,
294
+ .opt_opc = uqxtn_list,
295
+ .load_dest = true,
296
+ .fno = gen_helper_sve2_uqxtnt_s,
297
+ .vece = MO_32 },
298
+ { .fniv = gen_uqxtnt_vec,
299
+ .opt_opc = uqxtn_list,
300
+ .load_dest = true,
301
+ .fno = gen_helper_sve2_uqxtnt_d,
302
+ .vece = MO_64 },
303
+ };
304
+ return do_sve2_narrow_extract(s, a, ops);
305
+}
306
+
307
+static const TCGOpcode sqxtun_list[] = {
308
+ INDEX_op_shli_vec, INDEX_op_umin_vec, INDEX_op_smax_vec, 0
309
+};
310
+
311
+static void gen_sqxtunb_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
312
+{
313
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
314
+ int halfbits = 4 << vece;
315
+ int64_t max = (1ull << halfbits) - 1;
316
+
317
+ tcg_gen_dupi_vec(vece, t, 0);
318
+ tcg_gen_smax_vec(vece, d, n, t);
319
+ tcg_gen_dupi_vec(vece, t, max);
320
+ tcg_gen_umin_vec(vece, d, d, t);
321
+ tcg_temp_free_vec(t);
322
+}
323
+
324
+static bool trans_SQXTUNB(DisasContext *s, arg_rri_esz *a)
325
+{
326
+ static const GVecGen2 ops[3] = {
327
+ { .fniv = gen_sqxtunb_vec,
328
+ .opt_opc = sqxtun_list,
329
+ .fno = gen_helper_sve2_sqxtunb_h,
330
+ .vece = MO_16 },
331
+ { .fniv = gen_sqxtunb_vec,
332
+ .opt_opc = sqxtun_list,
333
+ .fno = gen_helper_sve2_sqxtunb_s,
334
+ .vece = MO_32 },
335
+ { .fniv = gen_sqxtunb_vec,
336
+ .opt_opc = sqxtun_list,
337
+ .fno = gen_helper_sve2_sqxtunb_d,
338
+ .vece = MO_64 },
339
+ };
340
+ return do_sve2_narrow_extract(s, a, ops);
341
+}
342
+
343
+static void gen_sqxtunt_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
344
+{
345
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
346
+ int halfbits = 4 << vece;
347
+ int64_t max = (1ull << halfbits) - 1;
348
+
349
+ tcg_gen_dupi_vec(vece, t, 0);
350
+ tcg_gen_smax_vec(vece, n, n, t);
351
+ tcg_gen_dupi_vec(vece, t, max);
352
+ tcg_gen_umin_vec(vece, n, n, t);
353
+ tcg_gen_shli_vec(vece, n, n, halfbits);
354
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
355
+ tcg_temp_free_vec(t);
356
+}
357
+
358
+static bool trans_SQXTUNT(DisasContext *s, arg_rri_esz *a)
359
+{
360
+ static const GVecGen2 ops[3] = {
361
+ { .fniv = gen_sqxtunt_vec,
362
+ .opt_opc = sqxtun_list,
363
+ .load_dest = true,
364
+ .fno = gen_helper_sve2_sqxtunt_h,
365
+ .vece = MO_16 },
366
+ { .fniv = gen_sqxtunt_vec,
367
+ .opt_opc = sqxtun_list,
368
+ .load_dest = true,
369
+ .fno = gen_helper_sve2_sqxtunt_s,
370
+ .vece = MO_32 },
371
+ { .fniv = gen_sqxtunt_vec,
372
+ .opt_opc = sqxtun_list,
373
+ .load_dest = true,
374
+ .fno = gen_helper_sve2_sqxtunt_d,
375
+ .vece = MO_64 },
376
+ };
377
+ return do_sve2_narrow_extract(s, a, ops);
378
+}
379
--
380
2.20.1
381
382
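(Not part of the patch: a scalar sketch of what the saturating extract-narrow helpers above compute per element. The function names are illustrative only; plain C stands in for the gvec expansion.)

#include <stdint.h>

/* SQXTNB/SQXTNT: clamp a signed wide element to the signed narrow range,
 * which is what the smax/smin pair does in gen_sqxtnt_vec(). */
static int16_t sqxtn_s32(int32_t n)
{
    if (n < INT16_MIN) {
        return INT16_MIN;
    }
    if (n > INT16_MAX) {
        return INT16_MAX;
    }
    return (int16_t)n;
}

/* UQXTNB/UQXTNT: clamp an unsigned wide element, the single umin above. */
static uint16_t uqxtn_u32(uint32_t n)
{
    return n > UINT16_MAX ? UINT16_MAX : (uint16_t)n;
}

/* SQXTUNB/SQXTUNT: signed input, unsigned result, clamped to [0, max]. */
static uint16_t sqxtun_s32(int32_t n)
{
    if (n < 0) {
        return 0;
    }
    return n > UINT16_MAX ? UINT16_MAX : (uint16_t)n;
}

The T forms then shift the narrowed value into the top half of each destination element and merge it with the existing bottom half, which is what the shli plus bitsel sequence implements.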
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-26-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper-sve.h | 35 +++++++++++++++++++++++++++++
11
target/arm/sve.decode | 8 +++++++
12
target/arm/sve_helper.c | 46 ++++++++++++++++++++++++++++++++++++++
13
target/arm/translate-sve.c | 25 +++++++++++++++++++++
14
4 files changed, 114 insertions(+)
15
16
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-sve.h
19
+++ b/target/arm/helper-sve.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_uqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+
25
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_d, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_h, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_s, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_d, TCG_CALL_NO_RWG,
37
+ void, ptr, ptr, ptr, ptr, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_h, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_s, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_d, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, ptr, i32)
45
+
46
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_h, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_s, TCG_CALL_NO_RWG,
49
+ void, ptr, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_d, TCG_CALL_NO_RWG,
51
+ void, ptr, ptr, ptr, ptr, ptr, i32)
52
+
53
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_h, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_s, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, ptr, i32)
57
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_d, TCG_CALL_NO_RWG,
58
+ void, ptr, ptr, ptr, ptr, ptr, i32)
59
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/sve.decode
62
+++ b/target/arm/sve.decode
63
@@ -XXX,XX +XXX,XX @@ UQXTNB 01000101 .. 1 ..... 010 010 ..... ..... @rd_rn_tszimm_shl
64
UQXTNT 01000101 .. 1 ..... 010 011 ..... ..... @rd_rn_tszimm_shl
65
SQXTUNB 01000101 .. 1 ..... 010 100 ..... ..... @rd_rn_tszimm_shl
66
SQXTUNT 01000101 .. 1 ..... 010 101 ..... ..... @rd_rn_tszimm_shl
67
+
68
+## SVE2 floating-point pairwise operations
69
+
70
+FADDP 01100100 .. 010 00 0 100 ... ..... ..... @rdn_pg_rm
71
+FMAXNMP 01100100 .. 010 10 0 100 ... ..... ..... @rdn_pg_rm
72
+FMINNMP 01100100 .. 010 10 1 100 ... ..... ..... @rdn_pg_rm
73
+FMAXP 01100100 .. 010 11 0 100 ... ..... ..... @rdn_pg_rm
74
+FMINP 01100100 .. 010 11 1 100 ... ..... ..... @rdn_pg_rm
75
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/sve_helper.c
78
+++ b/target/arm/sve_helper.c
79
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_PAIR_D(sve2_sminp_zpzz_d, int64_t, DO_MIN)
80
#undef DO_ZPZZ_PAIR
81
#undef DO_ZPZZ_PAIR_D
82
83
+#define DO_ZPZZ_PAIR_FP(NAME, TYPE, H, OP) \
84
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, \
85
+ void *status, uint32_t desc) \
86
+{ \
87
+ intptr_t i, opr_sz = simd_oprsz(desc); \
88
+ for (i = 0; i < opr_sz; ) { \
89
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
90
+ do { \
91
+ TYPE n0 = *(TYPE *)(vn + H(i)); \
92
+ TYPE m0 = *(TYPE *)(vm + H(i)); \
93
+ TYPE n1 = *(TYPE *)(vn + H(i + sizeof(TYPE))); \
94
+ TYPE m1 = *(TYPE *)(vm + H(i + sizeof(TYPE))); \
95
+ if (pg & 1) { \
96
+ *(TYPE *)(vd + H(i)) = OP(n0, n1, status); \
97
+ } \
98
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
99
+ if (pg & 1) { \
100
+ *(TYPE *)(vd + H(i)) = OP(m0, m1, status); \
101
+ } \
102
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
103
+ } while (i & 15); \
104
+ } \
105
+}
106
+
107
+DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_h, float16, H1_2, float16_add)
108
+DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_s, float32, H1_4, float32_add)
109
+DO_ZPZZ_PAIR_FP(sve2_faddp_zpzz_d, float64, , float64_add)
110
+
111
+DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_h, float16, H1_2, float16_maxnum)
112
+DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_s, float32, H1_4, float32_maxnum)
113
+DO_ZPZZ_PAIR_FP(sve2_fmaxnmp_zpzz_d, float64, , float64_maxnum)
114
+
115
+DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_h, float16, H1_2, float16_minnum)
116
+DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_s, float32, H1_4, float32_minnum)
117
+DO_ZPZZ_PAIR_FP(sve2_fminnmp_zpzz_d, float64, , float64_minnum)
118
+
119
+DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_h, float16, H1_2, float16_max)
120
+DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_s, float32, H1_4, float32_max)
121
+DO_ZPZZ_PAIR_FP(sve2_fmaxp_zpzz_d, float64, , float64_max)
122
+
123
+DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_h, float16, H1_2, float16_min)
124
+DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_s, float32, H1_4, float32_min)
125
+DO_ZPZZ_PAIR_FP(sve2_fminp_zpzz_d, float64, , float64_min)
126
+
127
+#undef DO_ZPZZ_PAIR_FP
128
+
129
/* Three-operand expander, controlled by a predicate, in which the
130
* third operand is "wide". That is, for D = N op M, the same 64-bit
131
* value of M is used with all of the narrower values of N.
132
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/target/arm/translate-sve.c
135
+++ b/target/arm/translate-sve.c
136
@@ -XXX,XX +XXX,XX @@ static bool trans_SQXTUNT(DisasContext *s, arg_rri_esz *a)
137
};
138
return do_sve2_narrow_extract(s, a, ops);
139
}
140
+
141
+static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
142
+ gen_helper_gvec_4_ptr *fn)
143
+{
144
+ if (!dc_isar_feature(aa64_sve2, s)) {
145
+ return false;
146
+ }
147
+ return do_zpzz_fp(s, a, fn);
148
+}
149
+
150
+#define DO_SVE2_ZPZZ_FP(NAME, name) \
151
+static bool trans_##NAME(DisasContext *s, arg_rprr_esz *a) \
152
+{ \
153
+ static gen_helper_gvec_4_ptr * const fns[4] = { \
154
+ NULL, gen_helper_sve2_##name##_zpzz_h, \
155
+ gen_helper_sve2_##name##_zpzz_s, gen_helper_sve2_##name##_zpzz_d \
156
+ }; \
157
+ return do_sve2_zpzz_fp(s, a, fns[a->esz]); \
158
+}
159
+
160
+DO_SVE2_ZPZZ_FP(FADDP, faddp)
161
+DO_SVE2_ZPZZ_FP(FMAXNMP, fmaxnmp)
162
+DO_SVE2_ZPZZ_FP(FMINNMP, fminnmp)
163
+DO_SVE2_ZPZZ_FP(FMAXP, fmaxp)
164
+DO_SVE2_ZPZZ_FP(FMINP, fminp)
165
--
166
2.20.1
167
168
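(Not part of the patch: a rough scalar model of the predicated floating-point pairwise helpers added above. Plain float stands in for the float16/32/64 arithmetic with status, and the name is illustrative.)

#include <stdbool.h>
#include <stddef.h>

static void faddp_ref(float *d, const float *n, const float *m,
                      const bool *pg, size_t elems)
{
    /* Each pair of adjacent source elements yields one result: the Zn pair
     * fills the even destination slot, the Zm pair the odd one, and
     * inactive predicate bits leave the destination element untouched. */
    for (size_t i = 0; i + 1 < elems; i += 2) {
        if (pg[i]) {
            d[i] = n[i] + n[i + 1];
        }
        if (pg[i + 1]) {
            d[i + 1] = m[i] + m[i + 1];
        }
    }
}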
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-27-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 16 ++++
9
target/arm/sve.decode | 8 ++
10
target/arm/sve_helper.c | 54 ++++++++++++-
11
target/arm/translate-sve.c | 160 +++++++++++++++++++++++++++++++++++++
12
4 files changed, 236 insertions(+), 2 deletions(-)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_3(sve2_shrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_3(sve2_shrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve2_shrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_3(sve2_shrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_3(sve2_shrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve2_shrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_3(sve2_rshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(sve2_rshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve2_rshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_3(sve2_rshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_3(sve2_rshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve2_rshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
37
+
38
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
39
void, ptr, ptr, ptr, ptr, ptr, i32)
40
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
41
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/sve.decode
44
+++ b/target/arm/sve.decode
45
@@ -XXX,XX +XXX,XX @@ UQXTNT 01000101 .. 1 ..... 010 011 ..... ..... @rd_rn_tszimm_shl
46
SQXTUNB 01000101 .. 1 ..... 010 100 ..... ..... @rd_rn_tszimm_shl
47
SQXTUNT 01000101 .. 1 ..... 010 101 ..... ..... @rd_rn_tszimm_shl
48
49
+## SVE2 bitwise shift right narrow
50
+
51
+# Bit 23 == 0 is handled by esz > 0 in the translator.
52
+SHRNB 01000101 .. 1 ..... 00 0100 ..... ..... @rd_rn_tszimm_shr
53
+SHRNT 01000101 .. 1 ..... 00 0101 ..... ..... @rd_rn_tszimm_shr
54
+RSHRNB 01000101 .. 1 ..... 00 0110 ..... ..... @rd_rn_tszimm_shr
55
+RSHRNT 01000101 .. 1 ..... 00 0111 ..... ..... @rd_rn_tszimm_shr
56
+
57
## SVE2 floating-point pairwise operations
58
59
FADDP 01100100 .. 010 00 0 100 ... ..... ..... @rdn_pg_rm
60
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/sve_helper.c
63
+++ b/target/arm/sve_helper.c
64
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
65
when N is negative, add 2**M-1. */
66
#define DO_ASRD(N, M) ((N + (N < 0 ? ((__typeof(N))1 << M) - 1 : 0)) >> M)
67
68
+static inline uint64_t do_urshr(uint64_t x, unsigned sh)
69
+{
70
+ if (likely(sh < 64)) {
71
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
72
+ } else if (sh == 64) {
73
+ return x >> 63;
74
+ } else {
75
+ return 0;
76
+ }
77
+}
78
+
79
DO_ZPZI(sve_asr_zpzi_b, int8_t, H1, DO_SHR)
80
DO_ZPZI(sve_asr_zpzi_h, int16_t, H1_2, DO_SHR)
81
DO_ZPZI(sve_asr_zpzi_s, int32_t, H1_4, DO_SHR)
82
@@ -XXX,XX +XXX,XX @@ DO_ZPZI(sve_asrd_h, int16_t, H1_2, DO_ASRD)
83
DO_ZPZI(sve_asrd_s, int32_t, H1_4, DO_ASRD)
84
DO_ZPZI_D(sve_asrd_d, int64_t, DO_ASRD)
85
86
-#undef DO_SHR
87
-#undef DO_SHL
88
#undef DO_ASRD
89
#undef DO_ZPZI
90
#undef DO_ZPZI_D
91
92
+#define DO_SHRNB(NAME, TYPEW, TYPEN, OP) \
93
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
94
+{ \
95
+ intptr_t i, opr_sz = simd_oprsz(desc); \
96
+ int shift = simd_data(desc); \
97
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
98
+ TYPEW nn = *(TYPEW *)(vn + i); \
99
+ *(TYPEW *)(vd + i) = (TYPEN)OP(nn, shift); \
100
+ } \
101
+}
102
+
103
+#define DO_SHRNT(NAME, TYPEW, TYPEN, HW, HN, OP) \
104
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
105
+{ \
106
+ intptr_t i, opr_sz = simd_oprsz(desc); \
107
+ int shift = simd_data(desc); \
108
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
109
+ TYPEW nn = *(TYPEW *)(vn + HW(i)); \
110
+ *(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, shift); \
111
+ } \
112
+}
113
+
114
+DO_SHRNB(sve2_shrnb_h, uint16_t, uint8_t, DO_SHR)
115
+DO_SHRNB(sve2_shrnb_s, uint32_t, uint16_t, DO_SHR)
116
+DO_SHRNB(sve2_shrnb_d, uint64_t, uint32_t, DO_SHR)
117
+
118
+DO_SHRNT(sve2_shrnt_h, uint16_t, uint8_t, H1_2, H1, DO_SHR)
119
+DO_SHRNT(sve2_shrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_SHR)
120
+DO_SHRNT(sve2_shrnt_d, uint64_t, uint32_t, , H1_4, DO_SHR)
121
+
122
+DO_SHRNB(sve2_rshrnb_h, uint16_t, uint8_t, do_urshr)
123
+DO_SHRNB(sve2_rshrnb_s, uint32_t, uint16_t, do_urshr)
124
+DO_SHRNB(sve2_rshrnb_d, uint64_t, uint32_t, do_urshr)
125
+
126
+DO_SHRNT(sve2_rshrnt_h, uint16_t, uint8_t, H1_2, H1, do_urshr)
127
+DO_SHRNT(sve2_rshrnt_s, uint32_t, uint16_t, H1_4, H1_2, do_urshr)
128
+DO_SHRNT(sve2_rshrnt_d, uint64_t, uint32_t, , H1_4, do_urshr)
129
+
130
+#undef DO_SHRNB
131
+#undef DO_SHRNT
132
+
133
/* Fully general four-operand expander, controlled by a predicate.
134
*/
135
#define DO_ZPZZZ(NAME, TYPE, H, OP) \
136
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
137
index XXXXXXX..XXXXXXX 100644
138
--- a/target/arm/translate-sve.c
139
+++ b/target/arm/translate-sve.c
140
@@ -XXX,XX +XXX,XX @@ static bool trans_SQXTUNT(DisasContext *s, arg_rri_esz *a)
141
return do_sve2_narrow_extract(s, a, ops);
142
}
143
144
+static bool do_sve2_shr_narrow(DisasContext *s, arg_rri_esz *a,
145
+ const GVecGen2i ops[3])
146
+{
147
+ if (a->esz < 0 || a->esz > MO_32 || !dc_isar_feature(aa64_sve2, s)) {
148
+ return false;
149
+ }
150
+ assert(a->imm > 0 && a->imm <= (8 << a->esz));
151
+ if (sve_access_check(s)) {
152
+ unsigned vsz = vec_full_reg_size(s);
153
+ tcg_gen_gvec_2i(vec_full_reg_offset(s, a->rd),
154
+ vec_full_reg_offset(s, a->rn),
155
+ vsz, vsz, a->imm, &ops[a->esz]);
156
+ }
157
+ return true;
158
+}
159
+
160
+static void gen_shrnb_i64(unsigned vece, TCGv_i64 d, TCGv_i64 n, int shr)
161
+{
162
+ int halfbits = 4 << vece;
163
+ uint64_t mask = dup_const(vece, MAKE_64BIT_MASK(0, halfbits));
164
+
165
+ tcg_gen_shri_i64(d, n, shr);
166
+ tcg_gen_andi_i64(d, d, mask);
167
+}
168
+
169
+static void gen_shrnb16_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
170
+{
171
+ gen_shrnb_i64(MO_16, d, n, shr);
172
+}
173
+
174
+static void gen_shrnb32_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
175
+{
176
+ gen_shrnb_i64(MO_32, d, n, shr);
177
+}
178
+
179
+static void gen_shrnb64_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
180
+{
181
+ gen_shrnb_i64(MO_64, d, n, shr);
182
+}
183
+
184
+static void gen_shrnb_vec(unsigned vece, TCGv_vec d, TCGv_vec n, int64_t shr)
185
+{
186
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
187
+ int halfbits = 4 << vece;
188
+ uint64_t mask = MAKE_64BIT_MASK(0, halfbits);
189
+
190
+ tcg_gen_shri_vec(vece, n, n, shr);
191
+ tcg_gen_dupi_vec(vece, t, mask);
192
+ tcg_gen_and_vec(vece, d, n, t);
193
+ tcg_temp_free_vec(t);
194
+}
195
+
196
+static bool trans_SHRNB(DisasContext *s, arg_rri_esz *a)
197
+{
198
+ static const TCGOpcode vec_list[] = { INDEX_op_shri_vec, 0 };
199
+ static const GVecGen2i ops[3] = {
200
+ { .fni8 = gen_shrnb16_i64,
201
+ .fniv = gen_shrnb_vec,
202
+ .opt_opc = vec_list,
203
+ .fno = gen_helper_sve2_shrnb_h,
204
+ .vece = MO_16 },
205
+ { .fni8 = gen_shrnb32_i64,
206
+ .fniv = gen_shrnb_vec,
207
+ .opt_opc = vec_list,
208
+ .fno = gen_helper_sve2_shrnb_s,
209
+ .vece = MO_32 },
210
+ { .fni8 = gen_shrnb64_i64,
211
+ .fniv = gen_shrnb_vec,
212
+ .opt_opc = vec_list,
213
+ .fno = gen_helper_sve2_shrnb_d,
214
+ .vece = MO_64 },
215
+ };
216
+ return do_sve2_shr_narrow(s, a, ops);
217
+}
218
+
219
+static void gen_shrnt_i64(unsigned vece, TCGv_i64 d, TCGv_i64 n, int shr)
220
+{
221
+ int halfbits = 4 << vece;
222
+ uint64_t mask = dup_const(vece, MAKE_64BIT_MASK(0, halfbits));
223
+
224
+ tcg_gen_shli_i64(n, n, halfbits - shr);
225
+ tcg_gen_andi_i64(n, n, ~mask);
226
+ tcg_gen_andi_i64(d, d, mask);
227
+ tcg_gen_or_i64(d, d, n);
228
+}
229
+
230
+static void gen_shrnt16_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
231
+{
232
+ gen_shrnt_i64(MO_16, d, n, shr);
233
+}
234
+
235
+static void gen_shrnt32_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
236
+{
237
+ gen_shrnt_i64(MO_32, d, n, shr);
238
+}
239
+
240
+static void gen_shrnt64_i64(TCGv_i64 d, TCGv_i64 n, int64_t shr)
241
+{
242
+ tcg_gen_shri_i64(n, n, shr);
243
+ tcg_gen_deposit_i64(d, d, n, 32, 32);
244
+}
245
+
246
+static void gen_shrnt_vec(unsigned vece, TCGv_vec d, TCGv_vec n, int64_t shr)
247
+{
248
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
249
+ int halfbits = 4 << vece;
250
+ uint64_t mask = MAKE_64BIT_MASK(0, halfbits);
251
+
252
+ tcg_gen_shli_vec(vece, n, n, halfbits - shr);
253
+ tcg_gen_dupi_vec(vece, t, mask);
254
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
255
+ tcg_temp_free_vec(t);
256
+}
257
+
258
+static bool trans_SHRNT(DisasContext *s, arg_rri_esz *a)
259
+{
260
+ static const TCGOpcode vec_list[] = { INDEX_op_shli_vec, 0 };
261
+ static const GVecGen2i ops[3] = {
262
+ { .fni8 = gen_shrnt16_i64,
263
+ .fniv = gen_shrnt_vec,
264
+ .opt_opc = vec_list,
265
+ .load_dest = true,
266
+ .fno = gen_helper_sve2_shrnt_h,
267
+ .vece = MO_16 },
268
+ { .fni8 = gen_shrnt32_i64,
269
+ .fniv = gen_shrnt_vec,
270
+ .opt_opc = vec_list,
271
+ .load_dest = true,
272
+ .fno = gen_helper_sve2_shrnt_s,
273
+ .vece = MO_32 },
274
+ { .fni8 = gen_shrnt64_i64,
275
+ .fniv = gen_shrnt_vec,
276
+ .opt_opc = vec_list,
277
+ .load_dest = true,
278
+ .fno = gen_helper_sve2_shrnt_d,
279
+ .vece = MO_64 },
280
+ };
281
+ return do_sve2_shr_narrow(s, a, ops);
282
+}
283
+
284
+static bool trans_RSHRNB(DisasContext *s, arg_rri_esz *a)
285
+{
286
+ static const GVecGen2i ops[3] = {
287
+ { .fno = gen_helper_sve2_rshrnb_h },
288
+ { .fno = gen_helper_sve2_rshrnb_s },
289
+ { .fno = gen_helper_sve2_rshrnb_d },
290
+ };
291
+ return do_sve2_shr_narrow(s, a, ops);
292
+}
293
+
294
+static bool trans_RSHRNT(DisasContext *s, arg_rri_esz *a)
295
+{
296
+ static const GVecGen2i ops[3] = {
297
+ { .fno = gen_helper_sve2_rshrnt_h },
298
+ { .fno = gen_helper_sve2_rshrnt_s },
299
+ { .fno = gen_helper_sve2_rshrnt_d },
300
+ };
301
+ return do_sve2_shr_narrow(s, a, ops);
302
+}
303
+
304
static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
305
gen_helper_gvec_4_ptr *fn)
306
{
307
--
308
2.20.1
309
310
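(Not part of the patch: a quick sanity check of the do_urshr() rounding rule used by RSHRNB/RSHRNT above: the last bit shifted out is added back, giving round-to-nearest with ties rounded up.)

#include <assert.h>
#include <stdint.h>

/* Same formula as do_urshr() for 1 <= sh < 64. */
static uint64_t urshr_ref(uint64_t x, unsigned sh)
{
    return (x >> sh) + ((x >> (sh - 1)) & 1);
}

int main(void)
{
    assert(urshr_ref(7, 2) == 2);   /* 1.75 rounds to 2 */
    assert(urshr_ref(5, 2) == 1);   /* 1.25 rounds to 1 */
    assert(urshr_ref(6, 2) == 2);   /* 1.5 ties round up */
    return 0;
}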
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-28-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 16 +++++++
9
target/arm/sve.decode | 4 ++
10
target/arm/sve_helper.c | 35 ++++++++++++++
11
target/arm/translate-sve.c | 98 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 153 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_rshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve2_rshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve2_rshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
37
+
38
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
39
void, ptr, ptr, ptr, ptr, ptr, i32)
40
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
41
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/sve.decode
44
+++ b/target/arm/sve.decode
45
@@ -XXX,XX +XXX,XX @@ SQXTUNT 01000101 .. 1 ..... 010 101 ..... ..... @rd_rn_tszimm_shl
46
## SVE2 bitwise shift right narrow
47
48
# Bit 23 == 0 is handled by esz > 0 in the translator.
49
+SQSHRUNB 01000101 .. 1 ..... 00 0000 ..... ..... @rd_rn_tszimm_shr
50
+SQSHRUNT 01000101 .. 1 ..... 00 0001 ..... ..... @rd_rn_tszimm_shr
51
+SQRSHRUNB 01000101 .. 1 ..... 00 0010 ..... ..... @rd_rn_tszimm_shr
52
+SQRSHRUNT 01000101 .. 1 ..... 00 0011 ..... ..... @rd_rn_tszimm_shr
53
SHRNB 01000101 .. 1 ..... 00 0100 ..... ..... @rd_rn_tszimm_shr
54
SHRNT 01000101 .. 1 ..... 00 0101 ..... ..... @rd_rn_tszimm_shr
55
RSHRNB 01000101 .. 1 ..... 00 0110 ..... ..... @rd_rn_tszimm_shr
56
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/sve_helper.c
59
+++ b/target/arm/sve_helper.c
60
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_urshr(uint64_t x, unsigned sh)
61
}
62
}
63
64
+static inline int64_t do_srshr(int64_t x, unsigned sh)
65
+{
66
+ if (likely(sh < 64)) {
67
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
68
+ } else {
69
+ /* Rounding the sign bit always produces 0. */
70
+ return 0;
71
+ }
72
+}
73
+
74
DO_ZPZI(sve_asr_zpzi_b, int8_t, H1, DO_SHR)
75
DO_ZPZI(sve_asr_zpzi_h, int16_t, H1_2, DO_SHR)
76
DO_ZPZI(sve_asr_zpzi_s, int32_t, H1_4, DO_SHR)
77
@@ -XXX,XX +XXX,XX @@ DO_SHRNT(sve2_rshrnt_h, uint16_t, uint8_t, H1_2, H1, do_urshr)
78
DO_SHRNT(sve2_rshrnt_s, uint32_t, uint16_t, H1_4, H1_2, do_urshr)
79
DO_SHRNT(sve2_rshrnt_d, uint64_t, uint32_t, , H1_4, do_urshr)
80
81
+#define DO_SQSHRUN_H(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT8_MAX)
82
+#define DO_SQSHRUN_S(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT16_MAX)
83
+#define DO_SQSHRUN_D(x, sh) \
84
+ do_sat_bhs((int64_t)(x) >> (sh < 64 ? sh : 63), 0, UINT32_MAX)
85
+
86
+DO_SHRNB(sve2_sqshrunb_h, int16_t, uint8_t, DO_SQSHRUN_H)
87
+DO_SHRNB(sve2_sqshrunb_s, int32_t, uint16_t, DO_SQSHRUN_S)
88
+DO_SHRNB(sve2_sqshrunb_d, int64_t, uint32_t, DO_SQSHRUN_D)
89
+
90
+DO_SHRNT(sve2_sqshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRUN_H)
91
+DO_SHRNT(sve2_sqshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRUN_S)
92
+DO_SHRNT(sve2_sqshrunt_d, int64_t, uint32_t, , H1_4, DO_SQSHRUN_D)
93
+
94
+#define DO_SQRSHRUN_H(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT8_MAX)
95
+#define DO_SQRSHRUN_S(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT16_MAX)
96
+#define DO_SQRSHRUN_D(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT32_MAX)
97
+
98
+DO_SHRNB(sve2_sqrshrunb_h, int16_t, uint8_t, DO_SQRSHRUN_H)
99
+DO_SHRNB(sve2_sqrshrunb_s, int32_t, uint16_t, DO_SQRSHRUN_S)
100
+DO_SHRNB(sve2_sqrshrunb_d, int64_t, uint32_t, DO_SQRSHRUN_D)
101
+
102
+DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
103
+DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
104
+DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
105
+
106
#undef DO_SHRNB
107
#undef DO_SHRNT
108
109
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/translate-sve.c
112
+++ b/target/arm/translate-sve.c
113
@@ -XXX,XX +XXX,XX @@ static bool trans_RSHRNT(DisasContext *s, arg_rri_esz *a)
114
return do_sve2_shr_narrow(s, a, ops);
115
}
116
117
+static void gen_sqshrunb_vec(unsigned vece, TCGv_vec d,
118
+ TCGv_vec n, int64_t shr)
119
+{
120
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
121
+ int halfbits = 4 << vece;
122
+
123
+ tcg_gen_sari_vec(vece, n, n, shr);
124
+ tcg_gen_dupi_vec(vece, t, 0);
125
+ tcg_gen_smax_vec(vece, n, n, t);
126
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
127
+ tcg_gen_umin_vec(vece, d, n, t);
128
+ tcg_temp_free_vec(t);
129
+}
130
+
131
+static bool trans_SQSHRUNB(DisasContext *s, arg_rri_esz *a)
132
+{
133
+ static const TCGOpcode vec_list[] = {
134
+ INDEX_op_sari_vec, INDEX_op_smax_vec, INDEX_op_umin_vec, 0
135
+ };
136
+ static const GVecGen2i ops[3] = {
137
+ { .fniv = gen_sqshrunb_vec,
138
+ .opt_opc = vec_list,
139
+ .fno = gen_helper_sve2_sqshrunb_h,
140
+ .vece = MO_16 },
141
+ { .fniv = gen_sqshrunb_vec,
142
+ .opt_opc = vec_list,
143
+ .fno = gen_helper_sve2_sqshrunb_s,
144
+ .vece = MO_32 },
145
+ { .fniv = gen_sqshrunb_vec,
146
+ .opt_opc = vec_list,
147
+ .fno = gen_helper_sve2_sqshrunb_d,
148
+ .vece = MO_64 },
149
+ };
150
+ return do_sve2_shr_narrow(s, a, ops);
151
+}
152
+
153
+static void gen_sqshrunt_vec(unsigned vece, TCGv_vec d,
154
+ TCGv_vec n, int64_t shr)
155
+{
156
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
157
+ int halfbits = 4 << vece;
158
+
159
+ tcg_gen_sari_vec(vece, n, n, shr);
160
+ tcg_gen_dupi_vec(vece, t, 0);
161
+ tcg_gen_smax_vec(vece, n, n, t);
162
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
163
+ tcg_gen_umin_vec(vece, n, n, t);
164
+ tcg_gen_shli_vec(vece, n, n, halfbits);
165
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
166
+ tcg_temp_free_vec(t);
167
+}
168
+
169
+static bool trans_SQSHRUNT(DisasContext *s, arg_rri_esz *a)
170
+{
171
+ static const TCGOpcode vec_list[] = {
172
+ INDEX_op_shli_vec, INDEX_op_sari_vec,
173
+ INDEX_op_smax_vec, INDEX_op_umin_vec, 0
174
+ };
175
+ static const GVecGen2i ops[3] = {
176
+ { .fniv = gen_sqshrunt_vec,
177
+ .opt_opc = vec_list,
178
+ .load_dest = true,
179
+ .fno = gen_helper_sve2_sqshrunt_h,
180
+ .vece = MO_16 },
181
+ { .fniv = gen_sqshrunt_vec,
182
+ .opt_opc = vec_list,
183
+ .load_dest = true,
184
+ .fno = gen_helper_sve2_sqshrunt_s,
185
+ .vece = MO_32 },
186
+ { .fniv = gen_sqshrunt_vec,
187
+ .opt_opc = vec_list,
188
+ .load_dest = true,
189
+ .fno = gen_helper_sve2_sqshrunt_d,
190
+ .vece = MO_64 },
191
+ };
192
+ return do_sve2_shr_narrow(s, a, ops);
193
+}
194
+
195
+static bool trans_SQRSHRUNB(DisasContext *s, arg_rri_esz *a)
196
+{
197
+ static const GVecGen2i ops[3] = {
198
+ { .fno = gen_helper_sve2_sqrshrunb_h },
199
+ { .fno = gen_helper_sve2_sqrshrunb_s },
200
+ { .fno = gen_helper_sve2_sqrshrunb_d },
201
+ };
202
+ return do_sve2_shr_narrow(s, a, ops);
203
+}
204
+
205
+static bool trans_SQRSHRUNT(DisasContext *s, arg_rri_esz *a)
206
+{
207
+ static const GVecGen2i ops[3] = {
208
+ { .fno = gen_helper_sve2_sqrshrunt_h },
209
+ { .fno = gen_helper_sve2_sqrshrunt_s },
210
+ { .fno = gen_helper_sve2_sqrshrunt_d },
211
+ };
212
+ return do_sve2_shr_narrow(s, a, ops);
213
+}
214
+
215
static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
216
gen_helper_gvec_4_ptr *fn)
217
{
218
--
219
2.20.1
220
221
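(Not part of the patch: a scalar sketch of the saturating shift-right-unsigned-narrow forms above. The name is illustrative; like the DO_SQSHRUN_* macros it assumes arithmetic right shift of negative signed values.)

#include <stdint.h>

static uint8_t sqshrun_s16(int16_t x, unsigned sh)
{
    int16_t v = x >> sh;              /* arithmetic shift keeps the sign */

    if (v < 0) {
        return 0;                     /* negative inputs saturate to zero */
    }
    return v > UINT8_MAX ? UINT8_MAX : (uint8_t)v;
}

This is the same clamp the vector path builds from sari, smax with zero and umin with the narrow-type maximum.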
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-29-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 16 +++++++
9
target/arm/sve.decode | 4 ++
10
target/arm/sve_helper.c | 24 ++++++++++
11
target/arm/translate-sve.c | 93 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 137 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
37
+
38
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
39
void, ptr, ptr, ptr, ptr, ptr, i32)
40
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
41
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/sve.decode
44
+++ b/target/arm/sve.decode
45
@@ -XXX,XX +XXX,XX @@ SHRNB 01000101 .. 1 ..... 00 0100 ..... ..... @rd_rn_tszimm_shr
46
SHRNT 01000101 .. 1 ..... 00 0101 ..... ..... @rd_rn_tszimm_shr
47
RSHRNB 01000101 .. 1 ..... 00 0110 ..... ..... @rd_rn_tszimm_shr
48
RSHRNT 01000101 .. 1 ..... 00 0111 ..... ..... @rd_rn_tszimm_shr
49
+UQSHRNB 01000101 .. 1 ..... 00 1100 ..... ..... @rd_rn_tszimm_shr
50
+UQSHRNT 01000101 .. 1 ..... 00 1101 ..... ..... @rd_rn_tszimm_shr
51
+UQRSHRNB 01000101 .. 1 ..... 00 1110 ..... ..... @rd_rn_tszimm_shr
52
+UQRSHRNT 01000101 .. 1 ..... 00 1111 ..... ..... @rd_rn_tszimm_shr
53
54
## SVE2 floating-point pairwise operations
55
56
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/sve_helper.c
59
+++ b/target/arm/sve_helper.c
60
@@ -XXX,XX +XXX,XX @@ DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
61
DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
62
DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
63
64
+#define DO_UQSHRN_H(x, sh) MIN(x >> sh, UINT8_MAX)
65
+#define DO_UQSHRN_S(x, sh) MIN(x >> sh, UINT16_MAX)
66
+#define DO_UQSHRN_D(x, sh) MIN(x >> sh, UINT32_MAX)
67
+
68
+DO_SHRNB(sve2_uqshrnb_h, uint16_t, uint8_t, DO_UQSHRN_H)
69
+DO_SHRNB(sve2_uqshrnb_s, uint32_t, uint16_t, DO_UQSHRN_S)
70
+DO_SHRNB(sve2_uqshrnb_d, uint64_t, uint32_t, DO_UQSHRN_D)
71
+
72
+DO_SHRNT(sve2_uqshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQSHRN_H)
73
+DO_SHRNT(sve2_uqshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQSHRN_S)
74
+DO_SHRNT(sve2_uqshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQSHRN_D)
75
+
76
+#define DO_UQRSHRN_H(x, sh) MIN(do_urshr(x, sh), UINT8_MAX)
77
+#define DO_UQRSHRN_S(x, sh) MIN(do_urshr(x, sh), UINT16_MAX)
78
+#define DO_UQRSHRN_D(x, sh) MIN(do_urshr(x, sh), UINT32_MAX)
79
+
80
+DO_SHRNB(sve2_uqrshrnb_h, uint16_t, uint8_t, DO_UQRSHRN_H)
81
+DO_SHRNB(sve2_uqrshrnb_s, uint32_t, uint16_t, DO_UQRSHRN_S)
82
+DO_SHRNB(sve2_uqrshrnb_d, uint64_t, uint32_t, DO_UQRSHRN_D)
83
+
84
+DO_SHRNT(sve2_uqrshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQRSHRN_H)
85
+DO_SHRNT(sve2_uqrshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQRSHRN_S)
86
+DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQRSHRN_D)
87
+
88
#undef DO_SHRNB
89
#undef DO_SHRNT
90
91
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/translate-sve.c
94
+++ b/target/arm/translate-sve.c
95
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRSHRUNT(DisasContext *s, arg_rri_esz *a)
96
return do_sve2_shr_narrow(s, a, ops);
97
}
98
99
+static void gen_uqshrnb_vec(unsigned vece, TCGv_vec d,
100
+ TCGv_vec n, int64_t shr)
101
+{
102
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
103
+ int halfbits = 4 << vece;
104
+
105
+ tcg_gen_shri_vec(vece, n, n, shr);
106
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
107
+ tcg_gen_umin_vec(vece, d, n, t);
108
+ tcg_temp_free_vec(t);
109
+}
110
+
111
+static bool trans_UQSHRNB(DisasContext *s, arg_rri_esz *a)
112
+{
113
+ static const TCGOpcode vec_list[] = {
114
+ INDEX_op_shri_vec, INDEX_op_umin_vec, 0
115
+ };
116
+ static const GVecGen2i ops[3] = {
117
+ { .fniv = gen_uqshrnb_vec,
118
+ .opt_opc = vec_list,
119
+ .fno = gen_helper_sve2_uqshrnb_h,
120
+ .vece = MO_16 },
121
+ { .fniv = gen_uqshrnb_vec,
122
+ .opt_opc = vec_list,
123
+ .fno = gen_helper_sve2_uqshrnb_s,
124
+ .vece = MO_32 },
125
+ { .fniv = gen_uqshrnb_vec,
126
+ .opt_opc = vec_list,
127
+ .fno = gen_helper_sve2_uqshrnb_d,
128
+ .vece = MO_64 },
129
+ };
130
+ return do_sve2_shr_narrow(s, a, ops);
131
+}
132
+
133
+static void gen_uqshrnt_vec(unsigned vece, TCGv_vec d,
134
+ TCGv_vec n, int64_t shr)
135
+{
136
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
137
+ int halfbits = 4 << vece;
138
+
139
+ tcg_gen_shri_vec(vece, n, n, shr);
140
+ tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
141
+ tcg_gen_umin_vec(vece, n, n, t);
142
+ tcg_gen_shli_vec(vece, n, n, halfbits);
143
+ tcg_gen_bitsel_vec(vece, d, t, d, n);
144
+ tcg_temp_free_vec(t);
145
+}
146
+
147
+static bool trans_UQSHRNT(DisasContext *s, arg_rri_esz *a)
148
+{
149
+ static const TCGOpcode vec_list[] = {
150
+ INDEX_op_shli_vec, INDEX_op_shri_vec, INDEX_op_umin_vec, 0
151
+ };
152
+ static const GVecGen2i ops[3] = {
153
+ { .fniv = gen_uqshrnt_vec,
154
+ .opt_opc = vec_list,
155
+ .load_dest = true,
156
+ .fno = gen_helper_sve2_uqshrnt_h,
157
+ .vece = MO_16 },
158
+ { .fniv = gen_uqshrnt_vec,
159
+ .opt_opc = vec_list,
160
+ .load_dest = true,
161
+ .fno = gen_helper_sve2_uqshrnt_s,
162
+ .vece = MO_32 },
163
+ { .fniv = gen_uqshrnt_vec,
164
+ .opt_opc = vec_list,
165
+ .load_dest = true,
166
+ .fno = gen_helper_sve2_uqshrnt_d,
167
+ .vece = MO_64 },
168
+ };
169
+ return do_sve2_shr_narrow(s, a, ops);
170
+}
171
+
172
+static bool trans_UQRSHRNB(DisasContext *s, arg_rri_esz *a)
173
+{
174
+ static const GVecGen2i ops[3] = {
175
+ { .fno = gen_helper_sve2_uqrshrnb_h },
176
+ { .fno = gen_helper_sve2_uqrshrnb_s },
177
+ { .fno = gen_helper_sve2_uqrshrnb_d },
178
+ };
179
+ return do_sve2_shr_narrow(s, a, ops);
180
+}
181
+
182
+static bool trans_UQRSHRNT(DisasContext *s, arg_rri_esz *a)
183
+{
184
+ static const GVecGen2i ops[3] = {
185
+ { .fno = gen_helper_sve2_uqrshrnt_h },
186
+ { .fno = gen_helper_sve2_uqrshrnt_s },
187
+ { .fno = gen_helper_sve2_uqrshrnt_d },
188
+ };
189
+ return do_sve2_shr_narrow(s, a, ops);
190
+}
191
+
192
static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
193
gen_helper_gvec_4_ptr *fn)
194
{
195
--
196
2.20.1
197
198
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-32-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 3 ++
9
target/arm/translate-sve.c | 67 ++++++++++++++++++++++++++++++++++++++
10
2 files changed, 70 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
17
# SVE integer compare scalar count and limit
18
WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 lt:1 rn:5 eq:1 rd:4
19
20
+# SVE2 pointer conflict compare
21
+WHILE_ptr 00100101 esz:2 1 rm:5 001 100 rn:5 rw:1 rd:4
22
+
23
### SVE Integer Wide Immediate - Unpredicated Group
24
25
# SVE broadcast floating-point immediate (unpredicated)
26
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-sve.c
29
+++ b/target/arm/translate-sve.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
31
return true;
32
}
33
34
+static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
35
+{
36
+ TCGv_i64 op0, op1, diff, t1, tmax;
37
+ TCGv_i32 t2, t3;
38
+ TCGv_ptr ptr;
39
+ unsigned vsz = vec_full_reg_size(s);
40
+ unsigned desc = 0;
41
+
42
+ if (!dc_isar_feature(aa64_sve2, s)) {
43
+ return false;
44
+ }
45
+ if (!sve_access_check(s)) {
46
+ return true;
47
+ }
48
+
49
+ op0 = read_cpu_reg(s, a->rn, 1);
50
+ op1 = read_cpu_reg(s, a->rm, 1);
51
+
52
+ tmax = tcg_const_i64(vsz);
53
+ diff = tcg_temp_new_i64();
54
+
55
+ if (a->rw) {
56
+ /* WHILERW */
57
+ /* diff = abs(op1 - op0), noting that op0/1 are unsigned. */
58
+ t1 = tcg_temp_new_i64();
59
+ tcg_gen_sub_i64(diff, op0, op1);
60
+ tcg_gen_sub_i64(t1, op1, op0);
61
+ tcg_gen_movcond_i64(TCG_COND_GEU, diff, op0, op1, diff, t1);
62
+ tcg_temp_free_i64(t1);
63
+ /* Round down to a multiple of ESIZE. */
64
+ tcg_gen_andi_i64(diff, diff, -1 << a->esz);
65
+ /* If op1 == op0, diff == 0, and the condition is always true. */
66
+ tcg_gen_movcond_i64(TCG_COND_EQ, diff, op0, op1, tmax, diff);
67
+ } else {
68
+ /* WHILEWR */
69
+ tcg_gen_sub_i64(diff, op1, op0);
70
+ /* Round down to a multiple of ESIZE. */
71
+ tcg_gen_andi_i64(diff, diff, -1 << a->esz);
72
+ /* If op0 >= op1, diff <= 0, the condition is always true. */
73
+ tcg_gen_movcond_i64(TCG_COND_GEU, diff, op0, op1, tmax, diff);
74
+ }
75
+
76
+ /* Bound to the maximum. */
77
+ tcg_gen_umin_i64(diff, diff, tmax);
78
+ tcg_temp_free_i64(tmax);
79
+
80
+ /* Since we're bounded, pass as a 32-bit type. */
81
+ t2 = tcg_temp_new_i32();
82
+ tcg_gen_extrl_i64_i32(t2, diff);
83
+ tcg_temp_free_i64(diff);
84
+
85
+ desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz / 8);
86
+ desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
87
+ t3 = tcg_const_i32(desc);
88
+
89
+ ptr = tcg_temp_new_ptr();
90
+ tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
91
+
92
+ gen_helper_sve_whilel(t2, ptr, t2, t3);
93
+ do_pred_flags(t2);
94
+
95
+ tcg_temp_free_ptr(ptr);
96
+ tcg_temp_free_i32(t2);
97
+ tcg_temp_free_i32(t3);
98
+ return true;
99
+}
100
+
101
/*
102
*** SVE Integer Wide Immediate - Unpredicated Group
103
*/
104
--
105
2.20.1
106
107
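(Not part of the patch: a rough scalar reading of the WHILEWR translation above. It models only the number of active elements, not the predicate encoding or the flags; the name and signature are illustrative.)

#include <stdint.h>

static uint64_t whilewr_active_elems(uint64_t rn, uint64_t rm,
                                     unsigned esize, uint64_t vl_bytes)
{
    uint64_t diff = rm - rn;              /* unsigned wrap-around, as in the code */

    if (rn >= rm) {
        return vl_bytes / esize;          /* no hazard: every element is active */
    }
    diff -= diff % esize;                 /* round down to a whole element */
    if (diff > vl_bytes) {
        diff = vl_bytes;                  /* bound to the vector length */
    }
    return diff / esize;
}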
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-33-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 6 ++
9
target/arm/sve.decode | 12 +++
10
target/arm/sve_helper.c | 50 +++++++++
11
target/arm/translate-sve.c | 213 +++++++++++++++++++++++++++++++++++++
12
4 files changed, 281 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_eor3, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_5(sve2_bcax, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_bsl1n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_5(sve2_bsl2n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_nbsl, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/sve.decode
31
+++ b/target/arm/sve.decode
32
@@ -XXX,XX +XXX,XX @@
33
@rda_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 \
34
&rrrr_esz ra=%reg_movprfx
35
36
+# Four operand with unused vector element size
37
+@rdn_ra_rm_e0 ........ ... rm:5 ... ... ra:5 rd:5 \
38
+ &rrrr_esz esz=0 rn=%reg_movprfx
39
+
40
# Three operand with "memory" size, aka immediate left shift
41
@rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
42
43
@@ -XXX,XX +XXX,XX @@ ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
44
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
45
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
46
47
+# SVE2 bitwise ternary operations
48
+EOR3 00000100 00 1 ..... 001 110 ..... ..... @rdn_ra_rm_e0
49
+BSL 00000100 00 1 ..... 001 111 ..... ..... @rdn_ra_rm_e0
50
+BCAX 00000100 01 1 ..... 001 110 ..... ..... @rdn_ra_rm_e0
51
+BSL1N 00000100 01 1 ..... 001 111 ..... ..... @rdn_ra_rm_e0
52
+BSL2N 00000100 10 1 ..... 001 111 ..... ..... @rdn_ra_rm_e0
53
+NBSL 00000100 11 1 ..... 001 111 ..... ..... @rdn_ra_rm_e0
54
+
55
### SVE Index Generation Group
56
57
# SVE index generation (immediate start, immediate increment)
58
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/sve_helper.c
61
+++ b/target/arm/sve_helper.c
62
@@ -XXX,XX +XXX,XX @@ DO_ST1_ZPZ_D(dd_be, zd, MO_64)
63
64
#undef DO_ST1_ZPZ_S
65
#undef DO_ST1_ZPZ_D
66
+
67
+void HELPER(sve2_eor3)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
68
+{
69
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
70
+ uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
71
+
72
+ for (i = 0; i < opr_sz; ++i) {
73
+ d[i] = n[i] ^ m[i] ^ k[i];
74
+ }
75
+}
76
+
77
+void HELPER(sve2_bcax)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
78
+{
79
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
80
+ uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
81
+
82
+ for (i = 0; i < opr_sz; ++i) {
83
+ d[i] = n[i] ^ (m[i] & ~k[i]);
84
+ }
85
+}
86
+
87
+void HELPER(sve2_bsl1n)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
88
+{
89
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
90
+ uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
91
+
92
+ for (i = 0; i < opr_sz; ++i) {
93
+ d[i] = (~n[i] & k[i]) | (m[i] & ~k[i]);
94
+ }
95
+}
96
+
97
+void HELPER(sve2_bsl2n)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
98
+{
99
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
100
+ uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
101
+
102
+ for (i = 0; i < opr_sz; ++i) {
103
+ d[i] = (n[i] & k[i]) | (~m[i] & ~k[i]);
104
+ }
105
+}
106
+
107
+void HELPER(sve2_nbsl)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
108
+{
109
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
110
+ uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
111
+
112
+ for (i = 0; i < opr_sz; ++i) {
113
+ d[i] = ~((n[i] & k[i]) | (m[i] & ~k[i]));
114
+ }
115
+}
116
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate-sve.c
119
+++ b/target/arm/translate-sve.c
120
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn_zzz(DisasContext *s, GVecGen3Fn *gvec_fn,
121
vec_full_reg_offset(s, rm), vsz, vsz);
122
}
123
124
+/* Invoke a vector expander on four Zregs. */
125
+static void gen_gvec_fn_zzzz(DisasContext *s, GVecGen4Fn *gvec_fn,
126
+ int esz, int rd, int rn, int rm, int ra)
127
+{
128
+ unsigned vsz = vec_full_reg_size(s);
129
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
130
+ vec_full_reg_offset(s, rn),
131
+ vec_full_reg_offset(s, rm),
132
+ vec_full_reg_offset(s, ra), vsz, vsz);
133
+}
134
+
135
/* Invoke a vector move on two Zregs. */
136
static bool do_mov_z(DisasContext *s, int rd, int rn)
137
{
138
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
139
return do_zzz_fn(s, a, tcg_gen_gvec_andc);
140
}
141
142
+static bool do_sve2_zzzz_fn(DisasContext *s, arg_rrrr_esz *a, GVecGen4Fn *fn)
143
+{
144
+ if (!dc_isar_feature(aa64_sve2, s)) {
145
+ return false;
146
+ }
147
+ if (sve_access_check(s)) {
148
+ gen_gvec_fn_zzzz(s, fn, a->esz, a->rd, a->rn, a->rm, a->ra);
149
+ }
150
+ return true;
151
+}
152
+
153
+static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
154
+{
155
+ tcg_gen_xor_i64(d, n, m);
156
+ tcg_gen_xor_i64(d, d, k);
157
+}
158
+
159
+static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
160
+ TCGv_vec m, TCGv_vec k)
161
+{
162
+ tcg_gen_xor_vec(vece, d, n, m);
163
+ tcg_gen_xor_vec(vece, d, d, k);
164
+}
165
+
166
+static void gen_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
167
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
168
+{
169
+ static const GVecGen4 op = {
170
+ .fni8 = gen_eor3_i64,
171
+ .fniv = gen_eor3_vec,
172
+ .fno = gen_helper_sve2_eor3,
173
+ .vece = MO_64,
174
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
175
+ };
176
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
177
+}
178
+
179
+static bool trans_EOR3(DisasContext *s, arg_rrrr_esz *a)
180
+{
181
+ return do_sve2_zzzz_fn(s, a, gen_eor3);
182
+}
183
+
184
+static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
185
+{
186
+ tcg_gen_andc_i64(d, m, k);
187
+ tcg_gen_xor_i64(d, d, n);
188
+}
189
+
190
+static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
191
+ TCGv_vec m, TCGv_vec k)
192
+{
193
+ tcg_gen_andc_vec(vece, d, m, k);
194
+ tcg_gen_xor_vec(vece, d, d, n);
195
+}
196
+
197
+static void gen_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
198
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
199
+{
200
+ static const GVecGen4 op = {
201
+ .fni8 = gen_bcax_i64,
202
+ .fniv = gen_bcax_vec,
203
+ .fno = gen_helper_sve2_bcax,
204
+ .vece = MO_64,
205
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
206
+ };
207
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
208
+}
209
+
210
+static bool trans_BCAX(DisasContext *s, arg_rrrr_esz *a)
211
+{
212
+ return do_sve2_zzzz_fn(s, a, gen_bcax);
213
+}
214
+
215
+static void gen_bsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
216
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
217
+{
218
+ /* BSL differs from the generic bitsel in argument ordering. */
219
+ tcg_gen_gvec_bitsel(vece, d, a, n, m, oprsz, maxsz);
220
+}
221
+
222
+static bool trans_BSL(DisasContext *s, arg_rrrr_esz *a)
223
+{
224
+ return do_sve2_zzzz_fn(s, a, gen_bsl);
225
+}
226
+
227
+static void gen_bsl1n_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
228
+{
229
+ tcg_gen_andc_i64(n, k, n);
230
+ tcg_gen_andc_i64(m, m, k);
231
+ tcg_gen_or_i64(d, n, m);
232
+}
233
+
234
+static void gen_bsl1n_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
235
+ TCGv_vec m, TCGv_vec k)
236
+{
237
+ if (TCG_TARGET_HAS_bitsel_vec) {
238
+ tcg_gen_not_vec(vece, n, n);
239
+ tcg_gen_bitsel_vec(vece, d, k, n, m);
240
+ } else {
241
+ tcg_gen_andc_vec(vece, n, k, n);
242
+ tcg_gen_andc_vec(vece, m, m, k);
243
+ tcg_gen_or_vec(vece, d, n, m);
244
+ }
245
+}
246
+
247
+static void gen_bsl1n(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
248
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
249
+{
250
+ static const GVecGen4 op = {
251
+ .fni8 = gen_bsl1n_i64,
252
+ .fniv = gen_bsl1n_vec,
253
+ .fno = gen_helper_sve2_bsl1n,
254
+ .vece = MO_64,
255
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
256
+ };
257
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
258
+}
259
+
260
+static bool trans_BSL1N(DisasContext *s, arg_rrrr_esz *a)
261
+{
262
+ return do_sve2_zzzz_fn(s, a, gen_bsl1n);
263
+}
264
+
265
+static void gen_bsl2n_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
266
+{
267
+ /*
268
+ * Z[dn] = (n & k) | (~m & ~k)
269
+ * = (n & k) | ~(m | k)
270
+ */
271
+ tcg_gen_and_i64(n, n, k);
272
+ if (TCG_TARGET_HAS_orc_i64) {
273
+ tcg_gen_or_i64(m, m, k);
274
+ tcg_gen_orc_i64(d, n, m);
275
+ } else {
276
+ tcg_gen_nor_i64(m, m, k);
277
+ tcg_gen_or_i64(d, n, m);
278
+ }
279
+}
280
+
281
+static void gen_bsl2n_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
282
+ TCGv_vec m, TCGv_vec k)
283
+{
284
+ if (TCG_TARGET_HAS_bitsel_vec) {
285
+ tcg_gen_not_vec(vece, m, m);
286
+ tcg_gen_bitsel_vec(vece, d, k, n, m);
287
+ } else {
288
+ tcg_gen_and_vec(vece, n, n, k);
289
+ tcg_gen_or_vec(vece, m, m, k);
290
+ tcg_gen_orc_vec(vece, d, n, m);
291
+ }
292
+}
293
+
294
+static void gen_bsl2n(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
295
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
296
+{
297
+ static const GVecGen4 op = {
298
+ .fni8 = gen_bsl2n_i64,
299
+ .fniv = gen_bsl2n_vec,
300
+ .fno = gen_helper_sve2_bsl2n,
301
+ .vece = MO_64,
302
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
303
+ };
304
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
305
+}
306
+
307
+static bool trans_BSL2N(DisasContext *s, arg_rrrr_esz *a)
308
+{
309
+ return do_sve2_zzzz_fn(s, a, gen_bsl2n);
310
+}
311
+
312
+static void gen_nbsl_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
313
+{
314
+ tcg_gen_and_i64(n, n, k);
315
+ tcg_gen_andc_i64(m, m, k);
316
+ tcg_gen_nor_i64(d, n, m);
317
+}
318
+
319
+static void gen_nbsl_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
320
+ TCGv_vec m, TCGv_vec k)
321
+{
322
+ tcg_gen_bitsel_vec(vece, d, k, n, m);
323
+ tcg_gen_not_vec(vece, d, d);
324
+}
325
+
326
+static void gen_nbsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
327
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
328
+{
329
+ static const GVecGen4 op = {
330
+ .fni8 = gen_nbsl_i64,
331
+ .fniv = gen_nbsl_vec,
332
+ .fno = gen_helper_sve2_nbsl,
333
+ .vece = MO_64,
334
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
335
+ };
336
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
337
+}
338
+
339
+static bool trans_NBSL(DisasContext *s, arg_rrrr_esz *a)
340
+{
341
+ return do_sve2_zzzz_fn(s, a, gen_nbsl);
342
+}
343
+
344
/*
345
*** SVE Integer Arithmetic - Unpredicated Group
346
*/
347
--
348
2.20.1
349
350
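(Not part of the patch: the bitwise ternary operations above reduce to simple per-lane formulas; a scalar summary with illustrative names, where k is the third source as in the helpers.)

#include <stdint.h>

static uint64_t eor3_ref(uint64_t n, uint64_t m, uint64_t k)  { return n ^ m ^ k; }
static uint64_t bcax_ref(uint64_t n, uint64_t m, uint64_t k)  { return n ^ (m & ~k); }
static uint64_t bsl_ref(uint64_t n, uint64_t m, uint64_t k)   { return (n & k) | (m & ~k); }
static uint64_t bsl1n_ref(uint64_t n, uint64_t m, uint64_t k) { return (~n & k) | (m & ~k); }
static uint64_t bsl2n_ref(uint64_t n, uint64_t m, uint64_t k) { return (n & k) | (~m & ~k); }
static uint64_t nbsl_ref(uint64_t n, uint64_t m, uint64_t k)  { return ~bsl_ref(n, m, k); }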
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-34-richard.henderson@linaro.org
7
Message-Id: <20200415145915.2859-1-steplong@quicinc.com>
8
[rth: Expanded comment for do_match2]
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper-sve.h | 10 ++++++
13
target/arm/sve.decode | 5 +++
14
target/arm/sve_helper.c | 64 ++++++++++++++++++++++++++++++++++++++
15
target/arm/translate-sve.c | 22 +++++++++++++
16
4 files changed, 101 insertions(+)
17
18
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-sve.h
21
+++ b/target/arm/helper-sve.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
27
+ i32, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
29
+ i32, ptr, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_b, TCG_CALL_NO_RWG,
32
+ i32, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_h, TCG_CALL_NO_RWG,
34
+ i32, ptr, ptr, ptr, ptr, i32)
35
+
36
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
37
void, ptr, ptr, ptr, ptr, ptr, i32)
38
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve.decode
42
+++ b/target/arm/sve.decode
43
@@ -XXX,XX +XXX,XX @@ UQSHRNT 01000101 .. 1 ..... 00 1101 ..... ..... @rd_rn_tszimm_shr
44
UQRSHRNB 01000101 .. 1 ..... 00 1110 ..... ..... @rd_rn_tszimm_shr
45
UQRSHRNT 01000101 .. 1 ..... 00 1111 ..... ..... @rd_rn_tszimm_shr
46
47
+### SVE2 Character Match
48
+
49
+MATCH 01000101 .. 1 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
50
+NMATCH 01000101 .. 1 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
51
+
52
## SVE2 floating-point pairwise operations
53
54
FADDP 01100100 .. 010 00 0 100 ... ..... ..... @rdn_pg_rm
55
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/sve_helper.c
58
+++ b/target/arm/sve_helper.c
59
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_nbsl)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
60
d[i] = ~((n[i] & k[i]) | (m[i] & ~k[i]));
61
}
62
}
63
+
64
+/*
65
+ * Returns true if m0 or m1 contains the low uint8_t/uint16_t in n.
66
+ * See hasless(v,1) from
67
+ * https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
68
+ */
69
+static inline bool do_match2(uint64_t n, uint64_t m0, uint64_t m1, int esz)
70
+{
71
+ int bits = 8 << esz;
72
+ uint64_t ones = dup_const(esz, 1);
73
+ uint64_t signs = ones << (bits - 1);
74
+ uint64_t cmp0, cmp1;
75
+
76
+ cmp1 = dup_const(esz, n);
77
+ cmp0 = cmp1 ^ m0;
78
+ cmp1 = cmp1 ^ m1;
79
+ cmp0 = (cmp0 - ones) & ~cmp0;
80
+ cmp1 = (cmp1 - ones) & ~cmp1;
81
+ return (cmp0 | cmp1) & signs;
82
+}
83
+
84
+static inline uint32_t do_match(void *vd, void *vn, void *vm, void *vg,
85
+ uint32_t desc, int esz, bool nmatch)
86
+{
87
+ uint16_t esz_mask = pred_esz_masks[esz];
88
+ intptr_t opr_sz = simd_oprsz(desc);
89
+ uint32_t flags = PREDTEST_INIT;
90
+ intptr_t i, j, k;
91
+
92
+ for (i = 0; i < opr_sz; i += 16) {
93
+ uint64_t m0 = *(uint64_t *)(vm + i);
94
+ uint64_t m1 = *(uint64_t *)(vm + i + 8);
95
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)) & esz_mask;
96
+ uint16_t out = 0;
97
+
98
+ for (j = 0; j < 16; j += 8) {
99
+ uint64_t n = *(uint64_t *)(vn + i + j);
100
+
101
+ for (k = 0; k < 8; k += 1 << esz) {
102
+ if (pg & (1 << (j + k))) {
103
+ bool o = do_match2(n >> (k * 8), m0, m1, esz);
104
+ out |= (o ^ nmatch) << (j + k);
105
+ }
106
+ }
107
+ }
108
+ *(uint16_t *)(vd + H1_2(i >> 3)) = out;
109
+ flags = iter_predtest_fwd(out, pg, flags);
110
+ }
111
+ return flags;
112
+}
113
+
114
+#define DO_PPZZ_MATCH(NAME, ESZ, INV) \
115
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
116
+{ \
117
+ return do_match(vd, vn, vm, vg, desc, ESZ, INV); \
118
+}
119
+
120
+DO_PPZZ_MATCH(sve2_match_ppzz_b, MO_8, false)
121
+DO_PPZZ_MATCH(sve2_match_ppzz_h, MO_16, false)
122
+
123
+DO_PPZZ_MATCH(sve2_nmatch_ppzz_b, MO_8, true)
124
+DO_PPZZ_MATCH(sve2_nmatch_ppzz_h, MO_16, true)
125
+
126
+#undef DO_PPZZ_MATCH
127
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/translate-sve.c
130
+++ b/target/arm/translate-sve.c
131
@@ -XXX,XX +XXX,XX @@ static bool trans_UQRSHRNT(DisasContext *s, arg_rri_esz *a)
132
return do_sve2_shr_narrow(s, a, ops);
133
}
134
135
+static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
136
+ gen_helper_gvec_flags_4 *fn)
137
+{
138
+ if (!dc_isar_feature(aa64_sve2, s)) {
139
+ return false;
140
+ }
141
+ return do_ppzz_flags(s, a, fn);
142
+}
143
+
144
+#define DO_SVE2_PPZZ_MATCH(NAME, name) \
145
+static bool trans_##NAME(DisasContext *s, arg_rprr_esz *a) \
146
+{ \
147
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
148
+ gen_helper_sve2_##name##_ppzz_b, gen_helper_sve2_##name##_ppzz_h, \
149
+ NULL, NULL \
150
+ }; \
151
+ return do_sve2_ppzz_flags(s, a, fns[a->esz]); \
152
+}
153
+
154
+DO_SVE2_PPZZ_MATCH(MATCH, match)
155
+DO_SVE2_PPZZ_MATCH(NMATCH, nmatch)
156
+
157
static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
158
gen_helper_gvec_4_ptr *fn)
159
{
160
--
161
2.20.1
162
163
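The do_match2() helper in the patch above is the classic "haszero"/"hasvalue" trick from the Stanford bithacks page cited in its comment, extended to test two 64-bit words at once and to cover both byte and halfword elements. As a rough standalone sketch of the underlying idea, restricted to the byte case and using made-up helper names rather than anything from the patch:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Broadcast one byte value into all eight byte lanes of a uint64_t. */
static uint64_t dup8(uint8_t b)
{
    return 0x0101010101010101ull * b;
}

/* True if any byte lane of x is zero (haszero(v) from the bithacks page). */
static bool haszero8(uint64_t x)
{
    return ((x - dup8(1)) & ~x & dup8(0x80)) != 0;
}

/* True if any byte lane of x equals n: XOR turns matching lanes into zero. */
static bool hasvalue8(uint64_t x, uint8_t n)
{
    return haszero8(x ^ dup8(n));
}

int main(void)
{
    assert(hasvalue8(0x1122334455667788ull, 0x55));
    assert(!hasvalue8(0x1122334455667788ull, 0x99));
    return 0;
}

do_match2() applies the same comparison per element size and ORs the results for the two m words, and the MATCH/NMATCH expansions then only differ in whether that per-element result is inverted before being written back to the predicate.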
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-35-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 14 ++++++++++
9
target/arm/sve.decode | 14 ++++++++++
10
target/arm/sve_helper.c | 30 +++++++++++++++++++++
11
target/arm/translate-sve.c | 54 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 112 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_bcax, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_5(sve2_bsl1n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_bsl2n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_5(sve2_nbsl, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_h, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_s, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_d, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@ FMAXNMP 01100100 .. 010 10 0 100 ... ..... ..... @rdn_pg_rm
41
FMINNMP 01100100 .. 010 10 1 100 ... ..... ..... @rdn_pg_rm
42
FMAXP 01100100 .. 010 11 0 100 ... ..... ..... @rdn_pg_rm
43
FMINP 01100100 .. 010 11 1 100 ... ..... ..... @rdn_pg_rm
44
+
45
+#### SVE Integer Multiply-Add (unpredicated)
46
+
47
+## SVE2 saturating multiply-add long
48
+
49
+SQDMLALB_zzzw 01000100 .. 0 ..... 0110 00 ..... ..... @rda_rn_rm
50
+SQDMLALT_zzzw 01000100 .. 0 ..... 0110 01 ..... ..... @rda_rn_rm
51
+SQDMLSLB_zzzw 01000100 .. 0 ..... 0110 10 ..... ..... @rda_rn_rm
52
+SQDMLSLT_zzzw 01000100 .. 0 ..... 0110 11 ..... ..... @rda_rn_rm
53
+
54
+## SVE2 saturating multiply-add interleaved long
55
+
56
+SQDMLALBT 01000100 .. 0 ..... 00001 0 ..... ..... @rda_rn_rm
57
+SQDMLSLBT 01000100 .. 0 ..... 00001 1 ..... ..... @rda_rn_rm
58
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/sve_helper.c
61
+++ b/target/arm/sve_helper.c
62
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_adcl_d)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
63
}
64
}
65
66
+#define DO_SQDMLAL(NAME, TYPEW, TYPEN, HW, HN, DMUL_OP, SUM_OP) \
67
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
68
+{ \
69
+ intptr_t i, opr_sz = simd_oprsz(desc); \
70
+ int sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
71
+ int sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPEN); \
72
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
73
+ TYPEW nn = *(TYPEN *)(vn + HN(i + sel1)); \
74
+ TYPEW mm = *(TYPEN *)(vm + HN(i + sel2)); \
75
+ TYPEW aa = *(TYPEW *)(va + HW(i)); \
76
+ *(TYPEW *)(vd + HW(i)) = SUM_OP(aa, DMUL_OP(nn, mm)); \
77
+ } \
78
+}
79
+
80
+DO_SQDMLAL(sve2_sqdmlal_zzzw_h, int16_t, int8_t, H1_2, H1,
81
+ do_sqdmull_h, DO_SQADD_H)
82
+DO_SQDMLAL(sve2_sqdmlal_zzzw_s, int32_t, int16_t, H1_4, H1_2,
83
+ do_sqdmull_s, DO_SQADD_S)
84
+DO_SQDMLAL(sve2_sqdmlal_zzzw_d, int64_t, int32_t, , H1_4,
85
+ do_sqdmull_d, do_sqadd_d)
86
+
87
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_h, int16_t, int8_t, H1_2, H1,
88
+ do_sqdmull_h, DO_SQSUB_H)
89
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2,
90
+ do_sqdmull_s, DO_SQSUB_S)
91
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, , H1_4,
92
+ do_sqdmull_d, do_sqsub_d)
93
+
94
+#undef DO_SQDMLAL
95
+
96
#define DO_BITPERM(NAME, TYPE, OP) \
97
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
98
{ \
99
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/translate-sve.c
102
+++ b/target/arm/translate-sve.c
103
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZPZZ_FP(FMAXNMP, fmaxnmp)
104
DO_SVE2_ZPZZ_FP(FMINNMP, fminnmp)
105
DO_SVE2_ZPZZ_FP(FMAXP, fmaxp)
106
DO_SVE2_ZPZZ_FP(FMINP, fminp)
107
+
108
+/*
109
+ * SVE Integer Multiply-Add (unpredicated)
110
+ */
111
+
112
+static bool do_sqdmlal_zzzw(DisasContext *s, arg_rrrr_esz *a,
113
+ bool sel1, bool sel2)
114
+{
115
+ static gen_helper_gvec_4 * const fns[] = {
116
+ NULL, gen_helper_sve2_sqdmlal_zzzw_h,
117
+ gen_helper_sve2_sqdmlal_zzzw_s, gen_helper_sve2_sqdmlal_zzzw_d,
118
+ };
119
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], (sel2 << 1) | sel1);
120
+}
121
+
122
+static bool do_sqdmlsl_zzzw(DisasContext *s, arg_rrrr_esz *a,
123
+ bool sel1, bool sel2)
124
+{
125
+ static gen_helper_gvec_4 * const fns[] = {
126
+ NULL, gen_helper_sve2_sqdmlsl_zzzw_h,
127
+ gen_helper_sve2_sqdmlsl_zzzw_s, gen_helper_sve2_sqdmlsl_zzzw_d,
128
+ };
129
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], (sel2 << 1) | sel1);
130
+}
131
+
132
+static bool trans_SQDMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
133
+{
134
+ return do_sqdmlal_zzzw(s, a, false, false);
135
+}
136
+
137
+static bool trans_SQDMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
138
+{
139
+ return do_sqdmlal_zzzw(s, a, true, true);
140
+}
141
+
142
+static bool trans_SQDMLALBT(DisasContext *s, arg_rrrr_esz *a)
143
+{
144
+ return do_sqdmlal_zzzw(s, a, false, true);
145
+}
146
+
147
+static bool trans_SQDMLSLB_zzzw(DisasContext *s, arg_rrrr_esz *a)
148
+{
149
+ return do_sqdmlsl_zzzw(s, a, false, false);
150
+}
151
+
152
+static bool trans_SQDMLSLT_zzzw(DisasContext *s, arg_rrrr_esz *a)
153
+{
154
+ return do_sqdmlsl_zzzw(s, a, true, true);
155
+}
156
+
157
+static bool trans_SQDMLSLBT(DisasContext *s, arg_rrrr_esz *a)
158
+{
159
+ return do_sqdmlsl_zzzw(s, a, false, true);
160
+}
161
--
162
2.20.1
163
164
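In the patch above, the sel1/sel2 flags simply choose whether the even (bottom) or odd (top) narrow elements of each source feed the widening operation; the arithmetic of one lane is doubling multiply, saturate to the wide type, then saturating accumulate. A minimal scalar model of a single SQDMLALB lane at the int8_t to int16_t size (sketch only, with invented names, not the patch's helpers):

#include <assert.h>
#include <stdint.h>

/* Clamp a 32-bit intermediate to the int16_t range. */
static int16_t sat16(int32_t x)
{
    if (x > INT16_MAX) {
        return INT16_MAX;
    }
    if (x < INT16_MIN) {
        return INT16_MIN;
    }
    return x;
}

/* One SQDMLALB lane at the byte size: widen, double with saturation,
 * then saturating-add into the wide accumulator. */
static int16_t sqdmlalb_lane(int16_t acc, int8_t n, int8_t m)
{
    int32_t prod2 = 2 * (int32_t)n * (int32_t)m;    /* doubling product */
    return sat16((int32_t)acc + sat16(prod2));
}

int main(void)
{
    /* 2 * -128 * -128 = 32768 saturates to 32767 before accumulating. */
    assert(sqdmlalb_lane(0, -128, -128) == INT16_MAX);
    assert(sqdmlalb_lane(-100, 10, 20) == 300);
    return 0;
}

The T and BT forms only change which of the two selectors picks odd elements, which is why trans_SQDMLALBT() can reuse do_sqdmlal_zzzw() with sel1=false, sel2=true.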
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
SVE2 has two additional sizes of the operation and unlike NEON,
4
there is no saturation flag. Create new entry points for SVE2
5
that do not set QC.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210525010358.152808-36-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 17 ++++
13
target/arm/sve.decode | 5 ++
14
target/arm/translate-sve.c | 18 +++++
15
target/arm/vec_helper.c | 161 +++++++++++++++++++++++++++++++++++--
16
4 files changed, 195 insertions(+), 6 deletions(-)
17
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
23
DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
24
void, ptr, ptr, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_b, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_b, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_h, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_h, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_s, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_s, TCG_CALL_NO_RWG,
37
+ void, ptr, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_d, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_d, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+
43
DEF_HELPER_FLAGS_4(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
DEF_HELPER_FLAGS_4(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
DEF_HELPER_FLAGS_4(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve.decode
49
+++ b/target/arm/sve.decode
50
@@ -XXX,XX +XXX,XX @@ SQDMLSLT_zzzw 01000100 .. 0 ..... 0110 11 ..... ..... @rda_rn_rm
51
52
SQDMLALBT 01000100 .. 0 ..... 00001 0 ..... ..... @rda_rn_rm
53
SQDMLSLBT 01000100 .. 0 ..... 00001 1 ..... ..... @rda_rn_rm
54
+
55
+## SVE2 saturating multiply-add high
56
+
57
+SQRDMLAH_zzzz 01000100 .. 0 ..... 01110 0 ..... ..... @rda_rn_rm
58
+SQRDMLSH_zzzz 01000100 .. 0 ..... 01110 1 ..... ..... @rda_rn_rm
59
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/translate-sve.c
62
+++ b/target/arm/translate-sve.c
63
@@ -XXX,XX +XXX,XX @@ static bool trans_SQDMLSLBT(DisasContext *s, arg_rrrr_esz *a)
64
{
65
return do_sqdmlsl_zzzw(s, a, false, true);
66
}
67
+
68
+static bool trans_SQRDMLAH_zzzz(DisasContext *s, arg_rrrr_esz *a)
69
+{
70
+ static gen_helper_gvec_4 * const fns[] = {
71
+ gen_helper_sve2_sqrdmlah_b, gen_helper_sve2_sqrdmlah_h,
72
+ gen_helper_sve2_sqrdmlah_s, gen_helper_sve2_sqrdmlah_d,
73
+ };
74
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], 0);
75
+}
76
+
77
+static bool trans_SQRDMLSH_zzzz(DisasContext *s, arg_rrrr_esz *a)
78
+{
79
+ static gen_helper_gvec_4 * const fns[] = {
80
+ gen_helper_sve2_sqrdmlsh_b, gen_helper_sve2_sqrdmlsh_h,
81
+ gen_helper_sve2_sqrdmlsh_s, gen_helper_sve2_sqrdmlsh_d,
82
+ };
83
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], 0);
84
+}
85
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/arm/vec_helper.c
88
+++ b/target/arm/vec_helper.c
89
@@ -XXX,XX +XXX,XX @@
90
#include "exec/helper-proto.h"
91
#include "tcg/tcg-gvec-desc.h"
92
#include "fpu/softfloat.h"
93
+#include "qemu/int128.h"
94
#include "vec_internal.h"
95
96
/* Note that vector data is stored in host-endian 64-bit chunks,
97
@@ -XXX,XX +XXX,XX @@
98
#define H4(x) (x)
99
#endif
100
101
+/* Signed saturating rounding doubling multiply-accumulate high half, 8-bit */
102
+static int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
103
+ bool neg, bool round)
104
+{
105
+ /*
106
+ * Simplify:
107
+ * = ((a3 << 8) + ((e1 * e2) << 1) + (round << 7)) >> 8
108
+ * = ((a3 << 7) + (e1 * e2) + (round << 6)) >> 7
109
+ */
110
+ int32_t ret = (int32_t)src1 * src2;
111
+ if (neg) {
112
+ ret = -ret;
113
+ }
114
+ ret += ((int32_t)src3 << 7) + (round << 6);
115
+ ret >>= 7;
116
+
117
+ if (ret != (int8_t)ret) {
118
+ ret = (ret < 0 ? INT8_MIN : INT8_MAX);
119
+ }
120
+ return ret;
121
+}
122
+
123
+void HELPER(sve2_sqrdmlah_b)(void *vd, void *vn, void *vm,
124
+ void *va, uint32_t desc)
125
+{
126
+ intptr_t i, opr_sz = simd_oprsz(desc);
127
+ int8_t *d = vd, *n = vn, *m = vm, *a = va;
128
+
129
+ for (i = 0; i < opr_sz; ++i) {
130
+ d[i] = do_sqrdmlah_b(n[i], m[i], a[i], false, true);
131
+ }
132
+}
133
+
134
+void HELPER(sve2_sqrdmlsh_b)(void *vd, void *vn, void *vm,
135
+ void *va, uint32_t desc)
136
+{
137
+ intptr_t i, opr_sz = simd_oprsz(desc);
138
+ int8_t *d = vd, *n = vn, *m = vm, *a = va;
139
+
140
+ for (i = 0; i < opr_sz; ++i) {
141
+ d[i] = do_sqrdmlah_b(n[i], m[i], a[i], true, true);
142
+ }
143
+}
144
+
145
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
146
static int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
147
bool neg, bool round, uint32_t *sat)
148
{
149
- /*
150
- * Simplify:
151
- * = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
152
- * = ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15
153
- */
154
+ /* Simplify similarly to do_sqrdmlah_b above. */
155
int32_t ret = (int32_t)src1 * src2;
156
if (neg) {
157
ret = -ret;
158
@@ -XXX,XX +XXX,XX @@ void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
159
clear_tail(d, opr_sz, simd_maxsz(desc));
160
}
161
162
+void HELPER(sve2_sqrdmlah_h)(void *vd, void *vn, void *vm,
163
+ void *va, uint32_t desc)
164
+{
165
+ intptr_t i, opr_sz = simd_oprsz(desc);
166
+ int16_t *d = vd, *n = vn, *m = vm, *a = va;
167
+ uint32_t discard;
168
+
169
+ for (i = 0; i < opr_sz / 2; ++i) {
170
+ d[i] = do_sqrdmlah_h(n[i], m[i], a[i], false, true, &discard);
171
+ }
172
+}
173
+
174
+void HELPER(sve2_sqrdmlsh_h)(void *vd, void *vn, void *vm,
175
+ void *va, uint32_t desc)
176
+{
177
+ intptr_t i, opr_sz = simd_oprsz(desc);
178
+ int16_t *d = vd, *n = vn, *m = vm, *a = va;
179
+ uint32_t discard;
180
+
181
+ for (i = 0; i < opr_sz / 2; ++i) {
182
+ d[i] = do_sqrdmlah_h(n[i], m[i], a[i], true, true, &discard);
183
+ }
184
+}
185
+
186
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
187
static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
188
bool neg, bool round, uint32_t *sat)
189
{
190
- /* Simplify similarly to int_qrdmlah_s16 above. */
191
+ /* Simplify similarly to do_sqrdmlah_b above. */
192
int64_t ret = (int64_t)src1 * src2;
193
if (neg) {
194
ret = -ret;
195
@@ -XXX,XX +XXX,XX @@ void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
196
clear_tail(d, opr_sz, simd_maxsz(desc));
197
}
198
199
+void HELPER(sve2_sqrdmlah_s)(void *vd, void *vn, void *vm,
200
+ void *va, uint32_t desc)
201
+{
202
+ intptr_t i, opr_sz = simd_oprsz(desc);
203
+ int32_t *d = vd, *n = vn, *m = vm, *a = va;
204
+ uint32_t discard;
205
+
206
+ for (i = 0; i < opr_sz / 4; ++i) {
207
+ d[i] = do_sqrdmlah_s(n[i], m[i], a[i], false, true, &discard);
208
+ }
209
+}
210
+
211
+void HELPER(sve2_sqrdmlsh_s)(void *vd, void *vn, void *vm,
212
+ void *va, uint32_t desc)
213
+{
214
+ intptr_t i, opr_sz = simd_oprsz(desc);
215
+ int32_t *d = vd, *n = vn, *m = vm, *a = va;
216
+ uint32_t discard;
217
+
218
+ for (i = 0; i < opr_sz / 4; ++i) {
219
+ d[i] = do_sqrdmlah_s(n[i], m[i], a[i], true, true, &discard);
220
+ }
221
+}
222
+
223
+/* Signed saturating rounding doubling multiply-accumulate high half, 64-bit */
224
+static int64_t do_sat128_d(Int128 r)
225
+{
226
+ int64_t ls = int128_getlo(r);
227
+ int64_t hs = int128_gethi(r);
228
+
229
+ if (unlikely(hs != (ls >> 63))) {
230
+ return hs < 0 ? INT64_MIN : INT64_MAX;
231
+ }
232
+ return ls;
233
+}
234
+
235
+static int64_t do_sqrdmlah_d(int64_t n, int64_t m, int64_t a,
236
+ bool neg, bool round)
237
+{
238
+ uint64_t l, h;
239
+ Int128 r, t;
240
+
241
+ /* As in do_sqrdmlah_b, but with 128-bit arithmetic. */
242
+ muls64(&l, &h, m, n);
243
+ r = int128_make128(l, h);
244
+ if (neg) {
245
+ r = int128_neg(r);
246
+ }
247
+ if (a) {
248
+ t = int128_exts64(a);
249
+ t = int128_lshift(t, 63);
250
+ r = int128_add(r, t);
251
+ }
252
+ if (round) {
253
+ t = int128_exts64(1ll << 62);
254
+ r = int128_add(r, t);
255
+ }
256
+ r = int128_rshift(r, 63);
257
+
258
+ return do_sat128_d(r);
259
+}
260
+
261
+void HELPER(sve2_sqrdmlah_d)(void *vd, void *vn, void *vm,
262
+ void *va, uint32_t desc)
263
+{
264
+ intptr_t i, opr_sz = simd_oprsz(desc);
265
+ int64_t *d = vd, *n = vn, *m = vm, *a = va;
266
+
267
+ for (i = 0; i < opr_sz / 8; ++i) {
268
+ d[i] = do_sqrdmlah_d(n[i], m[i], a[i], false, true);
269
+ }
270
+}
271
+
272
+void HELPER(sve2_sqrdmlsh_d)(void *vd, void *vn, void *vm,
273
+ void *va, uint32_t desc)
274
+{
275
+ intptr_t i, opr_sz = simd_oprsz(desc);
276
+ int64_t *d = vd, *n = vn, *m = vm, *a = va;
277
+
278
+ for (i = 0; i < opr_sz / 8; ++i) {
279
+ d[i] = do_sqrdmlah_d(n[i], m[i], a[i], true, true);
280
+ }
281
+}
282
+
283
/* Integer 8 and 16-bit dot-product.
284
*
285
* Note that for the loops herein, host endianness does not matter
286
--
287
2.20.1
288
289
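The comment in do_sqrdmlah_b() above relies on the fact that halving both the accumulator shift and the rounding constant does not change the result, because the original numerator is even. If in doubt, the identity can be checked exhaustively for the 8-bit case with a few lines of throwaway C (assuming arithmetic shifts of negative values, as the patch itself does):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Verify that
     *   ((a << 8) + ((n * m) << 1) + (1 << 7)) >> 8
     * equals
     *   ((a << 7) + (n * m) + (1 << 6)) >> 7
     * for all int8_t operands, as claimed by the do_sqrdmlah_b() comment.
     */
    for (int32_t n = -128; n <= 127; n++) {
        for (int32_t m = -128; m <= 127; m++) {
            for (int32_t a = -128; a <= 127; a++) {
                int32_t p = n * m;
                int32_t full = ((a << 8) + (p << 1) + (1 << 7)) >> 8;
                int32_t half = ((a << 7) + p + (1 << 6)) >> 7;
                assert(full == half);
            }
        }
    }
    return 0;
}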
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-37-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 28 ++++++++++++++
9
target/arm/sve.decode | 11 ++++++
10
target/arm/sve_helper.c | 18 +++++++++
11
target/arm/translate-sve.c | 76 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 133 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_h, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_s, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_d, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_h, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_s, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_d, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_h, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_s, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_d, TCG_CALL_NO_RWG,
49
+ void, ptr, ptr, ptr, ptr, i32)
50
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/sve.decode
53
+++ b/target/arm/sve.decode
54
@@ -XXX,XX +XXX,XX @@ SQDMLSLBT 01000100 .. 0 ..... 00001 1 ..... ..... @rda_rn_rm
55
56
SQRDMLAH_zzzz 01000100 .. 0 ..... 01110 0 ..... ..... @rda_rn_rm
57
SQRDMLSH_zzzz 01000100 .. 0 ..... 01110 1 ..... ..... @rda_rn_rm
58
+
59
+## SVE2 integer multiply-add long
60
+
61
+SMLALB_zzzw 01000100 .. 0 ..... 010 000 ..... ..... @rda_rn_rm
62
+SMLALT_zzzw 01000100 .. 0 ..... 010 001 ..... ..... @rda_rn_rm
63
+UMLALB_zzzw 01000100 .. 0 ..... 010 010 ..... ..... @rda_rn_rm
64
+UMLALT_zzzw 01000100 .. 0 ..... 010 011 ..... ..... @rda_rn_rm
65
+SMLSLB_zzzw 01000100 .. 0 ..... 010 100 ..... ..... @rda_rn_rm
66
+SMLSLT_zzzw 01000100 .. 0 ..... 010 101 ..... ..... @rda_rn_rm
67
+UMLSLB_zzzw 01000100 .. 0 ..... 010 110 ..... ..... @rda_rn_rm
68
+UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
69
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/sve_helper.c
72
+++ b/target/arm/sve_helper.c
73
@@ -XXX,XX +XXX,XX @@ DO_ZZZW_ACC(sve2_uabal_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
74
DO_ZZZW_ACC(sve2_uabal_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
75
DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
76
77
+DO_ZZZW_ACC(sve2_smlal_zzzw_h, int16_t, int8_t, H1_2, H1, DO_MUL)
78
+DO_ZZZW_ACC(sve2_smlal_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
79
+DO_ZZZW_ACC(sve2_smlal_zzzw_d, int64_t, int32_t, , H1_4, DO_MUL)
80
+
81
+DO_ZZZW_ACC(sve2_umlal_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
82
+DO_ZZZW_ACC(sve2_umlal_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
83
+DO_ZZZW_ACC(sve2_umlal_zzzw_d, uint64_t, uint32_t, , H1_4, DO_MUL)
84
+
85
+#define DO_NMUL(N, M) -(N * M)
86
+
87
+DO_ZZZW_ACC(sve2_smlsl_zzzw_h, int16_t, int8_t, H1_2, H1, DO_NMUL)
88
+DO_ZZZW_ACC(sve2_smlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_NMUL)
89
+DO_ZZZW_ACC(sve2_smlsl_zzzw_d, int64_t, int32_t, , H1_4, DO_NMUL)
90
+
91
+DO_ZZZW_ACC(sve2_umlsl_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_NMUL)
92
+DO_ZZZW_ACC(sve2_umlsl_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_NMUL)
93
+DO_ZZZW_ACC(sve2_umlsl_zzzw_d, uint64_t, uint32_t, , H1_4, DO_NMUL)
94
+
95
#undef DO_ZZZW_ACC
96
97
#define DO_XTNB(NAME, TYPE, OP) \
98
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/translate-sve.c
101
+++ b/target/arm/translate-sve.c
102
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRDMLSH_zzzz(DisasContext *s, arg_rrrr_esz *a)
103
};
104
return do_sve2_zzzz_ool(s, a, fns[a->esz], 0);
105
}
106
+
107
+static bool do_smlal_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
108
+{
109
+ static gen_helper_gvec_4 * const fns[] = {
110
+ NULL, gen_helper_sve2_smlal_zzzw_h,
111
+ gen_helper_sve2_smlal_zzzw_s, gen_helper_sve2_smlal_zzzw_d,
112
+ };
113
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], sel);
114
+}
115
+
116
+static bool trans_SMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
117
+{
118
+ return do_smlal_zzzw(s, a, false);
119
+}
120
+
121
+static bool trans_SMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
122
+{
123
+ return do_smlal_zzzw(s, a, true);
124
+}
125
+
126
+static bool do_umlal_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
127
+{
128
+ static gen_helper_gvec_4 * const fns[] = {
129
+ NULL, gen_helper_sve2_umlal_zzzw_h,
130
+ gen_helper_sve2_umlal_zzzw_s, gen_helper_sve2_umlal_zzzw_d,
131
+ };
132
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], sel);
133
+}
134
+
135
+static bool trans_UMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
136
+{
137
+ return do_umlal_zzzw(s, a, false);
138
+}
139
+
140
+static bool trans_UMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
141
+{
142
+ return do_umlal_zzzw(s, a, true);
143
+}
144
+
145
+static bool do_smlsl_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
146
+{
147
+ static gen_helper_gvec_4 * const fns[] = {
148
+ NULL, gen_helper_sve2_smlsl_zzzw_h,
149
+ gen_helper_sve2_smlsl_zzzw_s, gen_helper_sve2_smlsl_zzzw_d,
150
+ };
151
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], sel);
152
+}
153
+
154
+static bool trans_SMLSLB_zzzw(DisasContext *s, arg_rrrr_esz *a)
155
+{
156
+ return do_smlsl_zzzw(s, a, false);
157
+}
158
+
159
+static bool trans_SMLSLT_zzzw(DisasContext *s, arg_rrrr_esz *a)
160
+{
161
+ return do_smlsl_zzzw(s, a, true);
162
+}
163
+
164
+static bool do_umlsl_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
165
+{
166
+ static gen_helper_gvec_4 * const fns[] = {
167
+ NULL, gen_helper_sve2_umlsl_zzzw_h,
168
+ gen_helper_sve2_umlsl_zzzw_s, gen_helper_sve2_umlsl_zzzw_d,
169
+ };
170
+ return do_sve2_zzzz_ool(s, a, fns[a->esz], sel);
171
+}
172
+
173
+static bool trans_UMLSLB_zzzw(DisasContext *s, arg_rrrr_esz *a)
174
+{
175
+ return do_umlsl_zzzw(s, a, false);
176
+}
177
+
178
+static bool trans_UMLSLT_zzzw(DisasContext *s, arg_rrrr_esz *a)
179
+{
180
+ return do_umlsl_zzzw(s, a, true);
181
+}
182
--
183
2.20.1
184
185
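One small point worth noting in the patch above: the multiply-subtract forms are not new expansions, they feed a negated product (DO_NMUL) into the same widening accumulate macro used for SMLAL/UMLAL. A one-lane scalar sketch of SMLSLB at the byte size, with an invented helper name:

#include <assert.h>
#include <stdint.h>

/* One SMLSLB lane at the byte size, structured the way the patch builds it:
 * the usual widening accumulate step, fed with a negated product. */
static int16_t smlslb_lane(int16_t acc, int8_t n, int8_t m)
{
    int16_t nmul = -((int16_t)n * m);   /* DO_NMUL(N, M) */
    return acc + nmul;                  /* same accumulate step as SMLALB */
}

int main(void)
{
    assert(smlslb_lane(1000, 10, 30) == 700);   /* 1000 - 10*30 */
    assert(smlslb_lane(-5, -4, 6) == 19);       /* -5 - (-24)   */
    return 0;
}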
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-38-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 18 +++++++++++++++
9
target/arm/vec_internal.h | 5 +++++
10
target/arm/sve.decode | 5 +++++
11
target/arm/sve_helper.c | 46 ++++++++++++++++++++++++++++++++++++++
12
target/arm/translate-sve.c | 32 ++++++++++++++++++++++++++
13
target/arm/vec_helper.c | 15 ++++++-------
14
6 files changed, 113 insertions(+), 8 deletions(-)
15
16
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-sve.h
19
+++ b/target/arm/helper-sve.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_s, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_d, TCG_CALL_NO_RWG,
23
void, ptr, ptr, ptr, ptr, i32)
24
+
25
+DEF_HELPER_FLAGS_5(sve2_cmla_zzzz_b, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_cmla_zzzz_h, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(sve2_cmla_zzzz_s, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_5(sve2_cmla_zzzz_d, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_b, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_h, TCG_CALL_NO_RWG,
37
+ void, ptr, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_s, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_d, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/vec_internal.h
45
+++ b/target/arm/vec_internal.h
46
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_suqrshl_d(int64_t src, int64_t shift,
47
return do_uqrshl_d(src, shift, round, sat);
48
}
49
50
+int8_t do_sqrdmlah_b(int8_t, int8_t, int8_t, bool, bool);
51
+int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *);
52
+int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *);
53
+int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool);
54
+
55
#endif /* TARGET_ARM_VEC_INTERNALS_H */
56
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/sve.decode
59
+++ b/target/arm/sve.decode
60
@@ -XXX,XX +XXX,XX @@ SMLSLB_zzzw 01000100 .. 0 ..... 010 100 ..... ..... @rda_rn_rm
61
SMLSLT_zzzw 01000100 .. 0 ..... 010 101 ..... ..... @rda_rn_rm
62
UMLSLB_zzzw 01000100 .. 0 ..... 010 110 ..... ..... @rda_rn_rm
63
UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
64
+
65
+## SVE2 complex integer multiply-add
66
+
67
+CMLA_zzzz 01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5 ra=%reg_movprfx
68
+SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
69
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/sve_helper.c
72
+++ b/target/arm/sve_helper.c
73
@@ -XXX,XX +XXX,XX @@ DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, , H1_4,
74
75
#undef DO_SQDMLAL
76
77
+#define DO_CMLA_FUNC(NAME, TYPE, H, OP) \
78
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
79
+{ \
80
+ intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE); \
81
+ int rot = simd_data(desc); \
82
+ int sel_a = rot & 1, sel_b = sel_a ^ 1; \
83
+ bool sub_r = rot == 1 || rot == 2; \
84
+ bool sub_i = rot >= 2; \
85
+ TYPE *d = vd, *n = vn, *m = vm, *a = va; \
86
+ for (i = 0; i < opr_sz; i += 2) { \
87
+ TYPE elt1_a = n[H(i + sel_a)]; \
88
+ TYPE elt2_a = m[H(i + sel_a)]; \
89
+ TYPE elt2_b = m[H(i + sel_b)]; \
90
+ d[H(i)] = OP(elt1_a, elt2_a, a[H(i)], sub_r); \
91
+ d[H(i + 1)] = OP(elt1_a, elt2_b, a[H(i + 1)], sub_i); \
92
+ } \
93
+}
94
+
95
+#define DO_CMLA(N, M, A, S) (A + (N * M) * (S ? -1 : 1))
96
+
97
+DO_CMLA_FUNC(sve2_cmla_zzzz_b, uint8_t, H1, DO_CMLA)
98
+DO_CMLA_FUNC(sve2_cmla_zzzz_h, uint16_t, H2, DO_CMLA)
99
+DO_CMLA_FUNC(sve2_cmla_zzzz_s, uint32_t, H4, DO_CMLA)
100
+DO_CMLA_FUNC(sve2_cmla_zzzz_d, uint64_t, , DO_CMLA)
101
+
102
+#define DO_SQRDMLAH_B(N, M, A, S) \
103
+ do_sqrdmlah_b(N, M, A, S, true)
104
+#define DO_SQRDMLAH_H(N, M, A, S) \
105
+ ({ uint32_t discard; do_sqrdmlah_h(N, M, A, S, true, &discard); })
106
+#define DO_SQRDMLAH_S(N, M, A, S) \
107
+ ({ uint32_t discard; do_sqrdmlah_s(N, M, A, S, true, &discard); })
108
+#define DO_SQRDMLAH_D(N, M, A, S) \
109
+ do_sqrdmlah_d(N, M, A, S, true)
110
+
111
+DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_b, int8_t, H1, DO_SQRDMLAH_B)
112
+DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_h, int16_t, H2, DO_SQRDMLAH_H)
113
+DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_s, int32_t, H4, DO_SQRDMLAH_S)
114
+DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_d, int64_t, , DO_SQRDMLAH_D)
115
+
116
+#undef DO_CMLA
117
+#undef DO_CMLA_FUNC
118
+#undef DO_SQRDMLAH_B
119
+#undef DO_SQRDMLAH_H
120
+#undef DO_SQRDMLAH_S
121
+#undef DO_SQRDMLAH_D
122
+
123
#define DO_BITPERM(NAME, TYPE, OP) \
124
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
125
{ \
126
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/target/arm/translate-sve.c
129
+++ b/target/arm/translate-sve.c
130
@@ -XXX,XX +XXX,XX @@ static bool trans_UMLSLT_zzzw(DisasContext *s, arg_rrrr_esz *a)
131
{
132
return do_umlsl_zzzw(s, a, true);
133
}
134
+
135
+static bool trans_CMLA_zzzz(DisasContext *s, arg_CMLA_zzzz *a)
136
+{
137
+ static gen_helper_gvec_4 * const fns[] = {
138
+ gen_helper_sve2_cmla_zzzz_b, gen_helper_sve2_cmla_zzzz_h,
139
+ gen_helper_sve2_cmla_zzzz_s, gen_helper_sve2_cmla_zzzz_d,
140
+ };
141
+
142
+ if (!dc_isar_feature(aa64_sve2, s)) {
143
+ return false;
144
+ }
145
+ if (sve_access_check(s)) {
146
+ gen_gvec_ool_zzzz(s, fns[a->esz], a->rd, a->rn, a->rm, a->ra, a->rot);
147
+ }
148
+ return true;
149
+}
150
+
151
+static bool trans_SQRDCMLAH_zzzz(DisasContext *s, arg_SQRDCMLAH_zzzz *a)
152
+{
153
+ static gen_helper_gvec_4 * const fns[] = {
154
+ gen_helper_sve2_sqrdcmlah_zzzz_b, gen_helper_sve2_sqrdcmlah_zzzz_h,
155
+ gen_helper_sve2_sqrdcmlah_zzzz_s, gen_helper_sve2_sqrdcmlah_zzzz_d,
156
+ };
157
+
158
+ if (!dc_isar_feature(aa64_sve2, s)) {
159
+ return false;
160
+ }
161
+ if (sve_access_check(s)) {
162
+ gen_gvec_ool_zzzz(s, fns[a->esz], a->rd, a->rn, a->rm, a->ra, a->rot);
163
+ }
164
+ return true;
165
+}
166
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
167
index XXXXXXX..XXXXXXX 100644
168
--- a/target/arm/vec_helper.c
169
+++ b/target/arm/vec_helper.c
170
@@ -XXX,XX +XXX,XX @@
171
#endif
172
173
/* Signed saturating rounding doubling multiply-accumulate high half, 8-bit */
174
-static int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
175
- bool neg, bool round)
176
+int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
177
+ bool neg, bool round)
178
{
179
/*
180
* Simplify:
181
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_b)(void *vd, void *vn, void *vm,
182
}
183
184
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
185
-static int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
186
- bool neg, bool round, uint32_t *sat)
187
+int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
188
+ bool neg, bool round, uint32_t *sat)
189
{
190
/* Simplify similarly to do_sqrdmlah_b above. */
191
int32_t ret = (int32_t)src1 * src2;
192
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_h)(void *vd, void *vn, void *vm,
193
}
194
195
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
196
-static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
197
- bool neg, bool round, uint32_t *sat)
198
+int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
199
+ bool neg, bool round, uint32_t *sat)
200
{
201
/* Simplify similarly to do_sqrdmlah_b above. */
202
int64_t ret = (int64_t)src1 * src2;
203
@@ -XXX,XX +XXX,XX @@ static int64_t do_sat128_d(Int128 r)
204
return ls;
205
}
206
207
-static int64_t do_sqrdmlah_d(int64_t n, int64_t m, int64_t a,
208
- bool neg, bool round)
209
+int64_t do_sqrdmlah_d(int64_t n, int64_t m, int64_t a, bool neg, bool round)
210
{
211
uint64_t l, h;
212
Int128 r, t;
213
--
214
2.20.1
215
216
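The rot handling in DO_CMLA_FUNC above is easiest to read as one half of a complex multiply per invocation: rotation 0 accumulates n_real times both parts of m, and rotation 90 accumulates n_imag times both parts of m with the sign of the real contribution flipped, so issuing the operation twice accumulates a full complex multiply-add. A rough scalar model of one (real, imag) element pair, integer only and ignoring the saturating SQRDCMLAH variant:

#include <assert.h>
#include <stdint.h>

/* One complex element pair per operand: v[0] = real part, v[1] = imaginary. */
static void cmla_rot(int32_t *d, const int32_t *n, const int32_t *m,
                     const int32_t *a, int rot)
{
    int sel_a = rot & 1, sel_b = sel_a ^ 1;
    int sign_r = (rot == 1 || rot == 2) ? -1 : 1;
    int sign_i = (rot >= 2) ? -1 : 1;

    d[0] = a[0] + sign_r * n[sel_a] * m[sel_a];
    d[1] = a[1] + sign_i * n[sel_a] * m[sel_b];
}

int main(void)
{
    int32_t n[2] = { 3, 4 };        /* 3 + 4i */
    int32_t m[2] = { 5, -2 };       /* 5 - 2i */
    int32_t acc[2] = { 10, 20 };
    int32_t tmp[2], res[2];

    cmla_rot(tmp, n, m, acc, 0);    /* rot 0:  += n_r*m_r and n_r*m_i  */
    cmla_rot(res, n, m, tmp, 1);    /* rot 90: += -n_i*m_i and n_i*m_r */

    /* (3 + 4i) * (5 - 2i) = 23 + 14i, on top of the 10 + 20i accumulator. */
    assert(res[0] == 10 + 23 && res[1] == 20 + 14);
    return 0;
}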
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-39-richard.henderson@linaro.org
7
Message-Id: <20200417162231.10374-2-steplong@quicinc.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 8 ++++++++
12
target/arm/sve.decode | 5 +++++
13
target/arm/sve_helper.c | 36 ++++++++++++++++++++++++++++++++++++
14
target/arm/translate-sve.c | 13 +++++++++++++
15
4 files changed, 62 insertions(+)
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
20
+++ b/target/arm/helper-sve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(sve2_addhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_addhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve2_addhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_4(sve2_addhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_addhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_addhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+
33
DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
34
i32, ptr, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@ UQSHRNT 01000101 .. 1 ..... 00 1101 ..... ..... @rd_rn_tszimm_shr
41
UQRSHRNB 01000101 .. 1 ..... 00 1110 ..... ..... @rd_rn_tszimm_shr
42
UQRSHRNT 01000101 .. 1 ..... 00 1111 ..... ..... @rd_rn_tszimm_shr
43
44
+## SVE2 integer add/subtract narrow high part
45
+
46
+ADDHNB 01000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
47
+ADDHNT 01000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
48
+
49
### SVE2 Character Match
50
51
MATCH 01000101 .. 1 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
52
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/sve_helper.c
55
+++ b/target/arm/sve_helper.c
56
@@ -XXX,XX +XXX,XX @@ DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQRSHRN_D)
57
#undef DO_SHRNB
58
#undef DO_SHRNT
59
60
+#define DO_BINOPNB(NAME, TYPEW, TYPEN, SHIFT, OP) \
61
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
62
+{ \
63
+ intptr_t i, opr_sz = simd_oprsz(desc); \
64
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
65
+ TYPEW nn = *(TYPEW *)(vn + i); \
66
+ TYPEW mm = *(TYPEW *)(vm + i); \
67
+ *(TYPEW *)(vd + i) = (TYPEN)OP(nn, mm, SHIFT); \
68
+ } \
69
+}
70
+
71
+#define DO_BINOPNT(NAME, TYPEW, TYPEN, SHIFT, HW, HN, OP) \
72
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
73
+{ \
74
+ intptr_t i, opr_sz = simd_oprsz(desc); \
75
+ for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
76
+ TYPEW nn = *(TYPEW *)(vn + HW(i)); \
77
+ TYPEW mm = *(TYPEW *)(vm + HW(i)); \
78
+ *(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, mm, SHIFT); \
79
+ } \
80
+}
81
+
82
+#define DO_ADDHN(N, M, SH) ((N + M) >> SH)
83
+
84
+DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
85
+DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
86
+DO_BINOPNB(sve2_addhnb_d, uint64_t, uint32_t, 32, DO_ADDHN)
87
+
88
+DO_BINOPNT(sve2_addhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_ADDHN)
89
+DO_BINOPNT(sve2_addhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_ADDHN)
90
+DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_ADDHN)
91
+
92
+#undef DO_ADDHN
93
+
94
+#undef DO_BINOPNB
95
+
96
/* Fully general four-operand expander, controlled by a predicate.
97
*/
98
#define DO_ZPZZZ(NAME, TYPE, H, OP) \
99
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/translate-sve.c
102
+++ b/target/arm/translate-sve.c
103
@@ -XXX,XX +XXX,XX @@ static bool trans_UQRSHRNT(DisasContext *s, arg_rri_esz *a)
104
return do_sve2_shr_narrow(s, a, ops);
105
}
106
107
+#define DO_SVE2_ZZZ_NARROW(NAME, name) \
108
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
109
+{ \
110
+ static gen_helper_gvec_3 * const fns[4] = { \
111
+ NULL, gen_helper_sve2_##name##_h, \
112
+ gen_helper_sve2_##name##_s, gen_helper_sve2_##name##_d, \
113
+ }; \
114
+ return do_sve2_zzz_ool(s, a, fns[a->esz]); \
115
+}
116
+
117
+DO_SVE2_ZZZ_NARROW(ADDHNB, addhnb)
118
+DO_SVE2_ZZZ_NARROW(ADDHNT, addhnt)
119
+
120
static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
121
gen_helper_gvec_flags_4 *fn)
122
{
123
--
124
2.20.1
125
126
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-40-richard.henderson@linaro.org
7
Message-Id: <20200417162231.10374-3-steplong@quicinc.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 8 ++++++++
12
target/arm/sve.decode | 2 ++
13
target/arm/sve_helper.c | 10 ++++++++++
14
target/arm/translate-sve.c | 2 ++
15
4 files changed, 22 insertions(+)
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
20
+++ b/target/arm/helper-sve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_addhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(sve2_addhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(sve2_addhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(sve2_raddhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_raddhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve2_raddhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_4(sve2_raddhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_raddhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_raddhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+
33
DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
34
i32, ptr, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@ UQRSHRNT 01000101 .. 1 ..... 00 1111 ..... ..... @rd_rn_tszimm_shr
41
42
ADDHNB 01000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
43
ADDHNT 01000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
44
+RADDHNB 01000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
45
+RADDHNT 01000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
46
47
### SVE2 Character Match
48
49
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/sve_helper.c
52
+++ b/target/arm/sve_helper.c
53
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
54
}
55
56
#define DO_ADDHN(N, M, SH) ((N + M) >> SH)
57
+#define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
58
59
DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
60
DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
61
@@ -XXX,XX +XXX,XX @@ DO_BINOPNT(sve2_addhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_ADDHN)
62
DO_BINOPNT(sve2_addhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_ADDHN)
63
DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_ADDHN)
64
65
+DO_BINOPNB(sve2_raddhnb_h, uint16_t, uint8_t, 8, DO_RADDHN)
66
+DO_BINOPNB(sve2_raddhnb_s, uint32_t, uint16_t, 16, DO_RADDHN)
67
+DO_BINOPNB(sve2_raddhnb_d, uint64_t, uint32_t, 32, DO_RADDHN)
68
+
69
+DO_BINOPNT(sve2_raddhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RADDHN)
70
+DO_BINOPNT(sve2_raddhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RADDHN)
71
+DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RADDHN)
72
+
73
+#undef DO_RADDHN
74
#undef DO_ADDHN
75
76
#undef DO_BINOPNB
77
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/translate-sve.c
80
+++ b/target/arm/translate-sve.c
81
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
82
83
DO_SVE2_ZZZ_NARROW(ADDHNB, addhnb)
84
DO_SVE2_ZZZ_NARROW(ADDHNT, addhnt)
85
+DO_SVE2_ZZZ_NARROW(RADDHNB, raddhnb)
86
+DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
87
88
static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
89
gen_helper_gvec_flags_4 *fn)
90
--
91
2.20.1
92
93
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-41-richard.henderson@linaro.org
7
Message-Id: <20200417162231.10374-4-steplong@quicinc.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 8 ++++++++
12
target/arm/sve.decode | 2 ++
13
target/arm/sve_helper.c | 10 ++++++++++
14
target/arm/translate-sve.c | 3 +++
15
4 files changed, 23 insertions(+)
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
20
+++ b/target/arm/helper-sve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_raddhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(sve2_raddhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(sve2_raddhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(sve2_subhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_subhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(sve2_subhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+
29
+DEF_HELPER_FLAGS_4(sve2_subhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_subhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(sve2_subhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+
33
DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
34
i32, ptr, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@ ADDHNB 01000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
41
ADDHNT 01000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
42
RADDHNB 01000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
43
RADDHNT 01000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
44
+SUBHNB 01000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
45
+SUBHNT 01000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
46
47
### SVE2 Character Match
48
49
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/sve_helper.c
52
+++ b/target/arm/sve_helper.c
53
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
54
55
#define DO_ADDHN(N, M, SH) ((N + M) >> SH)
56
#define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
57
+#define DO_SUBHN(N, M, SH) ((N - M) >> SH)
58
59
DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
60
DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
61
@@ -XXX,XX +XXX,XX @@ DO_BINOPNT(sve2_raddhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RADDHN)
62
DO_BINOPNT(sve2_raddhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RADDHN)
63
DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RADDHN)
64
65
+DO_BINOPNB(sve2_subhnb_h, uint16_t, uint8_t, 8, DO_SUBHN)
66
+DO_BINOPNB(sve2_subhnb_s, uint32_t, uint16_t, 16, DO_SUBHN)
67
+DO_BINOPNB(sve2_subhnb_d, uint64_t, uint32_t, 32, DO_SUBHN)
68
+
69
+DO_BINOPNT(sve2_subhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_SUBHN)
70
+DO_BINOPNT(sve2_subhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_SUBHN)
71
+DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_SUBHN)
72
+
73
+#undef DO_SUBHN
74
#undef DO_RADDHN
75
#undef DO_ADDHN
76
77
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/translate-sve.c
80
+++ b/target/arm/translate-sve.c
81
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_NARROW(ADDHNT, addhnt)
82
DO_SVE2_ZZZ_NARROW(RADDHNB, raddhnb)
83
DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
84
85
+DO_SVE2_ZZZ_NARROW(SUBHNB, subhnb)
86
+DO_SVE2_ZZZ_NARROW(SUBHNT, subhnt)
87
+
88
static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
89
gen_helper_gvec_flags_4 *fn)
90
{
91
--
92
2.20.1
93
94
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
This completes the section 'SVE2 integer add/subtract narrow high part'
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Stephen Long <steplong@quicinc.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525010358.152808-42-richard.henderson@linaro.org
9
Message-Id: <20200417162231.10374-5-steplong@quicinc.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/helper-sve.h | 8 ++++++++
14
target/arm/sve.decode | 2 ++
15
target/arm/sve_helper.c | 10 ++++++++++
16
target/arm/translate-sve.c | 2 ++
17
4 files changed, 22 insertions(+)
18
19
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper-sve.h
22
+++ b/target/arm/helper-sve.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_subhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_4(sve2_subhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_4(sve2_subhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
27
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
35
DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
36
i32, ptr, ptr, ptr, ptr, i32)
37
DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
38
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/sve.decode
41
+++ b/target/arm/sve.decode
42
@@ -XXX,XX +XXX,XX @@ RADDHNB 01000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
43
RADDHNT 01000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
44
SUBHNB 01000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
45
SUBHNT 01000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
46
+RSUBHNB 01000101 .. 1 ..... 011 110 ..... ..... @rd_rn_rm
47
+RSUBHNT 01000101 .. 1 ..... 011 111 ..... ..... @rd_rn_rm
48
49
### SVE2 Character Match
50
51
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/sve_helper.c
54
+++ b/target/arm/sve_helper.c
55
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
56
#define DO_ADDHN(N, M, SH) ((N + M) >> SH)
57
#define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
58
#define DO_SUBHN(N, M, SH) ((N - M) >> SH)
59
+#define DO_RSUBHN(N, M, SH) ((N - M + ((__typeof(N))1 << (SH - 1))) >> SH)
60
61
DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
62
DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
63
@@ -XXX,XX +XXX,XX @@ DO_BINOPNT(sve2_subhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_SUBHN)
64
DO_BINOPNT(sve2_subhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_SUBHN)
65
DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_SUBHN)
66
67
+DO_BINOPNB(sve2_rsubhnb_h, uint16_t, uint8_t, 8, DO_RSUBHN)
68
+DO_BINOPNB(sve2_rsubhnb_s, uint32_t, uint16_t, 16, DO_RSUBHN)
69
+DO_BINOPNB(sve2_rsubhnb_d, uint64_t, uint32_t, 32, DO_RSUBHN)
70
+
71
+DO_BINOPNT(sve2_rsubhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RSUBHN)
72
+DO_BINOPNT(sve2_rsubhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RSUBHN)
73
+DO_BINOPNT(sve2_rsubhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RSUBHN)
74
+
75
+#undef DO_RSUBHN
76
#undef DO_SUBHN
77
#undef DO_RADDHN
78
#undef DO_ADDHN
79
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-sve.c
82
+++ b/target/arm/translate-sve.c
83
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
84
85
DO_SVE2_ZZZ_NARROW(SUBHNB, subhnb)
86
DO_SVE2_ZZZ_NARROW(SUBHNT, subhnt)
87
+DO_SVE2_ZZZ_NARROW(RSUBHNB, rsubhnb)
88
+DO_SVE2_ZZZ_NARROW(RSUBHNT, rsubhnt)
89
90
static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
91
gen_helper_gvec_flags_4 *fn)
92
--
93
2.20.1
94
95
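With RSUBHNB/RSUBHNT in place, the whole "add/subtract narrow high part" group reduces to four variants of one formula: take the high half of the sum or difference, optionally rounded by adding 1 << (SH - 1) before the shift; the B and T forms then only differ in whether the narrow result lands in the even lanes (zeroing the odd ones) or is inserted into the odd lanes. A quick throwaway check of the four macros at the 32-bit to 16-bit size:

#include <assert.h>
#include <stdint.h>

#define SH 16   /* narrowing uint32_t -> uint16_t */

static uint16_t addhn(uint32_t n, uint32_t m)  { return (n + m) >> SH; }
static uint16_t raddhn(uint32_t n, uint32_t m) { return (n + m + (1u << (SH - 1))) >> SH; }
static uint16_t subhn(uint32_t n, uint32_t m)  { return (n - m) >> SH; }
static uint16_t rsubhn(uint32_t n, uint32_t m) { return (n - m + (1u << (SH - 1))) >> SH; }

int main(void)
{
    /* 0x12348000 + 0x00010000 = 0x12358000: truncated high half is 0x1235,
     * the rounding constant carries it up to 0x1236. */
    assert(addhn(0x12348000u, 0x00010000u) == 0x1235);
    assert(raddhn(0x12348000u, 0x00010000u) == 0x1236);

    /* 0x20000000 - 0x00018000 = 0x1FFE8000: 0x1FFE truncated, 0x1FFF rounded. */
    assert(subhn(0x20000000u, 0x00018000u) == 0x1FFE);
    assert(rsubhn(0x20000000u, 0x00018000u) == 0x1FFF);
    return 0;
}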
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-43-richard.henderson@linaro.org
7
Message-Id: <20200416173109.8856-1-steplong@quicinc.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 7 ++
12
target/arm/sve.decode | 6 ++
13
target/arm/sve_helper.c | 131 +++++++++++++++++++++++++++++++++++++
14
target/arm/translate-sve.c | 19 ++++++
15
4 files changed, 163 insertions(+)
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
20
+++ b/target/arm/helper-sve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_b, TCG_CALL_NO_RWG,
22
DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_h, TCG_CALL_NO_RWG,
23
i32, ptr, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_5(sve2_histcnt_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_histcnt_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_4(sve2_histseg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+
32
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
33
void, ptr, ptr, ptr, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
35
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/sve.decode
38
+++ b/target/arm/sve.decode
39
@@ -XXX,XX +XXX,XX @@
40
&rprrr_esz rn=%reg_movprfx
41
@rdn_pg_rm_ra ........ esz:2 . ra:5 ... pg:3 rm:5 rd:5 \
42
&rprrr_esz rn=%reg_movprfx
43
+@rd_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 &rprr_esz
44
45
# One register operand, with governing predicate, vector element size
46
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
47
@@ -XXX,XX +XXX,XX @@ RSUBHNT 01000101 .. 1 ..... 011 111 ..... ..... @rd_rn_rm
48
MATCH 01000101 .. 1 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
49
NMATCH 01000101 .. 1 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
50
51
+### SVE2 Histogram Computation
52
+
53
+HISTCNT 01000101 .. 1 ..... 110 ... ..... ..... @rd_pg_rn_rm
54
+HISTSEG 01000101 .. 1 ..... 101 000 ..... ..... @rd_rn_rm
55
+
56
## SVE2 floating-point pairwise operations
57
58
FADDP 01100100 .. 010 00 0 100 ... ..... ..... @rdn_pg_rm
59
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/sve_helper.c
62
+++ b/target/arm/sve_helper.c
63
@@ -XXX,XX +XXX,XX @@ DO_PPZZ_MATCH(sve2_nmatch_ppzz_b, MO_8, true)
64
DO_PPZZ_MATCH(sve2_nmatch_ppzz_h, MO_16, true)
65
66
#undef DO_PPZZ_MATCH
67
+
68
+void HELPER(sve2_histcnt_s)(void *vd, void *vn, void *vm, void *vg,
69
+ uint32_t desc)
70
+{
71
+ ARMVectorReg scratch;
72
+ intptr_t i, j;
73
+ intptr_t opr_sz = simd_oprsz(desc);
74
+ uint32_t *d = vd, *n = vn, *m = vm;
75
+ uint8_t *pg = vg;
76
+
77
+ if (d == n) {
78
+ n = memcpy(&scratch, n, opr_sz);
79
+ if (d == m) {
80
+ m = n;
81
+ }
82
+ } else if (d == m) {
83
+ m = memcpy(&scratch, m, opr_sz);
84
+ }
85
+
86
+ for (i = 0; i < opr_sz; i += 4) {
87
+ uint64_t count = 0;
88
+ uint8_t pred;
89
+
90
+ pred = pg[H1(i >> 3)] >> (i & 7);
91
+ if (pred & 1) {
92
+ uint32_t nn = n[H4(i >> 2)];
93
+
94
+ for (j = 0; j <= i; j += 4) {
95
+ pred = pg[H1(j >> 3)] >> (j & 7);
96
+ if ((pred & 1) && nn == m[H4(j >> 2)]) {
97
+ ++count;
98
+ }
99
+ }
100
+ }
101
+ d[H4(i >> 2)] = count;
102
+ }
103
+}
104
+
105
+void HELPER(sve2_histcnt_d)(void *vd, void *vn, void *vm, void *vg,
106
+ uint32_t desc)
107
+{
108
+ ARMVectorReg scratch;
109
+ intptr_t i, j;
110
+ intptr_t opr_sz = simd_oprsz(desc);
111
+ uint64_t *d = vd, *n = vn, *m = vm;
112
+ uint8_t *pg = vg;
113
+
114
+ if (d == n) {
115
+ n = memcpy(&scratch, n, opr_sz);
116
+ if (d == m) {
117
+ m = n;
118
+ }
119
+ } else if (d == m) {
120
+ m = memcpy(&scratch, m, opr_sz);
121
+ }
122
+
123
+ for (i = 0; i < opr_sz / 8; ++i) {
124
+ uint64_t count = 0;
125
+ if (pg[H1(i)] & 1) {
126
+ uint64_t nn = n[i];
127
+ for (j = 0; j <= i; ++j) {
128
+ if ((pg[H1(j)] & 1) && nn == m[j]) {
129
+ ++count;
130
+ }
131
+ }
132
+ }
133
+ d[i] = count;
134
+ }
135
+}
136
+
137
+/*
138
+ * Returns the number of bytes in m0 and m1 that match n.
139
+ * Unlike do_match2 we don't just need true/false, we need an exact count.
140
+ * This requires two extra logical operations.
141
+ */
142
+static inline uint64_t do_histseg_cnt(uint8_t n, uint64_t m0, uint64_t m1)
143
+{
144
+ const uint64_t mask = dup_const(MO_8, 0x7f);
145
+ uint64_t cmp0, cmp1;
146
+
147
+ cmp1 = dup_const(MO_8, n);
148
+ cmp0 = cmp1 ^ m0;
149
+ cmp1 = cmp1 ^ m1;
150
+
151
+ /*
152
+ * 1: clear msb of each byte to avoid carry to next byte (& mask)
153
+ * 2: carry in to msb if byte != 0 (+ mask)
154
+ * 3: set msb if cmp has msb set (| cmp)
155
+ * 4: set ~msb to ignore them (| mask)
156
+ * We now have 0xff for byte != 0 or 0x7f for byte == 0.
157
+ * 5: invert, resulting in 0x80 if and only if byte == 0.
158
+ */
159
+ cmp0 = ~(((cmp0 & mask) + mask) | cmp0 | mask);
160
+ cmp1 = ~(((cmp1 & mask) + mask) | cmp1 | mask);
161
+
162
+ /*
163
+ * Combine the two compares in a way that the bits do
164
+ * not overlap, and so preserves the count of set bits.
165
+ * If the host has an efficient instruction for ctpop,
166
+ * then ctpop(x) + ctpop(y) has the same number of
167
+ * operations as ctpop(x | (y >> 1)). If the host does
168
+ * not have an efficient ctpop, then we only want to
169
+ * use it once.
170
+ */
171
+ return ctpop64(cmp0 | (cmp1 >> 1));
172
+}
173
+
174
+void HELPER(sve2_histseg)(void *vd, void *vn, void *vm, uint32_t desc)
175
+{
176
+ intptr_t i, j;
177
+ intptr_t opr_sz = simd_oprsz(desc);
178
+
179
+ for (i = 0; i < opr_sz; i += 16) {
180
+ uint64_t n0 = *(uint64_t *)(vn + i);
181
+ uint64_t m0 = *(uint64_t *)(vm + i);
182
+ uint64_t n1 = *(uint64_t *)(vn + i + 8);
183
+ uint64_t m1 = *(uint64_t *)(vm + i + 8);
184
+ uint64_t out0 = 0;
185
+ uint64_t out1 = 0;
186
+
187
+ for (j = 0; j < 64; j += 8) {
188
+ uint64_t cnt0 = do_histseg_cnt(n0 >> j, m0, m1);
189
+ uint64_t cnt1 = do_histseg_cnt(n1 >> j, m0, m1);
190
+ out0 |= cnt0 << j;
191
+ out1 |= cnt1 << j;
192
+ }
193
+
194
+ *(uint64_t *)(vd + i) = out0;
195
+ *(uint64_t *)(vd + i + 8) = out1;
196
+ }
197
+}
198
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
199
index XXXXXXX..XXXXXXX 100644
200
--- a/target/arm/translate-sve.c
201
+++ b/target/arm/translate-sve.c
202
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rprr_esz *a) \
203
DO_SVE2_PPZZ_MATCH(MATCH, match)
204
DO_SVE2_PPZZ_MATCH(NMATCH, nmatch)
205
206
+static bool trans_HISTCNT(DisasContext *s, arg_rprr_esz *a)
207
+{
208
+ static gen_helper_gvec_4 * const fns[2] = {
209
+ gen_helper_sve2_histcnt_s, gen_helper_sve2_histcnt_d
210
+ };
211
+ if (a->esz < 2) {
212
+ return false;
213
+ }
214
+ return do_sve2_zpzz_ool(s, a, fns[a->esz - 2]);
215
+}
216
+
217
+static bool trans_HISTSEG(DisasContext *s, arg_rrr_esz *a)
218
+{
219
+ if (a->esz != 0) {
220
+ return false;
221
+ }
222
+ return do_sve2_zzz_ool(s, a, gen_helper_sve2_histseg);
223
+}
224
+
225
static bool do_sve2_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
226
gen_helper_gvec_4_ptr *fn)
227
{
228
--
229
2.20.1
230
231
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Add decoding logic for SVE2 64-bit/32-bit scatter non-temporal
4
store insns.
5
6
64-bit
7
* STNT1B (vector plus scalar)
8
* STNT1H (vector plus scalar)
9
* STNT1W (vector plus scalar)
10
* STNT1D (vector plus scalar)
11
12
32-bit
13
* STNT1B (vector plus scalar)
14
* STNT1H (vector plus scalar)
15
* STNT1W (vector plus scalar)
16
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Stephen Long <steplong@quicinc.com>
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20210525010358.152808-45-richard.henderson@linaro.org
21
Message-Id: <20200422141553.8037-1-steplong@quicinc.com>
22
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
25
target/arm/sve.decode | 10 ++++++++++
26
target/arm/translate-sve.c | 8 ++++++++
27
2 files changed, 18 insertions(+)
28
29
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/sve.decode
32
+++ b/target/arm/sve.decode
33
@@ -XXX,XX +XXX,XX @@ UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
34
35
CMLA_zzzz 01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5 ra=%reg_movprfx
36
SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
37
+
38
+### SVE2 Memory Store Group
39
+
40
+# SVE2 64-bit scatter non-temporal store (vector plus scalar)
41
+STNT1_zprz 1110010 .. 00 ..... 001 ... ..... ..... \
42
+ @rprr_scatter_store xs=2 esz=3 scale=0
43
+
44
+# SVE2 32-bit scatter non-temporal store (vector plus scalar)
45
+STNT1_zprz 1110010 .. 10 ..... 001 ... ..... ..... \
46
+ @rprr_scatter_store xs=0 esz=2 scale=0
47
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-sve.c
50
+++ b/target/arm/translate-sve.c
51
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
52
return true;
53
}
54
55
+static bool trans_STNT1_zprz(DisasContext *s, arg_ST1_zprz *a)
56
+{
57
+ if (!dc_isar_feature(aa64_sve2, s)) {
58
+ return false;
59
+ }
60
+ return trans_ST1_zprz(s, a);
61
+}
62
+
63
/*
64
* Prefetches
65
*/
66
--
67
2.20.1
68
69
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Add decoding logic for SVE2 64-bit/32-bit gather non-temporal
4
load insns.
5
6
64-bit
7
* LDNT1SB
8
* LDNT1B (vector plus scalar)
9
* LDNT1SH
10
* LDNT1H (vector plus scalar)
11
* LDNT1SW
12
* LDNT1W (vector plus scalar)
13
* LDNT1D (vector plus scalar)
14
15
32-bit
16
* LDNT1SB
17
* LDNT1B (vector plus scalar)
18
* LDNT1SH
19
* LDNT1H (vector plus scalar)
20
* LDNT1W (vector plus scalar)
21
22
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Stephen Long <steplong@quicinc.com>
24
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
25
Message-id: 20210525010358.152808-46-richard.henderson@linaro.org
26
Message-Id: <20200422152343.12493-1-steplong@quicinc.com>
27
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
---
30
target/arm/sve.decode | 11 +++++++++++
31
target/arm/translate-sve.c | 8 ++++++++
32
2 files changed, 19 insertions(+)
33
34
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/sve.decode
37
+++ b/target/arm/sve.decode
38
@@ -XXX,XX +XXX,XX @@ UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
39
CMLA_zzzz 01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5 ra=%reg_movprfx
40
SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
41
42
+### SVE2 Memory Gather Load Group
43
+
44
+# SVE2 64-bit gather non-temporal load
45
+# (scalar plus unpacked 32-bit unscaled offsets)
46
+LDNT1_zprz 1100010 msz:2 00 rm:5 1 u:1 0 pg:3 rn:5 rd:5 \
47
+ &rprr_gather_load xs=0 esz=3 scale=0 ff=0
48
+
49
+# SVE2 32-bit gather non-temporal load (scalar plus 32-bit unscaled offsets)
50
+LDNT1_zprz 1000010 msz:2 00 rm:5 10 u:1 pg:3 rn:5 rd:5 \
51
+ &rprr_gather_load xs=0 esz=2 scale=0 ff=0
52
+
53
### SVE2 Memory Store Group
54
55
# SVE2 64-bit scatter non-temporal store (vector plus scalar)
56
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/translate-sve.c
59
+++ b/target/arm/translate-sve.c
60
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
61
return true;
62
}
63
64
+static bool trans_LDNT1_zprz(DisasContext *s, arg_LD1_zprz *a)
65
+{
66
+ if (!dc_isar_feature(aa64_sve2, s)) {
67
+ return false;
68
+ }
69
+ return trans_LD1_zprz(s, a);
70
+}
71
+
72
/* Indexed by [mte][be][xs][msz]. */
73
static gen_helper_gvec_mem_scatter * const scatter_store_fn32[2][2][2][3] = {
74
{ /* MTE Inactive */
75
--
76
2.20.1
77
78
Deleted patch
1
From: Stephen Long <steplong@quicinc.com>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Stephen Long <steplong@quicinc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525010358.152808-48-richard.henderson@linaro.org
7
Message-Id: <20200423180347.9403-1-steplong@quicinc.com>
8
[rth: Rename the trans_* functions to *_sve2.]
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/sve.decode | 11 +++++++++--
13
target/arm/translate-sve.c | 35 ++++++++++++++++++++++++++++++-----
14
2 files changed, 39 insertions(+), 7 deletions(-)
15
16
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve.decode
19
+++ b/target/arm/sve.decode
20
@@ -XXX,XX +XXX,XX @@ CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
21
22
### SVE Permute - Extract Group
23
24
-# SVE extract vector (immediate offset)
25
+# SVE extract vector (destructive)
26
EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
27
&rrri rn=%reg_movprfx imm=%imm8_16_10
28
29
+# SVE2 extract vector (constructive)
30
+EXT_sve2 00000101 011 ..... 000 ... rn:5 rd:5 \
31
+ &rri imm=%imm8_16_10
32
+
33
### SVE Permute - Unpredicated Group
34
35
# SVE broadcast general register
36
@@ -XXX,XX +XXX,XX @@ REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
37
REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
38
RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
39
40
-# SVE vector splice (predicated)
41
+# SVE vector splice (predicated, destructive)
42
SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
43
44
+# SVE2 vector splice (predicated, constructive)
45
+SPLICE_sve2 00000101 .. 101 101 100 ... ..... ..... @rd_pg_rn
46
+
47
### SVE Select Vectors Group
48
49
# SVE select vector elements (predicated)
50
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-sve.c
53
+++ b/target/arm/translate-sve.c
54
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a)
55
*** SVE Permute Extract Group
56
*/
57
58
-static bool trans_EXT(DisasContext *s, arg_EXT *a)
59
+static bool do_EXT(DisasContext *s, int rd, int rn, int rm, int imm)
60
{
61
if (!sve_access_check(s)) {
62
return true;
63
}
64
65
unsigned vsz = vec_full_reg_size(s);
66
- unsigned n_ofs = a->imm >= vsz ? 0 : a->imm;
67
+ unsigned n_ofs = imm >= vsz ? 0 : imm;
68
unsigned n_siz = vsz - n_ofs;
69
- unsigned d = vec_full_reg_offset(s, a->rd);
70
- unsigned n = vec_full_reg_offset(s, a->rn);
71
- unsigned m = vec_full_reg_offset(s, a->rm);
72
+ unsigned d = vec_full_reg_offset(s, rd);
73
+ unsigned n = vec_full_reg_offset(s, rn);
74
+ unsigned m = vec_full_reg_offset(s, rm);
75
76
/* Use host vector move insns if we have appropriate sizes
77
* and no unfortunate overlap.
78
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a)
79
return true;
80
}
81
82
+static bool trans_EXT(DisasContext *s, arg_EXT *a)
83
+{
84
+ return do_EXT(s, a->rd, a->rn, a->rm, a->imm);
85
+}
86
+
87
+static bool trans_EXT_sve2(DisasContext *s, arg_rri *a)
88
+{
89
+ if (!dc_isar_feature(aa64_sve2, s)) {
90
+ return false;
91
+ }
92
+ return do_EXT(s, a->rd, a->rn, (a->rn + 1) % 32, a->imm);
93
+}
94
+
95
/*
96
*** SVE Permute - Unpredicated Group
97
*/
98
@@ -XXX,XX +XXX,XX @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a)
99
return true;
100
}
101
102
+static bool trans_SPLICE_sve2(DisasContext *s, arg_rpr_esz *a)
103
+{
104
+ if (!dc_isar_feature(aa64_sve2, s)) {
105
+ return false;
106
+ }
107
+ if (sve_access_check(s)) {
108
+ gen_gvec_ool_zzzp(s, gen_helper_sve_splice,
109
+ a->rd, a->rn, (a->rn + 1) % 32, a->pg, a->esz);
110
+ }
111
+ return true;
112
+}
113
+
114
/*
115
*** SVE Integer Compare - Vectors Group
116
*/
117
--
118
2.20.1
119
120
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The signed dot product routines produce a signed result.
4
Since we use -fwrapv, there is no functional change.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210525010358.152808-49-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/vec_helper.c | 8 ++++----
12
1 file changed, 4 insertions(+), 4 deletions(-)
13
14
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/vec_helper.c
17
+++ b/target/arm/vec_helper.c
18
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_d)(void *vd, void *vn, void *vm,
19
void HELPER(gvec_sdot_b)(void *vd, void *vn, void *vm, uint32_t desc)
20
{
21
intptr_t i, opr_sz = simd_oprsz(desc);
22
- uint32_t *d = vd;
23
+ int32_t *d = vd;
24
int8_t *n = vn, *m = vm;
25
26
for (i = 0; i < opr_sz / 4; ++i) {
27
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, uint32_t desc)
28
void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, uint32_t desc)
29
{
30
intptr_t i, opr_sz = simd_oprsz(desc);
31
- uint64_t *d = vd;
32
+ int64_t *d = vd;
33
int16_t *n = vn, *m = vm;
34
35
for (i = 0; i < opr_sz / 8; ++i) {
36
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
37
{
38
intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
39
intptr_t index = simd_data(desc);
40
- uint32_t *d = vd;
41
+ int32_t *d = vd;
42
int8_t *n = vn;
43
int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
44
45
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
46
{
47
intptr_t i, opr_sz = simd_oprsz(desc), opr_sz_8 = opr_sz / 8;
48
intptr_t index = simd_data(desc);
49
- uint64_t *d = vd;
50
+ int64_t *d = vd;
51
int16_t *n = vn;
52
int16_t *m_indexed = (int16_t *)vm + index * 4;
53
54
--
55
2.20.1
56
57
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-54-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 7 +++++++
9
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
10
2 files changed, 37 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ MUL_zzi 00100101 .. 110 000 110 ........ ..... @rdn_i8s
17
DOT_zzzz 01000100 1 sz:1 0 rm:5 00000 u:1 rn:5 rd:5 \
18
ra=%reg_movprfx
19
20
+#### SVE Multiply - Indexed
21
+
22
# SVE integer dot product (indexed)
23
SDOT_zzxw_s 01000100 10 1 ..... 000000 ..... ..... @rrxr_2 esz=2
24
SDOT_zzxw_d 01000100 11 1 ..... 000000 ..... ..... @rrxr_1 esz=3
25
UDOT_zzxw_s 01000100 10 1 ..... 000001 ..... ..... @rrxr_2 esz=2
26
UDOT_zzxw_d 01000100 11 1 ..... 000001 ..... ..... @rrxr_1 esz=3
27
28
+# SVE2 integer multiply (indexed)
29
+MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
30
+MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
31
+MUL_zzx_d 01000100 11 1 ..... 111110 ..... ..... @rrx_1 esz=3
32
+
33
# SVE floating-point complex add (predicated)
34
FCADD 01100100 esz:2 00000 rot:1 100 pg:3 rm:5 rd:5 \
35
rn=%reg_movprfx
36
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-sve.c
39
+++ b/target/arm/translate-sve.c
40
@@ -XXX,XX +XXX,XX @@ static bool trans_DOT_zzzz(DisasContext *s, arg_DOT_zzzz *a)
41
return true;
42
}
43
44
+/*
45
+ * SVE Multiply - Indexed
46
+ */
47
+
48
static bool do_zzxz_ool(DisasContext *s, arg_rrxr_esz *a,
49
gen_helper_gvec_4 *fn)
50
{
51
@@ -XXX,XX +XXX,XX @@ DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
52
53
#undef DO_RRXR
54
55
+static bool do_sve2_zzz_data(DisasContext *s, int rd, int rn, int rm, int data,
56
+ gen_helper_gvec_3 *fn)
57
+{
58
+ if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
59
+ return false;
60
+ }
61
+ if (sve_access_check(s)) {
62
+ unsigned vsz = vec_full_reg_size(s);
63
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
64
+ vec_full_reg_offset(s, rn),
65
+ vec_full_reg_offset(s, rm),
66
+ vsz, vsz, data, fn);
67
+ }
68
+ return true;
69
+}
70
+
71
+#define DO_SVE2_RRX(NAME, FUNC) \
72
+ static bool NAME(DisasContext *s, arg_rrx_esz *a) \
73
+ { return do_sve2_zzz_data(s, a->rd, a->rn, a->rm, a->index, FUNC); }
74
+
75
+DO_SVE2_RRX(trans_MUL_zzx_h, gen_helper_gvec_mul_idx_h)
76
+DO_SVE2_RRX(trans_MUL_zzx_s, gen_helper_gvec_mul_idx_s)
77
+DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
78
+
79
+#undef DO_SVE2_RRX
80
+
81
/*
82
*** SVE Floating Point Multiply-Add Indexed Group
83
*/
84
--
85
2.20.1
86
87
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-55-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/sve.decode | 8 ++++++++
9
target/arm/translate-sve.c | 31 +++++++++++++++++++++++++++++++
10
2 files changed, 39 insertions(+)
11
12
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/sve.decode
15
+++ b/target/arm/sve.decode
16
@@ -XXX,XX +XXX,XX @@ SDOT_zzxw_d 01000100 11 1 ..... 000000 ..... ..... @rrxr_1 esz=3
17
UDOT_zzxw_s 01000100 10 1 ..... 000001 ..... ..... @rrxr_2 esz=2
18
UDOT_zzxw_d 01000100 11 1 ..... 000001 ..... ..... @rrxr_1 esz=3
19
20
+# SVE2 integer multiply-add (indexed)
21
+MLA_zzxz_h 01000100 0. 1 ..... 000010 ..... ..... @rrxr_3 esz=1
22
+MLA_zzxz_s 01000100 10 1 ..... 000010 ..... ..... @rrxr_2 esz=2
23
+MLA_zzxz_d 01000100 11 1 ..... 000010 ..... ..... @rrxr_1 esz=3
24
+MLS_zzxz_h 01000100 0. 1 ..... 000011 ..... ..... @rrxr_3 esz=1
25
+MLS_zzxz_s 01000100 10 1 ..... 000011 ..... ..... @rrxr_2 esz=2
26
+MLS_zzxz_d 01000100 11 1 ..... 000011 ..... ..... @rrxr_1 esz=3
27
+
28
# SVE2 integer multiply (indexed)
29
MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
30
MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
31
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-sve.c
34
+++ b/target/arm/translate-sve.c
35
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
36
37
#undef DO_SVE2_RRX
38
39
+static bool do_sve2_zzzz_data(DisasContext *s, int rd, int rn, int rm, int ra,
40
+ int data, gen_helper_gvec_4 *fn)
41
+{
42
+ if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
43
+ return false;
44
+ }
45
+ if (sve_access_check(s)) {
46
+ unsigned vsz = vec_full_reg_size(s);
47
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
48
+ vec_full_reg_offset(s, rn),
49
+ vec_full_reg_offset(s, rm),
50
+ vec_full_reg_offset(s, ra),
51
+ vsz, vsz, data, fn);
52
+ }
53
+ return true;
54
+}
55
+
56
+#define DO_SVE2_RRXR(NAME, FUNC) \
57
+ static bool NAME(DisasContext *s, arg_rrxr_esz *a) \
58
+ { return do_sve2_zzzz_data(s, a->rd, a->rn, a->rm, a->ra, a->index, FUNC); }
59
+
60
+DO_SVE2_RRXR(trans_MLA_zzxz_h, gen_helper_gvec_mla_idx_h)
61
+DO_SVE2_RRXR(trans_MLA_zzxz_s, gen_helper_gvec_mla_idx_s)
62
+DO_SVE2_RRXR(trans_MLA_zzxz_d, gen_helper_gvec_mla_idx_d)
63
+
64
+DO_SVE2_RRXR(trans_MLS_zzxz_h, gen_helper_gvec_mls_idx_h)
65
+DO_SVE2_RRXR(trans_MLS_zzxz_s, gen_helper_gvec_mls_idx_s)
66
+DO_SVE2_RRXR(trans_MLS_zzxz_d, gen_helper_gvec_mls_idx_d)
67
+
68
+#undef DO_SVE2_RRXR
69
+
70
/*
71
*** SVE Floating Point Multiply-Add Indexed Group
72
*/
73
--
74
2.20.1
75
76
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-56-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 14 ++++++++++++++
9
target/arm/sve.decode | 8 ++++++++
10
target/arm/sve_helper.c | 36 ++++++++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 8 ++++++++
12
4 files changed, 66 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_zzzz_d, TCG_CALL_NO_RWG,
19
20
DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_h, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_s, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_d, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/sve.decode
39
+++ b/target/arm/sve.decode
40
@@ -XXX,XX +XXX,XX @@ MLS_zzxz_h 01000100 0. 1 ..... 000011 ..... ..... @rrxr_3 esz=1
41
MLS_zzxz_s 01000100 10 1 ..... 000011 ..... ..... @rrxr_2 esz=2
42
MLS_zzxz_d 01000100 11 1 ..... 000011 ..... ..... @rrxr_1 esz=3
43
44
+# SVE2 saturating multiply-add high (indexed)
45
+SQRDMLAH_zzxz_h 01000100 0. 1 ..... 000100 ..... ..... @rrxr_3 esz=1
46
+SQRDMLAH_zzxz_s 01000100 10 1 ..... 000100 ..... ..... @rrxr_2 esz=2
47
+SQRDMLAH_zzxz_d 01000100 11 1 ..... 000100 ..... ..... @rrxr_1 esz=3
48
+SQRDMLSH_zzxz_h 01000100 0. 1 ..... 000101 ..... ..... @rrxr_3 esz=1
49
+SQRDMLSH_zzxz_s 01000100 10 1 ..... 000101 ..... ..... @rrxr_2 esz=2
50
+SQRDMLSH_zzxz_d 01000100 11 1 ..... 000101 ..... ..... @rrxr_1 esz=3
51
+
52
# SVE2 integer multiply (indexed)
53
MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
54
MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
55
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/sve_helper.c
58
+++ b/target/arm/sve_helper.c
59
@@ -XXX,XX +XXX,XX @@ DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_d, int64_t, , DO_SQRDMLAH_D)
60
#undef DO_SQRDMLAH_S
61
#undef DO_SQRDMLAH_D
62
63
+#define DO_ZZXZ(NAME, TYPE, H, OP) \
64
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
65
+{ \
66
+ intptr_t oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
67
+ intptr_t i, j, idx = simd_data(desc); \
68
+ TYPE *d = vd, *a = va, *n = vn, *m = (TYPE *)vm + H(idx); \
69
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
70
+ TYPE mm = m[i]; \
71
+ for (j = 0; j < segment; j++) { \
72
+ d[i + j] = OP(n[i + j], mm, a[i + j]); \
73
+ } \
74
+ } \
75
+}
76
+
77
+#define DO_SQRDMLAH_H(N, M, A) \
78
+ ({ uint32_t discard; do_sqrdmlah_h(N, M, A, false, true, &discard); })
79
+#define DO_SQRDMLAH_S(N, M, A) \
80
+ ({ uint32_t discard; do_sqrdmlah_s(N, M, A, false, true, &discard); })
81
+#define DO_SQRDMLAH_D(N, M, A) do_sqrdmlah_d(N, M, A, false, true)
82
+
83
+DO_ZZXZ(sve2_sqrdmlah_idx_h, int16_t, H2, DO_SQRDMLAH_H)
84
+DO_ZZXZ(sve2_sqrdmlah_idx_s, int32_t, H4, DO_SQRDMLAH_S)
85
+DO_ZZXZ(sve2_sqrdmlah_idx_d, int64_t, , DO_SQRDMLAH_D)
86
+
87
+#define DO_SQRDMLSH_H(N, M, A) \
88
+ ({ uint32_t discard; do_sqrdmlah_h(N, M, A, true, true, &discard); })
89
+#define DO_SQRDMLSH_S(N, M, A) \
90
+ ({ uint32_t discard; do_sqrdmlah_s(N, M, A, true, true, &discard); })
91
+#define DO_SQRDMLSH_D(N, M, A) do_sqrdmlah_d(N, M, A, true, true)
92
+
93
+DO_ZZXZ(sve2_sqrdmlsh_idx_h, int16_t, H2, DO_SQRDMLSH_H)
94
+DO_ZZXZ(sve2_sqrdmlsh_idx_s, int32_t, H4, DO_SQRDMLSH_S)
95
+DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t, , DO_SQRDMLSH_D)
96
+
97
+#undef DO_ZZXZ
98
+
99
#define DO_BITPERM(NAME, TYPE, OP) \
100
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
101
{ \
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRXR(trans_MLS_zzxz_h, gen_helper_gvec_mls_idx_h)
107
DO_SVE2_RRXR(trans_MLS_zzxz_s, gen_helper_gvec_mls_idx_s)
108
DO_SVE2_RRXR(trans_MLS_zzxz_d, gen_helper_gvec_mls_idx_d)
109
110
+DO_SVE2_RRXR(trans_SQRDMLAH_zzxz_h, gen_helper_sve2_sqrdmlah_idx_h)
111
+DO_SVE2_RRXR(trans_SQRDMLAH_zzxz_s, gen_helper_sve2_sqrdmlah_idx_s)
112
+DO_SVE2_RRXR(trans_SQRDMLAH_zzxz_d, gen_helper_sve2_sqrdmlah_idx_d)
113
+
114
+DO_SVE2_RRXR(trans_SQRDMLSH_zzxz_h, gen_helper_sve2_sqrdmlsh_idx_h)
115
+DO_SVE2_RRXR(trans_SQRDMLSH_zzxz_s, gen_helper_sve2_sqrdmlsh_idx_s)
116
+DO_SVE2_RRXR(trans_SQRDMLSH_zzxz_d, gen_helper_sve2_sqrdmlsh_idx_d)
117
+
118
#undef DO_SVE2_RRXR
119
120
/*
121
--
122
2.20.1
123
124
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-57-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 9 +++++++++
9
target/arm/sve.decode | 18 ++++++++++++++++++
10
target/arm/sve_helper.c | 30 ++++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 19 +++++++++++++++++++
12
4 files changed, 76 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_idx_s, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_idx_d, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_s, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_d, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/sve.decode
34
+++ b/target/arm/sve.decode
35
@@ -XXX,XX +XXX,XX @@
36
%size_23 23:2
37
%dtype_23_13 23:2 13:2
38
%index3_22_19 22:1 19:2
39
+%index3_19_11 19:2 11:1
40
+%index2_20_11 20:1 11:1
41
42
# A combination of tsz:imm3 -- extract esize.
43
%tszimm_esz 22:2 5:5 !function=tszimm_esz
44
@@ -XXX,XX +XXX,XX @@
45
@rrxr_1 ........ .. . index:1 rm:4 ...... rn:5 rd:5 \
46
&rrxr_esz ra=%reg_movprfx
47
48
+# Three registers and a scalar by N-bit index, alternate
49
+@rrxr_3a ........ .. ... rm:3 ...... rn:5 rd:5 \
50
+ &rrxr_esz ra=%reg_movprfx index=%index3_19_11
51
+@rrxr_2a ........ .. .. rm:4 ...... rn:5 rd:5 \
52
+ &rrxr_esz ra=%reg_movprfx index=%index2_20_11
53
+
54
###########################################################################
55
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
56
57
@@ -XXX,XX +XXX,XX @@ SQRDMLSH_zzxz_h 01000100 0. 1 ..... 000101 ..... ..... @rrxr_3 esz=1
58
SQRDMLSH_zzxz_s 01000100 10 1 ..... 000101 ..... ..... @rrxr_2 esz=2
59
SQRDMLSH_zzxz_d 01000100 11 1 ..... 000101 ..... ..... @rrxr_1 esz=3
60
61
+# SVE2 saturating multiply-add (indexed)
62
+SQDMLALB_zzxw_s 01000100 10 1 ..... 0010.0 ..... ..... @rrxr_3a esz=2
63
+SQDMLALB_zzxw_d 01000100 11 1 ..... 0010.0 ..... ..... @rrxr_2a esz=3
64
+SQDMLALT_zzxw_s 01000100 10 1 ..... 0010.1 ..... ..... @rrxr_3a esz=2
65
+SQDMLALT_zzxw_d 01000100 11 1 ..... 0010.1 ..... ..... @rrxr_2a esz=3
66
+SQDMLSLB_zzxw_s 01000100 10 1 ..... 0011.0 ..... ..... @rrxr_3a esz=2
67
+SQDMLSLB_zzxw_d 01000100 11 1 ..... 0011.0 ..... ..... @rrxr_2a esz=3
68
+SQDMLSLT_zzxw_s 01000100 10 1 ..... 0011.1 ..... ..... @rrxr_3a esz=2
69
+SQDMLSLT_zzxw_d 01000100 11 1 ..... 0011.1 ..... ..... @rrxr_2a esz=3
70
+
71
# SVE2 integer multiply (indexed)
72
MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
73
MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
74
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/sve_helper.c
77
+++ b/target/arm/sve_helper.c
78
@@ -XXX,XX +XXX,XX @@ DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t, , DO_SQRDMLSH_D)
79
80
#undef DO_ZZXZ
81
82
+#define DO_ZZXW(NAME, TYPEW, TYPEN, HW, HN, OP) \
83
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
84
+{ \
85
+ intptr_t i, j, oprsz = simd_oprsz(desc); \
86
+ intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
87
+ intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 1, 3) * sizeof(TYPEN); \
88
+ for (i = 0; i < oprsz; i += 16) { \
89
+ TYPEW mm = *(TYPEN *)(vm + HN(i + idx)); \
90
+ for (j = 0; j < 16; j += sizeof(TYPEW)) { \
91
+ TYPEW nn = *(TYPEN *)(vn + HN(i + j + sel)); \
92
+ TYPEW aa = *(TYPEW *)(va + HW(i + j)); \
93
+ *(TYPEW *)(vd + HW(i + j)) = OP(nn, mm, aa); \
94
+ } \
95
+ } \
96
+}
97
+
98
+#define DO_SQDMLAL_S(N, M, A) DO_SQADD_S(A, do_sqdmull_s(N, M))
99
+#define DO_SQDMLAL_D(N, M, A) do_sqadd_d(A, do_sqdmull_d(N, M))
100
+
101
+DO_ZZXW(sve2_sqdmlal_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLAL_S)
102
+DO_ZZXW(sve2_sqdmlal_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLAL_D)
103
+
104
+#define DO_SQDMLSL_S(N, M, A) DO_SQSUB_S(A, do_sqdmull_s(N, M))
105
+#define DO_SQDMLSL_D(N, M, A) do_sqsub_d(A, do_sqdmull_d(N, M))
106
+
107
+DO_ZZXW(sve2_sqdmlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLSL_S)
108
+DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)
109
+
110
+#undef DO_ZZXW
111
+
112
#define DO_BITPERM(NAME, TYPE, OP) \
113
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
114
{ \
115
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/translate-sve.c
118
+++ b/target/arm/translate-sve.c
119
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRXR(trans_SQRDMLSH_zzxz_d, gen_helper_sve2_sqrdmlsh_idx_d)
120
121
#undef DO_SVE2_RRXR
122
123
+#define DO_SVE2_RRXR_TB(NAME, FUNC, TOP) \
124
+ static bool NAME(DisasContext *s, arg_rrxr_esz *a) \
125
+ { \
126
+ return do_sve2_zzzz_data(s, a->rd, a->rn, a->rm, a->rd, \
127
+ (a->index << 1) | TOP, FUNC); \
128
+ }
129
+
130
+DO_SVE2_RRXR_TB(trans_SQDMLALB_zzxw_s, gen_helper_sve2_sqdmlal_idx_s, false)
131
+DO_SVE2_RRXR_TB(trans_SQDMLALB_zzxw_d, gen_helper_sve2_sqdmlal_idx_d, false)
132
+DO_SVE2_RRXR_TB(trans_SQDMLALT_zzxw_s, gen_helper_sve2_sqdmlal_idx_s, true)
133
+DO_SVE2_RRXR_TB(trans_SQDMLALT_zzxw_d, gen_helper_sve2_sqdmlal_idx_d, true)
134
+
135
+DO_SVE2_RRXR_TB(trans_SQDMLSLB_zzxw_s, gen_helper_sve2_sqdmlsl_idx_s, false)
136
+DO_SVE2_RRXR_TB(trans_SQDMLSLB_zzxw_d, gen_helper_sve2_sqdmlsl_idx_d, false)
137
+DO_SVE2_RRXR_TB(trans_SQDMLSLT_zzxw_s, gen_helper_sve2_sqdmlsl_idx_s, true)
138
+DO_SVE2_RRXR_TB(trans_SQDMLSLT_zzxw_d, gen_helper_sve2_sqdmlsl_idx_d, true)
139
+
140
+#undef DO_SVE2_RRXR_TB
141
+
142
/*
143
*** SVE Floating Point Multiply-Add Indexed Group
144
*/
145
--
146
2.20.1
147
148
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-58-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 5 +++++
9
target/arm/sve.decode | 12 ++++++++++++
10
target/arm/sve_helper.c | 20 ++++++++++++++++++++
11
target/arm/translate-sve.c | 14 ++++++++++++++
12
4 files changed, 51 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_s, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_d, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve.decode
30
+++ b/target/arm/sve.decode
31
@@ -XXX,XX +XXX,XX @@
32
@rrx_2 ........ .. . index:2 rm:3 ...... rn:5 rd:5 &rrx_esz
33
@rrx_1 ........ .. . index:1 rm:4 ...... rn:5 rd:5 &rrx_esz
34
35
+# Two registers and a scalar by N-bit index, alternate
36
+@rrx_3a ........ .. . .. rm:3 ...... rn:5 rd:5 \
37
+ &rrx_esz index=%index3_19_11
38
+@rrx_2a ........ .. . . rm:4 ...... rn:5 rd:5 \
39
+ &rrx_esz index=%index2_20_11
40
+
41
# Three registers and a scalar by N-bit index
42
@rrxr_3 ........ .. . .. rm:3 ...... rn:5 rd:5 \
43
&rrxr_esz ra=%reg_movprfx index=%index3_22_19
44
@@ -XXX,XX +XXX,XX @@ SQDMLSLB_zzxw_d 01000100 11 1 ..... 0011.0 ..... ..... @rrxr_2a esz=3
45
SQDMLSLT_zzxw_s 01000100 10 1 ..... 0011.1 ..... ..... @rrxr_3a esz=2
46
SQDMLSLT_zzxw_d 01000100 11 1 ..... 0011.1 ..... ..... @rrxr_2a esz=3
47
48
+# SVE2 saturating multiply (indexed)
49
+SQDMULLB_zzx_s 01000100 10 1 ..... 1110.0 ..... ..... @rrx_3a esz=2
50
+SQDMULLB_zzx_d 01000100 11 1 ..... 1110.0 ..... ..... @rrx_2a esz=3
51
+SQDMULLT_zzx_s 01000100 10 1 ..... 1110.1 ..... ..... @rrx_3a esz=2
52
+SQDMULLT_zzx_d 01000100 11 1 ..... 1110.1 ..... ..... @rrx_2a esz=3
53
+
54
# SVE2 integer multiply (indexed)
55
MUL_zzx_h 01000100 0. 1 ..... 111110 ..... ..... @rrx_3 esz=1
56
MUL_zzx_s 01000100 10 1 ..... 111110 ..... ..... @rrx_2 esz=2
57
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/sve_helper.c
60
+++ b/target/arm/sve_helper.c
61
@@ -XXX,XX +XXX,XX @@ DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)
62
63
#undef DO_ZZXW
64
65
+#define DO_ZZX(NAME, TYPEW, TYPEN, HW, HN, OP) \
66
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
67
+{ \
68
+ intptr_t i, j, oprsz = simd_oprsz(desc); \
69
+ intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
70
+ intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 1, 3) * sizeof(TYPEN); \
71
+ for (i = 0; i < oprsz; i += 16) { \
72
+ TYPEW mm = *(TYPEN *)(vm + HN(i + idx)); \
73
+ for (j = 0; j < 16; j += sizeof(TYPEW)) { \
74
+ TYPEW nn = *(TYPEN *)(vn + HN(i + j + sel)); \
75
+ *(TYPEW *)(vd + HW(i + j)) = OP(nn, mm); \
76
+ } \
77
+ } \
78
+}
79
+
80
+DO_ZZX(sve2_sqdmull_idx_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
81
+DO_ZZX(sve2_sqdmull_idx_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
82
+
83
+#undef DO_ZZX
84
+
85
#define DO_BITPERM(NAME, TYPE, OP) \
86
void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
87
{ \
88
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-sve.c
91
+++ b/target/arm/translate-sve.c
92
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
93
94
#undef DO_SVE2_RRX
95
96
+#define DO_SVE2_RRX_TB(NAME, FUNC, TOP) \
97
+ static bool NAME(DisasContext *s, arg_rrx_esz *a) \
98
+ { \
99
+ return do_sve2_zzz_data(s, a->rd, a->rn, a->rm, \
100
+ (a->index << 1) | TOP, FUNC); \
101
+ }
102
+
103
+DO_SVE2_RRX_TB(trans_SQDMULLB_zzx_s, gen_helper_sve2_sqdmull_idx_s, false)
104
+DO_SVE2_RRX_TB(trans_SQDMULLB_zzx_d, gen_helper_sve2_sqdmull_idx_d, false)
105
+DO_SVE2_RRX_TB(trans_SQDMULLT_zzx_s, gen_helper_sve2_sqdmull_idx_s, true)
106
+DO_SVE2_RRX_TB(trans_SQDMULLT_zzx_d, gen_helper_sve2_sqdmull_idx_d, true)
107
+
108
+#undef DO_SVE2_RRX_TB
109
+
110
static bool do_sve2_zzzz_data(DisasContext *s, int rd, int rn, int rm, int ra,
111
int data, gen_helper_gvec_4 *fn)
112
{
113
--
114
2.20.1
115
116
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210525010358.152808-59-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper.h | 10 +++++
9
target/arm/sve.decode | 4 ++
10
target/arm/translate-sve.c | 18 ++++++++
11
target/arm/vec_helper.c | 84 ++++++++++++++++++++++++++++++++++++++
12
4 files changed, 116 insertions(+)
13
14
diff --git a/target/arm/helper.h b/target/arm/helper.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.h
17
+++ b/target/arm/helper.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
19
DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
20
void, ptr, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
27
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+
32
DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
34
#ifdef TARGET_AARCH64
35
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/sve.decode
38
+++ b/target/arm/sve.decode
39
@@ -XXX,XX +XXX,XX @@ SMULH_zzz 00000100 .. 1 ..... 0110 10 ..... ..... @rd_rn_rm
40
UMULH_zzz 00000100 .. 1 ..... 0110 11 ..... ..... @rd_rn_rm
41
PMUL_zzz 00000100 00 1 ..... 0110 01 ..... ..... @rd_rn_rm_e0
42
43
+# SVE2 signed saturating doubling multiply high (unpredicated)
44
+SQDMULH_zzz 00000100 .. 1 ..... 0111 00 ..... ..... @rd_rn_rm
45
+SQRDMULH_zzz 00000100 .. 1 ..... 0111 01 ..... ..... @rd_rn_rm
46
+
47
### SVE2 Integer - Predicated
48
49
SADALP_zpzz 01000100 .. 000 100 101 ... ..... ..... @rdm_pg_rn
50
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-sve.c
53
+++ b/target/arm/translate-sve.c
54
@@ -XXX,XX +XXX,XX @@ static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
55
return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
56
}
57
58
+static bool trans_SQDMULH_zzz(DisasContext *s, arg_rrr_esz *a)
59
+{
60
+ static gen_helper_gvec_3 * const fns[4] = {
61
+ gen_helper_sve2_sqdmulh_b, gen_helper_sve2_sqdmulh_h,
62
+ gen_helper_sve2_sqdmulh_s, gen_helper_sve2_sqdmulh_d,
63
+ };
64
+ return do_sve2_zzz_ool(s, a, fns[a->esz]);
65
+}
66
+
67
+static bool trans_SQRDMULH_zzz(DisasContext *s, arg_rrr_esz *a)
68
+{
69
+ static gen_helper_gvec_3 * const fns[4] = {
70
+ gen_helper_sve2_sqrdmulh_b, gen_helper_sve2_sqrdmulh_h,
71
+ gen_helper_sve2_sqrdmulh_s, gen_helper_sve2_sqrdmulh_d,
72
+ };
73
+ return do_sve2_zzz_ool(s, a, fns[a->esz]);
74
+}
75
+
76
/*
77
* SVE2 Integer - Predicated
78
*/
79
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/vec_helper.c
82
+++ b/target/arm/vec_helper.c
83
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_b)(void *vd, void *vn, void *vm,
84
}
85
}
86
87
+void HELPER(sve2_sqdmulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
88
+{
89
+ intptr_t i, opr_sz = simd_oprsz(desc);
90
+ int8_t *d = vd, *n = vn, *m = vm;
91
+
92
+ for (i = 0; i < opr_sz; ++i) {
93
+ d[i] = do_sqrdmlah_b(n[i], m[i], 0, false, false);
94
+ }
95
+}
96
+
97
+void HELPER(sve2_sqrdmulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
98
+{
99
+ intptr_t i, opr_sz = simd_oprsz(desc);
100
+ int8_t *d = vd, *n = vn, *m = vm;
101
+
102
+ for (i = 0; i < opr_sz; ++i) {
103
+ d[i] = do_sqrdmlah_b(n[i], m[i], 0, false, true);
104
+ }
105
+}
106
+
107
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
108
int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
109
bool neg, bool round, uint32_t *sat)
110
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_h)(void *vd, void *vn, void *vm,
111
}
112
}
113
114
+void HELPER(sve2_sqdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
115
+{
116
+ intptr_t i, opr_sz = simd_oprsz(desc);
117
+ int16_t *d = vd, *n = vn, *m = vm;
118
+ uint32_t discard;
119
+
120
+ for (i = 0; i < opr_sz / 2; ++i) {
121
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, false, &discard);
122
+ }
123
+}
124
+
125
+void HELPER(sve2_sqrdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
126
+{
127
+ intptr_t i, opr_sz = simd_oprsz(desc);
128
+ int16_t *d = vd, *n = vn, *m = vm;
129
+ uint32_t discard;
130
+
131
+ for (i = 0; i < opr_sz / 2; ++i) {
132
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, true, &discard);
133
+ }
134
+}
135
+
136
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
137
int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
138
bool neg, bool round, uint32_t *sat)
139
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_s)(void *vd, void *vn, void *vm,
140
}
141
}
142
143
+void HELPER(sve2_sqdmulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
144
+{
145
+ intptr_t i, opr_sz = simd_oprsz(desc);
146
+ int32_t *d = vd, *n = vn, *m = vm;
147
+ uint32_t discard;
148
+
149
+ for (i = 0; i < opr_sz / 4; ++i) {
150
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, false, &discard);
151
+ }
152
+}
153
+
154
+void HELPER(sve2_sqrdmulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
155
+{
156
+ intptr_t i, opr_sz = simd_oprsz(desc);
157
+ int32_t *d = vd, *n = vn, *m = vm;
158
+ uint32_t discard;
159
+
160
+ for (i = 0; i < opr_sz / 4; ++i) {
161
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, true, &discard);
162
+ }
163
+}
164
+
165
/* Signed saturating rounding doubling multiply-accumulate high half, 64-bit */
166
static int64_t do_sat128_d(Int128 r)
167
{
168
@@ -XXX,XX +XXX,XX @@ void HELPER(sve2_sqrdmlsh_d)(void *vd, void *vn, void *vm,
169
}
170
}
171
172
+void HELPER(sve2_sqdmulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
173
+{
174
+ intptr_t i, opr_sz = simd_oprsz(desc);
175
+ int64_t *d = vd, *n = vn, *m = vm;
176
+
177
+ for (i = 0; i < opr_sz / 8; ++i) {
178
+ d[i] = do_sqrdmlah_d(n[i], m[i], 0, false, false);
179
+ }
180
+}
181
+
182
+void HELPER(sve2_sqrdmulh_d)(void *vd, void *vn, void *vm, uint32_t desc)
183
+{
184
+ intptr_t i, opr_sz = simd_oprsz(desc);
185
+ int64_t *d = vd, *n = vn, *m = vm;
186
+
187
+ for (i = 0; i < opr_sz / 8; ++i) {
188
+ d[i] = do_sqrdmlah_d(n[i], m[i], 0, false, true);
189
+ }
190
+}
191
+
192
/* Integer 8 and 16-bit dot-product.
193
*
194
* Note that for the loops herein, host endianness does not matter
195
--
196
2.20.1
197
198
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525010358.152808-61-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 17 +++++++++++++++++
9
target/arm/sve.decode | 18 ++++++++++++++++++
10
target/arm/sve_helper.c | 16 ++++++++++++++++
11
target/arm/translate-sve.c | 20 ++++++++++++++++++++
12
4 files changed, 71 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_smlal_idx_s, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_smlal_idx_d, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_smlsl_idx_s, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(sve2_smlsl_idx_d, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_5(sve2_umlal_idx_s, TCG_CALL_NO_RWG,
32
+ void, ptr, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_5(sve2_umlal_idx_d, TCG_CALL_NO_RWG,
34
+ void, ptr, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_5(sve2_umlsl_idx_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve2_umlsl_idx_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve.decode
42
+++ b/target/arm/sve.decode
43
@@ -XXX,XX +XXX,XX @@ SQDMLSLB_zzxw_d 01000100 11 1 ..... 0011.0 ..... ..... @rrxr_2a esz=3
44
SQDMLSLT_zzxw_s 01000100 10 1 ..... 0011.1 ..... ..... @rrxr_3a esz=2
45
SQDMLSLT_zzxw_d 01000100 11 1 ..... 0011.1 ..... ..... @rrxr_2a esz=3
46
47
+# SVE2 multiply-add long (indexed)
48
+SMLALB_zzxw_s 01000100 10 1 ..... 1000.0 ..... ..... @rrxr_3a esz=2
49
+SMLALB_zzxw_d 01000100 11 1 ..... 1000.0 ..... ..... @rrxr_2a esz=3
50
+SMLALT_zzxw_s 01000100 10 1 ..... 1000.1 ..... ..... @rrxr_3a esz=2
51
+SMLALT_zzxw_d 01000100 11 1 ..... 1000.1 ..... ..... @rrxr_2a esz=3
52
+UMLALB_zzxw_s 01000100 10 1 ..... 1001.0 ..... ..... @rrxr_3a esz=2
53
+UMLALB_zzxw_d 01000100 11 1 ..... 1001.0 ..... ..... @rrxr_2a esz=3
54
+UMLALT_zzxw_s 01000100 10 1 ..... 1001.1 ..... ..... @rrxr_3a esz=2
55
+UMLALT_zzxw_d 01000100 11 1 ..... 1001.1 ..... ..... @rrxr_2a esz=3
56
+SMLSLB_zzxw_s 01000100 10 1 ..... 1010.0 ..... ..... @rrxr_3a esz=2
57
+SMLSLB_zzxw_d 01000100 11 1 ..... 1010.0 ..... ..... @rrxr_2a esz=3
58
+SMLSLT_zzxw_s 01000100 10 1 ..... 1010.1 ..... ..... @rrxr_3a esz=2
59
+SMLSLT_zzxw_d 01000100 11 1 ..... 1010.1 ..... ..... @rrxr_2a esz=3
60
+UMLSLB_zzxw_s 01000100 10 1 ..... 1011.0 ..... ..... @rrxr_3a esz=2
61
+UMLSLB_zzxw_d 01000100 11 1 ..... 1011.0 ..... ..... @rrxr_2a esz=3
62
+UMLSLT_zzxw_s 01000100 10 1 ..... 1011.1 ..... ..... @rrxr_3a esz=2
63
+UMLSLT_zzxw_d 01000100 11 1 ..... 1011.1 ..... ..... @rrxr_2a esz=3
64
+
65
# SVE2 saturating multiply (indexed)
66
SQDMULLB_zzx_s 01000100 10 1 ..... 1110.0 ..... ..... @rrx_3a esz=2
67
SQDMULLB_zzx_d 01000100 11 1 ..... 1110.0 ..... ..... @rrx_2a esz=3
68
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/sve_helper.c
71
+++ b/target/arm/sve_helper.c
72
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
73
} \
74
}
75
76
+#define DO_MLA(N, M, A) (A + N * M)
77
+
78
+DO_ZZXW(sve2_smlal_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MLA)
79
+DO_ZZXW(sve2_smlal_idx_d, int64_t, int32_t, , H1_4, DO_MLA)
80
+DO_ZZXW(sve2_umlal_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MLA)
81
+DO_ZZXW(sve2_umlal_idx_d, uint64_t, uint32_t, , H1_4, DO_MLA)
82
+
83
+#define DO_MLS(N, M, A) (A - N * M)
84
+
85
+DO_ZZXW(sve2_smlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MLS)
86
+DO_ZZXW(sve2_smlsl_idx_d, int64_t, int32_t, , H1_4, DO_MLS)
87
+DO_ZZXW(sve2_umlsl_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MLS)
88
+DO_ZZXW(sve2_umlsl_idx_d, uint64_t, uint32_t, , H1_4, DO_MLS)
89
+
90
#define DO_SQDMLAL_S(N, M, A) DO_SQADD_S(A, do_sqdmull_s(N, M))
91
#define DO_SQDMLAL_D(N, M, A) do_sqadd_d(A, do_sqdmull_d(N, M))
92
93
@@ -XXX,XX +XXX,XX @@ DO_ZZXW(sve2_sqdmlal_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLAL_D)
94
DO_ZZXW(sve2_sqdmlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLSL_S)
95
DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)
96
97
+#undef DO_MLA
98
+#undef DO_MLS
99
#undef DO_ZZXW
100
101
#define DO_ZZX(NAME, TYPEW, TYPEN, HW, HN, OP) \
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRXR_TB(trans_SQDMLSLB_zzxw_d, gen_helper_sve2_sqdmlsl_idx_d, false)
107
DO_SVE2_RRXR_TB(trans_SQDMLSLT_zzxw_s, gen_helper_sve2_sqdmlsl_idx_s, true)
108
DO_SVE2_RRXR_TB(trans_SQDMLSLT_zzxw_d, gen_helper_sve2_sqdmlsl_idx_d, true)
109
110
+DO_SVE2_RRXR_TB(trans_SMLALB_zzxw_s, gen_helper_sve2_smlal_idx_s, false)
111
+DO_SVE2_RRXR_TB(trans_SMLALB_zzxw_d, gen_helper_sve2_smlal_idx_d, false)
112
+DO_SVE2_RRXR_TB(trans_SMLALT_zzxw_s, gen_helper_sve2_smlal_idx_s, true)
113
+DO_SVE2_RRXR_TB(trans_SMLALT_zzxw_d, gen_helper_sve2_smlal_idx_d, true)
114
+
115
+DO_SVE2_RRXR_TB(trans_UMLALB_zzxw_s, gen_helper_sve2_umlal_idx_s, false)
116
+DO_SVE2_RRXR_TB(trans_UMLALB_zzxw_d, gen_helper_sve2_umlal_idx_d, false)
117
+DO_SVE2_RRXR_TB(trans_UMLALT_zzxw_s, gen_helper_sve2_umlal_idx_s, true)
118
+DO_SVE2_RRXR_TB(trans_UMLALT_zzxw_d, gen_helper_sve2_umlal_idx_d, true)
119
+
120
+DO_SVE2_RRXR_TB(trans_SMLSLB_zzxw_s, gen_helper_sve2_smlsl_idx_s, false)
121
+DO_SVE2_RRXR_TB(trans_SMLSLB_zzxw_d, gen_helper_sve2_smlsl_idx_d, false)
122
+DO_SVE2_RRXR_TB(trans_SMLSLT_zzxw_s, gen_helper_sve2_smlsl_idx_s, true)
123
+DO_SVE2_RRXR_TB(trans_SMLSLT_zzxw_d, gen_helper_sve2_smlsl_idx_d, true)
124
+
125
+DO_SVE2_RRXR_TB(trans_UMLSLB_zzxw_s, gen_helper_sve2_umlsl_idx_s, false)
126
+DO_SVE2_RRXR_TB(trans_UMLSLB_zzxw_d, gen_helper_sve2_umlsl_idx_d, false)
127
+DO_SVE2_RRXR_TB(trans_UMLSLT_zzxw_s, gen_helper_sve2_umlsl_idx_s, true)
128
+DO_SVE2_RRXR_TB(trans_UMLSLT_zzxw_d, gen_helper_sve2_umlsl_idx_d, true)
129
+
130
#undef DO_SVE2_RRXR_TB
131
132
/*
133
--
134
2.20.1
135
136
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525010358.152808-62-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 5 +++++
9
target/arm/sve.decode | 10 ++++++++++
10
target/arm/sve_helper.c | 6 ++++++
11
target/arm/translate-sve.c | 10 ++++++++++
12
4 files changed, 31 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_umlsl_idx_s, TCG_CALL_NO_RWG,
19
void, ptr, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_5(sve2_umlsl_idx_d, TCG_CALL_NO_RWG,
21
void, ptr, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_4(sve2_smull_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve2_smull_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve2_umull_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(sve2_umull_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve.decode
30
+++ b/target/arm/sve.decode
31
@@ -XXX,XX +XXX,XX @@ UMLSLB_zzxw_d 01000100 11 1 ..... 1011.0 ..... ..... @rrxr_2a esz=3
32
UMLSLT_zzxw_s 01000100 10 1 ..... 1011.1 ..... ..... @rrxr_3a esz=2
33
UMLSLT_zzxw_d 01000100 11 1 ..... 1011.1 ..... ..... @rrxr_2a esz=3
34
35
+# SVE2 integer multiply long (indexed)
36
+SMULLB_zzx_s 01000100 10 1 ..... 1100.0 ..... ..... @rrx_3a esz=2
37
+SMULLB_zzx_d 01000100 11 1 ..... 1100.0 ..... ..... @rrx_2a esz=3
38
+SMULLT_zzx_s 01000100 10 1 ..... 1100.1 ..... ..... @rrx_3a esz=2
39
+SMULLT_zzx_d 01000100 11 1 ..... 1100.1 ..... ..... @rrx_2a esz=3
40
+UMULLB_zzx_s 01000100 10 1 ..... 1101.0 ..... ..... @rrx_3a esz=2
41
+UMULLB_zzx_d 01000100 11 1 ..... 1101.0 ..... ..... @rrx_2a esz=3
42
+UMULLT_zzx_s 01000100 10 1 ..... 1101.1 ..... ..... @rrx_3a esz=2
43
+UMULLT_zzx_d 01000100 11 1 ..... 1101.1 ..... ..... @rrx_2a esz=3
44
+
45
# SVE2 saturating multiply (indexed)
46
SQDMULLB_zzx_s 01000100 10 1 ..... 1110.0 ..... ..... @rrx_3a esz=2
47
SQDMULLB_zzx_d 01000100 11 1 ..... 1110.0 ..... ..... @rrx_2a esz=3
48
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/sve_helper.c
51
+++ b/target/arm/sve_helper.c
52
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
53
DO_ZZX(sve2_sqdmull_idx_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
54
DO_ZZX(sve2_sqdmull_idx_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
55
56
+DO_ZZX(sve2_smull_idx_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
57
+DO_ZZX(sve2_smull_idx_d, int64_t, int32_t, , H1_4, DO_MUL)
58
+
59
+DO_ZZX(sve2_umull_idx_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
60
+DO_ZZX(sve2_umull_idx_d, uint64_t, uint32_t, , H1_4, DO_MUL)
61
+
62
#undef DO_ZZX
63
64
#define DO_BITPERM(NAME, TYPE, OP) \
65
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate-sve.c
68
+++ b/target/arm/translate-sve.c
69
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRX_TB(trans_SQDMULLB_zzx_d, gen_helper_sve2_sqdmull_idx_d, false)
70
DO_SVE2_RRX_TB(trans_SQDMULLT_zzx_s, gen_helper_sve2_sqdmull_idx_s, true)
71
DO_SVE2_RRX_TB(trans_SQDMULLT_zzx_d, gen_helper_sve2_sqdmull_idx_d, true)
72
73
+DO_SVE2_RRX_TB(trans_SMULLB_zzx_s, gen_helper_sve2_smull_idx_s, false)
74
+DO_SVE2_RRX_TB(trans_SMULLB_zzx_d, gen_helper_sve2_smull_idx_d, false)
75
+DO_SVE2_RRX_TB(trans_SMULLT_zzx_s, gen_helper_sve2_smull_idx_s, true)
76
+DO_SVE2_RRX_TB(trans_SMULLT_zzx_d, gen_helper_sve2_smull_idx_d, true)
77
+
78
+DO_SVE2_RRX_TB(trans_UMULLB_zzx_s, gen_helper_sve2_umull_idx_s, false)
79
+DO_SVE2_RRX_TB(trans_UMULLB_zzx_d, gen_helper_sve2_umull_idx_d, false)
80
+DO_SVE2_RRX_TB(trans_UMULLT_zzx_s, gen_helper_sve2_umull_idx_s, true)
81
+DO_SVE2_RRX_TB(trans_UMULLT_zzx_d, gen_helper_sve2_umull_idx_d, true)
82
+
83
#undef DO_SVE2_RRX_TB
84
85
static bool do_sve2_zzzz_data(DisasContext *s, int rd, int rn, int rm, int ra,
86
--
87
2.20.1
88
89
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525010358.152808-63-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/helper-sve.h | 9 +++++++++
9
target/arm/sve.decode | 12 ++++++++++++
10
target/arm/sve_helper.c | 28 ++++++++++++++++++++++++++++
11
target/arm/translate-sve.c | 15 +++++++++++++++
12
4 files changed, 64 insertions(+)
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sve.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve2_smull_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_4(sve2_smull_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(sve2_umull_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(sve2_umull_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+
23
+DEF_HELPER_FLAGS_5(sve2_cmla_idx_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve2_cmla_idx_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_idx_h, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah_idx_s, TCG_CALL_NO_RWG,
30
+ void, ptr, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ SQDMLSLB_zzxw_d 01000100 11 1 ..... 0011.0 ..... ..... @rrxr_2a esz=3
SQDMLSLT_zzxw_s 01000100 10 1 ..... 0011.1 ..... ..... @rrxr_3a esz=2
SQDMLSLT_zzxw_d 01000100 11 1 ..... 0011.1 ..... ..... @rrxr_2a esz=3

+# SVE2 complex integer multiply-add (indexed)
+CMLA_zzxz_h     01000100 10 1 index:2 rm:3 0110 rot:2 rn:5 rd:5 \
+                ra=%reg_movprfx
+CMLA_zzxz_s     01000100 11 1 index:1 rm:4 0110 rot:2 rn:5 rd:5 \
+                ra=%reg_movprfx
+
+# SVE2 complex saturating integer multiply-add (indexed)
+SQRDCMLAH_zzxz_h 01000100 10 1 index:2 rm:3 0111 rot:2 rn:5 rd:5 \
+                ra=%reg_movprfx
+SQRDCMLAH_zzxz_s 01000100 11 1 index:1 rm:4 0111 rot:2 rn:5 rd:5 \
+                ra=%reg_movprfx
+
# SVE2 multiply-add long (indexed)
SMLALB_zzxw_s   01000100 10 1 ..... 1000.0 ..... ..... @rrxr_3a esz=2
SMLALB_zzxw_d   01000100 11 1 ..... 1000.0 ..... ..... @rrxr_2a esz=3
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_h, int16_t, H2, DO_SQRDMLAH_H)
DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_s, int32_t, H4, DO_SQRDMLAH_S)
DO_CMLA_FUNC(sve2_sqrdcmlah_zzzz_d, int64_t, , DO_SQRDMLAH_D)

+#define DO_CMLA_IDX_FUNC(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
+{ \
+    intptr_t i, j, oprsz = simd_oprsz(desc); \
+    int rot = extract32(desc, SIMD_DATA_SHIFT, 2); \
+    int idx = extract32(desc, SIMD_DATA_SHIFT + 2, 2) * 2; \
+    int sel_a = rot & 1, sel_b = sel_a ^ 1; \
+    bool sub_r = rot == 1 || rot == 2; \
+    bool sub_i = rot >= 2; \
+    TYPE *d = vd, *n = vn, *m = vm, *a = va; \
+    for (i = 0; i < oprsz / sizeof(TYPE); i += 16 / sizeof(TYPE)) { \
+        TYPE elt2_a = m[H(i + idx + sel_a)]; \
+        TYPE elt2_b = m[H(i + idx + sel_b)]; \
+        for (j = 0; j < 16 / sizeof(TYPE); j += 2) { \
+            TYPE elt1_a = n[H(i + j + sel_a)]; \
+            d[H2(i + j)] = OP(elt1_a, elt2_a, a[H(i + j)], sub_r); \
+            d[H2(i + j + 1)] = OP(elt1_a, elt2_b, a[H(i + j + 1)], sub_i); \
+        } \
+    } \
+}
+
+DO_CMLA_IDX_FUNC(sve2_cmla_idx_h, int16_t, H2, DO_CMLA)
+DO_CMLA_IDX_FUNC(sve2_cmla_idx_s, int32_t, H4, DO_CMLA)
+
+DO_CMLA_IDX_FUNC(sve2_sqrdcmlah_idx_h, int16_t, H2, DO_SQRDMLAH_H)
+DO_CMLA_IDX_FUNC(sve2_sqrdcmlah_idx_s, int32_t, H4, DO_SQRDMLAH_S)
+
#undef DO_CMLA
#undef DO_CMLA_FUNC
+#undef DO_CMLA_IDX_FUNC
#undef DO_SQRDMLAH_B
#undef DO_SQRDMLAH_H
#undef DO_SQRDMLAH_S
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ DO_SVE2_RRXR_TB(trans_UMLSLT_zzxw_d, gen_helper_sve2_umlsl_idx_d, true)

#undef DO_SVE2_RRXR_TB

+#define DO_SVE2_RRXR_ROT(NAME, FUNC) \
+    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
+    { \
+        return do_sve2_zzzz_data(s, a->rd, a->rn, a->rm, a->ra, \
+                                 (a->index << 2) | a->rot, FUNC); \
+    }
+
+DO_SVE2_RRXR_ROT(CMLA_zzxz_h, gen_helper_sve2_cmla_idx_h)
+DO_SVE2_RRXR_ROT(CMLA_zzxz_s, gen_helper_sve2_cmla_idx_s)
+
+DO_SVE2_RRXR_ROT(SQRDCMLAH_zzxz_h, gen_helper_sve2_sqrdcmlah_idx_h)
+DO_SVE2_RRXR_ROT(SQRDCMLAH_zzxz_s, gen_helper_sve2_sqrdcmlah_idx_s)
+
+#undef DO_SVE2_RRXR_ROT
+
/*
 *** SVE Floating Point Multiply-Add Indexed Group
 */
--
2.20.1
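For readers unfamiliar with the CMLA rotation encoding, here is a minimal standalone C sketch, not part of the patch, of what DO_CMLA_IDX_FUNC computes for one (real, imaginary) destination pair; the function name cmla_pair is made up for illustration, and saturation (the SQRDCMLAH variant) is ignored.

#include <stdint.h>

/*
 * One complex (real, imaginary) element pair, mirroring the rot handling
 * in the helper above: rot (0/90/180/270 degrees) selects which source
 * elements pair up and whether the real/imaginary products are subtracted.
 */
static void cmla_pair(int16_t d[2], const int16_t n[2],
                      const int16_t m[2], const int16_t a[2], int rot)
{
    int sel_a = rot & 1, sel_b = sel_a ^ 1;
    int sub_r = (rot == 1 || rot == 2);
    int sub_i = (rot >= 2);
    int prod_r = n[sel_a] * m[sel_a];   /* contributes to the real part */
    int prod_i = n[sel_a] * m[sel_b];   /* contributes to the imaginary part */

    d[0] = a[0] + (sub_r ? -prod_r : prod_r);
    d[1] = a[1] + (sub_i ? -prod_i : prod_i);
}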
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-68-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/sve.decode | 4 ++++
target/arm/translate-sve.c | 16 ++++++++++++++++
target/arm/vec_helper.c | 1 +
4 files changed, 22 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_sdot_idx_b, TCG_CALL_NO_RWG,
                   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ UMLSLT_zzzw 01000100 .. 0 ..... 010 111 ..... ..... @rda_rn_rm
CMLA_zzzz 01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5 ra=%reg_movprfx
SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx

+## SVE mixed sign dot product
+
+USDOT_zzzz 01000100 .. 0 ..... 011 110 ..... ..... @rda_rn_rm
+
### SVE2 floating point matrix multiply accumulate

FMMLA 01100100 .. 1 ..... 111001 ..... ..... @rda_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRDCMLAH_zzzz(DisasContext *s, arg_SQRDCMLAH_zzzz *a)
    }
    return true;
}
+
+static bool trans_USDOT_zzzz(DisasContext *s, arg_USDOT_zzzz *a)
+{
+    if (a->esz != 2 || !dc_isar_feature(aa64_sve_i8mm, s)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        unsigned vsz = vec_full_reg_size(s);
+        tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vec_full_reg_offset(s, a->ra),
+                           vsz, vsz, 0, gen_helper_gvec_usdot_b);
+    }
+    return true;
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \

DO_DOT(gvec_sdot_b, int32_t, int8_t, int8_t)
DO_DOT(gvec_udot_b, uint32_t, uint8_t, uint8_t)
+DO_DOT(gvec_usdot_b, uint32_t, uint8_t, int8_t)
DO_DOT(gvec_sdot_h, int64_t, int16_t, int16_t)
DO_DOT(gvec_udot_h, uint64_t, uint16_t, uint16_t)

--
2.20.1
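For reference, a minimal standalone C sketch, not part of the patch, of the per-lane operation that the gvec_usdot_b helper performs: four unsigned-by-signed byte products accumulated into a 32-bit lane. The function name usdot_lane is made up for illustration.

#include <stdint.h>

/* One 32-bit lane: zero-extend n, sign-extend m, accumulate four products. */
static uint32_t usdot_lane(uint32_t acc, const uint8_t n[4], const int8_t m[4])
{
    for (int k = 0; k < 4; k++) {
        acc += (uint32_t)((int32_t)n[k] * m[k]);
    }
    return acc;
}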
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-69-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve.decode | 6 ++++++
target/arm/translate-sve.c | 11 +++++++++++
2 files changed, 17 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ STNT1_zprz 1110010 .. 00 ..... 001 ... ..... ..... \
# SVE2 32-bit scatter non-temporal store (vector plus scalar)
STNT1_zprz 1110010 .. 10 ..... 001 ... ..... ..... \
           @rprr_scatter_store xs=0 esz=2 scale=0
+
+### SVE2 Crypto Extensions
+
+# SVE2 crypto unary operations
+# AESMC and AESIMC
+AESMC 01000101 00 10000011100 decrypt:1 00000 rd:5
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_USDOT_zzzz(DisasContext *s, arg_USDOT_zzzz *a)
    }
    return true;
}
+
+static bool trans_AESMC(DisasContext *s, arg_AESMC *a)
+{
+    if (!dc_isar_feature(aa64_sve2_aes, s)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        gen_gvec_ool_zz(s, gen_helper_crypto_aesmc, a->rd, a->rd, a->decrypt);
+    }
+    return true;
+}
--
2.20.1
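As background, AESMC applies the AES MixColumns transform to each column of every 16-byte chunk of the vector, and AESIMC (selected by decrypt=1) applies the inverse. Below is a minimal standalone C sketch of the forward transform on a single column, for illustration only; QEMU reuses the shared crypto_aesmc helper rather than anything like this code.

#include <stdint.h>

/* xtime: multiply by 2 in GF(2^8) with the AES reduction polynomial. */
static uint8_t xtime(uint8_t b)
{
    return (uint8_t)((b << 1) ^ ((b & 0x80) ? 0x1b : 0x00));
}

/* Forward MixColumns on one 4-byte column. */
static void aes_mix_column(uint8_t c[4])
{
    uint8_t a0 = c[0], a1 = c[1], a2 = c[2], a3 = c[3];
    uint8_t t = a0 ^ a1 ^ a2 ^ a3;

    c[0] = a0 ^ t ^ xtime(a0 ^ a1);
    c[1] = a1 ^ t ^ xtime(a1 ^ a2);
    c[2] = a2 ^ t ^ xtime(a2 ^ a3);
    c[3] = a3 ^ t ^ xtime(a3 ^ a0);
}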
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-73-richard.henderson@linaro.org
Message-Id: <20200428174332.17162-2-steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 5 +++++
target/arm/sve.decode | 4 ++++
target/arm/sve_helper.c | 20 ++++++++++++++++++++
target/arm/translate-sve.c | 16 ++++++++++++++++
4 files changed, 45 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_cdot_idx_s, TCG_CALL_NO_RWG,
                   void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve2_cdot_idx_d, TCG_CALL_NO_RWG,
                   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ SM4E 01000101 00 10001 1 11100 0 ..... ..... @rdn_rm_e0
# SVE2 crypto constructive binary operations
SM4EKEY 01000101 00 1 ..... 11110 0 ..... ..... @rd_rn_rm_e0
RAX1 01000101 00 1 ..... 11110 1 ..... ..... @rd_rn_rm_e0
+
+### SVE2 floating-point convert precision odd elements
+FCVTNT_sh 01100100 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
+FCVTNT_ds 01100100 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
        d[3] = float64_add(a[3], float64_add(p0, p1, status), status);
    }
}
+
+#define DO_FCVTNT(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
+{ \
+    intptr_t i = simd_oprsz(desc); \
+    uint64_t *g = vg; \
+    do { \
+        uint64_t pg = g[(i - 1) >> 6]; \
+        do { \
+            i -= sizeof(TYPEW); \
+            if (likely((pg >> (i & 63)) & 1)) { \
+                TYPEW nn = *(TYPEW *)(vn + HW(i)); \
+                *(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, status); \
+            } \
+        } while (i & 63); \
+    } while (i != 0); \
+}
+
+DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
+DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, , H1_4, float64_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_RAX1(DisasContext *s, arg_rrr_esz *a)
    }
    return true;
}
+
+static bool trans_FCVTNT_sh(DisasContext *s, arg_rpr_esz *a)
+{
+    if (!dc_isar_feature(aa64_sve2, s)) {
+        return false;
+    }
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_sh);
+}
+
+static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
+{
+    if (!dc_isar_feature(aa64_sve2, s)) {
+        return false;
+    }
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_ds);
+}
--
2.20.1
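For reference, a minimal standalone C sketch, not part of the patch, of the per-element effect of FCVTNT_ds: the wide element is narrowed and the result written only into the odd-numbered half of the destination slot, as the helper above does. The function name fcvtnt_ds_elt is made up for illustration, a little-endian element layout is assumed, and predication and FPCR rounding control are ignored.

#include <stdint.h>
#include <string.h>

/*
 * Convert one double to single precision and store it into the odd
 * (upper, on a little-endian host) 32-bit half of the destination
 * element; the even half is left untouched.
 */
static void fcvtnt_ds_elt(uint32_t d[2], double n)
{
    float narrowed = (float)n;
    memcpy(&d[1], &narrowed, sizeof(narrowed));
}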