1
Nothing earth-shaking in here, just a lot of refactoring and cleanup
1
Another very large pullreq (this one mostly because it has
2
and a few bugfixes. I suspect I'll have another pullreq to come in
2
RTH's decodetree conversion series in it), but this should be
3
the early part of next week...
3
the last of the really large things in my to-review queue...
4
4
5
The following changes since commit 19591e9e0938ea5066984553c256a043bd5d822f:
5
thanks
6
-- PMM
6
7
7
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2020-08-27 16:59:02 +0100)
8
The following changes since commit 83aaec1d5a49f158abaa31797a0f976b3c07e5ca:
9
10
Merge tag 'pull-tcg-20241212' of https://gitlab.com/rth7680/qemu into staging (2024-12-12 18:45:39 -0500)
8
11
9
are available in the Git repository at:
12
are available in the Git repository at:
10
13
11
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200828
14
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241213
12
15
13
for you to fetch changes up to ed78849d9711805bda37ee026018d6ee7a606d0e:
16
for you to fetch changes up to 48e652c4bd9570f6f24def25355cb3009a7300f8:
14
17
15
target/arm: Convert sq{, r}dmulh to gvec for aa64 advsimd (2020-08-28 10:02:50 +0100)
18
target/arm: Simplify condition for tlbi_el2_cp_reginfo[] (2024-12-13 15:41:09 +0000)
16
19
17
----------------------------------------------------------------
20
----------------------------------------------------------------
18
target-arm queue:
21
target-arm queue:
19
* target/arm: Cleanup and refactoring preparatory to SVE2
22
* Finish conversion of A64 decoder to decodetree
20
* armsse: Define ARMSSEClass correctly
23
* Use float_round_to_odd in helper_fcvtx_f64_to_f32
21
* hw/misc/unimp: Improve information provided in log messages
24
* Move TLBI insn emulation code out to its own source file
22
* hw/qdev-clock: Avoid calling qdev_connect_clock_in after DeviceRealize
25
* docs/system/arm: fix broken links, document undocumented properties
23
* hw/arm/xilinx_zynq: Call qdev_connect_clock_in() before DeviceRealize
26
* MAINTAINERS: correct an email address
24
* hw/net/allwinner-sun8i-emac: Use AddressSpace for DMA transfers
25
* hw/sd/allwinner-sdhost: Use AddressSpace for DMA transfers
26
* target/arm: Fill in the WnR syndrome bit in mte_check_fail
27
* target/arm: Clarify HCR_EL2 ARMCPRegInfo type
28
* hw/arm/musicpal: Use AddressSpace for DMA transfers
29
* hw/clock: Minor cleanups
30
* hw/arm/sbsa-ref: fix typo breaking PCIe IRQs
31
27
32
----------------------------------------------------------------
28
----------------------------------------------------------------
33
Eduardo Habkost (1):
29
Brian Cain (1):
34
armsse: Define ARMSSEClass correctly
30
MAINTAINERS: correct my email address
35
31
36
Graeme Gregory (1):
32
Peter Maydell (10):
37
hw/arm/sbsa-ref: fix typo breaking PCIe IRQs
33
target/arm: Move some TLBI insns to their own source file
34
target/arm: Move TLBI insns for AArch32 EL2 to tlbi_insn_helper.c
35
target/arm: Move AArch64 TLBI insns from v8_cp_reginfo[]
36
target/arm: Move the AArch64 EL2 TLBI insns
37
target/arm: Move AArch64 EL3 TLBI insns
38
target/arm: Move TLBI range insns
39
target/arm: Move the TLBI OS insns to tlb-insns.c.
40
target/arm: Move small helper functions to tlb-insns.c
41
target/arm: Move RME TLB insns to tlb-insns.c
42
target/arm: Simplify condition for tlbi_el2_cp_reginfo[]
38
43
39
Philippe Mathieu-Daudé (14):
44
Pierrick Bouvier (4):
40
hw/clock: Remove unused clock_init*() functions
45
docs/system/arm/orangepi: update links
41
hw/clock: Let clock_set() return boolean value
46
docs/system/arm/fby35: document execute-in-place property
42
hw/clock: Only propagate clock changes if the clock is changed
47
docs/system/arm/xlnx-versal-virt: document ospi-flash property
43
hw/arm/musicpal: Use AddressSpace for DMA transfers
48
docs/system/arm/virt: document missing properties
44
target/arm: Clarify HCR_EL2 ARMCPRegInfo type
45
hw/sd/allwinner-sdhost: Use AddressSpace for DMA transfers
46
hw/net/allwinner-sun8i-emac: Use AddressSpace for DMA transfers
47
hw/arm/xilinx_zynq: Uninline cadence_uart_create()
48
hw/arm/xilinx_zynq: Call qdev_connect_clock_in() before DeviceRealize
49
hw/qdev-clock: Uninline qdev_connect_clock_in()
50
hw/qdev-clock: Avoid calling qdev_connect_clock_in after DeviceRealize
51
hw/misc/unimp: Display value after offset
52
hw/misc/unimp: Display the value with width of the access size
53
hw/misc/unimp: Display the offset with width of the region size
54
49
55
Richard Henderson (19):
50
Richard Henderson (70):
56
target/arm: Pass the entire mte descriptor to mte_check_fail
51
target/arm: Add section labels for "Data Processing (register)"
57
target/arm: Fill in the WnR syndrome bit in mte_check_fail
52
target/arm: Convert UDIV, SDIV to decodetree
58
qemu/int128: Add int128_lshift
53
target/arm: Convert LSLV, LSRV, ASRV, RORV to decodetree
59
target/arm: Split out gen_gvec_fn_zz
54
target/arm: Convert CRC32, CRC32C to decodetree
60
target/arm: Split out gen_gvec_fn_zzz, do_zzz_fn
55
target/arm: Convert SUBP, IRG, GMI to decodetree
61
target/arm: Rearrange {sve,fp}_check_access assert
56
target/arm: Convert PACGA to decodetree
62
target/arm: Merge do_vector2_p into do_mov_p
57
target/arm: Convert RBIT, REV16, REV32, REV64 to decodetree
63
target/arm: Clean up 4-operand predicate expansion
58
target/arm: Convert CLZ, CLS to decodetree
64
target/arm: Use tcg_gen_gvec_bitsel for trans_SEL_pppp
59
target/arm: Convert PAC[ID]*, AUT[ID]* to decodetree
65
target/arm: Split out gen_gvec_ool_zzzp
60
target/arm: Convert XPAC[ID] to decodetree
66
target/arm: Merge helper_sve_clr_* and helper_sve_movz_*
61
target/arm: Convert disas_logic_reg to decodetree
67
target/arm: Split out gen_gvec_ool_zzp
62
target/arm: Convert disas_add_sub_ext_reg to decodetree
68
target/arm: Split out gen_gvec_ool_zzz
63
target/arm: Convert disas_add_sub_reg to decodetree
69
target/arm: Split out gen_gvec_ool_zz
64
target/arm: Convert disas_data_proc_3src to decodetree
70
target/arm: Tidy SVE tszimm shift formats
65
target/arm: Convert disas_adc_sbc to decodetree
71
target/arm: Generalize inl_qrdmlah_* helper functions
66
target/arm: Convert RMIF to decodetree
72
target/arm: Convert integer multiply (indexed) to gvec for aa64 advsimd
67
target/arm: Convert SETF8, SETF16 to decodetree
73
target/arm: Convert integer multiply-add (indexed) to gvec for aa64 advsimd
68
target/arm: Convert CCMP, CCMN to decodetree
74
target/arm: Convert sq{, r}dmulh to gvec for aa64 advsimd
69
target/arm: Convert disas_cond_select to decodetree
70
target/arm: Introduce fp_access_check_scalar_hsd
71
target/arm: Introduce fp_access_check_vector_hsd
72
target/arm: Convert FCMP, FCMPE, FCCMP, FCCMPE to decodetree
73
target/arm: Fix decode of fp16 vector fabs, fneg, fsqrt
74
target/arm: Convert FMOV, FABS, FNEG (scalar) to decodetree
75
target/arm: Pass fpstatus to vfp_sqrt*
76
target/arm: Remove helper_sqrt_f16
77
target/arm: Convert FSQRT (scalar) to decodetree
78
target/arm: Convert FRINT[NPMSAXI] (scalar) to decodetree
79
target/arm: Convert BFCVT to decodetree
80
target/arm: Convert FRINT{32, 64}[ZX] (scalar) to decodetree
81
target/arm: Convert FCVT (scalar) to decodetree
82
target/arm: Convert handle_fpfpcvt to decodetree
83
target/arm: Convert FJCVTZS to decodetree
84
target/arm: Convert handle_fmov to decodetree
85
target/arm: Convert SQABS, SQNEG to decodetree
86
target/arm: Convert ABS, NEG to decodetree
87
target/arm: Introduce gen_gvec_cls, gen_gvec_clz
88
target/arm: Convert CLS, CLZ (vector) to decodetree
89
target/arm: Introduce gen_gvec_cnt, gen_gvec_rbit
90
target/arm: Convert CNT, NOT, RBIT (vector) to decodetree
91
target/arm: Convert CMGT, CMGE, CMLT, CMLE, CMEQ (zero) to decodetree
92
target/arm: Introduce gen_gvec_rev{16,32,64}
93
target/arm: Convert handle_rev to decodetree
94
target/arm: Move helper_neon_addlp_{s8, s16} to neon_helper.c
95
target/arm: Introduce gen_gvec_{s,u}{add,ada}lp
96
target/arm: Convert handle_2misc_pairwise to decodetree
97
target/arm: Remove helper_neon_{add,sub}l_u{16,32}
98
target/arm: Introduce clear_vec
99
target/arm: Convert XTN, SQXTUN, SQXTN, UQXTN to decodetree
100
target/arm: Convert FCVTN, BFCVTN to decodetree
101
target/arm: Convert FCVTXN to decodetree
102
target/arm: Convert SHLL to decodetree
103
target/arm: Implement gen_gvec_fabs, gen_gvec_fneg
104
target/arm: Convert FABS, FNEG (vector) to decodetree
105
target/arm: Convert FSQRT (vector) to decodetree
106
target/arm: Convert FRINT* (vector) to decodetree
107
target/arm: Convert FCVT* (vector, integer) scalar to decodetree
108
target/arm: Convert FCVT* (vector, fixed-point) scalar to decodetree
109
target/arm: Convert [US]CVTF (vector, integer) scalar to decodetree
110
target/arm: Convert [US]CVTF (vector, fixed-point) scalar to decodetree
111
target/arm: Rename helper_gvec_vcvt_[hf][su] with _rz
112
target/arm: Convert [US]CVTF (vector) to decodetree
113
target/arm: Convert FCVTZ[SU] (vector, fixed-point) to decodetree
114
target/arm: Convert FCVT* (vector, integer) to decodetree
115
target/arm: Convert handle_2misc_fcmp_zero to decodetree
116
target/arm: Convert FRECPE, FRECPX, FRSQRTE to decodetree
117
target/arm: Introduce gen_gvec_urecpe, gen_gvec_ursqrte
118
target/arm: Convert URECPE and URSQRTE to decodetree
119
target/arm: Convert FCVTL to decodetree
120
target/arm: Use float_round_to_odd in helper_fcvtx_f64_to_f32
75
121
76
include/hw/arm/armsse.h | 2 +-
122
MAINTAINERS | 2 +-
77
include/hw/char/cadence_uart.h | 17 --
123
docs/system/arm/fby35.rst | 5 +
78
include/hw/clock.h | 30 +--
124
docs/system/arm/orangepi.rst | 4 +-
79
include/hw/misc/unimp.h | 1 +
125
docs/system/arm/virt.rst | 16 +
80
include/hw/net/allwinner-sun8i-emac.h | 6 +
126
docs/system/arm/xlnx-versal-virt.rst | 3 +
81
include/hw/qdev-clock.h | 8 +-
127
target/arm/helper.h | 43 +-
82
include/hw/sd/allwinner-sdhost.h | 6 +
128
target/arm/internals.h | 9 +
83
include/qemu/int128.h | 16 ++
129
target/arm/tcg/helper-a64.h | 7 -
84
target/arm/helper-sve.h | 5 -
130
target/arm/tcg/translate.h | 35 +
85
target/arm/helper.h | 28 +++
131
target/arm/tcg/a64.decode | 502 ++-
86
target/arm/translate.h | 1 +
132
target/arm/helper.c | 1208 +-------
87
target/arm/sve.decode | 35 ++-
133
target/arm/tcg-stubs.c | 5 +
88
hw/arm/allwinner-a10.c | 2 +
134
target/arm/tcg/gengvec.c | 369 +++
89
hw/arm/allwinner-h3.c | 4 +
135
target/arm/tcg/helper-a64.c | 122 +-
90
hw/arm/armsse.c | 1 +
136
target/arm/tcg/neon_helper.c | 106 +-
91
hw/arm/musicpal.c | 45 ++--
137
target/arm/tcg/tlb-insns.c | 1266 ++++++++
92
hw/arm/sbsa-ref.c | 2 +-
138
target/arm/tcg/translate-a64.c | 5670 +++++++++++-----------------------
93
hw/arm/xilinx_zynq.c | 24 +-
139
target/arm/tcg/translate-neon.c | 337 +-
94
hw/core/clock.c | 7 +-
140
target/arm/tcg/translate-vfp.c | 6 +-
95
hw/core/qdev-clock.c | 6 +
141
target/arm/tcg/vec_helper.c | 65 +-
96
hw/misc/unimp.c | 14 +-
142
target/arm/vfp_helper.c | 16 +-
97
hw/net/allwinner-sun8i-emac.c | 46 ++--
143
target/arm/tcg/meson.build | 1 +
98
hw/sd/allwinner-sdhost.c | 37 +++-
144
22 files changed, 4203 insertions(+), 5594 deletions(-)
99
target/arm/helper.c | 1 -
145
create mode 100644 target/arm/tcg/tlb-insns.c
100
target/arm/mte_helper.c | 19 +-
101
target/arm/sve_helper.c | 70 ++----
102
target/arm/translate-a64.c | 110 ++++++++--
103
target/arm/translate-sve.c | 399 ++++++++++++++--------------------
104
target/arm/vec_helper.c | 182 +++++++++++-----
105
29 files changed, 629 insertions(+), 495 deletions(-)
106
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
A clock's canonical name is set in device_set_realized() (see the block
3
At the same time, use ### to separate 3rd-level sections.
4
added to hw/core/qdev.c in commit 0e6934f264).
4
We already use ### for 4.1.92 Data Processing (immediate),
5
If we connect a clock after the device is realized, this code is
5
but not the following two third-level sections:
6
not executed. This is currently not a problem as this name is only
6
4.1.93 Branches, and 4.1.94 Loads and stores.
7
used for trace events, however it does disrupt tracing.
8
7
9
Add a comment to document qdev_connect_clock_in() must be called
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
before the device is realized, and assert this condition.
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
10
Message-id: 20241211163036.2297116-2-richard.henderson@linaro.org
12
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Message-id: 20200803105647.22223-5-f4bug@amsat.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
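As a minimal sketch of the call ordering this commit documents and asserts, assuming the qdev/clock API as used elsewhere in this series (the device type, clock name and helper below are illustrative, not taken from the patch):

#include "hw/qdev-core.h"
#include "hw/qdev-clock.h"
#include "hw/sysbus.h"
#include "qapi/error.h"

/* Sketch only: board code must wire the clock input before realizing. */
static void board_wire_uart(Clock *board_refclk)
{
    DeviceState *dev = qdev_new("cadence_uart");     /* device type is illustrative */
    /* Connect the clock input while the device is still unrealized ... */
    qdev_connect_clock_in(dev, "refclk", board_refclk);
    /* ... and only then realize it; doing this later would trip the new assert. */
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
}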
16
---
12
---
17
include/hw/qdev-clock.h | 2 ++
13
target/arm/tcg/a64.decode | 19 +++++++++++++++++--
18
hw/core/qdev-clock.c | 1 +
14
1 file changed, 17 insertions(+), 2 deletions(-)
19
2 files changed, 3 insertions(+)
20
15
21
diff --git a/include/hw/qdev-clock.h b/include/hw/qdev-clock.h
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/qdev-clock.h
18
--- a/target/arm/tcg/a64.decode
24
+++ b/include/hw/qdev-clock.h
19
+++ b/target/arm/tcg/a64.decode
25
@@ -XXX,XX +XXX,XX @@ Clock *qdev_get_clock_out(DeviceState *dev, const char *name);
20
@@ -XXX,XX +XXX,XX @@ UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
26
*
21
EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
27
* Set the source clock of input clock @name of device @dev to @source.
22
EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
28
* @source period update will be propagated to @name clock.
23
29
+ *
24
-# Branches
30
+ * Must be called before @dev is realized.
25
+### Branches
31
*/
26
32
void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source);
27
%imm26 0:s26 !function=times_4
33
28
@branch . ..... .......................... &i imm=%imm26
34
diff --git a/hw/core/qdev-clock.c b/hw/core/qdev-clock.c
29
@@ -XXX,XX +XXX,XX @@ HLT 1101 0100 010 ................ 000 00 @i16
35
index XXXXXXX..XXXXXXX 100644
30
# DCPS2 1101 0100 101 ................ 000 10 @i16
36
--- a/hw/core/qdev-clock.c
31
# DCPS3 1101 0100 101 ................ 000 11 @i16
37
+++ b/hw/core/qdev-clock.c
32
38
@@ -XXX,XX +XXX,XX @@ Clock *qdev_alias_clock(DeviceState *dev, const char *name,
33
-# Loads and stores
39
34
+### Loads and stores
40
void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source)
35
41
{
36
&stxr rn rt rt2 rs sz lasr
42
+ assert(!dev->realized);
37
&stlr rn rt sz lasr
43
clock_set_source(qdev_get_clock_in(dev, name), source);
38
@@ -XXX,XX +XXX,XX @@ CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
44
}
39
CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
40
CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
41
42
+### Data Processing (register)
43
+
44
+# Data Processing (2-source)
45
+# Data Processing (1-source)
46
+# Logical (shifted reg)
47
+# Add/subtract (shifted reg)
48
+# Add/subtract (extended reg)
49
+# Add/subtract (carry)
50
+# Rotate right into flags
51
+# Evaluate into flags
52
+# Conditional compare (register)
53
+# Conditional compare (immediate)
54
+# Conditional select
55
+# Data Processing (3-source)
56
+
57
### Cryptographic AES
58
59
AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
45
--
60
--
46
2.20.1
61
2.34.1
47
62
48
63
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
To better align the read/write accesses, display the value after
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
the offset (read accesses only display the offset).
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
5
Message-id: 20241211163036.2297116-3-richard.henderson@linaro.org
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20200812190206.31595-2-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
7
---
11
hw/misc/unimp.c | 8 ++++----
8
target/arm/tcg/a64.decode | 7 ++++
12
1 file changed, 4 insertions(+), 4 deletions(-)
9
target/arm/tcg/translate-a64.c | 64 +++++++++++++++++-----------------
10
2 files changed, 39 insertions(+), 32 deletions(-)
13
11
14
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/misc/unimp.c
14
--- a/target/arm/tcg/a64.decode
17
+++ b/hw/misc/unimp.c
15
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
16
@@ -XXX,XX +XXX,XX @@
19
{
17
&r rn
20
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
18
&ri rd imm
21
19
&rri_sf rd rn imm sf
22
- qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
20
+&rrr_sf rd rn rm sf
23
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
21
&i imm
24
"(size %d, offset 0x%" HWADDR_PRIx ")\n",
22
&rr_e rd rn esz
25
s->name, size, offset);
23
&rri_e rd rn imm esz
26
return 0;
24
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
27
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
25
### Data Processing (register)
28
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
26
29
27
# Data Processing (2-source)
30
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
28
+
31
- "(size %d, value 0x%" PRIx64
29
+@rrr_sf sf:1 .......... rm:5 ...... rn:5 rd:5 &rrr_sf
32
- ", offset 0x%" HWADDR_PRIx ")\n",
30
+
33
- s->name, size, value, offset);
31
+UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
34
+ "(size %d, offset 0x%" HWADDR_PRIx
32
+SDIV . 00 11010110 ..... 00001 1 ..... ..... @rrr_sf
35
+ ", value 0x%" PRIx64 ")\n",
33
+
36
+ s->name, size, offset, value);
34
# Data Processing (1-source)
35
# Logical (shifted reg)
36
# Add/subtract (shifted reg)
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ TRANS(UQRSHRN_si, do_scalar_shift_imm_narrow, a, uqrshrn_fns, 0, false)
42
TRANS(SQSHRUN_si, do_scalar_shift_imm_narrow, a, sqshrun_fns, MO_SIGN, false)
43
TRANS(SQRSHRUN_si, do_scalar_shift_imm_narrow, a, sqrshrun_fns, MO_SIGN, false)
44
45
+static bool do_div(DisasContext *s, arg_rrr_sf *a, bool is_signed)
46
+{
47
+ TCGv_i64 tcg_n, tcg_m, tcg_rd;
48
+ tcg_rd = cpu_reg(s, a->rd);
49
+
50
+ if (!a->sf && is_signed) {
51
+ tcg_n = tcg_temp_new_i64();
52
+ tcg_m = tcg_temp_new_i64();
53
+ tcg_gen_ext32s_i64(tcg_n, cpu_reg(s, a->rn));
54
+ tcg_gen_ext32s_i64(tcg_m, cpu_reg(s, a->rm));
55
+ } else {
56
+ tcg_n = read_cpu_reg(s, a->rn, a->sf);
57
+ tcg_m = read_cpu_reg(s, a->rm, a->sf);
58
+ }
59
+
60
+ if (is_signed) {
61
+ gen_helper_sdiv64(tcg_rd, tcg_n, tcg_m);
62
+ } else {
63
+ gen_helper_udiv64(tcg_rd, tcg_n, tcg_m);
64
+ }
65
+
66
+ if (!a->sf) { /* zero extend final result */
67
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
68
+ }
69
+ return true;
70
+}
71
+
72
+TRANS(SDIV, do_div, a, true)
73
+TRANS(UDIV, do_div, a, false)
74
+
75
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
76
* Note that it is the caller's responsibility to ensure that the
77
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
78
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
79
#undef MAP
37
}
80
}
38
81
39
static const MemoryRegionOps unimp_ops = {
82
-static void handle_div(DisasContext *s, bool is_signed, unsigned int sf,
83
- unsigned int rm, unsigned int rn, unsigned int rd)
84
-{
85
- TCGv_i64 tcg_n, tcg_m, tcg_rd;
86
- tcg_rd = cpu_reg(s, rd);
87
-
88
- if (!sf && is_signed) {
89
- tcg_n = tcg_temp_new_i64();
90
- tcg_m = tcg_temp_new_i64();
91
- tcg_gen_ext32s_i64(tcg_n, cpu_reg(s, rn));
92
- tcg_gen_ext32s_i64(tcg_m, cpu_reg(s, rm));
93
- } else {
94
- tcg_n = read_cpu_reg(s, rn, sf);
95
- tcg_m = read_cpu_reg(s, rm, sf);
96
- }
97
-
98
- if (is_signed) {
99
- gen_helper_sdiv64(tcg_rd, tcg_n, tcg_m);
100
- } else {
101
- gen_helper_udiv64(tcg_rd, tcg_n, tcg_m);
102
- }
103
-
104
- if (!sf) { /* zero extend final result */
105
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
106
- }
107
-}
108
109
/* LSLV, LSRV, ASRV, RORV */
110
static void handle_shift_reg(DisasContext *s,
111
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
112
}
113
}
114
break;
115
- case 2: /* UDIV */
116
- handle_div(s, false, sf, rm, rn, rd);
117
- break;
118
- case 3: /* SDIV */
119
- handle_div(s, true, sf, rm, rn, rd);
120
- break;
121
case 4: /* IRG */
122
if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
123
goto do_unallocated;
124
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
125
}
126
default:
127
do_unallocated:
128
+ case 2: /* UDIV */
129
+ case 3: /* SDIV */
130
unallocated_encoding(s);
131
break;
132
}
40
--
133
--
41
2.20.1
134
2.34.1
42
135
43
136
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
As we want to call qdev_connect_clock_in() before the device
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
is realized, we need to uninline cadence_uart_create() first.
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
5
Message-id: 20241211163036.2297116-4-richard.henderson@linaro.org
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Message-id: 20200803105647.22223-2-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
7
---
11
include/hw/char/cadence_uart.h | 17 -----------------
8
target/arm/tcg/a64.decode | 4 +++
12
hw/arm/xilinx_zynq.c | 14 ++++++++++++--
9
target/arm/tcg/translate-a64.c | 46 ++++++++++++++++------------------
13
2 files changed, 12 insertions(+), 19 deletions(-)
10
2 files changed, 25 insertions(+), 25 deletions(-)
14
11
15
diff --git a/include/hw/char/cadence_uart.h b/include/hw/char/cadence_uart.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/char/cadence_uart.h
14
--- a/target/arm/tcg/a64.decode
18
+++ b/include/hw/char/cadence_uart.h
15
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ typedef struct {
16
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
20
Clock *refclk;
17
21
} CadenceUARTState;
18
UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
22
19
SDIV . 00 11010110 ..... 00001 1 ..... ..... @rrr_sf
23
-static inline DeviceState *cadence_uart_create(hwaddr addr,
20
+LSLV . 00 11010110 ..... 00100 0 ..... ..... @rrr_sf
24
- qemu_irq irq,
21
+LSRV . 00 11010110 ..... 00100 1 ..... ..... @rrr_sf
25
- Chardev *chr)
22
+ASRV . 00 11010110 ..... 00101 0 ..... ..... @rrr_sf
23
+RORV . 00 11010110 ..... 00101 1 ..... ..... @rrr_sf
24
25
# Data Processing (1-source)
26
# Logical (shifted reg)
27
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/tcg/translate-a64.c
30
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ static void shift_reg_imm(TCGv_i64 dst, TCGv_i64 src, int sf,
32
}
33
}
34
35
+static bool do_shift_reg(DisasContext *s, arg_rrr_sf *a,
36
+ enum a64_shift_type shift_type)
37
+{
38
+ TCGv_i64 tcg_shift = tcg_temp_new_i64();
39
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
40
+ TCGv_i64 tcg_rn = read_cpu_reg(s, a->rn, a->sf);
41
+
42
+ tcg_gen_andi_i64(tcg_shift, cpu_reg(s, a->rm), a->sf ? 63 : 31);
43
+ shift_reg(tcg_rd, tcg_rn, a->sf, shift_type, tcg_shift);
44
+ return true;
45
+}
46
+
47
+TRANS(LSLV, do_shift_reg, a, A64_SHIFT_TYPE_LSL)
48
+TRANS(LSRV, do_shift_reg, a, A64_SHIFT_TYPE_LSR)
49
+TRANS(ASRV, do_shift_reg, a, A64_SHIFT_TYPE_ASR)
50
+TRANS(RORV, do_shift_reg, a, A64_SHIFT_TYPE_ROR)
51
+
52
/* Logical (shifted register)
53
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
54
* +----+-----+-----------+-------+---+------+--------+------+------+
55
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
56
}
57
58
59
-/* LSLV, LSRV, ASRV, RORV */
60
-static void handle_shift_reg(DisasContext *s,
61
- enum a64_shift_type shift_type, unsigned int sf,
62
- unsigned int rm, unsigned int rn, unsigned int rd)
26
-{
63
-{
27
- DeviceState *dev;
64
- TCGv_i64 tcg_shift = tcg_temp_new_i64();
28
- SysBusDevice *s;
65
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
66
- TCGv_i64 tcg_rn = read_cpu_reg(s, rn, sf);
29
-
67
-
30
- dev = qdev_new(TYPE_CADENCE_UART);
68
- tcg_gen_andi_i64(tcg_shift, cpu_reg(s, rm), sf ? 63 : 31);
31
- s = SYS_BUS_DEVICE(dev);
69
- shift_reg(tcg_rd, tcg_rn, sf, shift_type, tcg_shift);
32
- qdev_prop_set_chr(dev, "chardev", chr);
33
- sysbus_realize_and_unref(s, &error_fatal);
34
- sysbus_mmio_map(s, 0, addr);
35
- sysbus_connect_irq(s, 0, irq);
36
-
37
- return dev;
38
-}
70
-}
39
-
71
-
40
#endif
72
/* CRC32[BHWX], CRC32C[BHWX] */
41
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
73
static void handle_crc32(DisasContext *s,
42
index XXXXXXX..XXXXXXX 100644
74
unsigned int sf, unsigned int sz, bool crc32c,
43
--- a/hw/arm/xilinx_zynq.c
75
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
44
+++ b/hw/arm/xilinx_zynq.c
76
tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t);
45
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
77
}
46
sysbus_create_simple(TYPE_CHIPIDEA, 0xE0002000, pic[53 - IRQ_OFFSET]);
78
break;
47
sysbus_create_simple(TYPE_CHIPIDEA, 0xE0003000, pic[76 - IRQ_OFFSET]);
79
- case 8: /* LSLV */
48
80
- handle_shift_reg(s, A64_SHIFT_TYPE_LSL, sf, rm, rn, rd);
49
- dev = cadence_uart_create(0xE0000000, pic[59 - IRQ_OFFSET], serial_hd(0));
81
- break;
50
+ dev = qdev_new(TYPE_CADENCE_UART);
82
- case 9: /* LSRV */
51
+ busdev = SYS_BUS_DEVICE(dev);
83
- handle_shift_reg(s, A64_SHIFT_TYPE_LSR, sf, rm, rn, rd);
52
+ qdev_prop_set_chr(dev, "chardev", serial_hd(0));
84
- break;
53
+ sysbus_realize_and_unref(busdev, &error_fatal);
85
- case 10: /* ASRV */
54
+ sysbus_mmio_map(busdev, 0, 0xE0000000);
86
- handle_shift_reg(s, A64_SHIFT_TYPE_ASR, sf, rm, rn, rd);
55
+ sysbus_connect_irq(busdev, 0, pic[59 - IRQ_OFFSET]);
87
- break;
56
qdev_connect_clock_in(dev, "refclk",
88
- case 11: /* RORV */
57
qdev_get_clock_out(slcr, "uart0_ref_clk"));
89
- handle_shift_reg(s, A64_SHIFT_TYPE_ROR, sf, rm, rn, rd);
58
- dev = cadence_uart_create(0xE0001000, pic[82 - IRQ_OFFSET], serial_hd(1));
90
- break;
59
+ dev = qdev_new(TYPE_CADENCE_UART);
91
case 12: /* PACGA */
60
+ busdev = SYS_BUS_DEVICE(dev);
92
if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
61
+ qdev_prop_set_chr(dev, "chardev", serial_hd(1));
93
goto do_unallocated;
62
+ sysbus_realize_and_unref(busdev, &error_fatal);
94
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
63
+ sysbus_mmio_map(busdev, 0, 0xE0001000);
95
do_unallocated:
64
+ sysbus_connect_irq(busdev, 0, pic[82 - IRQ_OFFSET]);
96
case 2: /* UDIV */
65
qdev_connect_clock_in(dev, "refclk",
97
case 3: /* SDIV */
66
qdev_get_clock_out(slcr, "uart1_ref_clk"));
98
+ case 8: /* LSLV */
67
99
+ case 9: /* LSRV */
100
+ case 10: /* ASRV */
101
+ case 11: /* RORV */
102
unallocated_encoding(s);
103
break;
104
}
68
--
105
--
69
2.20.1
106
2.34.1
70
107
71
108
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Avoid propagating the clock change when the clock does not change.
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Message-id: 20241211163036.2297116-5-richard.henderson@linaro.org
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200806123858.30058-4-f4bug@amsat.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
7
---
10
include/hw/clock.h | 5 +++--
8
target/arm/tcg/a64.decode | 12 ++++
11
1 file changed, 3 insertions(+), 2 deletions(-)
9
target/arm/tcg/translate-a64.c | 101 +++++++++++++--------------------
10
2 files changed, 53 insertions(+), 60 deletions(-)
12
11
13
diff --git a/include/hw/clock.h b/include/hw/clock.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/clock.h
14
--- a/target/arm/tcg/a64.decode
16
+++ b/include/hw/clock.h
15
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ void clock_propagate(Clock *clk);
16
@@ -XXX,XX +XXX,XX @@
18
*/
17
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
19
static inline void clock_update(Clock *clk, uint64_t value)
18
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
20
{
19
21
- clock_set(clk, value);
20
+@rrr_b ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=0
22
- clock_propagate(clk);
21
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
23
+ if (clock_set(clk, value)) {
22
+@rrr_s ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=2
24
+ clock_propagate(clk);
23
@rrr_d ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=3
24
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
25
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
26
@@ -XXX,XX +XXX,XX @@ LSRV . 00 11010110 ..... 00100 1 ..... ..... @rrr_sf
27
ASRV . 00 11010110 ..... 00101 0 ..... ..... @rrr_sf
28
RORV . 00 11010110 ..... 00101 1 ..... ..... @rrr_sf
29
30
+CRC32 0 00 11010110 ..... 0100 00 ..... ..... @rrr_b
31
+CRC32 0 00 11010110 ..... 0100 01 ..... ..... @rrr_h
32
+CRC32 0 00 11010110 ..... 0100 10 ..... ..... @rrr_s
33
+CRC32 1 00 11010110 ..... 0100 11 ..... ..... @rrr_d
34
+
35
+CRC32C 0 00 11010110 ..... 0101 00 ..... ..... @rrr_b
36
+CRC32C 0 00 11010110 ..... 0101 01 ..... ..... @rrr_h
37
+CRC32C 0 00 11010110 ..... 0101 10 ..... ..... @rrr_s
38
+CRC32C 1 00 11010110 ..... 0101 11 ..... ..... @rrr_d
39
+
40
# Data Processing (1-source)
41
# Logical (shifted reg)
42
# Add/subtract (shifted reg)
43
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/tcg/translate-a64.c
46
+++ b/target/arm/tcg/translate-a64.c
47
@@ -XXX,XX +XXX,XX @@ TRANS(LSRV, do_shift_reg, a, A64_SHIFT_TYPE_LSR)
48
TRANS(ASRV, do_shift_reg, a, A64_SHIFT_TYPE_ASR)
49
TRANS(RORV, do_shift_reg, a, A64_SHIFT_TYPE_ROR)
50
51
+static bool do_crc32(DisasContext *s, arg_rrr_e *a, bool crc32c)
52
+{
53
+ TCGv_i64 tcg_acc, tcg_val, tcg_rd;
54
+ TCGv_i32 tcg_bytes;
55
+
56
+ switch (a->esz) {
57
+ case MO_8:
58
+ case MO_16:
59
+ case MO_32:
60
+ tcg_val = tcg_temp_new_i64();
61
+ tcg_gen_extract_i64(tcg_val, cpu_reg(s, a->rm), 0, 8 << a->esz);
62
+ break;
63
+ case MO_64:
64
+ tcg_val = cpu_reg(s, a->rm);
65
+ break;
66
+ default:
67
+ g_assert_not_reached();
25
+ }
68
+ }
69
+ tcg_acc = cpu_reg(s, a->rn);
70
+ tcg_bytes = tcg_constant_i32(1 << a->esz);
71
+ tcg_rd = cpu_reg(s, a->rd);
72
+
73
+ if (crc32c) {
74
+ gen_helper_crc32c_64(tcg_rd, tcg_acc, tcg_val, tcg_bytes);
75
+ } else {
76
+ gen_helper_crc32_64(tcg_rd, tcg_acc, tcg_val, tcg_bytes);
77
+ }
78
+ return true;
79
+}
80
+
81
+TRANS_FEAT(CRC32, aa64_crc32, do_crc32, a, false)
82
+TRANS_FEAT(CRC32C, aa64_crc32, do_crc32, a, true)
83
+
84
/* Logical (shifted register)
85
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
86
* +----+-----+-----------+-------+---+------+--------+------+------+
87
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
26
}
88
}
27
89
28
static inline void clock_update_hz(Clock *clk, unsigned hz)
90
91
-/* CRC32[BHWX], CRC32C[BHWX] */
92
-static void handle_crc32(DisasContext *s,
93
- unsigned int sf, unsigned int sz, bool crc32c,
94
- unsigned int rm, unsigned int rn, unsigned int rd)
95
-{
96
- TCGv_i64 tcg_acc, tcg_val;
97
- TCGv_i32 tcg_bytes;
98
-
99
- if (!dc_isar_feature(aa64_crc32, s)
100
- || (sf == 1 && sz != 3)
101
- || (sf == 0 && sz == 3)) {
102
- unallocated_encoding(s);
103
- return;
104
- }
105
-
106
- if (sz == 3) {
107
- tcg_val = cpu_reg(s, rm);
108
- } else {
109
- uint64_t mask;
110
- switch (sz) {
111
- case 0:
112
- mask = 0xFF;
113
- break;
114
- case 1:
115
- mask = 0xFFFF;
116
- break;
117
- case 2:
118
- mask = 0xFFFFFFFF;
119
- break;
120
- default:
121
- g_assert_not_reached();
122
- }
123
- tcg_val = tcg_temp_new_i64();
124
- tcg_gen_andi_i64(tcg_val, cpu_reg(s, rm), mask);
125
- }
126
-
127
- tcg_acc = cpu_reg(s, rn);
128
- tcg_bytes = tcg_constant_i32(1 << sz);
129
-
130
- if (crc32c) {
131
- gen_helper_crc32c_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
132
- } else {
133
- gen_helper_crc32_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
134
- }
135
-}
136
-
137
/* Data-processing (2 source)
138
* 31 30 29 28 21 20 16 15 10 9 5 4 0
139
* +----+---+---+-----------------+------+--------+------+------+
140
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
141
gen_helper_pacga(cpu_reg(s, rd), tcg_env,
142
cpu_reg(s, rn), cpu_reg_sp(s, rm));
143
break;
144
- case 16:
145
- case 17:
146
- case 18:
147
- case 19:
148
- case 20:
149
- case 21:
150
- case 22:
151
- case 23: /* CRC32 */
152
- {
153
- int sz = extract32(opcode, 0, 2);
154
- bool crc32c = extract32(opcode, 2, 1);
155
- handle_crc32(s, sf, sz, crc32c, rm, rn, rd);
156
- break;
157
- }
158
default:
159
do_unallocated:
160
case 2: /* UDIV */
161
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
162
case 9: /* LSRV */
163
case 10: /* ASRV */
164
case 11: /* RORV */
165
+ case 16:
166
+ case 17:
167
+ case 18:
168
+ case 19:
169
+ case 20:
170
+ case 21:
171
+ case 22:
172
+ case 23: /* CRC32 */
173
unallocated_encoding(s);
174
break;
175
}
29
--
176
--
30
2.20.1
177
2.34.1
31
178
32
179
1
From: Graeme Gregory <graeme@nuviainc.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Fixing a typo in a previous patch that translated an "i" to a 1
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
and thereby broke the allocation of PCIe interrupts. This was
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
discovered when virtio-net-pci devices ceased to function correctly.
5
Message-id: 20241211163036.2297116-6-richard.henderson@linaro.org
6
7
Cc: qemu-stable@nongnu.org
8
Fixes: 48ba18e6d3f3 ("hw/arm/sbsa-ref: Simplify by moving the gic in the machine state")
9
Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200821083853.356490-1-graeme@nuviainc.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
7
---
14
hw/arm/sbsa-ref.c | 2 +-
8
target/arm/tcg/a64.decode | 7 +++
15
1 file changed, 1 insertion(+), 1 deletion(-)
9
target/arm/tcg/translate-a64.c | 94 +++++++++++++++++++---------------
10
2 files changed, 59 insertions(+), 42 deletions(-)
16
11
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
18
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/sbsa-ref.c
14
--- a/target/arm/tcg/a64.decode
20
+++ b/hw/arm/sbsa-ref.c
15
+++ b/target/arm/tcg/a64.decode
21
@@ -XXX,XX +XXX,XX @@ static void create_pcie(SBSAMachineState *sms)
16
@@ -XXX,XX +XXX,XX @@
22
17
%hlm 11:1 20:2
23
for (i = 0; i < GPEX_NUM_IRQS; i++) {
18
24
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
19
&r rn
25
- qdev_get_gpio_in(sms->gic, irq + 1));
20
+&rrr rd rn rm
26
+ qdev_get_gpio_in(sms->gic, irq + i));
21
&ri rd imm
27
gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);
22
&rri_sf rd rn imm sf
23
&rrr_sf rd rn rm sf
24
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
25
26
# Data Processing (2-source)
27
28
+@rrr . .......... rm:5 ...... rn:5 rd:5 &rrr
29
@rrr_sf sf:1 .......... rm:5 ...... rn:5 rd:5 &rrr_sf
30
31
UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
32
@@ -XXX,XX +XXX,XX @@ CRC32C 0 00 11010110 ..... 0101 01 ..... ..... @rrr_h
33
CRC32C 0 00 11010110 ..... 0101 10 ..... ..... @rrr_s
34
CRC32C 1 00 11010110 ..... 0101 11 ..... ..... @rrr_d
35
36
+SUBP 1 00 11010110 ..... 000000 ..... ..... @rrr
37
+SUBPS 1 01 11010110 ..... 000000 ..... ..... @rrr
38
+IRG 1 00 11010110 ..... 000100 ..... ..... @rrr
39
+GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
40
+
41
# Data Processing (1-source)
42
# Logical (shifted reg)
43
# Add/subtract (shifted reg)
44
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/translate-a64.c
47
+++ b/target/arm/tcg/translate-a64.c
48
@@ -XXX,XX +XXX,XX @@ static bool do_crc32(DisasContext *s, arg_rrr_e *a, bool crc32c)
49
TRANS_FEAT(CRC32, aa64_crc32, do_crc32, a, false)
50
TRANS_FEAT(CRC32C, aa64_crc32, do_crc32, a, true)
51
52
+static bool do_subp(DisasContext *s, arg_rrr *a, bool setflag)
53
+{
54
+ TCGv_i64 tcg_n = read_cpu_reg_sp(s, a->rn, true);
55
+ TCGv_i64 tcg_m = read_cpu_reg_sp(s, a->rm, true);
56
+ TCGv_i64 tcg_d = cpu_reg(s, a->rd);
57
+
58
+ tcg_gen_sextract_i64(tcg_n, tcg_n, 0, 56);
59
+ tcg_gen_sextract_i64(tcg_m, tcg_m, 0, 56);
60
+
61
+ if (setflag) {
62
+ gen_sub_CC(true, tcg_d, tcg_n, tcg_m);
63
+ } else {
64
+ tcg_gen_sub_i64(tcg_d, tcg_n, tcg_m);
65
+ }
66
+ return true;
67
+}
68
+
69
+TRANS_FEAT(SUBP, aa64_mte_insn_reg, do_subp, a, false)
70
+TRANS_FEAT(SUBPS, aa64_mte_insn_reg, do_subp, a, true)
71
+
72
+static bool trans_IRG(DisasContext *s, arg_rrr *a)
73
+{
74
+ if (dc_isar_feature(aa64_mte_insn_reg, s)) {
75
+ TCGv_i64 tcg_rd = cpu_reg_sp(s, a->rd);
76
+ TCGv_i64 tcg_rn = cpu_reg_sp(s, a->rn);
77
+
78
+ if (s->ata[0]) {
79
+ gen_helper_irg(tcg_rd, tcg_env, tcg_rn, cpu_reg(s, a->rm));
80
+ } else {
81
+ gen_address_with_allocation_tag0(tcg_rd, tcg_rn);
82
+ }
83
+ return true;
84
+ }
85
+ return false;
86
+}
87
+
88
+static bool trans_GMI(DisasContext *s, arg_rrr *a)
89
+{
90
+ if (dc_isar_feature(aa64_mte_insn_reg, s)) {
91
+ TCGv_i64 t = tcg_temp_new_i64();
92
+
93
+ tcg_gen_extract_i64(t, cpu_reg_sp(s, a->rn), 56, 4);
94
+ tcg_gen_shl_i64(t, tcg_constant_i64(1), t);
95
+ tcg_gen_or_i64(cpu_reg(s, a->rd), cpu_reg(s, a->rm), t);
96
+ return true;
97
+ }
98
+ return false;
99
+}
100
+
101
/* Logical (shifted register)
102
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
103
* +----+-----+-----------+-------+---+------+--------+------+------+
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
28
}
105
}
29
106
107
switch (opcode) {
108
- case 0: /* SUBP(S) */
109
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
110
- goto do_unallocated;
111
- } else {
112
- TCGv_i64 tcg_n, tcg_m, tcg_d;
113
-
114
- tcg_n = read_cpu_reg_sp(s, rn, true);
115
- tcg_m = read_cpu_reg_sp(s, rm, true);
116
- tcg_gen_sextract_i64(tcg_n, tcg_n, 0, 56);
117
- tcg_gen_sextract_i64(tcg_m, tcg_m, 0, 56);
118
- tcg_d = cpu_reg(s, rd);
119
-
120
- if (setflag) {
121
- gen_sub_CC(true, tcg_d, tcg_n, tcg_m);
122
- } else {
123
- tcg_gen_sub_i64(tcg_d, tcg_n, tcg_m);
124
- }
125
- }
126
- break;
127
- case 4: /* IRG */
128
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
129
- goto do_unallocated;
130
- }
131
- if (s->ata[0]) {
132
- gen_helper_irg(cpu_reg_sp(s, rd), tcg_env,
133
- cpu_reg_sp(s, rn), cpu_reg(s, rm));
134
- } else {
135
- gen_address_with_allocation_tag0(cpu_reg_sp(s, rd),
136
- cpu_reg_sp(s, rn));
137
- }
138
- break;
139
- case 5: /* GMI */
140
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
141
- goto do_unallocated;
142
- } else {
143
- TCGv_i64 t = tcg_temp_new_i64();
144
-
145
- tcg_gen_extract_i64(t, cpu_reg_sp(s, rn), 56, 4);
146
- tcg_gen_shl_i64(t, tcg_constant_i64(1), t);
147
- tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t);
148
- }
149
- break;
150
case 12: /* PACGA */
151
if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
152
goto do_unallocated;
153
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
154
break;
155
default:
156
do_unallocated:
157
+ case 0: /* SUBP(S) */
158
case 2: /* UDIV */
159
case 3: /* SDIV */
160
+ case 4: /* IRG */
161
+ case 5: /* GMI */
162
case 8: /* LSLV */
163
case 9: /* LSRV */
164
case 10: /* ASRV */
30
--
165
--
31
2.20.1
166
2.34.1
32
167
33
168
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Let clock_set() return a boolean value indicating whether the clock
3
Remove disas_data_proc_2src, as this was the last insn
4
has been updated or not.
4
decoded by that function.
5
5
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200806123858.30058-3-f4bug@amsat.org
8
Message-id: 20241211163036.2297116-7-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
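A minimal sketch of how a caller can use the new return value to skip needless propagation, mirroring the clock_update() change elsewhere in this queue (the wrapper function here is illustrative):

#include "hw/clock.h"

/* Sketch only: propagate to child clocks only when the period really changed. */
static void my_clock_update(Clock *clk, uint64_t period)
{
    if (clock_set(clk, period)) {   /* true only if the cached period changed */
        clock_propagate(clk);
    }
}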
10
---
10
---
11
include/hw/clock.h | 12 +++++++-----
11
target/arm/tcg/a64.decode | 2 ++
12
hw/core/clock.c | 7 ++++++-
12
target/arm/tcg/translate-a64.c | 65 ++++++----------------------------
13
2 files changed, 13 insertions(+), 6 deletions(-)
13
2 files changed, 13 insertions(+), 54 deletions(-)
14
14
15
diff --git a/include/hw/clock.h b/include/hw/clock.h
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/clock.h
17
--- a/target/arm/tcg/a64.decode
18
+++ b/include/hw/clock.h
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ void clock_set_source(Clock *clk, Clock *src);
19
@@ -XXX,XX +XXX,XX @@ SUBPS 1 01 11010110 ..... 000000 ..... ..... @rrr
20
* @value: the clock's value, 0 means unclocked
20
IRG 1 00 11010110 ..... 000100 ..... ..... @rrr
21
*
21
GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
22
* Set the local cached period value of @clk to @value.
22
23
+ *
23
+PACGA 1 00 11010110 ..... 001100 ..... ..... @rrr
24
+ * @return: true if the clock is changed.
24
+
25
*/
25
# Data Processing (1-source)
26
-void clock_set(Clock *clk, uint64_t value);
26
# Logical (shifted reg)
27
+bool clock_set(Clock *clk, uint64_t value);
27
# Add/subtract (shifted reg)
28
28
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
-static inline void clock_set_hz(Clock *clk, unsigned hz)
29
index XXXXXXX..XXXXXXX 100644
30
+static inline bool clock_set_hz(Clock *clk, unsigned hz)
30
--- a/target/arm/tcg/translate-a64.c
31
{
31
+++ b/target/arm/tcg/translate-a64.c
32
- clock_set(clk, CLOCK_PERIOD_FROM_HZ(hz));
32
@@ -XXX,XX +XXX,XX @@ static bool trans_GMI(DisasContext *s, arg_rrr *a)
33
+ return clock_set(clk, CLOCK_PERIOD_FROM_HZ(hz));
33
return false;
34
}
34
}
35
35
36
-static inline void clock_set_ns(Clock *clk, unsigned ns)
36
+static bool trans_PACGA(DisasContext *s, arg_rrr *a)
37
+static inline bool clock_set_ns(Clock *clk, unsigned ns)
37
+{
38
{
38
+ if (dc_isar_feature(aa64_pauth, s)) {
39
- clock_set(clk, CLOCK_PERIOD_FROM_NS(ns));
39
+ gen_helper_pacga(cpu_reg(s, a->rd), tcg_env,
40
+ return clock_set(clk, CLOCK_PERIOD_FROM_NS(ns));
40
+ cpu_reg(s, a->rn), cpu_reg_sp(s, a->rm));
41
+ return true;
42
+ }
43
+ return false;
44
+}
45
+
46
/* Logical (shifted register)
47
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
48
* +----+-----+-----------+-------+---+------+--------+------+------+
49
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
41
}
50
}
42
51
43
/**
52
44
diff --git a/hw/core/clock.c b/hw/core/clock.c
53
-/* Data-processing (2 source)
45
index XXXXXXX..XXXXXXX 100644
54
- * 31 30 29 28 21 20 16 15 10 9 5 4 0
46
--- a/hw/core/clock.c
55
- * +----+---+---+-----------------+------+--------+------+------+
47
+++ b/hw/core/clock.c
56
- * | sf | 0 | S | 1 1 0 1 0 1 1 0 | Rm | opcode | Rn | Rd |
48
@@ -XXX,XX +XXX,XX @@ void clock_clear_callback(Clock *clk)
57
- * +----+---+---+-----------------+------+--------+------+------+
49
clock_set_callback(clk, NULL, NULL);
58
- */
50
}
59
-static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
51
60
-{
52
-void clock_set(Clock *clk, uint64_t period)
61
- unsigned int sf, rm, opcode, rn, rd, setflag;
53
+bool clock_set(Clock *clk, uint64_t period)
62
- sf = extract32(insn, 31, 1);
54
{
63
- setflag = extract32(insn, 29, 1);
55
+ if (clk->period == period) {
64
- rm = extract32(insn, 16, 5);
56
+ return false;
65
- opcode = extract32(insn, 10, 6);
57
+ }
66
- rn = extract32(insn, 5, 5);
58
trace_clock_set(CLOCK_PATH(clk), CLOCK_PERIOD_TO_NS(clk->period),
67
- rd = extract32(insn, 0, 5);
59
CLOCK_PERIOD_TO_NS(period));
68
-
60
clk->period = period;
69
- if (setflag && opcode != 0) {
61
+
70
- unallocated_encoding(s);
62
+ return true;
71
- return;
63
}
72
- }
64
73
-
65
static void clock_propagate_period(Clock *clk, bool call_callbacks)
74
- switch (opcode) {
75
- case 12: /* PACGA */
76
- if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
77
- goto do_unallocated;
78
- }
79
- gen_helper_pacga(cpu_reg(s, rd), tcg_env,
80
- cpu_reg(s, rn), cpu_reg_sp(s, rm));
81
- break;
82
- default:
83
- do_unallocated:
84
- case 0: /* SUBP(S) */
85
- case 2: /* UDIV */
86
- case 3: /* SDIV */
87
- case 4: /* IRG */
88
- case 5: /* GMI */
89
- case 8: /* LSLV */
90
- case 9: /* LSRV */
91
- case 10: /* ASRV */
92
- case 11: /* RORV */
93
- case 16:
94
- case 17:
95
- case 18:
96
- case 19:
97
- case 20:
98
- case 21:
99
- case 22:
100
- case 23: /* CRC32 */
101
- unallocated_encoding(s);
102
- break;
103
- }
104
-}
105
-
106
/*
107
* Data processing - register
108
* 31 30 29 28 25 21 20 16 10 0
109
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
110
if (op0) { /* (1 source) */
111
disas_data_proc_1src(s, insn);
112
} else { /* (2 source) */
113
- disas_data_proc_2src(s, insn);
114
+ goto do_unallocated;
115
}
116
break;
117
case 0x8 ... 0xf: /* (3 source) */
66
--
118
--
67
2.20.1
119
2.34.1
68
120
69
121
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-8-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 11 +++
9
target/arm/tcg/translate-a64.c | 137 +++++++++++++++------------------
10
2 files changed, 72 insertions(+), 76 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
&r rn
18
&rrr rd rn rm
19
&ri rd imm
20
+&rr rd rn
21
+&rr_sf rd rn sf
22
&rri_sf rd rn imm sf
23
&rrr_sf rd rn rm sf
24
&i imm
25
@@ -XXX,XX +XXX,XX @@ GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
26
PACGA 1 00 11010110 ..... 001100 ..... ..... @rrr
27
28
# Data Processing (1-source)
29
+
30
+@rr . .......... ..... ...... rn:5 rd:5 &rr
31
+@rr_sf sf:1 .......... ..... ...... rn:5 rd:5 &rr_sf
32
+
33
+RBIT . 10 11010110 00000 000000 ..... ..... @rr_sf
34
+REV16 . 10 11010110 00000 000001 ..... ..... @rr_sf
35
+REV32 . 10 11010110 00000 000010 ..... ..... @rr_sf
36
+REV64 1 10 11010110 00000 000011 ..... ..... @rr
37
+
38
# Logical (shifted reg)
39
# Add/subtract (shifted reg)
40
# Add/subtract (extended reg)
41
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/tcg/translate-a64.c
44
+++ b/target/arm/tcg/translate-a64.c
45
@@ -XXX,XX +XXX,XX @@ static bool trans_PACGA(DisasContext *s, arg_rrr *a)
46
return false;
47
}
48
49
+typedef void ArithOneOp(TCGv_i64, TCGv_i64);
50
+
51
+static bool gen_rr(DisasContext *s, int rd, int rn, ArithOneOp fn)
52
+{
53
+ fn(cpu_reg(s, rd), cpu_reg(s, rn));
54
+ return true;
55
+}
56
+
57
+static void gen_rbit32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
58
+{
59
+ TCGv_i32 t32 = tcg_temp_new_i32();
60
+
61
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
62
+ gen_helper_rbit(t32, t32);
63
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
64
+}
65
+
66
+static void gen_rev16_xx(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 mask)
67
+{
68
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
69
+
70
+ tcg_gen_shri_i64(tcg_tmp, tcg_rn, 8);
71
+ tcg_gen_and_i64(tcg_rd, tcg_rn, mask);
72
+ tcg_gen_and_i64(tcg_tmp, tcg_tmp, mask);
73
+ tcg_gen_shli_i64(tcg_rd, tcg_rd, 8);
74
+ tcg_gen_or_i64(tcg_rd, tcg_rd, tcg_tmp);
75
+}
76
+
77
+static void gen_rev16_32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
78
+{
79
+ gen_rev16_xx(tcg_rd, tcg_rn, tcg_constant_i64(0x00ff00ff));
80
+}
81
+
82
+static void gen_rev16_64(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
83
+{
84
+ gen_rev16_xx(tcg_rd, tcg_rn, tcg_constant_i64(0x00ff00ff00ff00ffull));
85
+}
86
+
87
+static void gen_rev_32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
88
+{
89
+ tcg_gen_bswap32_i64(tcg_rd, tcg_rn, TCG_BSWAP_OZ);
90
+}
91
+
92
+static void gen_rev32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
93
+{
94
+ tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
95
+ tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);
96
+}
97
+
98
+TRANS(RBIT, gen_rr, a->rd, a->rn, a->sf ? gen_helper_rbit64 : gen_rbit32)
99
+TRANS(REV16, gen_rr, a->rd, a->rn, a->sf ? gen_rev16_64 : gen_rev16_32)
100
+TRANS(REV32, gen_rr, a->rd, a->rn, a->sf ? gen_rev32 : gen_rev_32)
101
+TRANS(REV64, gen_rr, a->rd, a->rn, tcg_gen_bswap64_i64)
102
+
103
/* Logical (shifted register)
104
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
105
* +----+-----+-----------+-------+---+------+--------+------+------+
106
@@ -XXX,XX +XXX,XX @@ static void handle_cls(DisasContext *s, unsigned int sf,
107
}
108
}
109
110
-static void handle_rbit(DisasContext *s, unsigned int sf,
111
- unsigned int rn, unsigned int rd)
112
-{
113
- TCGv_i64 tcg_rd, tcg_rn;
114
- tcg_rd = cpu_reg(s, rd);
115
- tcg_rn = cpu_reg(s, rn);
116
-
117
- if (sf) {
118
- gen_helper_rbit64(tcg_rd, tcg_rn);
119
- } else {
120
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
121
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
122
- gen_helper_rbit(tcg_tmp32, tcg_tmp32);
123
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
124
- }
125
-}
126
-
127
-/* REV with sf==1, opcode==3 ("REV64") */
128
-static void handle_rev64(DisasContext *s, unsigned int sf,
129
- unsigned int rn, unsigned int rd)
130
-{
131
- if (!sf) {
132
- unallocated_encoding(s);
133
- return;
134
- }
135
- tcg_gen_bswap64_i64(cpu_reg(s, rd), cpu_reg(s, rn));
136
-}
137
-
138
-/* REV with sf==0, opcode==2
139
- * REV32 (sf==1, opcode==2)
140
- */
141
-static void handle_rev32(DisasContext *s, unsigned int sf,
142
- unsigned int rn, unsigned int rd)
143
-{
144
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
145
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
146
-
147
- if (sf) {
148
- tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
149
- tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);
150
- } else {
151
- tcg_gen_bswap32_i64(tcg_rd, tcg_rn, TCG_BSWAP_OZ);
152
- }
153
-}
154
-
155
-/* REV16 (opcode==1) */
156
-static void handle_rev16(DisasContext *s, unsigned int sf,
157
- unsigned int rn, unsigned int rd)
158
-{
159
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
160
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
161
- TCGv_i64 tcg_rn = read_cpu_reg(s, rn, sf);
162
- TCGv_i64 mask = tcg_constant_i64(sf ? 0x00ff00ff00ff00ffull : 0x00ff00ff);
163
-
164
- tcg_gen_shri_i64(tcg_tmp, tcg_rn, 8);
165
- tcg_gen_and_i64(tcg_rd, tcg_rn, mask);
166
- tcg_gen_and_i64(tcg_tmp, tcg_tmp, mask);
167
- tcg_gen_shli_i64(tcg_rd, tcg_rd, 8);
168
- tcg_gen_or_i64(tcg_rd, tcg_rd, tcg_tmp);
169
-}
170
-
171
/* Data-processing (1 source)
172
* 31 30 29 28 21 20 16 15 10 9 5 4 0
173
* +----+---+---+-----------------+---------+--------+------+------+
174
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
175
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
176
177
switch (MAP(sf, opcode2, opcode)) {
178
- case MAP(0, 0x00, 0x00): /* RBIT */
179
- case MAP(1, 0x00, 0x00):
180
- handle_rbit(s, sf, rn, rd);
181
- break;
182
- case MAP(0, 0x00, 0x01): /* REV16 */
183
- case MAP(1, 0x00, 0x01):
184
- handle_rev16(s, sf, rn, rd);
185
- break;
186
- case MAP(0, 0x00, 0x02): /* REV/REV32 */
187
- case MAP(1, 0x00, 0x02):
188
- handle_rev32(s, sf, rn, rd);
189
- break;
190
- case MAP(1, 0x00, 0x03): /* REV64 */
191
- handle_rev64(s, sf, rn, rd);
192
- break;
193
case MAP(0, 0x00, 0x04): /* CLZ */
194
case MAP(1, 0x00, 0x04):
195
handle_clz(s, sf, rn, rd);
196
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
197
break;
198
default:
199
do_unallocated:
200
+ case MAP(0, 0x00, 0x00): /* RBIT */
201
+ case MAP(1, 0x00, 0x00):
202
+ case MAP(0, 0x00, 0x01): /* REV16 */
203
+ case MAP(1, 0x00, 0x01):
204
+ case MAP(0, 0x00, 0x02): /* REV/REV32 */
205
+ case MAP(1, 0x00, 0x02):
206
+ case MAP(1, 0x00, 0x03): /* REV64 */
207
unallocated_encoding(s);
208
break;
209
}
210
--
211
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The gvec operation was added after the initial implementation
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
of the SEL instruction and was missed in the conversion.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20241211163036.2297116-9-richard.henderson@linaro.org
8
Message-id: 20200815013145.539409-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
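For reference, the per-element operation that tcg_gen_gvec_bitsel() performs here is a plain bit-select with the predicate as the mask, matching the gen_sel_pg_i64() code the patch removes. A scalar sketch (illustrative, not part of the patch):

#include <stdint.h>

/* Sketch only: SEL as a bit-select, d = (n & g) | (m & ~g),
 * with g acting as the per-bit predicate mask. */
static uint64_t bitsel64(uint64_t g, uint64_t n, uint64_t m)
{
    return (n & g) | (m & ~g);
}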
10
---
7
---
11
target/arm/translate-sve.c | 31 ++++++++-----------------------
8
target/arm/tcg/a64.decode | 3 ++
12
1 file changed, 8 insertions(+), 23 deletions(-)
9
target/arm/tcg/translate-a64.c | 72 ++++++++++++++--------------------
10
2 files changed, 33 insertions(+), 42 deletions(-)
13
11
14
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-sve.c
14
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/translate-sve.c
15
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a)
16
@@ -XXX,XX +XXX,XX @@ REV16 . 10 11010110 00000 000001 ..... ..... @rr_sf
19
return do_pppp_flags(s, a, &op);
17
REV32 . 10 11010110 00000 000010 ..... ..... @rr_sf
18
REV64 1 10 11010110 00000 000011 ..... ..... @rr
19
20
+CLZ . 10 11010110 00000 000100 ..... ..... @rr_sf
21
+CLS . 10 11010110 00000 000101 ..... ..... @rr_sf
22
+
23
# Logical (shifted reg)
24
# Add/subtract (shifted reg)
25
# Add/subtract (extended reg)
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ TRANS(REV16, gen_rr, a->rd, a->rn, a->sf ? gen_rev16_64 : gen_rev16_32)
31
TRANS(REV32, gen_rr, a->rd, a->rn, a->sf ? gen_rev32 : gen_rev_32)
32
TRANS(REV64, gen_rr, a->rd, a->rn, tcg_gen_bswap64_i64)
33
34
+static void gen_clz32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
35
+{
36
+ TCGv_i32 t32 = tcg_temp_new_i32();
37
+
38
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
39
+ tcg_gen_clzi_i32(t32, t32, 32);
40
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
41
+}
42
+
43
+static void gen_clz64(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
44
+{
45
+ tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
46
+}
47
+
48
+static void gen_cls32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
49
+{
50
+ TCGv_i32 t32 = tcg_temp_new_i32();
51
+
52
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
53
+ tcg_gen_clrsb_i32(t32, t32);
54
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
55
+}
56
+
57
+TRANS(CLZ, gen_rr, a->rd, a->rn, a->sf ? gen_clz64 : gen_clz32)
58
+TRANS(CLS, gen_rr, a->rd, a->rn, a->sf ? tcg_gen_clrsb_i64 : gen_cls32)
59
+
60
/* Logical (shifted register)
61
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
62
* +----+-----+-----------+-------+---+------+--------+------+------+
63
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
64
}
20
}
65
}
21
66
22
-static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
67
-static void handle_clz(DisasContext *s, unsigned int sf,
68
- unsigned int rn, unsigned int rd)
23
-{
69
-{
24
- tcg_gen_and_i64(pn, pn, pg);
70
- TCGv_i64 tcg_rd, tcg_rn;
25
- tcg_gen_andc_i64(pm, pm, pg);
71
- tcg_rd = cpu_reg(s, rd);
26
- tcg_gen_or_i64(pd, pn, pm);
72
- tcg_rn = cpu_reg(s, rn);
73
-
74
- if (sf) {
75
- tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
76
- } else {
77
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
78
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
79
- tcg_gen_clzi_i32(tcg_tmp32, tcg_tmp32, 32);
80
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
81
- }
27
-}
82
-}
28
-
83
-
29
-static void gen_sel_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
84
-static void handle_cls(DisasContext *s, unsigned int sf,
30
- TCGv_vec pm, TCGv_vec pg)
85
- unsigned int rn, unsigned int rd)
31
-{
86
-{
32
- tcg_gen_and_vec(vece, pn, pn, pg);
87
- TCGv_i64 tcg_rd, tcg_rn;
33
- tcg_gen_andc_vec(vece, pm, pm, pg);
88
- tcg_rd = cpu_reg(s, rd);
34
- tcg_gen_or_vec(vece, pd, pn, pm);
89
- tcg_rn = cpu_reg(s, rn);
90
-
91
- if (sf) {
92
- tcg_gen_clrsb_i64(tcg_rd, tcg_rn);
93
- } else {
94
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
95
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
96
- tcg_gen_clrsb_i32(tcg_tmp32, tcg_tmp32);
97
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
98
- }
35
-}
99
-}
36
-
100
-
37
static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a)
101
/* Data-processing (1 source)
38
{
102
* 31 30 29 28 21 20 16 15 10 9 5 4 0
39
- static const GVecGen4 op = {
103
* +----+---+---+-----------------+---------+--------+------+------+
40
- .fni8 = gen_sel_pg_i64,
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
41
- .fniv = gen_sel_pg_vec,
105
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
42
- .fno = gen_helper_sve_sel_pppp,
106
43
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
107
switch (MAP(sf, opcode2, opcode)) {
44
- };
108
- case MAP(0, 0x00, 0x04): /* CLZ */
45
-
109
- case MAP(1, 0x00, 0x04):
46
if (a->s) {
110
- handle_clz(s, sf, rn, rd);
47
return false;
111
- break;
112
- case MAP(0, 0x00, 0x05): /* CLS */
113
- case MAP(1, 0x00, 0x05):
114
- handle_cls(s, sf, rn, rd);
115
- break;
116
case MAP(1, 0x01, 0x00): /* PACIA */
117
if (s->pauth_active) {
118
tcg_rd = cpu_reg(s, rd);
119
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
120
case MAP(0, 0x00, 0x02): /* REV/REV32 */
121
case MAP(1, 0x00, 0x02):
122
case MAP(1, 0x00, 0x03): /* REV64 */
123
+ case MAP(0, 0x00, 0x04): /* CLZ */
124
+ case MAP(1, 0x00, 0x04):
125
+ case MAP(0, 0x00, 0x05): /* CLS */
126
+ case MAP(1, 0x00, 0x05):
127
unallocated_encoding(s);
128
break;
48
}
129
}
49
- return do_pppp_flags(s, a, &op);
50
+ if (sve_access_check(s)) {
51
+ unsigned psz = pred_gvec_reg_size(s);
52
+ tcg_gen_gvec_bitsel(MO_8, pred_full_reg_offset(s, a->rd),
53
+ pred_full_reg_offset(s, a->pg),
54
+ pred_full_reg_offset(s, a->rn),
55
+ pred_full_reg_offset(s, a->rm), psz, psz);
56
+ }
57
+ return true;
58
}
59
60
static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
61
--
130
--
62
2.20.1
131
2.34.1
63
132
64
133
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes PACIA, PACIZA, PACIB, PACIZB, PACDA, PACDZA, PACDB,
4
PACDZB, AUTIA, AUTIZA, AUTIB, AUTIZB, AUTDA, AUTDZA, AUTDB, AUTDZB.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 13 +++
12
target/arm/tcg/translate-a64.c | 173 +++++++++------------------------
13
2 files changed, 58 insertions(+), 128 deletions(-)
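
A sketch of how the single @pacaut pattern folds the Z forms in, mirroring
the z/Rn check in the new gen_pacaut(). This is standalone C, not QEMU
code; the 0xdac1 prefix and the bit positions are read off the decode
lines below, and extract32() here is a local stand-in for QEMU's helper:

    #include <stdio.h>
    #include <stdint.h>

    /* Local stand-in for QEMU's extract32(). */
    static unsigned extract32(uint32_t v, int start, int len)
    {
        return (v >> start) & ((1u << len) - 1);
    }

    /* Mirrors the z/rn check in gen_pacaut(). */
    static void decode_pacaut(uint32_t insn)
    {
        unsigned rd = extract32(insn, 0, 5);
        unsigned rn = extract32(insn, 5, 5);
        unsigned z  = extract32(insn, 13, 1);

        if (z && rn != 31) {
            printf("%08x: unallocated (Z form requires Rn == 31)\n",
                   (unsigned)insn);
            return;
        }
        printf("%08x: rd=%u, modifier=%s\n",
               (unsigned)insn, rd, z ? "zero" : "Xn/SP");
    }

    int main(void)
    {
        /* Upper half 0xdac1 and the z bit at bit 13 are read off the
         * "PACIA 1 10 11010110 00001 00.000 ..... ....." decode line. */
        uint32_t base = 0xdac10000u;

        decode_pacaut(base | (0u << 13) | (1u << 5) | 0u);  /* PACIA x0, x1 */
        decode_pacaut(base | (1u << 13) | (31u << 5) | 0u); /* PACIZA x0 */
        return 0;
    }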
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ REV64 1 10 11010110 00000 000011 ..... ..... @rr
20
CLZ . 10 11010110 00000 000100 ..... ..... @rr_sf
21
CLS . 10 11010110 00000 000101 ..... ..... @rr_sf
22
23
+&pacaut rd rn z
24
+@pacaut . .. ........ ..... .. z:1 ... rn:5 rd:5 &pacaut
25
+
26
+PACIA 1 10 11010110 00001 00.000 ..... ..... @pacaut
27
+PACIB 1 10 11010110 00001 00.001 ..... ..... @pacaut
28
+PACDA 1 10 11010110 00001 00.010 ..... ..... @pacaut
29
+PACDB 1 10 11010110 00001 00.011 ..... ..... @pacaut
30
+
31
+AUTIA 1 10 11010110 00001 00.100 ..... ..... @pacaut
32
+AUTIB 1 10 11010110 00001 00.101 ..... ..... @pacaut
33
+AUTDA 1 10 11010110 00001 00.110 ..... ..... @pacaut
34
+AUTDB 1 10 11010110 00001 00.111 ..... ..... @pacaut
35
+
36
# Logical (shifted reg)
37
# Add/subtract (shifted reg)
38
# Add/subtract (extended reg)
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ static void gen_cls32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
44
TRANS(CLZ, gen_rr, a->rd, a->rn, a->sf ? gen_clz64 : gen_clz32)
45
TRANS(CLS, gen_rr, a->rd, a->rn, a->sf ? tcg_gen_clrsb_i64 : gen_cls32)
46
47
+static bool gen_pacaut(DisasContext *s, arg_pacaut *a, NeonGenTwo64OpEnvFn fn)
48
+{
49
+ TCGv_i64 tcg_rd, tcg_rn;
50
+
51
+ if (a->z) {
52
+ if (a->rn != 31) {
53
+ return false;
54
+ }
55
+ tcg_rn = tcg_constant_i64(0);
56
+ } else {
57
+ tcg_rn = cpu_reg_sp(s, a->rn);
58
+ }
59
+ if (s->pauth_active) {
60
+ tcg_rd = cpu_reg(s, a->rd);
61
+ fn(tcg_rd, tcg_env, tcg_rd, tcg_rn);
62
+ }
63
+ return true;
64
+}
65
+
66
+TRANS_FEAT(PACIA, aa64_pauth, gen_pacaut, a, gen_helper_pacia)
67
+TRANS_FEAT(PACIB, aa64_pauth, gen_pacaut, a, gen_helper_pacib)
68
+TRANS_FEAT(PACDA, aa64_pauth, gen_pacaut, a, gen_helper_pacda)
69
+TRANS_FEAT(PACDB, aa64_pauth, gen_pacaut, a, gen_helper_pacdb)
70
+
71
+TRANS_FEAT(AUTIA, aa64_pauth, gen_pacaut, a, gen_helper_autia)
72
+TRANS_FEAT(AUTIB, aa64_pauth, gen_pacaut, a, gen_helper_autib)
73
+TRANS_FEAT(AUTDA, aa64_pauth, gen_pacaut, a, gen_helper_autda)
74
+TRANS_FEAT(AUTDB, aa64_pauth, gen_pacaut, a, gen_helper_autdb)
75
+
76
/* Logical (shifted register)
77
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
78
* +----+-----+-----------+-------+---+------+--------+------+------+
79
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
80
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
81
82
switch (MAP(sf, opcode2, opcode)) {
83
- case MAP(1, 0x01, 0x00): /* PACIA */
84
- if (s->pauth_active) {
85
- tcg_rd = cpu_reg(s, rd);
86
- gen_helper_pacia(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
87
- } else if (!dc_isar_feature(aa64_pauth, s)) {
88
- goto do_unallocated;
89
- }
90
- break;
91
- case MAP(1, 0x01, 0x01): /* PACIB */
92
- if (s->pauth_active) {
93
- tcg_rd = cpu_reg(s, rd);
94
- gen_helper_pacib(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
95
- } else if (!dc_isar_feature(aa64_pauth, s)) {
96
- goto do_unallocated;
97
- }
98
- break;
99
- case MAP(1, 0x01, 0x02): /* PACDA */
100
- if (s->pauth_active) {
101
- tcg_rd = cpu_reg(s, rd);
102
- gen_helper_pacda(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
103
- } else if (!dc_isar_feature(aa64_pauth, s)) {
104
- goto do_unallocated;
105
- }
106
- break;
107
- case MAP(1, 0x01, 0x03): /* PACDB */
108
- if (s->pauth_active) {
109
- tcg_rd = cpu_reg(s, rd);
110
- gen_helper_pacdb(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
111
- } else if (!dc_isar_feature(aa64_pauth, s)) {
112
- goto do_unallocated;
113
- }
114
- break;
115
- case MAP(1, 0x01, 0x04): /* AUTIA */
116
- if (s->pauth_active) {
117
- tcg_rd = cpu_reg(s, rd);
118
- gen_helper_autia(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
119
- } else if (!dc_isar_feature(aa64_pauth, s)) {
120
- goto do_unallocated;
121
- }
122
- break;
123
- case MAP(1, 0x01, 0x05): /* AUTIB */
124
- if (s->pauth_active) {
125
- tcg_rd = cpu_reg(s, rd);
126
- gen_helper_autib(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
127
- } else if (!dc_isar_feature(aa64_pauth, s)) {
128
- goto do_unallocated;
129
- }
130
- break;
131
- case MAP(1, 0x01, 0x06): /* AUTDA */
132
- if (s->pauth_active) {
133
- tcg_rd = cpu_reg(s, rd);
134
- gen_helper_autda(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
135
- } else if (!dc_isar_feature(aa64_pauth, s)) {
136
- goto do_unallocated;
137
- }
138
- break;
139
- case MAP(1, 0x01, 0x07): /* AUTDB */
140
- if (s->pauth_active) {
141
- tcg_rd = cpu_reg(s, rd);
142
- gen_helper_autdb(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
143
- } else if (!dc_isar_feature(aa64_pauth, s)) {
144
- goto do_unallocated;
145
- }
146
- break;
147
- case MAP(1, 0x01, 0x08): /* PACIZA */
148
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
149
- goto do_unallocated;
150
- } else if (s->pauth_active) {
151
- tcg_rd = cpu_reg(s, rd);
152
- gen_helper_pacia(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
153
- }
154
- break;
155
- case MAP(1, 0x01, 0x09): /* PACIZB */
156
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
157
- goto do_unallocated;
158
- } else if (s->pauth_active) {
159
- tcg_rd = cpu_reg(s, rd);
160
- gen_helper_pacib(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
161
- }
162
- break;
163
- case MAP(1, 0x01, 0x0a): /* PACDZA */
164
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
165
- goto do_unallocated;
166
- } else if (s->pauth_active) {
167
- tcg_rd = cpu_reg(s, rd);
168
- gen_helper_pacda(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
169
- }
170
- break;
171
- case MAP(1, 0x01, 0x0b): /* PACDZB */
172
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
173
- goto do_unallocated;
174
- } else if (s->pauth_active) {
175
- tcg_rd = cpu_reg(s, rd);
176
- gen_helper_pacdb(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
177
- }
178
- break;
179
- case MAP(1, 0x01, 0x0c): /* AUTIZA */
180
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
181
- goto do_unallocated;
182
- } else if (s->pauth_active) {
183
- tcg_rd = cpu_reg(s, rd);
184
- gen_helper_autia(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
185
- }
186
- break;
187
- case MAP(1, 0x01, 0x0d): /* AUTIZB */
188
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
189
- goto do_unallocated;
190
- } else if (s->pauth_active) {
191
- tcg_rd = cpu_reg(s, rd);
192
- gen_helper_autib(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
193
- }
194
- break;
195
- case MAP(1, 0x01, 0x0e): /* AUTDZA */
196
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
197
- goto do_unallocated;
198
- } else if (s->pauth_active) {
199
- tcg_rd = cpu_reg(s, rd);
200
- gen_helper_autda(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
201
- }
202
- break;
203
- case MAP(1, 0x01, 0x0f): /* AUTDZB */
204
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
205
- goto do_unallocated;
206
- } else if (s->pauth_active) {
207
- tcg_rd = cpu_reg(s, rd);
208
- gen_helper_autdb(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
209
- }
210
- break;
211
case MAP(1, 0x01, 0x10): /* XPACI */
212
if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
213
goto do_unallocated;
214
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
215
case MAP(1, 0x00, 0x04):
216
case MAP(0, 0x00, 0x05): /* CLS */
217
case MAP(1, 0x00, 0x05):
218
+ case MAP(1, 0x01, 0x00): /* PACIA */
219
+ case MAP(1, 0x01, 0x01): /* PACIB */
220
+ case MAP(1, 0x01, 0x02): /* PACDA */
221
+ case MAP(1, 0x01, 0x03): /* PACDB */
222
+ case MAP(1, 0x01, 0x04): /* AUTIA */
223
+ case MAP(1, 0x01, 0x05): /* AUTIB */
224
+ case MAP(1, 0x01, 0x06): /* AUTDA */
225
+ case MAP(1, 0x01, 0x07): /* AUTDB */
226
+ case MAP(1, 0x01, 0x08): /* PACIZA */
227
+ case MAP(1, 0x01, 0x09): /* PACIZB */
228
+ case MAP(1, 0x01, 0x0a): /* PACDZA */
229
+ case MAP(1, 0x01, 0x0b): /* PACDZB */
230
+ case MAP(1, 0x01, 0x0c): /* AUTIZA */
231
+ case MAP(1, 0x01, 0x0d): /* AUTIZB */
232
+ case MAP(1, 0x01, 0x0e): /* AUTDZA */
233
+ case MAP(1, 0x01, 0x0f): /* AUTDZB */
234
unallocated_encoding(s);
235
break;
236
}
237
--
238
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add left-shift to match the existing right-shift.
3
Remove disas_data_proc_1src, as these were the last insns
4
decoded by that function.
4
5
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241211163036.2297116-11-richard.henderson@linaro.org
7
Message-id: 20200815013145.539409-2-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
include/qemu/int128.h | 16 ++++++++++++++++
11
target/arm/tcg/a64.decode | 3 ++
11
1 file changed, 16 insertions(+)
12
target/arm/tcg/translate-a64.c | 99 +++++-----------------------------
13
2 files changed, 16 insertions(+), 86 deletions(-)
12
14
13
diff --git a/include/qemu/int128.h b/include/qemu/int128.h
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/include/qemu/int128.h
17
--- a/target/arm/tcg/a64.decode
16
+++ b/include/qemu/int128.h
18
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_rshift(Int128 a, int n)
19
@@ -XXX,XX +XXX,XX @@ AUTIB 1 10 11010110 00001 00.101 ..... ..... @pacaut
18
return a >> n;
20
AUTDA 1 10 11010110 00001 00.110 ..... ..... @pacaut
19
}
21
AUTDB 1 10 11010110 00001 00.111 ..... ..... @pacaut
20
22
21
+static inline Int128 int128_lshift(Int128 a, int n)
23
+XPACI 1 10 11010110 00001 010000 11111 rd:5
24
+XPACD 1 10 11010110 00001 010001 11111 rd:5
25
+
26
# Logical (shifted reg)
27
# Add/subtract (shifted reg)
28
# Add/subtract (extended reg)
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(AUTIB, aa64_pauth, gen_pacaut, a, gen_helper_autib)
34
TRANS_FEAT(AUTDA, aa64_pauth, gen_pacaut, a, gen_helper_autda)
35
TRANS_FEAT(AUTDB, aa64_pauth, gen_pacaut, a, gen_helper_autdb)
36
37
+static bool do_xpac(DisasContext *s, int rd, NeonGenOne64OpEnvFn *fn)
22
+{
38
+{
23
+ return a << n;
39
+ if (s->pauth_active) {
40
+ TCGv_i64 tcg_rd = cpu_reg(s, rd);
41
+ fn(tcg_rd, tcg_env, tcg_rd);
42
+ }
43
+ return true;
24
+}
44
+}
25
+
45
+
26
static inline Int128 int128_add(Int128 a, Int128 b)
46
+TRANS_FEAT(XPACI, aa64_pauth, do_xpac, a->rd, gen_helper_xpaci)
27
{
47
+TRANS_FEAT(XPACD, aa64_pauth, do_xpac, a->rd, gen_helper_xpacd)
28
return a + b;
48
+
29
@@ -XXX,XX +XXX,XX @@ static inline Int128 int128_rshift(Int128 a, int n)
49
/* Logical (shifted register)
50
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
51
* +----+-----+-----------+-------+---+------+--------+------+------+
52
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
30
}
53
}
31
}
54
}
32
55
33
+static inline Int128 int128_lshift(Int128 a, int n)
56
-/* Data-processing (1 source)
34
+{
57
- * 31 30 29 28 21 20 16 15 10 9 5 4 0
35
+ uint64_t l = a.lo << (n & 63);
58
- * +----+---+---+-----------------+---------+--------+------+------+
36
+ if (n >= 64) {
59
- * | sf | 1 | S | 1 1 0 1 0 1 1 0 | opcode2 | opcode | Rn | Rd |
37
+ return int128_make128(0, l);
60
- * +----+---+---+-----------------+---------+--------+------+------+
38
+ } else if (n > 0) {
61
- */
39
+ return int128_make128(l, (a.hi << n) | (a.lo >> (64 - n)));
62
-static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
40
+ }
63
-{
41
+ return a;
64
- unsigned int sf, opcode, opcode2, rn, rd;
42
+}
65
- TCGv_i64 tcg_rd;
43
+
66
-
44
static inline Int128 int128_add(Int128 a, Int128 b)
67
- if (extract32(insn, 29, 1)) {
68
- unallocated_encoding(s);
69
- return;
70
- }
71
-
72
- sf = extract32(insn, 31, 1);
73
- opcode = extract32(insn, 10, 6);
74
- opcode2 = extract32(insn, 16, 5);
75
- rn = extract32(insn, 5, 5);
76
- rd = extract32(insn, 0, 5);
77
-
78
-#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
79
-
80
- switch (MAP(sf, opcode2, opcode)) {
81
- case MAP(1, 0x01, 0x10): /* XPACI */
82
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
83
- goto do_unallocated;
84
- } else if (s->pauth_active) {
85
- tcg_rd = cpu_reg(s, rd);
86
- gen_helper_xpaci(tcg_rd, tcg_env, tcg_rd);
87
- }
88
- break;
89
- case MAP(1, 0x01, 0x11): /* XPACD */
90
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
91
- goto do_unallocated;
92
- } else if (s->pauth_active) {
93
- tcg_rd = cpu_reg(s, rd);
94
- gen_helper_xpacd(tcg_rd, tcg_env, tcg_rd);
95
- }
96
- break;
97
- default:
98
- do_unallocated:
99
- case MAP(0, 0x00, 0x00): /* RBIT */
100
- case MAP(1, 0x00, 0x00):
101
- case MAP(0, 0x00, 0x01): /* REV16 */
102
- case MAP(1, 0x00, 0x01):
103
- case MAP(0, 0x00, 0x02): /* REV/REV32 */
104
- case MAP(1, 0x00, 0x02):
105
- case MAP(1, 0x00, 0x03): /* REV64 */
106
- case MAP(0, 0x00, 0x04): /* CLZ */
107
- case MAP(1, 0x00, 0x04):
108
- case MAP(0, 0x00, 0x05): /* CLS */
109
- case MAP(1, 0x00, 0x05):
110
- case MAP(1, 0x01, 0x00): /* PACIA */
111
- case MAP(1, 0x01, 0x01): /* PACIB */
112
- case MAP(1, 0x01, 0x02): /* PACDA */
113
- case MAP(1, 0x01, 0x03): /* PACDB */
114
- case MAP(1, 0x01, 0x04): /* AUTIA */
115
- case MAP(1, 0x01, 0x05): /* AUTIB */
116
- case MAP(1, 0x01, 0x06): /* AUTDA */
117
- case MAP(1, 0x01, 0x07): /* AUTDB */
118
- case MAP(1, 0x01, 0x08): /* PACIZA */
119
- case MAP(1, 0x01, 0x09): /* PACIZB */
120
- case MAP(1, 0x01, 0x0a): /* PACDZA */
121
- case MAP(1, 0x01, 0x0b): /* PACDZB */
122
- case MAP(1, 0x01, 0x0c): /* AUTIZA */
123
- case MAP(1, 0x01, 0x0d): /* AUTIZB */
124
- case MAP(1, 0x01, 0x0e): /* AUTDZA */
125
- case MAP(1, 0x01, 0x0f): /* AUTDZB */
126
- unallocated_encoding(s);
127
- break;
128
- }
129
-
130
-#undef MAP
131
-}
132
-
133
-
134
/*
135
* Data processing - register
136
* 31 30 29 28 25 21 20 16 10 0
137
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
138
*/
139
static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
45
{
140
{
46
uint64_t lo = a.lo + b.lo;
141
- int op0 = extract32(insn, 30, 1);
142
int op1 = extract32(insn, 28, 1);
143
int op2 = extract32(insn, 21, 4);
144
int op3 = extract32(insn, 10, 6);
145
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
146
disas_cond_select(s, insn);
147
break;
148
149
- case 0x6: /* Data-processing */
150
- if (op0) { /* (1 source) */
151
- disas_data_proc_1src(s, insn);
152
- } else { /* (2 source) */
153
- goto do_unallocated;
154
- }
155
- break;
156
case 0x8 ... 0xf: /* (3 source) */
157
disas_data_proc_3src(s, insn);
158
break;
159
160
default:
161
do_unallocated:
162
+ case 0x6: /* Data-processing */
163
unallocated_encoding(s);
164
break;
165
}
47
--
166
--
48
2.20.1
167
2.34.1
49
168
50
169
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Allow the device to execute the DMA transfers in a different
AddressSpace.

The A10 and H3 SoCs keep using the system_memory address space,
but via the proper dma_memory_access() API.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200814110057.307-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/sd/allwinner-sdhost.h | 6 ++++++
hw/arm/allwinner-a10.c | 2 ++
hw/arm/allwinner-h3.c | 2 ++
hw/sd/allwinner-sdhost.c | 37 ++++++++++++++++++++++++++------
4 files changed, 41 insertions(+), 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This includes AND, BIC, ORR, ORN, EOR, EON, ANDS, BICS (shifted reg).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 9 +++
target/arm/tcg/translate-a64.c | 117 ++++++++++++---------------------
2 files changed, 51 insertions(+), 75 deletions(-)
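
For the logical (shifted register) half of this pair: the N bit simply
selects the inverted-operand partner of each operation, which is why
do_logic_reg() takes an fn/inv_fn pair. A quick standalone check of those
identities (plain C, not part of the patch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t x = 0xf0f0f0f0f0f0f0f0ull, y = 0x00ff00ff00ff00ffull;

        printf("AND/BIC: %016llx %016llx\n",
               (unsigned long long)(x & y), (unsigned long long)(x & ~y));
        printf("ORR/ORN: %016llx %016llx\n",
               (unsigned long long)(x | y), (unsigned long long)(x | ~y));
        printf("EOR/EON: %016llx %016llx\n",
               (unsigned long long)(x ^ y), (unsigned long long)(x ^ ~y));
        /* EON is also ~(x ^ y), which is what tcg_gen_eqv_i64() computes. */
        printf("eqv    : %016llx\n", (unsigned long long)~(x ^ y));
        return 0;
    }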
20
13
21
diff --git a/include/hw/sd/allwinner-sdhost.h b/include/hw/sd/allwinner-sdhost.h
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
22
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/sd/allwinner-sdhost.h
16
--- a/target/arm/tcg/a64.decode
24
+++ b/include/hw/sd/allwinner-sdhost.h
17
+++ b/target/arm/tcg/a64.decode
25
@@ -XXX,XX +XXX,XX @@ typedef struct AwSdHostState {
18
@@ -XXX,XX +XXX,XX @@ XPACI 1 10 11010110 00001 010000 11111 rd:5
26
/** Interrupt output signal to notify CPU */
19
XPACD 1 10 11010110 00001 010001 11111 rd:5
27
qemu_irq irq;
20
28
21
# Logical (shifted reg)
29
+ /** Memory region where DMA transfers are done */
30
+ MemoryRegion *dma_mr;
31
+
22
+
32
+ /** Address space used internally for DMA transfers */
23
+&logic_shift rd rn rm sf sa st n
33
+ AddressSpace dma_as;
24
+@logic_shift sf:1 .. ..... st:2 n:1 rm:5 sa:6 rn:5 rd:5 &logic_shift
34
+
25
+
35
/** Number of bytes left in current DMA transfer */
26
+AND_r . 00 01010 .. . ..... ...... ..... ..... @logic_shift
36
uint32_t transfer_cnt;
27
+ORR_r . 01 01010 .. . ..... ...... ..... ..... @logic_shift
37
28
+EOR_r . 10 01010 .. . ..... ...... ..... ..... @logic_shift
38
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
29
+ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
30
+
31
# Add/subtract (shifted reg)
32
# Add/subtract (extended reg)
33
# Add/subtract (carry)
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/arm/allwinner-a10.c
36
--- a/target/arm/tcg/translate-a64.c
41
+++ b/hw/arm/allwinner-a10.c
37
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static void aw_a10_realize(DeviceState *dev, Error **errp)
38
@@ -XXX,XX +XXX,XX @@ static bool do_xpac(DisasContext *s, int rd, NeonGenOne64OpEnvFn *fn)
39
TRANS_FEAT(XPACI, aa64_pauth, do_xpac, a->rd, gen_helper_xpaci)
40
TRANS_FEAT(XPACD, aa64_pauth, do_xpac, a->rd, gen_helper_xpacd)
41
42
-/* Logical (shifted register)
43
- * 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
44
- * +----+-----+-----------+-------+---+------+--------+------+------+
45
- * | sf | opc | 0 1 0 1 0 | shift | N | Rm | imm6 | Rn | Rd |
46
- * +----+-----+-----------+-------+---+------+--------+------+------+
47
- */
48
-static void disas_logic_reg(DisasContext *s, uint32_t insn)
49
+static bool do_logic_reg(DisasContext *s, arg_logic_shift *a,
50
+ ArithTwoOp *fn, ArithTwoOp *inv_fn, bool setflags)
51
{
52
TCGv_i64 tcg_rd, tcg_rn, tcg_rm;
53
- unsigned int sf, opc, shift_type, invert, rm, shift_amount, rn, rd;
54
55
- sf = extract32(insn, 31, 1);
56
- opc = extract32(insn, 29, 2);
57
- shift_type = extract32(insn, 22, 2);
58
- invert = extract32(insn, 21, 1);
59
- rm = extract32(insn, 16, 5);
60
- shift_amount = extract32(insn, 10, 6);
61
- rn = extract32(insn, 5, 5);
62
- rd = extract32(insn, 0, 5);
63
-
64
- if (!sf && (shift_amount & (1 << 5))) {
65
- unallocated_encoding(s);
66
- return;
67
+ if (!a->sf && (a->sa & (1 << 5))) {
68
+ return false;
43
}
69
}
44
70
45
/* SD/MMC */
71
- tcg_rd = cpu_reg(s, rd);
46
+ object_property_set_link(OBJECT(&s->mmc0), "dma-memory",
72
+ tcg_rd = cpu_reg(s, a->rd);
47
+ OBJECT(get_system_memory()), &error_fatal);
73
+ tcg_rn = cpu_reg(s, a->rn);
48
sysbus_realize(SYS_BUS_DEVICE(&s->mmc0), &error_fatal);
74
49
sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc0), 0, AW_A10_MMC0_BASE);
75
- if (opc == 1 && shift_amount == 0 && shift_type == 0 && rn == 31) {
50
sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc0), 0, qdev_get_gpio_in(dev, 32));
76
- /* Unshifted ORR and ORN with WZR/XZR is the standard encoding for
51
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
77
- * register-register MOV and MVN, so it is worth special casing.
52
index XXXXXXX..XXXXXXX 100644
78
- */
53
--- a/hw/arm/allwinner-h3.c
79
- tcg_rm = cpu_reg(s, rm);
54
+++ b/hw/arm/allwinner-h3.c
80
- if (invert) {
55
@@ -XXX,XX +XXX,XX @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
81
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
56
sysbus_mmio_map(SYS_BUS_DEVICE(&s->sid), 0, s->memmap[AW_H3_SID]);
82
+ if (a->sa) {
57
83
+ shift_reg_imm(tcg_rm, tcg_rm, a->sf, a->st, a->sa);
58
/* SD/MMC */
59
+ object_property_set_link(OBJECT(&s->mmc0), "dma-memory",
60
+ OBJECT(get_system_memory()), &error_fatal);
61
sysbus_realize(SYS_BUS_DEVICE(&s->mmc0), &error_fatal);
62
sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc0), 0, s->memmap[AW_H3_MMC0]);
63
sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc0), 0,
64
diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/hw/sd/allwinner-sdhost.c
67
+++ b/hw/sd/allwinner-sdhost.c
68
@@ -XXX,XX +XXX,XX @@
69
#include "qemu/log.h"
70
#include "qemu/module.h"
71
#include "qemu/units.h"
72
+#include "qapi/error.h"
73
#include "sysemu/blockdev.h"
74
+#include "sysemu/dma.h"
75
+#include "hw/qdev-properties.h"
76
#include "hw/irq.h"
77
#include "hw/sd/allwinner-sdhost.h"
78
#include "migration/vmstate.h"
79
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
80
uint8_t buf[1024];
81
82
/* Read descriptor */
83
- cpu_physical_memory_read(desc_addr, desc, sizeof(*desc));
84
+ dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
85
if (desc->size == 0) {
86
desc->size = klass->max_desc_size;
87
} else if (desc->size > klass->max_desc_size) {
88
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sdhost_process_desc(AwSdHostState *s,
89
90
/* Write to SD bus */
91
if (is_write) {
92
- cpu_physical_memory_read((desc->addr & DESC_SIZE_MASK) + num_done,
93
- buf, buf_bytes);
94
+ dma_memory_read(&s->dma_as,
95
+ (desc->addr & DESC_SIZE_MASK) + num_done,
96
+ buf, buf_bytes);
97
sdbus_write_data(&s->sdbus, buf, buf_bytes);
98
99
/* Read from SD bus */
100
} else {
101
sdbus_read_data(&s->sdbus, buf, buf_bytes);
102
- cpu_physical_memory_write((desc->addr & DESC_SIZE_MASK) + num_done,
103
- buf, buf_bytes);
104
+ dma_memory_write(&s->dma_as,
105
+ (desc->addr & DESC_SIZE_MASK) + num_done,
106
+ buf, buf_bytes);
107
}
108
num_done += buf_bytes;
109
}
110
111
/* Clear hold flag and flush descriptor */
112
desc->status &= ~DESC_STATUS_HOLD;
113
- cpu_physical_memory_write(desc_addr, desc, sizeof(*desc));
114
+ dma_memory_write(&s->dma_as, desc_addr, desc, sizeof(*desc));
115
116
return num_done;
117
}
118
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_allwinner_sdhost = {
119
}
120
};
121
122
+static Property allwinner_sdhost_properties[] = {
123
+ DEFINE_PROP_LINK("dma-memory", AwSdHostState, dma_mr,
124
+ TYPE_MEMORY_REGION, MemoryRegion *),
125
+ DEFINE_PROP_END_OF_LIST(),
126
+};
127
+
128
static void allwinner_sdhost_init(Object *obj)
129
{
130
AwSdHostState *s = AW_SDHOST(obj);
131
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_init(Object *obj)
132
sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq);
133
}
134
135
+static void allwinner_sdhost_realize(DeviceState *dev, Error **errp)
136
+{
137
+ AwSdHostState *s = AW_SDHOST(dev);
138
+
139
+ if (!s->dma_mr) {
140
+ error_setg(errp, TYPE_AW_SDHOST " 'dma-memory' link not set");
141
+ return;
142
+ }
84
+ }
143
+
85
+
144
+ address_space_init(&s->dma_as, s->dma_mr, "sdhost-dma");
86
+ (a->n ? inv_fn : fn)(tcg_rd, tcg_rn, tcg_rm);
87
+ if (!a->sf) {
88
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
89
+ }
90
+ if (setflags) {
91
+ gen_logic_CC(a->sf, tcg_rd);
92
+ }
93
+ return true;
145
+}
94
+}
146
+
95
+
147
static void allwinner_sdhost_reset(DeviceState *dev)
96
+static bool trans_ORR_r(DisasContext *s, arg_logic_shift *a)
148
{
97
+{
149
AwSdHostState *s = AW_SDHOST(dev);
98
+ /*
150
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_class_init(ObjectClass *klass, void *data)
99
+ * Unshifted ORR and ORN with WZR/XZR is the standard encoding for
151
100
+ * register-register MOV and MVN, so it is worth special casing.
152
dc->reset = allwinner_sdhost_reset;
101
+ */
153
dc->vmsd = &vmstate_allwinner_sdhost;
102
+ if (a->sa == 0 && a->st == 0 && a->rn == 31) {
154
+ dc->realize = allwinner_sdhost_realize;
103
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
155
+ device_class_set_props(dc, allwinner_sdhost_properties);
104
+ TCGv_i64 tcg_rm = cpu_reg(s, a->rm);
105
+
106
+ if (a->n) {
107
tcg_gen_not_i64(tcg_rd, tcg_rm);
108
- if (!sf) {
109
+ if (!a->sf) {
110
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
111
}
112
} else {
113
- if (sf) {
114
+ if (a->sf) {
115
tcg_gen_mov_i64(tcg_rd, tcg_rm);
116
} else {
117
tcg_gen_ext32u_i64(tcg_rd, tcg_rm);
118
}
119
}
120
- return;
121
+ return true;
122
}
123
124
- tcg_rm = read_cpu_reg(s, rm, sf);
125
-
126
- if (shift_amount) {
127
- shift_reg_imm(tcg_rm, tcg_rm, sf, shift_type, shift_amount);
128
- }
129
-
130
- tcg_rn = cpu_reg(s, rn);
131
-
132
- switch (opc | (invert << 2)) {
133
- case 0: /* AND */
134
- case 3: /* ANDS */
135
- tcg_gen_and_i64(tcg_rd, tcg_rn, tcg_rm);
136
- break;
137
- case 1: /* ORR */
138
- tcg_gen_or_i64(tcg_rd, tcg_rn, tcg_rm);
139
- break;
140
- case 2: /* EOR */
141
- tcg_gen_xor_i64(tcg_rd, tcg_rn, tcg_rm);
142
- break;
143
- case 4: /* BIC */
144
- case 7: /* BICS */
145
- tcg_gen_andc_i64(tcg_rd, tcg_rn, tcg_rm);
146
- break;
147
- case 5: /* ORN */
148
- tcg_gen_orc_i64(tcg_rd, tcg_rn, tcg_rm);
149
- break;
150
- case 6: /* EON */
151
- tcg_gen_eqv_i64(tcg_rd, tcg_rn, tcg_rm);
152
- break;
153
- default:
154
- assert(FALSE);
155
- break;
156
- }
157
-
158
- if (!sf) {
159
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
160
- }
161
-
162
- if (opc == 3) {
163
- gen_logic_CC(sf, tcg_rd);
164
- }
165
+ return do_logic_reg(s, a, tcg_gen_or_i64, tcg_gen_orc_i64, false);
156
}
166
}
157
167
158
static void allwinner_sdhost_sun4i_class_init(ObjectClass *klass, void *data)
168
+TRANS(AND_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, false)
169
+TRANS(ANDS_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, true)
170
+TRANS(EOR_r, do_logic_reg, a, tcg_gen_xor_i64, tcg_gen_eqv_i64, false)
171
+
172
/*
173
* Add/subtract (extended register)
174
*
175
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
176
/* Add/sub (shifted register) */
177
disas_add_sub_reg(s, insn);
178
}
179
- } else {
180
- /* Logical (shifted register) */
181
- disas_logic_reg(s, insn);
182
+ return;
183
}
184
- return;
185
+ goto do_unallocated;
186
}
187
188
switch (op2) {
159
--
189
--
160
2.20.1
190
2.34.1
161
162
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes ADD, SUB, ADDS, SUBS (extended register).
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-13-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 9 +++++
11
target/arm/tcg/translate-a64.c | 65 +++++++++++-----------------------
12
2 files changed, 29 insertions(+), 45 deletions(-)
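
The decode change below leaves the extend-and-shift semantics to
ext_and_shift_reg(); as a rough standalone model of what that computes.
The option numbering follows the usual DecodeRegExtend table and is an
assumption here, not something taken from this patch:

    #include <stdio.h>
    #include <stdint.h>

    /* Extend Rm per the option field, then shift left by 0..4. */
    static uint64_t ext_and_shift(uint64_t rm, int option, int shift)
    {
        int size = option & 3;            /* 0=B, 1=H, 2=W, 3=X */
        int is_signed = option & 4;
        int bits = 8 << size;
        uint64_t v = rm & (bits == 64 ? ~0ull : (1ull << bits) - 1);

        if (is_signed && bits < 64 && (v & (1ull << (bits - 1)))) {
            v |= ~0ull << bits;           /* sign-extend */
        }
        return v << shift;
    }

    int main(void)
    {
        /* e.g. ADD x0, x1, w2, SXTW #2 -> x0 = x1 + (sext32(w2) << 2) */
        uint64_t x1 = 0x1000, w2 = 0xfffffffcu;   /* -4 as a 32-bit value */
        printf("%#llx\n",
               (unsigned long long)(x1 + ext_and_shift(w2, 6, 2)));
        return 0;
    }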
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
19
20
# Add/subtract (shifted reg)
21
# Add/subtract (extended reg)
22
+
23
+&addsub_ext rd rn rm sf sa st
24
+@addsub_ext sf:1 .. ........ rm:5 st:3 sa:3 rn:5 rd:5 &addsub_ext
25
+
26
+ADD_ext . 00 01011001 ..... ... ... ..... ..... @addsub_ext
27
+SUB_ext . 10 01011001 ..... ... ... ..... ..... @addsub_ext
28
+ADDS_ext . 01 01011001 ..... ... ... ..... ..... @addsub_ext
29
+SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
30
+
31
# Add/subtract (carry)
32
# Rotate right into flags
33
# Evaluate into flags
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(AND_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, false)
39
TRANS(ANDS_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, true)
40
TRANS(EOR_r, do_logic_reg, a, tcg_gen_xor_i64, tcg_gen_eqv_i64, false)
41
42
-/*
43
- * Add/subtract (extended register)
44
- *
45
- * 31|30|29|28 24|23 22|21|20 16|15 13|12 10|9 5|4 0|
46
- * +--+--+--+-----------+-----+--+-------+------+------+----+----+
47
- * |sf|op| S| 0 1 0 1 1 | opt | 1| Rm |option| imm3 | Rn | Rd |
48
- * +--+--+--+-----------+-----+--+-------+------+------+----+----+
49
- *
50
- * sf: 0 -> 32bit, 1 -> 64bit
51
- * op: 0 -> add , 1 -> sub
52
- * S: 1 -> set flags
53
- * opt: 00
54
- * option: extension type (see DecodeRegExtend)
55
- * imm3: optional shift to Rm
56
- *
57
- * Rd = Rn + LSL(extend(Rm), amount)
58
- */
59
-static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
60
+static bool do_addsub_ext(DisasContext *s, arg_addsub_ext *a,
61
+ bool sub_op, bool setflags)
62
{
63
- int rd = extract32(insn, 0, 5);
64
- int rn = extract32(insn, 5, 5);
65
- int imm3 = extract32(insn, 10, 3);
66
- int option = extract32(insn, 13, 3);
67
- int rm = extract32(insn, 16, 5);
68
- int opt = extract32(insn, 22, 2);
69
- bool setflags = extract32(insn, 29, 1);
70
- bool sub_op = extract32(insn, 30, 1);
71
- bool sf = extract32(insn, 31, 1);
72
+ TCGv_i64 tcg_rm, tcg_rn, tcg_rd, tcg_result;
73
74
- TCGv_i64 tcg_rm, tcg_rn; /* temps */
75
- TCGv_i64 tcg_rd;
76
- TCGv_i64 tcg_result;
77
-
78
- if (imm3 > 4 || opt != 0) {
79
- unallocated_encoding(s);
80
- return;
81
+ if (a->sa > 4) {
82
+ return false;
83
}
84
85
/* non-flag setting ops may use SP */
86
if (!setflags) {
87
- tcg_rd = cpu_reg_sp(s, rd);
88
+ tcg_rd = cpu_reg_sp(s, a->rd);
89
} else {
90
- tcg_rd = cpu_reg(s, rd);
91
+ tcg_rd = cpu_reg(s, a->rd);
92
}
93
- tcg_rn = read_cpu_reg_sp(s, rn, sf);
94
+ tcg_rn = read_cpu_reg_sp(s, a->rn, a->sf);
95
96
- tcg_rm = read_cpu_reg(s, rm, sf);
97
- ext_and_shift_reg(tcg_rm, tcg_rm, option, imm3);
98
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
99
+ ext_and_shift_reg(tcg_rm, tcg_rm, a->st, a->sa);
100
101
tcg_result = tcg_temp_new_i64();
102
-
103
if (!setflags) {
104
if (sub_op) {
105
tcg_gen_sub_i64(tcg_result, tcg_rn, tcg_rm);
106
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
107
}
108
} else {
109
if (sub_op) {
110
- gen_sub_CC(sf, tcg_result, tcg_rn, tcg_rm);
111
+ gen_sub_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
112
} else {
113
- gen_add_CC(sf, tcg_result, tcg_rn, tcg_rm);
114
+ gen_add_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
115
}
116
}
117
118
- if (sf) {
119
+ if (a->sf) {
120
tcg_gen_mov_i64(tcg_rd, tcg_result);
121
} else {
122
tcg_gen_ext32u_i64(tcg_rd, tcg_result);
123
}
124
+ return true;
125
}
126
127
+TRANS(ADD_ext, do_addsub_ext, a, false, false)
128
+TRANS(SUB_ext, do_addsub_ext, a, true, false)
129
+TRANS(ADDS_ext, do_addsub_ext, a, false, true)
130
+TRANS(SUBS_ext, do_addsub_ext, a, true, true)
131
+
132
/*
133
* Add/subtract (shifted register)
134
*
135
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
136
if (!op1) {
137
if (op2 & 8) {
138
if (op2 & 1) {
139
- /* Add/sub (extended register) */
140
- disas_add_sub_ext_reg(s, insn);
141
+ goto do_unallocated;
142
} else {
143
/* Add/sub (shifted register) */
144
disas_add_sub_reg(s, insn);
145
--
146
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes ADD, SUB, ADDS, SUBS (shifted register).
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-14-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 9 +++++
11
target/arm/tcg/translate-a64.c | 64 ++++++++++------------------------
12
2 files changed, 27 insertions(+), 46 deletions(-)
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ EOR_r . 10 01010 .. . ..... ...... ..... ..... @logic_shift
19
ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
20
21
# Add/subtract (shifted reg)
22
+
23
+&addsub_shift rd rn rm sf sa st
24
+@addsub_shift sf:1 .. ..... st:2 . rm:5 sa:6 rn:5 rd:5 &addsub_shift
25
+
26
+ADD_r . 00 01011 .. 0 ..... ...... ..... ..... @addsub_shift
27
+SUB_r . 10 01011 .. 0 ..... ...... ..... ..... @addsub_shift
28
+ADDS_r . 01 01011 .. 0 ..... ...... ..... ..... @addsub_shift
29
+SUBS_r . 11 01011 .. 0 ..... ...... ..... ..... @addsub_shift
30
+
31
# Add/subtract (extended reg)
32
33
&addsub_ext rd rn rm sf sa st
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(SUB_ext, do_addsub_ext, a, true, false)
39
TRANS(ADDS_ext, do_addsub_ext, a, false, true)
40
TRANS(SUBS_ext, do_addsub_ext, a, true, true)
41
42
-/*
43
- * Add/subtract (shifted register)
44
- *
45
- * 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
46
- * +--+--+--+-----------+-----+--+-------+---------+------+------+
47
- * |sf|op| S| 0 1 0 1 1 |shift| 0| Rm | imm6 | Rn | Rd |
48
- * +--+--+--+-----------+-----+--+-------+---------+------+------+
49
- *
50
- * sf: 0 -> 32bit, 1 -> 64bit
51
- * op: 0 -> add , 1 -> sub
52
- * S: 1 -> set flags
53
- * shift: 00 -> LSL, 01 -> LSR, 10 -> ASR, 11 -> RESERVED
54
- * imm6: Shift amount to apply to Rm before the add/sub
55
- */
56
-static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
57
+static bool do_addsub_reg(DisasContext *s, arg_addsub_shift *a,
58
+ bool sub_op, bool setflags)
59
{
60
- int rd = extract32(insn, 0, 5);
61
- int rn = extract32(insn, 5, 5);
62
- int imm6 = extract32(insn, 10, 6);
63
- int rm = extract32(insn, 16, 5);
64
- int shift_type = extract32(insn, 22, 2);
65
- bool setflags = extract32(insn, 29, 1);
66
- bool sub_op = extract32(insn, 30, 1);
67
- bool sf = extract32(insn, 31, 1);
68
+ TCGv_i64 tcg_rd, tcg_rn, tcg_rm, tcg_result;
69
70
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
71
- TCGv_i64 tcg_rn, tcg_rm;
72
- TCGv_i64 tcg_result;
73
-
74
- if ((shift_type == 3) || (!sf && (imm6 > 31))) {
75
- unallocated_encoding(s);
76
- return;
77
+ if (a->st == 3 || (!a->sf && (a->sa & 32))) {
78
+ return false;
79
}
80
81
- tcg_rn = read_cpu_reg(s, rn, sf);
82
- tcg_rm = read_cpu_reg(s, rm, sf);
83
+ tcg_rd = cpu_reg(s, a->rd);
84
+ tcg_rn = read_cpu_reg(s, a->rn, a->sf);
85
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
86
87
- shift_reg_imm(tcg_rm, tcg_rm, sf, shift_type, imm6);
88
+ shift_reg_imm(tcg_rm, tcg_rm, a->sf, a->st, a->sa);
89
90
tcg_result = tcg_temp_new_i64();
91
-
92
if (!setflags) {
93
if (sub_op) {
94
tcg_gen_sub_i64(tcg_result, tcg_rn, tcg_rm);
95
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
96
}
97
} else {
98
if (sub_op) {
99
- gen_sub_CC(sf, tcg_result, tcg_rn, tcg_rm);
100
+ gen_sub_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
101
} else {
102
- gen_add_CC(sf, tcg_result, tcg_rn, tcg_rm);
103
+ gen_add_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
104
}
105
}
106
107
- if (sf) {
108
+ if (a->sf) {
109
tcg_gen_mov_i64(tcg_rd, tcg_result);
110
} else {
111
tcg_gen_ext32u_i64(tcg_rd, tcg_result);
112
}
113
+ return true;
114
}
115
116
+TRANS(ADD_r, do_addsub_reg, a, false, false)
117
+TRANS(SUB_r, do_addsub_reg, a, true, false)
118
+TRANS(ADDS_r, do_addsub_reg, a, false, true)
119
+TRANS(SUBS_r, do_addsub_reg, a, true, true)
120
+
121
/* Data-processing (3 source)
122
*
123
* 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
124
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
125
int op3 = extract32(insn, 10, 6);
126
127
if (!op1) {
128
- if (op2 & 8) {
129
- if (op2 & 1) {
130
- goto do_unallocated;
131
- } else {
132
- /* Add/sub (shifted register) */
133
- disas_add_sub_reg(s, insn);
134
- }
135
- return;
136
- }
137
goto do_unallocated;
138
}
139
140
--
141
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes MADD, MSUB, SMADDL, SMSUBL, UMADDL, UMSUBL, SMULH, UMULH.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-15-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 16 +++++
11
target/arm/tcg/translate-a64.c | 119 ++++++++++++---------------------
12
2 files changed, 59 insertions(+), 76 deletions(-)
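
For the SMULH/UMULH part: tcg_gen_muls2_i64()/tcg_gen_mulu2_i64() produce
the full 128-bit product in two halves and do_mulh() discards the low half.
The same result, sketched with __int128 (assumes a 64-bit host compiler
with __int128 support; not QEMU code):

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t umulh(uint64_t a, uint64_t b)
    {
        return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }

    static int64_t smulh(int64_t a, int64_t b)
    {
        return (int64_t)(((__int128)a * b) >> 64);
    }

    int main(void)
    {
        printf("UMULH: %016llx\n", (unsigned long long)umulh(~0ull, ~0ull));
        printf("SMULH: %016llx\n", (unsigned long long)smulh(-1, -1));
        return 0;
    }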
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
19
# Conditional select
20
# Data Processing (3-source)
21
22
+&rrrr rd rn rm ra
23
+@rrrr . .. ........ rm:5 . ra:5 rn:5 rd:5 &rrrr
24
+
25
+MADD_w 0 00 11011000 ..... 0 ..... ..... ..... @rrrr
26
+MSUB_w 0 00 11011000 ..... 1 ..... ..... ..... @rrrr
27
+MADD_x 1 00 11011000 ..... 0 ..... ..... ..... @rrrr
28
+MSUB_x 1 00 11011000 ..... 1 ..... ..... ..... @rrrr
29
+
30
+SMADDL 1 00 11011001 ..... 0 ..... ..... ..... @rrrr
31
+SMSUBL 1 00 11011001 ..... 1 ..... ..... ..... @rrrr
32
+UMADDL 1 00 11011101 ..... 0 ..... ..... ..... @rrrr
33
+UMSUBL 1 00 11011101 ..... 1 ..... ..... ..... @rrrr
34
+
35
+SMULH 1 00 11011010 ..... 0 11111 ..... ..... @rrr
36
+UMULH 1 00 11011110 ..... 0 11111 ..... ..... @rrr
37
+
38
### Cryptographic AES
39
40
AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
41
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/tcg/translate-a64.c
44
+++ b/target/arm/tcg/translate-a64.c
45
@@ -XXX,XX +XXX,XX @@ TRANS(SUB_r, do_addsub_reg, a, true, false)
46
TRANS(ADDS_r, do_addsub_reg, a, false, true)
47
TRANS(SUBS_r, do_addsub_reg, a, true, true)
48
49
-/* Data-processing (3 source)
50
- *
51
- * 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
52
- * +--+------+-----------+------+------+----+------+------+------+
53
- * |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
54
- * +--+------+-----------+------+------+----+------+------+------+
55
- */
56
-static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
57
+static bool do_mulh(DisasContext *s, arg_rrr *a,
58
+ void (*fn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64))
59
{
60
- int rd = extract32(insn, 0, 5);
61
- int rn = extract32(insn, 5, 5);
62
- int ra = extract32(insn, 10, 5);
63
- int rm = extract32(insn, 16, 5);
64
- int op_id = (extract32(insn, 29, 3) << 4) |
65
- (extract32(insn, 21, 3) << 1) |
66
- extract32(insn, 15, 1);
67
- bool sf = extract32(insn, 31, 1);
68
- bool is_sub = extract32(op_id, 0, 1);
69
- bool is_high = extract32(op_id, 2, 1);
70
- bool is_signed = false;
71
- TCGv_i64 tcg_op1;
72
- TCGv_i64 tcg_op2;
73
- TCGv_i64 tcg_tmp;
74
+ TCGv_i64 discard = tcg_temp_new_i64();
75
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
76
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
77
+ TCGv_i64 tcg_rm = cpu_reg(s, a->rm);
78
79
- /* Note that op_id is sf:op54:op31:o0 so it includes the 32/64 size flag */
80
- switch (op_id) {
81
- case 0x42: /* SMADDL */
82
- case 0x43: /* SMSUBL */
83
- case 0x44: /* SMULH */
84
- is_signed = true;
85
- break;
86
- case 0x0: /* MADD (32bit) */
87
- case 0x1: /* MSUB (32bit) */
88
- case 0x40: /* MADD (64bit) */
89
- case 0x41: /* MSUB (64bit) */
90
- case 0x4a: /* UMADDL */
91
- case 0x4b: /* UMSUBL */
92
- case 0x4c: /* UMULH */
93
- break;
94
- default:
95
- unallocated_encoding(s);
96
- return;
97
- }
98
+ fn(discard, tcg_rd, tcg_rn, tcg_rm);
99
+ return true;
100
+}
101
102
- if (is_high) {
103
- TCGv_i64 low_bits = tcg_temp_new_i64(); /* low bits discarded */
104
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
105
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
106
- TCGv_i64 tcg_rm = cpu_reg(s, rm);
107
+TRANS(SMULH, do_mulh, a, tcg_gen_muls2_i64)
108
+TRANS(UMULH, do_mulh, a, tcg_gen_mulu2_i64)
109
110
- if (is_signed) {
111
- tcg_gen_muls2_i64(low_bits, tcg_rd, tcg_rn, tcg_rm);
112
- } else {
113
- tcg_gen_mulu2_i64(low_bits, tcg_rd, tcg_rn, tcg_rm);
114
- }
115
- return;
116
- }
117
+static bool do_muladd(DisasContext *s, arg_rrrr *a,
118
+ bool sf, bool is_sub, MemOp mop)
119
+{
120
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
121
+ TCGv_i64 tcg_op1, tcg_op2;
122
123
- tcg_op1 = tcg_temp_new_i64();
124
- tcg_op2 = tcg_temp_new_i64();
125
- tcg_tmp = tcg_temp_new_i64();
126
-
127
- if (op_id < 0x42) {
128
- tcg_gen_mov_i64(tcg_op1, cpu_reg(s, rn));
129
- tcg_gen_mov_i64(tcg_op2, cpu_reg(s, rm));
130
+ if (mop == MO_64) {
131
+ tcg_op1 = cpu_reg(s, a->rn);
132
+ tcg_op2 = cpu_reg(s, a->rm);
133
} else {
134
- if (is_signed) {
135
- tcg_gen_ext32s_i64(tcg_op1, cpu_reg(s, rn));
136
- tcg_gen_ext32s_i64(tcg_op2, cpu_reg(s, rm));
137
- } else {
138
- tcg_gen_ext32u_i64(tcg_op1, cpu_reg(s, rn));
139
- tcg_gen_ext32u_i64(tcg_op2, cpu_reg(s, rm));
140
- }
141
+ tcg_op1 = tcg_temp_new_i64();
142
+ tcg_op2 = tcg_temp_new_i64();
143
+ tcg_gen_ext_i64(tcg_op1, cpu_reg(s, a->rn), mop);
144
+ tcg_gen_ext_i64(tcg_op2, cpu_reg(s, a->rm), mop);
145
}
146
147
- if (ra == 31 && !is_sub) {
148
+ if (a->ra == 31 && !is_sub) {
149
/* Special-case MADD with rA == XZR; it is the standard MUL alias */
150
- tcg_gen_mul_i64(cpu_reg(s, rd), tcg_op1, tcg_op2);
151
+ tcg_gen_mul_i64(tcg_rd, tcg_op1, tcg_op2);
152
} else {
153
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
154
+ TCGv_i64 tcg_ra = cpu_reg(s, a->ra);
155
+
156
tcg_gen_mul_i64(tcg_tmp, tcg_op1, tcg_op2);
157
if (is_sub) {
158
- tcg_gen_sub_i64(cpu_reg(s, rd), cpu_reg(s, ra), tcg_tmp);
159
+ tcg_gen_sub_i64(tcg_rd, tcg_ra, tcg_tmp);
160
} else {
161
- tcg_gen_add_i64(cpu_reg(s, rd), cpu_reg(s, ra), tcg_tmp);
162
+ tcg_gen_add_i64(tcg_rd, tcg_ra, tcg_tmp);
163
}
164
}
165
166
if (!sf) {
167
- tcg_gen_ext32u_i64(cpu_reg(s, rd), cpu_reg(s, rd));
168
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
169
}
170
+ return true;
171
}
172
173
+TRANS(MADD_w, do_muladd, a, false, false, MO_64)
174
+TRANS(MSUB_w, do_muladd, a, false, true, MO_64)
175
+TRANS(MADD_x, do_muladd, a, true, false, MO_64)
176
+TRANS(MSUB_x, do_muladd, a, true, true, MO_64)
177
+
178
+TRANS(SMADDL, do_muladd, a, true, false, MO_SL)
179
+TRANS(SMSUBL, do_muladd, a, true, true, MO_SL)
180
+TRANS(UMADDL, do_muladd, a, true, false, MO_UL)
181
+TRANS(UMSUBL, do_muladd, a, true, true, MO_UL)
182
+
183
/* Add/subtract (with carry)
184
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
185
* +--+--+--+------------------------+------+-------------+------+-----+
186
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
187
disas_cond_select(s, insn);
188
break;
189
190
- case 0x8 ... 0xf: /* (3 source) */
191
- disas_data_proc_3src(s, insn);
192
- break;
193
-
194
default:
195
do_unallocated:
196
case 0x6: /* Data-processing */
197
+ case 0x8 ... 0xf: /* (3 source) */
198
unallocated_encoding(s);
199
break;
200
}
201
--
202
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes ADC, SBC, ADCS, SBCS.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-16-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 6 +++++
11
target/arm/tcg/translate-a64.c | 43 +++++++++++++---------------------
12
2 files changed, 22 insertions(+), 27 deletions(-)
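
A note on why the subtract path below only needs tcg_gen_not_i64() on Rm:
with carry flag C, Rn - Rm - (1 - C) equals Rn + ~Rm + C modulo 2^64, so
SBC reuses the ADC path with an inverted operand. A standalone check
(plain C, not part of the patch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t rn = 0x123456789abcdef0ull, rm = 0x0fedcba987654321ull;

        for (unsigned c = 0; c <= 1; c++) {
            uint64_t sbc = rn - rm - (1 - c);
            uint64_t adc = rn + ~rm + c;
            printf("C=%u: %016llx %016llx %s\n", c,
                   (unsigned long long)sbc, (unsigned long long)adc,
                   sbc == adc ? "ok" : "MISMATCH");
        }
        return 0;
    }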
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ ADDS_ext . 01 01011001 ..... ... ... ..... ..... @addsub_ext
19
SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
20
21
# Add/subtract (carry)
22
+
23
+ADC . 00 11010000 ..... 000000 ..... ..... @rrr_sf
24
+ADCS . 01 11010000 ..... 000000 ..... ..... @rrr_sf
25
+SBC . 10 11010000 ..... 000000 ..... ..... @rrr_sf
26
+SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
27
+
28
# Rotate right into flags
29
# Evaluate into flags
30
# Conditional compare (regster)
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ TRANS(SMSUBL, do_muladd, a, true, true, MO_SL)
36
TRANS(UMADDL, do_muladd, a, true, false, MO_UL)
37
TRANS(UMSUBL, do_muladd, a, true, true, MO_UL)
38
39
-/* Add/subtract (with carry)
40
- * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
41
- * +--+--+--+------------------------+------+-------------+------+-----+
42
- * |sf|op| S| 1 1 0 1 0 0 0 0 | rm | 0 0 0 0 0 0 | Rn | Rd |
43
- * +--+--+--+------------------------+------+-------------+------+-----+
44
- */
45
-
46
-static void disas_adc_sbc(DisasContext *s, uint32_t insn)
47
+static bool do_adc_sbc(DisasContext *s, arg_rrr_sf *a,
48
+ bool is_sub, bool setflags)
49
{
50
- unsigned int sf, op, setflags, rm, rn, rd;
51
TCGv_i64 tcg_y, tcg_rn, tcg_rd;
52
53
- sf = extract32(insn, 31, 1);
54
- op = extract32(insn, 30, 1);
55
- setflags = extract32(insn, 29, 1);
56
- rm = extract32(insn, 16, 5);
57
- rn = extract32(insn, 5, 5);
58
- rd = extract32(insn, 0, 5);
59
+ tcg_rd = cpu_reg(s, a->rd);
60
+ tcg_rn = cpu_reg(s, a->rn);
61
62
- tcg_rd = cpu_reg(s, rd);
63
- tcg_rn = cpu_reg(s, rn);
64
-
65
- if (op) {
66
+ if (is_sub) {
67
tcg_y = tcg_temp_new_i64();
68
- tcg_gen_not_i64(tcg_y, cpu_reg(s, rm));
69
+ tcg_gen_not_i64(tcg_y, cpu_reg(s, a->rm));
70
} else {
71
- tcg_y = cpu_reg(s, rm);
72
+ tcg_y = cpu_reg(s, a->rm);
73
}
74
75
if (setflags) {
76
- gen_adc_CC(sf, tcg_rd, tcg_rn, tcg_y);
77
+ gen_adc_CC(a->sf, tcg_rd, tcg_rn, tcg_y);
78
} else {
79
- gen_adc(sf, tcg_rd, tcg_rn, tcg_y);
80
+ gen_adc(a->sf, tcg_rd, tcg_rn, tcg_y);
81
}
82
+ return true;
83
}
84
85
+TRANS(ADC, do_adc_sbc, a, false, false)
86
+TRANS(SBC, do_adc_sbc, a, true, false)
87
+TRANS(ADCS, do_adc_sbc, a, false, true)
88
+TRANS(SBCS, do_adc_sbc, a, true, true)
89
+
90
/*
91
* Rotate right into flags
92
* 31 30 29 21 15 10 5 4 0
93
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
94
switch (op2) {
95
case 0x0:
96
switch (op3) {
97
- case 0x00: /* Add/subtract (with carry) */
98
- disas_adc_sbc(s, insn);
99
- break;
100
-
101
case 0x01: /* Rotate right into flags */
102
case 0x21:
103
disas_rotate_right_into_flags(s, insn);
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
105
break;
106
107
default:
108
+ case 0x00: /* Add/subtract (with carry) */
109
goto do_unallocated;
110
}
111
break;
112
--
113
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We want to ensure that access is checked by the time we ask
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
for a specific fp/vector register. We want to ensure that
5
we do not emit two lots of code to raise an exception.
6
7
But sometimes it's difficult to cleanly organize the code
8
such that we pass through sve_access_check exactly once.
9
Allow multiple calls so long as the result is true, that is,
10
no exception to be raised.
11
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20241211163036.2297116-17-richard.henderson@linaro.org
14
Message-id: 20200815013145.539409-5-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
7
---
17
target/arm/translate.h | 1 +
8
target/arm/tcg/a64.decode | 3 +++
18
target/arm/translate-a64.c | 27 ++++++++++++++++-----------
9
target/arm/tcg/translate-a64.c | 32 +++++++++-----------------------
19
2 files changed, 17 insertions(+), 11 deletions(-)
10
2 files changed, 12 insertions(+), 23 deletions(-)
20
11
21
diff --git a/target/arm/translate.h b/target/arm/translate.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
22
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/translate.h
14
--- a/target/arm/tcg/a64.decode
24
+++ b/target/arm/translate.h
15
+++ b/target/arm/tcg/a64.decode
25
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
16
@@ -XXX,XX +XXX,XX @@ SBC . 10 11010000 ..... 000000 ..... ..... @rrr_sf
26
* that it is set at the point where we actually touch the FP regs.
17
SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
27
*/
18
28
bool fp_access_checked;
19
# Rotate right into flags
29
+ bool sve_access_checked;
20
+
30
/* ARMv8 single-step state (this is distinct from the QEMU gdbstub
21
+RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
31
* single-step support).
22
+
32
*/
23
# Evaluate into flags
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
24
# Conditional compare (regster)
25
# Conditional compare (immediate)
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-a64.c
28
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static void do_vec_ld(DisasContext *s, int destidx, int element,
30
@@ -XXX,XX +XXX,XX @@ TRANS(SBC, do_adc_sbc, a, true, false)
38
* unallocated-encoding checks (otherwise the syndrome information
31
TRANS(ADCS, do_adc_sbc, a, false, true)
39
* for the resulting exception will be incorrect).
32
TRANS(SBCS, do_adc_sbc, a, true, true)
40
*/
33
41
-static inline bool fp_access_check(DisasContext *s)
34
-/*
42
+static bool fp_access_check(DisasContext *s)
35
- * Rotate right into flags
36
- * 31 30 29 21 15 10 5 4 0
37
- * +--+--+--+-----------------+--------+-----------+------+--+------+
38
- * |sf|op| S| 1 1 0 1 0 0 0 0 | imm6 | 0 0 0 0 1 | Rn |o2| mask |
39
- * +--+--+--+-----------------+--------+-----------+------+--+------+
40
- */
41
-static void disas_rotate_right_into_flags(DisasContext *s, uint32_t insn)
42
+static bool trans_RMIF(DisasContext *s, arg_RMIF *a)
43
{
43
{
44
- assert(!s->fp_access_checked);
44
- int mask = extract32(insn, 0, 4);
45
- s->fp_access_checked = true;
45
- int o2 = extract32(insn, 4, 1);
46
+ if (s->fp_excp_el) {
46
- int rn = extract32(insn, 5, 5);
47
+ assert(!s->fp_access_checked);
47
- int imm6 = extract32(insn, 15, 6);
48
+ s->fp_access_checked = true;
48
- int sf_op_s = extract32(insn, 29, 3);
49
49
+ int mask = a->mask;
50
- if (!s->fp_excp_el) {
50
TCGv_i64 tcg_rn;
51
- return true;
51
TCGv_i32 nzcv;
52
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
52
53
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
53
- if (sf_op_s != 5 || o2 != 0 || !dc_isar_feature(aa64_condm_4, s)) {
54
- unallocated_encoding(s);
55
- return;
56
+ if (!dc_isar_feature(aa64_condm_4, s)) {
54
+ return false;
57
+ return false;
55
}
58
}
56
-
59
57
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
60
- tcg_rn = read_cpu_reg(s, rn, 1);
58
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
61
- tcg_gen_rotri_i64(tcg_rn, tcg_rn, imm6);
59
- return false;
62
+ tcg_rn = read_cpu_reg(s, a->rn, 1);
60
+ s->fp_access_checked = true;
63
+ tcg_gen_rotri_i64(tcg_rn, tcg_rn, a->imm);
64
65
nzcv = tcg_temp_new_i32();
66
tcg_gen_extrl_i64_i32(nzcv, tcg_rn);
67
@@ -XXX,XX +XXX,XX @@ static void disas_rotate_right_into_flags(DisasContext *s, uint32_t insn)
68
if (mask & 1) { /* V */
69
tcg_gen_shli_i32(cpu_VF, nzcv, 31 - 0);
70
}
61
+ return true;
71
+ return true;
62
}
72
}
63
73
64
/* Check that SVE access is enabled. If it is, return true.
74
/*
65
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
75
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
66
bool sve_access_check(DisasContext *s)
76
switch (op2) {
67
{
77
case 0x0:
68
if (s->sve_excp_el) {
78
switch (op3) {
69
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_sve_access_trap(),
79
- case 0x01: /* Rotate right into flags */
70
- s->sve_excp_el);
80
- case 0x21:
71
+ assert(!s->sve_access_checked);
81
- disas_rotate_right_into_flags(s, insn);
72
+ s->sve_access_checked = true;
82
- break;
73
+
83
-
74
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
84
case 0x02: /* Evaluate into flags */
75
+ syn_sve_access_trap(), s->sve_excp_el);
85
case 0x12:
76
return false;
86
case 0x22:
77
}
87
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
78
+ s->sve_access_checked = true;
88
79
return fp_access_check(s);
89
default:
80
}
90
case 0x00: /* Add/subtract (with carry) */
81
91
+ case 0x01: /* Rotate right into flags */
82
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
92
+ case 0x21:
83
s->base.pc_next += 4;
93
goto do_unallocated;
84
94
}
85
s->fp_access_checked = false;
95
break;
86
+ s->sve_access_checked = false;
87
88
if (dc_isar_feature(aa64_bti, s)) {
89
if (s->base.num_insns == 1) {
90
--
96
--
91
2.20.1
97
2.34.1
92
98
93
99
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-18-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 4 +++
9
target/arm/tcg/translate-a64.c | 48 +++++-----------------------------
10
2 files changed, 11 insertions(+), 41 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
17
RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
18
19
# Evaluate into flags
20
+
21
+SETF8 0 01 11010000 00000 000010 rn:5 01101
22
+SETF16 0 01 11010000 00000 010010 rn:5 01101
23
+
24
# Conditional compare (regster)
25
# Conditional compare (immediate)
26
# Conditional select
27
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/tcg/translate-a64.c
30
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ static bool trans_RMIF(DisasContext *s, arg_RMIF *a)
32
return true;
33
}
34
35
-/*
36
- * Evaluate into flags
37
- * 31 30 29 21 15 14 10 5 4 0
38
- * +--+--+--+-----------------+---------+----+---------+------+--+------+
39
- * |sf|op| S| 1 1 0 1 0 0 0 0 | opcode2 | sz | 0 0 1 0 | Rn |o3| mask |
40
- * +--+--+--+-----------------+---------+----+---------+------+--+------+
41
- */
42
-static void disas_evaluate_into_flags(DisasContext *s, uint32_t insn)
43
+static bool do_setf(DisasContext *s, int rn, int shift)
44
{
45
- int o3_mask = extract32(insn, 0, 5);
46
- int rn = extract32(insn, 5, 5);
47
- int o2 = extract32(insn, 15, 6);
48
- int sz = extract32(insn, 14, 1);
49
- int sf_op_s = extract32(insn, 29, 3);
50
- TCGv_i32 tmp;
51
- int shift;
52
+ TCGv_i32 tmp = tcg_temp_new_i32();
53
54
- if (sf_op_s != 1 || o2 != 0 || o3_mask != 0xd ||
55
- !dc_isar_feature(aa64_condm_4, s)) {
56
- unallocated_encoding(s);
57
- return;
58
- }
59
- shift = sz ? 16 : 24; /* SETF16 or SETF8 */
60
-
61
- tmp = tcg_temp_new_i32();
62
tcg_gen_extrl_i64_i32(tmp, cpu_reg(s, rn));
63
tcg_gen_shli_i32(cpu_NF, tmp, shift);
64
tcg_gen_shli_i32(cpu_VF, tmp, shift - 1);
65
tcg_gen_mov_i32(cpu_ZF, cpu_NF);
66
tcg_gen_xor_i32(cpu_VF, cpu_VF, cpu_NF);
67
+ return true;
68
}
69
70
+TRANS_FEAT(SETF8, aa64_condm_4, do_setf, a->rn, 24)
71
+TRANS_FEAT(SETF16, aa64_condm_4, do_setf, a->rn, 16)
72
+
73
/* Conditional compare (immediate / register)
74
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
75
* +--+--+--+------------------------+--------+------+----+--+------+--+-----+
76
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
77
{
78
int op1 = extract32(insn, 28, 1);
79
int op2 = extract32(insn, 21, 4);
80
- int op3 = extract32(insn, 10, 6);
81
82
if (!op1) {
83
goto do_unallocated;
84
}
85
86
switch (op2) {
87
- case 0x0:
88
- switch (op3) {
89
- case 0x02: /* Evaluate into flags */
90
- case 0x12:
91
- case 0x22:
92
- case 0x32:
93
- disas_evaluate_into_flags(s, insn);
94
- break;
95
-
96
- default:
97
- case 0x00: /* Add/subtract (with carry) */
98
- case 0x01: /* Rotate right into flags */
99
- case 0x21:
100
- goto do_unallocated;
101
- }
102
- break;
103
-
104
case 0x2: /* Conditional compare */
105
disas_cc(s, insn); /* both imm and reg forms */
106
break;
107
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
108
109
default:
110
do_unallocated:
111
+ case 0x0:
112
case 0x6: /* Data-processing */
113
case 0x8 ... 0xf: /* (3 source) */
114
unallocated_encoding(s);
115
--
116
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 6 ++--
9
target/arm/tcg/translate-a64.c | 66 +++++++++++-----------------------
10
2 files changed, 25 insertions(+), 47 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
17
SETF8 0 01 11010000 00000 000010 rn:5 01101
18
SETF16 0 01 11010000 00000 010010 rn:5 01101
19
20
-# Conditional compare (regster)
21
-# Conditional compare (immediate)
22
+# Conditional compare
23
+
24
+CCMP sf:1 op:1 1 11010010 y:5 cond:4 imm:1 0 rn:5 0 nzcv:4
25
+
26
# Conditional select
27
# Data Processing (3-source)
28
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool do_setf(DisasContext *s, int rn, int shift)
34
TRANS_FEAT(SETF8, aa64_condm_4, do_setf, a->rn, 24)
35
TRANS_FEAT(SETF16, aa64_condm_4, do_setf, a->rn, 16)
36
37
-/* Conditional compare (immediate / register)
38
- * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
39
- * +--+--+--+------------------------+--------+------+----+--+------+--+-----+
40
- * |sf|op| S| 1 1 0 1 0 0 1 0 |imm5/rm | cond |i/r |o2| Rn |o3|nzcv |
41
- * +--+--+--+------------------------+--------+------+----+--+------+--+-----+
42
- * [1] y [0] [0]
43
- */
44
-static void disas_cc(DisasContext *s, uint32_t insn)
45
+/* CCMP, CCMN */
46
+static bool trans_CCMP(DisasContext *s, arg_CCMP *a)
47
{
48
- unsigned int sf, op, y, cond, rn, nzcv, is_imm;
49
- TCGv_i32 tcg_t0, tcg_t1, tcg_t2;
50
- TCGv_i64 tcg_tmp, tcg_y, tcg_rn;
51
+ TCGv_i32 tcg_t0 = tcg_temp_new_i32();
52
+ TCGv_i32 tcg_t1 = tcg_temp_new_i32();
53
+ TCGv_i32 tcg_t2 = tcg_temp_new_i32();
54
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
55
+ TCGv_i64 tcg_rn, tcg_y;
56
DisasCompare c;
57
-
58
- if (!extract32(insn, 29, 1)) {
59
- unallocated_encoding(s);
60
- return;
61
- }
62
- if (insn & (1 << 10 | 1 << 4)) {
63
- unallocated_encoding(s);
64
- return;
65
- }
66
- sf = extract32(insn, 31, 1);
67
- op = extract32(insn, 30, 1);
68
- is_imm = extract32(insn, 11, 1);
69
- y = extract32(insn, 16, 5); /* y = rm (reg) or imm5 (imm) */
70
- cond = extract32(insn, 12, 4);
71
- rn = extract32(insn, 5, 5);
72
- nzcv = extract32(insn, 0, 4);
73
+ unsigned nzcv;
74
75
/* Set T0 = !COND. */
76
- tcg_t0 = tcg_temp_new_i32();
77
- arm_test_cc(&c, cond);
78
+ arm_test_cc(&c, a->cond);
79
tcg_gen_setcondi_i32(tcg_invert_cond(c.cond), tcg_t0, c.value, 0);
80
81
/* Load the arguments for the new comparison. */
82
- if (is_imm) {
83
- tcg_y = tcg_temp_new_i64();
84
- tcg_gen_movi_i64(tcg_y, y);
85
+ if (a->imm) {
86
+ tcg_y = tcg_constant_i64(a->y);
87
} else {
88
- tcg_y = cpu_reg(s, y);
89
+ tcg_y = cpu_reg(s, a->y);
90
}
91
- tcg_rn = cpu_reg(s, rn);
92
+ tcg_rn = cpu_reg(s, a->rn);
93
94
/* Set the flags for the new comparison. */
95
- tcg_tmp = tcg_temp_new_i64();
96
- if (op) {
97
- gen_sub_CC(sf, tcg_tmp, tcg_rn, tcg_y);
98
+ if (a->op) {
99
+ gen_sub_CC(a->sf, tcg_tmp, tcg_rn, tcg_y);
100
} else {
101
- gen_add_CC(sf, tcg_tmp, tcg_rn, tcg_y);
102
+ gen_add_CC(a->sf, tcg_tmp, tcg_rn, tcg_y);
103
}
104
105
- /* If COND was false, force the flags to #nzcv. Compute two masks
106
+ /*
107
+ * If COND was false, force the flags to #nzcv. Compute two masks
108
* to help with this: T1 = (COND ? 0 : -1), T2 = (COND ? -1 : 0).
109
* For tcg hosts that support ANDC, we can make do with just T1.
110
* In either case, allow the tcg optimizer to delete any unused mask.
111
*/
112
- tcg_t1 = tcg_temp_new_i32();
113
- tcg_t2 = tcg_temp_new_i32();
114
tcg_gen_neg_i32(tcg_t1, tcg_t0);
115
tcg_gen_subi_i32(tcg_t2, tcg_t0, 1);
116
117
+ nzcv = a->nzcv;
118
if (nzcv & 8) { /* N */
119
tcg_gen_or_i32(cpu_NF, cpu_NF, tcg_t1);
120
} else {
121
@@ -XXX,XX +XXX,XX @@ static void disas_cc(DisasContext *s, uint32_t insn)
122
tcg_gen_and_i32(cpu_VF, cpu_VF, tcg_t2);
123
}
124
}
125
+ return true;
126
}
127
128
/* Conditional select
129
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
130
}
131
132
switch (op2) {
133
- case 0x2: /* Conditional compare */
134
- disas_cc(s, insn); /* both imm and reg forms */
135
- break;
136
-
137
case 0x4: /* Conditional select */
138
disas_cond_select(s, insn);
139
break;
140
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
141
default:
142
do_unallocated:
143
case 0x0:
144
+ case 0x2: /* Conditional compare */
145
case 0x6: /* Data-processing */
146
case 0x8 ... 0xf: /* (3 source) */
147
unallocated_encoding(s);
148
--
149
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This is the only user of the function.
3
This includes CSEL, CSINC, CSINV, CSNEG. Remove disas_data_proc_reg,
4
as these were the last insns decoded by that function.
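
As an aside (not part of the patch itself), the rn == rm == 31 special case
kept in trans_CSEL() is what covers the CSET and CSETM aliases:

    /* CSET  Xd, cond  is an alias of  CSINC Xd, XZR, XZR, invert(cond) */
    /* CSETM Xd, cond  is an alias of  CSINV Xd, XZR, XZR, invert(cond) */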
4
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241211163036.2297116-20-richard.henderson@linaro.org
7
Message-id: 20200815013145.539409-6-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate-sve.c | 19 ++++++-------------
11
target/arm/tcg/a64.decode | 3 ++
11
1 file changed, 6 insertions(+), 13 deletions(-)
12
target/arm/tcg/translate-a64.c | 84 ++++++----------------------------
13
2 files changed, 17 insertions(+), 70 deletions(-)
12
14
13
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-sve.c
17
--- a/target/arm/tcg/a64.decode
16
+++ b/target/arm/translate-sve.c
18
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
19
@@ -XXX,XX +XXX,XX @@ SETF16 0 01 11010000 00000 010010 rn:5 01101
18
tcg_gen_gvec_dup_imm(MO_64, vec_full_reg_offset(s, rd), vsz, vsz, word);
20
CCMP sf:1 op:1 1 11010010 y:5 cond:4 imm:1 0 rn:5 0 nzcv:4
21
22
# Conditional select
23
+
24
+CSEL sf:1 else_inv:1 011010100 rm:5 cond:4 0 else_inc:1 rn:5 rd:5
25
+
26
# Data Processing (3-source)
27
28
&rrrr rd rn rm ra
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool trans_CCMP(DisasContext *s, arg_CCMP *a)
34
return true;
19
}
35
}
20
36
21
-/* Invoke a vector expander on two Pregs. */
37
-/* Conditional select
22
-static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
38
- * 31 30 29 28 21 20 16 15 12 11 10 9 5 4 0
23
- int esz, int rd, int rn)
39
- * +----+----+---+-----------------+------+------+-----+------+------+
24
-{
40
- * | sf | op | S | 1 1 0 1 0 1 0 0 | Rm | cond | op2 | Rn | Rd |
25
- if (sve_access_check(s)) {
41
- * +----+----+---+-----------------+------+------+-----+------+------+
26
- unsigned psz = pred_gvec_reg_size(s);
42
- */
27
- gvec_fn(esz, pred_full_reg_offset(s, rd),
43
-static void disas_cond_select(DisasContext *s, uint32_t insn)
28
- pred_full_reg_offset(s, rn), psz, psz);
44
+static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
45
{
46
- unsigned int sf, else_inv, rm, cond, else_inc, rn, rd;
47
- TCGv_i64 tcg_rd, zero;
48
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
49
+ TCGv_i64 zero = tcg_constant_i64(0);
50
DisasCompare64 c;
51
52
- if (extract32(insn, 29, 1) || extract32(insn, 11, 1)) {
53
- /* S == 1 or op2<1> == 1 */
54
- unallocated_encoding(s);
55
- return;
29
- }
56
- }
30
- return true;
57
- sf = extract32(insn, 31, 1);
58
- else_inv = extract32(insn, 30, 1);
59
- rm = extract32(insn, 16, 5);
60
- cond = extract32(insn, 12, 4);
61
- else_inc = extract32(insn, 10, 1);
62
- rn = extract32(insn, 5, 5);
63
- rd = extract32(insn, 0, 5);
64
+ a64_test_cc(&c, a->cond);
65
66
- tcg_rd = cpu_reg(s, rd);
67
-
68
- a64_test_cc(&c, cond);
69
- zero = tcg_constant_i64(0);
70
-
71
- if (rn == 31 && rm == 31 && (else_inc ^ else_inv)) {
72
+ if (a->rn == 31 && a->rm == 31 && (a->else_inc ^ a->else_inv)) {
73
/* CSET & CSETM. */
74
- if (else_inv) {
75
+ if (a->else_inv) {
76
tcg_gen_negsetcond_i64(tcg_invert_cond(c.cond),
77
tcg_rd, c.value, zero);
78
} else {
79
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
80
tcg_rd, c.value, zero);
81
}
82
} else {
83
- TCGv_i64 t_true = cpu_reg(s, rn);
84
- TCGv_i64 t_false = read_cpu_reg(s, rm, 1);
85
- if (else_inv && else_inc) {
86
+ TCGv_i64 t_true = cpu_reg(s, a->rn);
87
+ TCGv_i64 t_false = read_cpu_reg(s, a->rm, 1);
88
+
89
+ if (a->else_inv && a->else_inc) {
90
tcg_gen_neg_i64(t_false, t_false);
91
- } else if (else_inv) {
92
+ } else if (a->else_inv) {
93
tcg_gen_not_i64(t_false, t_false);
94
- } else if (else_inc) {
95
+ } else if (a->else_inc) {
96
tcg_gen_addi_i64(t_false, t_false, 1);
97
}
98
tcg_gen_movcond_i64(c.cond, tcg_rd, c.value, zero, t_true, t_false);
99
}
100
101
- if (!sf) {
102
+ if (!a->sf) {
103
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
104
}
31
-}
105
-}
32
-
106
-
33
/* Invoke a vector expander on three Pregs. */
107
-/*
34
static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
108
- * Data processing - register
35
int esz, int rd, int rn, int rm)
109
- * 31 30 29 28 25 21 20 16 10 0
36
@@ -XXX,XX +XXX,XX @@ static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
110
- * +--+---+--+---+-------+-----+-------+-------+---------+
37
/* Invoke a vector move on two Pregs. */
111
- * | |op0| |op1| 1 0 1 | op2 | | op3 | |
38
static bool do_mov_p(DisasContext *s, int rd, int rn)
112
- * +--+---+--+---+-------+-----+-------+-------+---------+
39
{
113
- */
40
- return do_vector2_p(s, tcg_gen_gvec_mov, 0, rd, rn);
114
-static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
41
+ if (sve_access_check(s)) {
115
-{
42
+ unsigned psz = pred_gvec_reg_size(s);
116
- int op1 = extract32(insn, 28, 1);
43
+ tcg_gen_gvec_mov(MO_8, pred_full_reg_offset(s, rd),
117
- int op2 = extract32(insn, 21, 4);
44
+ pred_full_reg_offset(s, rn), psz, psz);
118
-
45
+ }
119
- if (!op1) {
120
- goto do_unallocated;
121
- }
122
-
123
- switch (op2) {
124
- case 0x4: /* Conditional select */
125
- disas_cond_select(s, insn);
126
- break;
127
-
128
- default:
129
- do_unallocated:
130
- case 0x0:
131
- case 0x2: /* Conditional compare */
132
- case 0x6: /* Data-processing */
133
- case 0x8 ... 0xf: /* (3 source) */
134
- unallocated_encoding(s);
135
- break;
136
- }
46
+ return true;
137
+ return true;
47
}
138
}
48
139
49
/* Set the cpu flags as per a return from an SVE helper. */
140
static void handle_fp_compare(DisasContext *s, int size,
141
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
142
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
143
{
144
switch (extract32(insn, 25, 4)) {
145
- case 0x5:
146
- case 0xd: /* Data processing - register */
147
- disas_data_proc_reg(s, insn);
148
- break;
149
case 0x7:
150
case 0xf: /* Data processing - SIMD and floating point */
151
disas_data_proc_simd_fp(s, insn);
50
--
152
--
51
2.20.1
153
2.34.1
52
53
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide a simple way to check for float64, float32,
4
and float16 support, as well as the fpu enabled.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241211163036.2297116-21-richard.henderson@linaro.org
5
Message-id: 20200815013145.539409-13-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate-sve.c | 20 ++++++++++++--------
11
target/arm/tcg/translate-a64.c | 62 ++++++++++++++++++----------------
9
1 file changed, 12 insertions(+), 8 deletions(-)
12
1 file changed, 32 insertions(+), 30 deletions(-)
10
13
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
16
--- a/target/arm/tcg/translate-a64.c
14
+++ b/target/arm/translate-sve.c
17
+++ b/target/arm/tcg/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
18
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
16
return size_for_gvec(pred_full_reg_size(s));
19
return true;
17
}
20
}
18
21
19
+/* Invoke an out-of-line helper on 2 Zregs. */
22
+/*
20
+static void gen_gvec_ool_zz(DisasContext *s, gen_helper_gvec_2 *fn,
23
+ * Return <0 for non-supported element sizes, with MO_16 controlled by
21
+ int rd, int rn, int data)
24
+ * FEAT_FP16; return 0 for fp disabled; otherwise return >0 for success.
25
+ */
26
+static int fp_access_check_scalar_hsd(DisasContext *s, MemOp esz)
22
+{
27
+{
23
+ unsigned vsz = vec_full_reg_size(s);
28
+ switch (esz) {
24
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
29
+ case MO_64:
25
+ vec_full_reg_offset(s, rn),
30
+ case MO_32:
26
+ vsz, vsz, data, fn);
31
+ break;
32
+ case MO_16:
33
+ if (!dc_isar_feature(aa64_fp16, s)) {
34
+ return -1;
35
+ }
36
+ break;
37
+ default:
38
+ return -1;
39
+ }
40
+ return fp_access_check(s);
27
+}
41
+}
28
+
42
+
29
/* Invoke an out-of-line helper on 3 Zregs. */
43
/*
30
static void gen_gvec_ool_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
44
* Check that SVE access is enabled. If it is, return true.
31
int rd, int rn, int rm, int data)
45
* If not, emit code to generate an appropriate exception and return false.
32
@@ -XXX,XX +XXX,XX @@ static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a)
46
@@ -XXX,XX +XXX,XX @@ static bool trans_FCSEL(DisasContext *s, arg_FCSEL *a)
33
return false;
47
{
48
TCGv_i64 t_true, t_false;
49
DisasCompare64 c;
50
+ int check = fp_access_check_scalar_hsd(s, a->esz);
51
52
- switch (a->esz) {
53
- case MO_32:
54
- case MO_64:
55
- break;
56
- case MO_16:
57
- if (!dc_isar_feature(aa64_fp16, s)) {
58
- return false;
59
- }
60
- break;
61
- default:
62
- return false;
63
- }
64
-
65
- if (!fp_access_check(s)) {
66
- return true;
67
+ if (check <= 0) {
68
+ return check == 0;
34
}
69
}
35
if (sve_access_check(s)) {
70
36
- unsigned vsz = vec_full_reg_size(s);
71
/* Zero extend sreg & hreg inputs to 64 bits now. */
37
- tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
72
@@ -XXX,XX +XXX,XX @@ TRANS(FMINV_s, do_fp_reduction, a, gen_helper_vfp_mins)
38
- vec_full_reg_offset(s, a->rn),
73
39
- vsz, vsz, 0, fns[a->esz]);
74
static bool trans_FMOVI_s(DisasContext *s, arg_FMOVI_s *a)
40
+ gen_gvec_ool_zz(s, fns[a->esz], a->rd, a->rn, 0);
75
{
76
- switch (a->esz) {
77
- case MO_32:
78
- case MO_64:
79
- break;
80
- case MO_16:
81
- if (!dc_isar_feature(aa64_fp16, s)) {
82
- return false;
83
- }
84
- break;
85
- default:
86
- return false;
87
- }
88
- if (fp_access_check(s)) {
89
- uint64_t imm = vfp_expand_imm(a->esz, a->imm);
90
- write_fp_dreg(s, a->rd, tcg_constant_i64(imm));
91
+ int check = fp_access_check_scalar_hsd(s, a->esz);
92
+ uint64_t imm;
93
+
94
+ if (check <= 0) {
95
+ return check == 0;
41
}
96
}
97
+
98
+ imm = vfp_expand_imm(a->esz, a->imm);
99
+ write_fp_dreg(s, a->rd, tcg_constant_i64(imm));
42
return true;
100
return true;
43
}
101
}
44
@@ -XXX,XX +XXX,XX @@ static bool trans_REV_v(DisasContext *s, arg_rr_esz *a)
102
45
};
46
47
if (sve_access_check(s)) {
48
- unsigned vsz = vec_full_reg_size(s);
49
- tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
50
- vec_full_reg_offset(s, a->rn),
51
- vsz, vsz, 0, fns[a->esz]);
52
+ gen_gvec_ool_zz(s, fns[a->esz], a->rd, a->rn, 0);
53
}
54
return true;
55
}
56
--
103
--
57
2.20.1
104
2.34.1
58
59
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Provide a simple way to check for float64, float32, and float16
4
support vs vector width, as well as the fpu enabled.
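
As a usage sketch (illustrative only; trans_EXAMPLE is a made-up name, the
real callers are the do_fp3_vector-style helpers converted below), the
tri-state return value is consumed like this: negative means the encoding
is unallocated, zero means the access check already raised the exception,
positive means translation can proceed.

    static bool trans_EXAMPLE(DisasContext *s, arg_qrrr_e *a)
    {
        int check = fp_access_check_vector_hsd(s, a->q, a->esz);

        if (check <= 0) {
            /* <0: unallocated encoding; ==0: fp access trap generated */
            return check == 0;
        }
        /* ... emit the vector operation here ... */
        return true;
    }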
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-22-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/translate-a64.c | 135 +++++++++++++--------------------
12
1 file changed, 54 insertions(+), 81 deletions(-)
13
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static int fp_access_check_scalar_hsd(DisasContext *s, MemOp esz)
19
return fp_access_check(s);
20
}
21
22
+/* Likewise, but vector MO_64 must have two elements. */
23
+static int fp_access_check_vector_hsd(DisasContext *s, bool is_q, MemOp esz)
24
+{
25
+ switch (esz) {
26
+ case MO_64:
27
+ if (!is_q) {
28
+ return -1;
29
+ }
30
+ break;
31
+ case MO_32:
32
+ break;
33
+ case MO_16:
34
+ if (!dc_isar_feature(aa64_fp16, s)) {
35
+ return -1;
36
+ }
37
+ break;
38
+ default:
39
+ return -1;
40
+ }
41
+ return fp_access_check(s);
42
+}
43
+
44
/*
45
* Check that SVE access is enabled. If it is, return true.
46
* If not, emit code to generate an appropriate exception and return false.
47
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a, int data,
48
gen_helper_gvec_3_ptr * const fns[3])
49
{
50
MemOp esz = a->esz;
51
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
52
53
- switch (esz) {
54
- case MO_64:
55
- if (!a->q) {
56
- return false;
57
- }
58
- break;
59
- case MO_32:
60
- break;
61
- case MO_16:
62
- if (!dc_isar_feature(aa64_fp16, s)) {
63
- return false;
64
- }
65
- break;
66
- default:
67
- return false;
68
- }
69
- if (fp_access_check(s)) {
70
- gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
71
- esz == MO_16, data, fns[esz - 1]);
72
+ if (check <= 0) {
73
+ return check == 0;
74
}
75
+
76
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
77
+ esz == MO_16, data, fns[esz - 1]);
78
return true;
79
}
80
81
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FCADD_270, aa64_fcma, do_fp3_vector, a, 1, f_vector_fcadd)
82
83
static bool trans_FCMLA_v(DisasContext *s, arg_FCMLA_v *a)
84
{
85
- gen_helper_gvec_4_ptr *fn;
86
+ static gen_helper_gvec_4_ptr * const fn[] = {
87
+ [MO_16] = gen_helper_gvec_fcmlah,
88
+ [MO_32] = gen_helper_gvec_fcmlas,
89
+ [MO_64] = gen_helper_gvec_fcmlad,
90
+ };
91
+ int check;
92
93
if (!dc_isar_feature(aa64_fcma, s)) {
94
return false;
95
}
96
- switch (a->esz) {
97
- case MO_64:
98
- if (!a->q) {
99
- return false;
100
- }
101
- fn = gen_helper_gvec_fcmlad;
102
- break;
103
- case MO_32:
104
- fn = gen_helper_gvec_fcmlas;
105
- break;
106
- case MO_16:
107
- if (!dc_isar_feature(aa64_fp16, s)) {
108
- return false;
109
- }
110
- fn = gen_helper_gvec_fcmlah;
111
- break;
112
- default:
113
- return false;
114
- }
115
- if (fp_access_check(s)) {
116
- gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
117
- a->esz == MO_16, a->rot, fn);
118
+
119
+ check = fp_access_check_vector_hsd(s, a->q, a->esz);
120
+ if (check <= 0) {
121
+ return check == 0;
122
}
123
+
124
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
125
+ a->esz == MO_16, a->rot, fn[a->esz]);
126
return true;
127
}
128
129
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
130
gen_helper_gvec_3_ptr * const fns[3])
131
{
132
MemOp esz = a->esz;
133
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
134
135
- switch (esz) {
136
- case MO_64:
137
- if (!a->q) {
138
- return false;
139
- }
140
- break;
141
- case MO_32:
142
- break;
143
- case MO_16:
144
- if (!dc_isar_feature(aa64_fp16, s)) {
145
- return false;
146
- }
147
- break;
148
- default:
149
- g_assert_not_reached();
150
- }
151
- if (fp_access_check(s)) {
152
- gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
153
- esz == MO_16, a->idx, fns[esz - 1]);
154
+ if (check <= 0) {
155
+ return check == 0;
156
}
157
+
158
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
159
+ esz == MO_16, a->idx, fns[esz - 1]);
160
return true;
161
}
162
163
@@ -XXX,XX +XXX,XX @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
164
gen_helper_gvec_fmla_idx_d,
165
};
166
MemOp esz = a->esz;
167
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
168
169
- switch (esz) {
170
- case MO_64:
171
- if (!a->q) {
172
- return false;
173
- }
174
- break;
175
- case MO_32:
176
- break;
177
- case MO_16:
178
- if (!dc_isar_feature(aa64_fp16, s)) {
179
- return false;
180
- }
181
- break;
182
- default:
183
- g_assert_not_reached();
184
- }
185
- if (fp_access_check(s)) {
186
- gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
187
- esz == MO_16, (a->idx << 1) | neg,
188
- fns[esz - 1]);
189
+ if (check <= 0) {
190
+ return check == 0;
191
}
192
+
193
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
194
+ esz == MO_16, (a->idx << 1) | neg,
195
+ fns[esz - 1]);
196
return true;
197
}
198
199
--
200
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Model after gen_gvec_fn_zzz et al.
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20241211163036.2297116-23-richard.henderson@linaro.org
7
Message-id: 20200815013145.539409-9-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
7
---
10
target/arm/translate-sve.c | 35 ++++++++++++++++-------------------
8
target/arm/tcg/a64.decode | 8 +
11
1 file changed, 16 insertions(+), 19 deletions(-)
9
target/arm/tcg/translate-a64.c | 283 ++++++++++++---------------------
10
2 files changed, 112 insertions(+), 179 deletions(-)
12
11
13
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-sve.c
14
--- a/target/arm/tcg/a64.decode
16
+++ b/target/arm/translate-sve.c
15
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
16
@@ -XXX,XX +XXX,XX @@ FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
18
return size_for_gvec(pred_full_reg_size(s));
17
19
}
18
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
20
19
21
-/* Invoke a vector expander on two Zregs. */
20
+# Floating-point Compare
22
+/* Invoke an out-of-line helper on 3 Zregs and a predicate. */
21
+
23
+static void gen_gvec_ool_zzzp(DisasContext *s, gen_helper_gvec_4 *fn,
22
+FCMP 00011110 .. 1 rm:5 001000 rn:5 e:1 z:1 000 esz=%esz_hsd
24
+ int rd, int rn, int rm, int pg, int data)
23
+
25
+{
24
+# Floating-point Conditional Compare
26
+ unsigned vsz = vec_full_reg_size(s);
25
+
27
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
26
+FCCMP 00011110 .. 1 rm:5 cond:4 01 rn:5 e:1 nzcv:4 esz=%esz_hsd
28
+ vec_full_reg_offset(s, rn),
27
+
29
+ vec_full_reg_offset(s, rm),
28
# Advanced SIMD Modified Immediate / Shift by Immediate
30
+ pred_full_reg_offset(s, pg),
29
31
+ vsz, vsz, data, fn);
30
%abcdefgh 16:3 5:5
32
+}
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
33
32
index XXXXXXX..XXXXXXX 100644
34
+/* Invoke a vector expander on two Zregs. */
33
--- a/target/arm/tcg/translate-a64.c
35
static void gen_gvec_fn_zz(DisasContext *s, GVecGen2Fn *gvec_fn,
34
+++ b/target/arm/tcg/translate-a64.c
36
int esz, int rd, int rn)
35
@@ -XXX,XX +XXX,XX @@ static bool trans_FMOVI_s(DisasContext *s, arg_FMOVI_s *a)
37
{
38
@@ -XXX,XX +XXX,XX @@ static bool trans_UQSUB_zzz(DisasContext *s, arg_rrr_esz *a)
39
40
static bool do_zpzz_ool(DisasContext *s, arg_rprr_esz *a, gen_helper_gvec_4 *fn)
41
{
42
- unsigned vsz = vec_full_reg_size(s);
43
if (fn == NULL) {
44
return false;
45
}
46
if (sve_access_check(s)) {
47
- tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
48
- vec_full_reg_offset(s, a->rn),
49
- vec_full_reg_offset(s, a->rm),
50
- pred_full_reg_offset(s, a->pg),
51
- vsz, vsz, 0, fn);
52
+ gen_gvec_ool_zzzp(s, fn, a->rd, a->rn, a->rm, a->pg, 0);
53
}
54
return true;
36
return true;
55
}
37
}
56
@@ -XXX,XX +XXX,XX @@ static void do_sel_z(DisasContext *s, int rd, int rn, int rm, int pg, int esz)
38
57
gen_helper_sve_sel_zpzz_b, gen_helper_sve_sel_zpzz_h,
39
+/*
58
gen_helper_sve_sel_zpzz_s, gen_helper_sve_sel_zpzz_d
40
+ * Floating point compare, conditional compare
59
};
41
+ */
60
- unsigned vsz = vec_full_reg_size(s);
42
+
61
- tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
43
+static void handle_fp_compare(DisasContext *s, int size,
62
- vec_full_reg_offset(s, rn),
44
+ unsigned int rn, unsigned int rm,
63
- vec_full_reg_offset(s, rm),
45
+ bool cmp_with_zero, bool signal_all_nans)
64
- pred_full_reg_offset(s, pg),
46
+{
65
- vsz, vsz, 0, fns[esz]);
47
+ TCGv_i64 tcg_flags = tcg_temp_new_i64();
66
+ gen_gvec_ool_zzzp(s, fns[esz], rd, rn, rm, pg, 0);
48
+ TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
67
}
49
+
68
50
+ if (size == MO_64) {
69
#define DO_ZPZZ(NAME, name) \
51
+ TCGv_i64 tcg_vn, tcg_vm;
70
@@ -XXX,XX +XXX,XX @@ static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a)
52
+
71
static bool trans_SPLICE(DisasContext *s, arg_rprr_esz *a)
53
+ tcg_vn = read_fp_dreg(s, rn);
72
{
54
+ if (cmp_with_zero) {
73
if (sve_access_check(s)) {
55
+ tcg_vm = tcg_constant_i64(0);
74
- unsigned vsz = vec_full_reg_size(s);
56
+ } else {
75
- tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
57
+ tcg_vm = read_fp_dreg(s, rm);
76
- vec_full_reg_offset(s, a->rn),
58
+ }
77
- vec_full_reg_offset(s, a->rm),
59
+ if (signal_all_nans) {
78
- pred_full_reg_offset(s, a->pg),
60
+ gen_helper_vfp_cmped_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
79
- vsz, vsz, a->esz, gen_helper_sve_splice);
61
+ } else {
80
+ gen_gvec_ool_zzzp(s, gen_helper_sve_splice,
62
+ gen_helper_vfp_cmpd_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
81
+ a->rd, a->rn, a->rm, a->pg, 0);
63
+ }
82
}
64
+ } else {
65
+ TCGv_i32 tcg_vn = tcg_temp_new_i32();
66
+ TCGv_i32 tcg_vm = tcg_temp_new_i32();
67
+
68
+ read_vec_element_i32(s, tcg_vn, rn, 0, size);
69
+ if (cmp_with_zero) {
70
+ tcg_gen_movi_i32(tcg_vm, 0);
71
+ } else {
72
+ read_vec_element_i32(s, tcg_vm, rm, 0, size);
73
+ }
74
+
75
+ switch (size) {
76
+ case MO_32:
77
+ if (signal_all_nans) {
78
+ gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
79
+ } else {
80
+ gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
81
+ }
82
+ break;
83
+ case MO_16:
84
+ if (signal_all_nans) {
85
+ gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
86
+ } else {
87
+ gen_helper_vfp_cmph_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
88
+ }
89
+ break;
90
+ default:
91
+ g_assert_not_reached();
92
+ }
93
+ }
94
+
95
+ gen_set_nzcv(tcg_flags);
96
+}
97
+
98
+/* FCMP, FCMPE */
99
+static bool trans_FCMP(DisasContext *s, arg_FCMP *a)
100
+{
101
+ int check = fp_access_check_scalar_hsd(s, a->esz);
102
+
103
+ if (check <= 0) {
104
+ return check == 0;
105
+ }
106
+
107
+ handle_fp_compare(s, a->esz, a->rn, a->rm, a->z, a->e);
108
+ return true;
109
+}
110
+
111
+/* FCCMP, FCCMPE */
112
+static bool trans_FCCMP(DisasContext *s, arg_FCCMP *a)
113
+{
114
+ TCGLabel *label_continue = NULL;
115
+ int check = fp_access_check_scalar_hsd(s, a->esz);
116
+
117
+ if (check <= 0) {
118
+ return check == 0;
119
+ }
120
+
121
+ if (a->cond < 0x0e) { /* not always */
122
+ TCGLabel *label_match = gen_new_label();
123
+ label_continue = gen_new_label();
124
+ arm_gen_test_cc(a->cond, label_match);
125
+ /* nomatch: */
126
+ gen_set_nzcv(tcg_constant_i64(a->nzcv << 28));
127
+ tcg_gen_br(label_continue);
128
+ gen_set_label(label_match);
129
+ }
130
+
131
+ handle_fp_compare(s, a->esz, a->rn, a->rm, false, a->e);
132
+
133
+ if (label_continue) {
134
+ gen_set_label(label_continue);
135
+ }
136
+ return true;
137
+}
138
+
139
/*
140
* Advanced SIMD Modified Immediate
141
*/
142
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
83
return true;
143
return true;
84
}
144
}
145
146
-static void handle_fp_compare(DisasContext *s, int size,
147
- unsigned int rn, unsigned int rm,
148
- bool cmp_with_zero, bool signal_all_nans)
149
-{
150
- TCGv_i64 tcg_flags = tcg_temp_new_i64();
151
- TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
152
-
153
- if (size == MO_64) {
154
- TCGv_i64 tcg_vn, tcg_vm;
155
-
156
- tcg_vn = read_fp_dreg(s, rn);
157
- if (cmp_with_zero) {
158
- tcg_vm = tcg_constant_i64(0);
159
- } else {
160
- tcg_vm = read_fp_dreg(s, rm);
161
- }
162
- if (signal_all_nans) {
163
- gen_helper_vfp_cmped_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
164
- } else {
165
- gen_helper_vfp_cmpd_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
166
- }
167
- } else {
168
- TCGv_i32 tcg_vn = tcg_temp_new_i32();
169
- TCGv_i32 tcg_vm = tcg_temp_new_i32();
170
-
171
- read_vec_element_i32(s, tcg_vn, rn, 0, size);
172
- if (cmp_with_zero) {
173
- tcg_gen_movi_i32(tcg_vm, 0);
174
- } else {
175
- read_vec_element_i32(s, tcg_vm, rm, 0, size);
176
- }
177
-
178
- switch (size) {
179
- case MO_32:
180
- if (signal_all_nans) {
181
- gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
182
- } else {
183
- gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
184
- }
185
- break;
186
- case MO_16:
187
- if (signal_all_nans) {
188
- gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
189
- } else {
190
- gen_helper_vfp_cmph_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
191
- }
192
- break;
193
- default:
194
- g_assert_not_reached();
195
- }
196
- }
197
-
198
- gen_set_nzcv(tcg_flags);
199
-}
200
-
201
-/* Floating point compare
202
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 10 9 5 4 0
203
- * +---+---+---+-----------+------+---+------+-----+---------+------+-------+
204
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | op | 1 0 0 0 | Rn | op2 |
205
- * +---+---+---+-----------+------+---+------+-----+---------+------+-------+
206
- */
207
-static void disas_fp_compare(DisasContext *s, uint32_t insn)
208
-{
209
- unsigned int mos, type, rm, op, rn, opc, op2r;
210
- int size;
211
-
212
- mos = extract32(insn, 29, 3);
213
- type = extract32(insn, 22, 2);
214
- rm = extract32(insn, 16, 5);
215
- op = extract32(insn, 14, 2);
216
- rn = extract32(insn, 5, 5);
217
- opc = extract32(insn, 3, 2);
218
- op2r = extract32(insn, 0, 3);
219
-
220
- if (mos || op || op2r) {
221
- unallocated_encoding(s);
222
- return;
223
- }
224
-
225
- switch (type) {
226
- case 0:
227
- size = MO_32;
228
- break;
229
- case 1:
230
- size = MO_64;
231
- break;
232
- case 3:
233
- size = MO_16;
234
- if (dc_isar_feature(aa64_fp16, s)) {
235
- break;
236
- }
237
- /* fallthru */
238
- default:
239
- unallocated_encoding(s);
240
- return;
241
- }
242
-
243
- if (!fp_access_check(s)) {
244
- return;
245
- }
246
-
247
- handle_fp_compare(s, size, rn, rm, opc & 1, opc & 2);
248
-}
249
-
250
-/* Floating point conditional compare
251
- * 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
252
- * +---+---+---+-----------+------+---+------+------+-----+------+----+------+
253
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 0 1 | Rn | op | nzcv |
254
- * +---+---+---+-----------+------+---+------+------+-----+------+----+------+
255
- */
256
-static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
257
-{
258
- unsigned int mos, type, rm, cond, rn, op, nzcv;
259
- TCGLabel *label_continue = NULL;
260
- int size;
261
-
262
- mos = extract32(insn, 29, 3);
263
- type = extract32(insn, 22, 2);
264
- rm = extract32(insn, 16, 5);
265
- cond = extract32(insn, 12, 4);
266
- rn = extract32(insn, 5, 5);
267
- op = extract32(insn, 4, 1);
268
- nzcv = extract32(insn, 0, 4);
269
-
270
- if (mos) {
271
- unallocated_encoding(s);
272
- return;
273
- }
274
-
275
- switch (type) {
276
- case 0:
277
- size = MO_32;
278
- break;
279
- case 1:
280
- size = MO_64;
281
- break;
282
- case 3:
283
- size = MO_16;
284
- if (dc_isar_feature(aa64_fp16, s)) {
285
- break;
286
- }
287
- /* fallthru */
288
- default:
289
- unallocated_encoding(s);
290
- return;
291
- }
292
-
293
- if (!fp_access_check(s)) {
294
- return;
295
- }
296
-
297
- if (cond < 0x0e) { /* not always */
298
- TCGLabel *label_match = gen_new_label();
299
- label_continue = gen_new_label();
300
- arm_gen_test_cc(cond, label_match);
301
- /* nomatch: */
302
- gen_set_nzcv(tcg_constant_i64(nzcv << 28));
303
- tcg_gen_br(label_continue);
304
- gen_set_label(label_match);
305
- }
306
-
307
- handle_fp_compare(s, size, rn, rm, false, op);
308
-
309
- if (cond < 0x0e) {
310
- gen_set_label(label_continue);
311
- }
312
-}
313
-
314
/* Floating-point data-processing (1 source) - half precision */
315
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
316
{
317
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
318
disas_fp_fixed_conv(s, insn);
319
} else {
320
switch (extract32(insn, 10, 2)) {
321
- case 1:
322
- /* Floating point conditional compare */
323
- disas_fp_ccomp(s, insn);
324
- break;
325
- case 2:
326
- /* Floating point data-processing (2 source) */
327
- unallocated_encoding(s); /* in decodetree */
328
- break;
329
- case 3:
330
- /* Floating point conditional select */
331
+ case 1: /* Floating point conditional compare */
332
+ case 2: /* Floating point data-processing (2 source) */
333
+ case 3: /* Floating point conditional select */
334
unallocated_encoding(s); /* in decodetree */
335
break;
336
case 0:
337
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
338
break;
339
case 1: /* [15:12] == xx10 */
340
/* Floating point compare */
341
- disas_fp_compare(s, insn);
342
+ unallocated_encoding(s); /* in decodetree */
343
break;
344
case 2: /* [15:12] == x100 */
345
/* Floating point data-processing (1 source) */
85
--
346
--
86
2.20.1
347
2.34.1
87
88
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Rather than require the user to fill in the immediate (shl or shr),
3
These opcodes are only supported as vector operations,
4
create full formats that include the immediate.
4
not as advsimd scalar. Set only_in_vector, and remove
5
the unreachable implementation of scalar fneg.
5
6
7
Reported-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200815013145.539409-14-richard.henderson@linaro.org
10
Message-id: 20241211163036.2297116-24-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
target/arm/sve.decode | 35 ++++++++++++++++-------------------
13
target/arm/tcg/translate-a64.c | 6 +++---
12
1 file changed, 16 insertions(+), 19 deletions(-)
14
1 file changed, 3 insertions(+), 3 deletions(-)
13
15
14
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
16
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/sve.decode
18
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/sve.decode
19
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
19
@rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
21
break;
20
22
case 0x2f: /* FABS */
21
# Two register operand, one immediate operand, with predicate,
23
case 0x6f: /* FNEG */
22
-# element size encoded as TSZHL. User must fill in imm.
24
+ only_in_vector = true;
23
-@rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
25
need_fpst = false;
24
- &rpri_esz rn=%reg_movprfx esz=%tszimm_esz
26
break;
25
+# element size encoded as TSZHL.
27
case 0x7d: /* FRSQRTE */
26
+@rdn_pg_tszimm_shl ........ .. ... ... ... pg:3 ..... rd:5 \
28
+ break;
27
+ &rpri_esz rn=%reg_movprfx esz=%tszimm_esz imm=%tszimm_shl
29
case 0x7f: /* FSQRT (vector) */
28
+@rdn_pg_tszimm_shr ........ .. ... ... ... pg:3 ..... rd:5 \
30
+ only_in_vector = true;
29
+ &rpri_esz rn=%reg_movprfx esz=%tszimm_esz imm=%tszimm_shr
31
break;
30
32
default:
31
# Similarly without predicate.
33
unallocated_encoding(s);
32
-@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
34
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
33
- &rri_esz esz=%tszimm16_esz
35
case 0x7b: /* FCVTZU */
34
+@rd_rn_tszimm_shl ........ .. ... ... ...... rn:5 rd:5 \
36
gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
35
+ &rri_esz esz=%tszimm16_esz imm=%tszimm16_shl
37
break;
36
+@rd_rn_tszimm_shr ........ .. ... ... ...... rn:5 rd:5 \
38
- case 0x6f: /* FNEG */
37
+ &rri_esz esz=%tszimm16_esz imm=%tszimm16_shr
39
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
38
40
- break;
39
# Two register operand, one immediate operand, with 4-bit predicate.
41
case 0x7d: /* FRSQRTE */
40
# User must fill in imm.
42
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
41
@@ -XXX,XX +XXX,XX @@ UMINV 00000100 .. 001 011 001 ... ..... ..... @rd_pg_rn
43
break;
42
### SVE Shift by Immediate - Predicated Group
43
44
# SVE bitwise shift by immediate (predicated)
45
-ASR_zpzi 00000100 .. 000 000 100 ... .. ... ..... \
46
- @rdn_pg_tszimm imm=%tszimm_shr
47
-LSR_zpzi 00000100 .. 000 001 100 ... .. ... ..... \
48
- @rdn_pg_tszimm imm=%tszimm_shr
49
-LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... \
50
- @rdn_pg_tszimm imm=%tszimm_shl
51
-ASRD 00000100 .. 000 100 100 ... .. ... ..... \
52
- @rdn_pg_tszimm imm=%tszimm_shr
53
+ASR_zpzi 00000100 .. 000 000 100 ... .. ... ..... @rdn_pg_tszimm_shr
54
+LSR_zpzi 00000100 .. 000 001 100 ... .. ... ..... @rdn_pg_tszimm_shr
55
+LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... @rdn_pg_tszimm_shl
56
+ASRD 00000100 .. 000 100 100 ... .. ... ..... @rdn_pg_tszimm_shr
57
58
# SVE bitwise shift by vector (predicated)
59
ASR_zpzz 00000100 .. 010 000 100 ... ..... ..... @rdn_pg_rm
60
@@ -XXX,XX +XXX,XX @@ RDVL 00000100 101 11111 01010 imm:s6 rd:5
61
### SVE Bitwise Shift - Unpredicated Group
62
63
# SVE bitwise shift by immediate (unpredicated)
64
-ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... \
65
- @rd_rn_tszimm imm=%tszimm16_shr
66
-LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... \
67
- @rd_rn_tszimm imm=%tszimm16_shr
68
-LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... \
69
- @rd_rn_tszimm imm=%tszimm16_shl
70
+ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... @rd_rn_tszimm_shr
71
+LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... @rd_rn_tszimm_shr
72
+LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... @rd_rn_tszimm_shl
73
74
# SVE bitwise shift by wide elements (unpredicated)
75
# Note esz != 3
76
--
44
--
77
2.20.1
45
2.34.1
78
79
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-25-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 7 +++
9
target/arm/tcg/translate-a64.c | 105 +++++++++++++++++++++++----------
10
2 files changed, 81 insertions(+), 31 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
18
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
19
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
20
+@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
21
22
@rrr_b ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=0
23
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
24
@@ -XXX,XX +XXX,XX @@ FMAXV_s 0110 1110 00 11000 01111 10 ..... ..... @rr_q1e2
25
FMINV_h 0.00 1110 10 11000 01111 10 ..... ..... @qrr_h
26
FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
27
28
+# Floating-point data processing (1 source)
29
+
30
+FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
31
+FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
32
+FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
33
+
34
# Floating-point Immediate
35
36
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
42
return true;
43
}
44
45
+typedef struct FPScalar1Int {
46
+ void (*gen_h)(TCGv_i32, TCGv_i32);
47
+ void (*gen_s)(TCGv_i32, TCGv_i32);
48
+ void (*gen_d)(TCGv_i64, TCGv_i64);
49
+} FPScalar1Int;
50
+
51
+static bool do_fp1_scalar_int(DisasContext *s, arg_rr_e *a,
52
+ const FPScalar1Int *f)
53
+{
54
+ switch (a->esz) {
55
+ case MO_64:
56
+ if (fp_access_check(s)) {
57
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
58
+ f->gen_d(t, t);
59
+ write_fp_dreg(s, a->rd, t);
60
+ }
61
+ break;
62
+ case MO_32:
63
+ if (fp_access_check(s)) {
64
+ TCGv_i32 t = read_fp_sreg(s, a->rn);
65
+ f->gen_s(t, t);
66
+ write_fp_sreg(s, a->rd, t);
67
+ }
68
+ break;
69
+ case MO_16:
70
+ if (!dc_isar_feature(aa64_fp16, s)) {
71
+ return false;
72
+ }
73
+ if (fp_access_check(s)) {
74
+ TCGv_i32 t = read_fp_hreg(s, a->rn);
75
+ f->gen_h(t, t);
76
+ write_fp_sreg(s, a->rd, t);
77
+ }
78
+ break;
79
+ default:
80
+ return false;
81
+ }
82
+ return true;
83
+}
84
+
85
+static const FPScalar1Int f_scalar_fmov = {
86
+ tcg_gen_mov_i32,
87
+ tcg_gen_mov_i32,
88
+ tcg_gen_mov_i64,
89
+};
90
+TRANS(FMOV_s, do_fp1_scalar_int, a, &f_scalar_fmov)
91
+
92
+static const FPScalar1Int f_scalar_fabs = {
93
+ gen_vfp_absh,
94
+ gen_vfp_abss,
95
+ gen_vfp_absd,
96
+};
97
+TRANS(FABS_s, do_fp1_scalar_int, a, &f_scalar_fabs)
98
+
99
+static const FPScalar1Int f_scalar_fneg = {
100
+ gen_vfp_negh,
101
+ gen_vfp_negs,
102
+ gen_vfp_negd,
103
+};
104
+TRANS(FNEG_s, do_fp1_scalar_int, a, &f_scalar_fneg)
105
+
106
/* Floating-point data-processing (1 source) - half precision */
107
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
108
{
109
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
110
TCGv_i32 tcg_res = tcg_temp_new_i32();
111
112
switch (opcode) {
113
- case 0x0: /* FMOV */
114
- tcg_gen_mov_i32(tcg_res, tcg_op);
115
- break;
116
- case 0x1: /* FABS */
117
- gen_vfp_absh(tcg_res, tcg_op);
118
- break;
119
- case 0x2: /* FNEG */
120
- gen_vfp_negh(tcg_res, tcg_op);
121
- break;
122
case 0x3: /* FSQRT */
123
fpst = fpstatus_ptr(FPST_FPCR_F16);
124
gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
125
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
126
gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
127
break;
128
default:
129
+ case 0x0: /* FMOV */
130
+ case 0x1: /* FABS */
131
+ case 0x2: /* FNEG */
132
g_assert_not_reached();
133
}
134
135
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
136
tcg_res = tcg_temp_new_i32();
137
138
switch (opcode) {
139
- case 0x0: /* FMOV */
140
- tcg_gen_mov_i32(tcg_res, tcg_op);
141
- goto done;
142
- case 0x1: /* FABS */
143
- gen_vfp_abss(tcg_res, tcg_op);
144
- goto done;
145
- case 0x2: /* FNEG */
146
- gen_vfp_negs(tcg_res, tcg_op);
147
- goto done;
148
case 0x3: /* FSQRT */
149
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
150
goto done;
151
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
152
gen_fpst = gen_helper_frint64_s;
153
break;
154
default:
155
+ case 0x0: /* FMOV */
156
+ case 0x1: /* FABS */
157
+ case 0x2: /* FNEG */
158
g_assert_not_reached();
159
}
160
161
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
162
TCGv_ptr fpst;
163
int rmode = -1;
164
165
- switch (opcode) {
166
- case 0x0: /* FMOV */
167
- gen_gvec_fn2(s, false, rd, rn, tcg_gen_gvec_mov, 0);
168
- return;
169
- }
170
-
171
tcg_op = read_fp_dreg(s, rn);
172
tcg_res = tcg_temp_new_i64();
173
174
switch (opcode) {
175
- case 0x1: /* FABS */
176
- gen_vfp_absd(tcg_res, tcg_op);
177
- goto done;
178
- case 0x2: /* FNEG */
179
- gen_vfp_negd(tcg_res, tcg_op);
180
- goto done;
181
case 0x3: /* FSQRT */
182
gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
183
goto done;
184
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
185
gen_fpst = gen_helper_frint64_d;
186
break;
187
default:
188
+ case 0x0: /* FMOV */
189
+ case 0x1: /* FABS */
190
+ case 0x2: /* FNEG */
191
g_assert_not_reached();
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
195
goto do_unallocated;
196
}
197
/* fall through */
198
- case 0x0 ... 0x3:
199
+ case 0x3:
200
case 0x8 ... 0xc:
201
case 0xe ... 0xf:
202
/* 32-to-32 and 64-to-64 ops */
203
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
204
205
default:
206
do_unallocated:
207
+ case 0x0: /* FMOV */
208
+ case 0x1: /* FABS */
209
+ case 0x2: /* FNEG */
210
unallocated_encoding(s);
211
break;
212
}
213
--
214
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move the check for !S into do_pppp_flags, which allows us to merge in
3
Pass fpstatus not env, like most other fp helpers.
4
do_vecop4_p. Split out gen_gvec_fn_ppp without sve_access_check,
5
to mirror gen_gvec_fn_zzz.
6
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20241211163036.2297116-26-richard.henderson@linaro.org
9
Message-id: 20200815013145.539409-7-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
target/arm/translate-sve.c | 111 ++++++++++++++-----------------------
10
target/arm/helper.h | 6 +++---
13
1 file changed, 43 insertions(+), 68 deletions(-)
11
target/arm/tcg/translate-a64.c | 15 +++++++--------
12
target/arm/tcg/translate-vfp.c | 6 +++---
13
target/arm/vfp_helper.c | 12 ++++++------
14
4 files changed, 19 insertions(+), 20 deletions(-)
14
15
15
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-sve.c
18
--- a/target/arm/helper.h
18
+++ b/target/arm/translate-sve.c
19
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
21
DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
22
DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
23
DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
24
-DEF_HELPER_2(vfp_sqrth, f16, f16, env)
25
-DEF_HELPER_2(vfp_sqrts, f32, f32, env)
26
-DEF_HELPER_2(vfp_sqrtd, f64, f64, env)
27
+DEF_HELPER_2(vfp_sqrth, f16, f16, ptr)
28
+DEF_HELPER_2(vfp_sqrts, f32, f32, ptr)
29
+DEF_HELPER_2(vfp_sqrtd, f64, f64, ptr)
30
DEF_HELPER_3(vfp_cmph, void, f16, f16, env)
31
DEF_HELPER_3(vfp_cmps, void, f32, f32, env)
32
DEF_HELPER_3(vfp_cmpd, void, f64, f64, env)
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
38
39
switch (opcode) {
40
case 0x3: /* FSQRT */
41
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
42
- goto done;
43
+ gen_fpst = gen_helper_vfp_sqrts;
44
+ break;
45
case 0x6: /* BFCVT */
46
gen_fpst = gen_helper_bfcvt;
47
break;
48
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
49
gen_fpst(tcg_res, tcg_op, fpst);
50
}
51
52
- done:
53
write_fp_sreg(s, rd, tcg_res);
20
}
54
}
21
55
22
/* Invoke a vector expander on three Pregs. */
56
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
23
-static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
57
24
- int esz, int rd, int rn, int rm)
58
switch (opcode) {
25
+static void gen_gvec_fn_ppp(DisasContext *s, GVecGen3Fn *gvec_fn,
59
case 0x3: /* FSQRT */
26
+ int rd, int rn, int rm)
60
- gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
61
- goto done;
62
+ gen_fpst = gen_helper_vfp_sqrtd;
63
+ break;
64
case 0x8: /* FRINTN */
65
case 0x9: /* FRINTP */
66
case 0xa: /* FRINTM */
67
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
68
gen_fpst(tcg_res, tcg_op, fpst);
69
}
70
71
- done:
72
write_fp_dreg(s, rd, tcg_res);
73
}
74
75
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
76
gen_vfp_negd(tcg_rd, tcg_rn);
77
break;
78
case 0x7f: /* FSQRT */
79
- gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_env);
80
+ gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
81
break;
82
case 0x1a: /* FCVTNS */
83
case 0x1b: /* FCVTMS */
84
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
85
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
86
return;
87
case 0x7f: /* FSQRT */
88
+ need_fpstatus = true;
89
if (size == 3 && !is_q) {
90
unallocated_encoding(s);
91
return;
92
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
93
gen_vfp_negs(tcg_res, tcg_op);
94
break;
95
case 0x7f: /* FSQRT */
96
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
97
+ gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
98
break;
99
case 0x1a: /* FCVTNS */
100
case 0x1b: /* FCVTMS */
101
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
102
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/tcg/translate-vfp.c
104
+++ b/target/arm/tcg/translate-vfp.c
105
@@ -XXX,XX +XXX,XX @@ DO_VFP_2OP(VNEG, dp, gen_vfp_negd, aa32_fpdp_v2)
106
107
static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
27
{
108
{
28
- if (sve_access_check(s)) {
109
- gen_helper_vfp_sqrth(vd, vm, tcg_env);
29
- unsigned psz = pred_gvec_reg_size(s);
110
+ gen_helper_vfp_sqrth(vd, vm, fpstatus_ptr(FPST_FPCR_F16));
30
- gvec_fn(esz, pred_full_reg_offset(s, rd),
31
- pred_full_reg_offset(s, rn),
32
- pred_full_reg_offset(s, rm), psz, psz);
33
- }
34
- return true;
35
-}
36
-
37
-/* Invoke a vector operation on four Pregs. */
38
-static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
39
- int rd, int rn, int rm, int rg)
40
-{
41
- if (sve_access_check(s)) {
42
- unsigned psz = pred_gvec_reg_size(s);
43
- tcg_gen_gvec_4(pred_full_reg_offset(s, rd),
44
- pred_full_reg_offset(s, rn),
45
- pred_full_reg_offset(s, rm),
46
- pred_full_reg_offset(s, rg),
47
- psz, psz, gvec_op);
48
- }
49
- return true;
50
+ unsigned psz = pred_gvec_reg_size(s);
51
+ gvec_fn(MO_64, pred_full_reg_offset(s, rd),
52
+ pred_full_reg_offset(s, rn),
53
+ pred_full_reg_offset(s, rm), psz, psz);
54
}
111
}
55
112
56
/* Invoke a vector move on two Pregs. */
113
static void gen_VSQRT_sp(TCGv_i32 vd, TCGv_i32 vm)
57
@@ -XXX,XX +XXX,XX @@ static bool do_pppp_flags(DisasContext *s, arg_rprr_s *a,
114
{
58
int mofs = pred_full_reg_offset(s, a->rm);
115
- gen_helper_vfp_sqrts(vd, vm, tcg_env);
59
int gofs = pred_full_reg_offset(s, a->pg);
116
+ gen_helper_vfp_sqrts(vd, vm, fpstatus_ptr(FPST_FPCR));
60
61
+ if (!a->s) {
62
+ tcg_gen_gvec_4(dofs, nofs, mofs, gofs, psz, psz, gvec_op);
63
+ return true;
64
+ }
65
+
66
if (psz == 8) {
67
/* Do the operation and the flags generation in temps. */
68
TCGv_i64 pd = tcg_temp_new_i64();
69
@@ -XXX,XX +XXX,XX @@ static bool trans_AND_pppp(DisasContext *s, arg_rprr_s *a)
70
.fno = gen_helper_sve_and_pppp,
71
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
72
};
73
- if (a->s) {
74
- return do_pppp_flags(s, a, &op);
75
- } else if (a->rn == a->rm) {
76
- if (a->pg == a->rn) {
77
- return do_mov_p(s, a->rd, a->rn);
78
- } else {
79
- return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->pg);
80
+
81
+ if (!a->s) {
82
+ if (!sve_access_check(s)) {
83
+ return true;
84
+ }
85
+ if (a->rn == a->rm) {
86
+ if (a->pg == a->rn) {
87
+ do_mov_p(s, a->rd, a->rn);
88
+ } else {
89
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_and, a->rd, a->rn, a->pg);
90
+ }
91
+ return true;
92
+ } else if (a->pg == a->rn || a->pg == a->rm) {
93
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_and, a->rd, a->rn, a->rm);
94
+ return true;
95
}
96
- } else if (a->pg == a->rn || a->pg == a->rm) {
97
- return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
98
- } else {
99
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
100
}
101
+ return do_pppp_flags(s, a, &op);
102
}
117
}
103
118
104
static void gen_bic_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
119
static void gen_VSQRT_dp(TCGv_i64 vd, TCGv_i64 vm)
105
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_pppp(DisasContext *s, arg_rprr_s *a)
120
{
106
.fno = gen_helper_sve_bic_pppp,
121
- gen_helper_vfp_sqrtd(vd, vm, tcg_env);
107
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
122
+ gen_helper_vfp_sqrtd(vd, vm, fpstatus_ptr(FPST_FPCR));
108
};
109
- if (a->s) {
110
- return do_pppp_flags(s, a, &op);
111
- } else if (a->pg == a->rn) {
112
- return do_vector3_p(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
113
- } else {
114
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
115
+
116
+ if (!a->s && a->pg == a->rn) {
117
+ if (sve_access_check(s)) {
118
+ gen_gvec_fn_ppp(s, tcg_gen_gvec_andc, a->rd, a->rn, a->rm);
119
+ }
120
+ return true;
121
}
122
+ return do_pppp_flags(s, a, &op);
123
}
123
}
124
124
125
static void gen_eor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
125
DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp, aa32_fp16_arith)
126
@@ -XXX,XX +XXX,XX @@ static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a)
126
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
127
.fno = gen_helper_sve_eor_pppp,
127
index XXXXXXX..XXXXXXX 100644
128
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
128
--- a/target/arm/vfp_helper.c
129
};
129
+++ b/target/arm/vfp_helper.c
130
- if (a->s) {
130
@@ -XXX,XX +XXX,XX @@ VFP_BINOP(minnum)
131
- return do_pppp_flags(s, a, &op);
131
VFP_BINOP(maxnum)
132
- } else {
132
#undef VFP_BINOP
133
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
133
134
- }
134
-dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, CPUARMState *env)
135
+ return do_pppp_flags(s, a, &op);
135
+dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, void *fpstp)
136
{
137
- return float16_sqrt(a, &env->vfp.fp_status_f16);
138
+ return float16_sqrt(a, fpstp);
136
}
139
}
137
140
138
static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
141
-float32 VFP_HELPER(sqrt, s)(float32 a, CPUARMState *env)
139
@@ -XXX,XX +XXX,XX @@ static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a)
142
+float32 VFP_HELPER(sqrt, s)(float32 a, void *fpstp)
140
.fno = gen_helper_sve_sel_pppp,
143
{
141
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
144
- return float32_sqrt(a, &env->vfp.fp_status);
142
};
145
+ return float32_sqrt(a, fpstp);
143
+
144
if (a->s) {
145
return false;
146
- } else {
147
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
148
}
149
+ return do_pppp_flags(s, a, &op);
150
}
146
}
151
147
152
static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
148
-float64 VFP_HELPER(sqrt, d)(float64 a, CPUARMState *env)
153
@@ -XXX,XX +XXX,XX @@ static bool trans_ORR_pppp(DisasContext *s, arg_rprr_s *a)
149
+float64 VFP_HELPER(sqrt, d)(float64 a, void *fpstp)
154
.fno = gen_helper_sve_orr_pppp,
150
{
155
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
151
- return float64_sqrt(a, &env->vfp.fp_status);
156
};
152
+ return float64_sqrt(a, fpstp);
157
- if (a->s) {
158
- return do_pppp_flags(s, a, &op);
159
- } else if (a->pg == a->rn && a->rn == a->rm) {
160
+
161
+ if (!a->s && a->pg == a->rn && a->rn == a->rm) {
162
return do_mov_p(s, a->rd, a->rn);
163
- } else {
164
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
165
}
166
+ return do_pppp_flags(s, a, &op);
167
}
153
}
168
154
169
static void gen_orn_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
155
static void softfloat_to_vfp_compare(CPUARMState *env, FloatRelation cmp)
170
@@ -XXX,XX +XXX,XX @@ static bool trans_ORN_pppp(DisasContext *s, arg_rprr_s *a)
171
.fno = gen_helper_sve_orn_pppp,
172
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
173
};
174
- if (a->s) {
175
- return do_pppp_flags(s, a, &op);
176
- } else {
177
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
178
- }
179
+ return do_pppp_flags(s, a, &op);
180
}
181
182
static void gen_nor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
183
@@ -XXX,XX +XXX,XX @@ static bool trans_NOR_pppp(DisasContext *s, arg_rprr_s *a)
184
.fno = gen_helper_sve_nor_pppp,
185
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
};
187
- if (a->s) {
188
- return do_pppp_flags(s, a, &op);
189
- } else {
190
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
191
- }
192
+ return do_pppp_flags(s, a, &op);
193
}
194
195
static void gen_nand_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
196
@@ -XXX,XX +XXX,XX @@ static bool trans_NAND_pppp(DisasContext *s, arg_rprr_s *a)
197
.fno = gen_helper_sve_nand_pppp,
198
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
199
};
200
- if (a->s) {
201
- return do_pppp_flags(s, a, &op);
202
- } else {
203
- return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
204
- }
205
+ return do_pppp_flags(s, a, &op);
206
}
207
208
/*
209
--
156
--
210
2.20.1
157
2.34.1
211
212
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

clock_init*() inlined functions are simple wrappers around
clock_set*() and are not used. Remove them in favor of clock_set*().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200806123858.30058-2-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/clock.h | 13 -------------
1 file changed, 13 deletions(-)

diff --git a/include/hw/clock.h b/include/hw/clock.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/clock.h
+++ b/include/hw/clock.h
@@ -XXX,XX +XXX,XX @@ static inline bool clock_is_enabled(const Clock *clk)
return clock_get(clk) != 0;
}

-static inline void clock_init(Clock *clk, uint64_t value)
-{
- clock_set(clk, value);
-}
-static inline void clock_init_hz(Clock *clk, uint64_t value)
-{
- clock_set_hz(clk, value);
-}
-static inline void clock_init_ns(Clock *clk, uint64_t value)
-{
- clock_set_ns(clk, value);
-}
-
#endif /* QEMU_HW_CLOCK_H */
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This function is identical with helper_vfp_sqrth.
Replace all uses.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/helper-a64.h | 1 -
target/arm/tcg/helper-a64.c | 11 -----------
target/arm/tcg/translate-a64.c | 4 ++--
3 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_rinth_exact, f16, f16, ptr)
DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
-DEF_HELPER_2(sqrt_f16, f16, f16, ptr)

DEF_HELPER_2(exception_return, void, env, i64)
DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ illegal_return:
"resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc);
}

-/*
- * Square Root and Reciprocal square root
- */
-
-uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
-{
- float_status *s = fpstp;
-
- return float16_sqrt(a, s);
-}
-
void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
{
uintptr_t ra = GETPC();
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
switch (opcode) {
case 0x3: /* FSQRT */
fpst = fpstatus_ptr(FPST_FPCR_F16);
- gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
+ gen_helper_vfp_sqrth(tcg_res, tcg_op, fpst);
break;
case 0x8: /* FRINTN */
case 0x9: /* FRINTP */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
break;
case 0x7f: /* FSQRT */
- gen_helper_sqrt_f16(tcg_res, tcg_op, tcg_fpstatus);
+ gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
break;
default:
g_assert_not_reached();
--
2.34.1

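The vfp sqrt patch above ends up with helpers that take an explicit float_status pointer chosen by the translator (fpstatus_ptr(FPST_FPCR_F16) at the call sites) rather than the whole CPU env. As a standalone illustration of that calling-convention shape only -- the types and names below are made up for the sketch and are not QEMU code:

  #include <math.h>
  #include <stdio.h>

  /* Cut-down stand-ins; QEMU's CPUARMState and float_status are far richer. */
  typedef struct {
      int exception_flags;              /* per-context sticky FP flags */
  } fp_status_sketch;

  typedef struct {
      fp_status_sketch fp_status;       /* "normal" FP context */
      fp_status_sketch fp_status_f16;   /* FP16 FP context */
  } env_sketch;

  /* Old shape: the helper receives the whole env and picks a status itself. */
  static float sqrt_old(float a, env_sketch *env)
  {
      env->fp_status_f16.exception_flags |= (a < 0.0f);
      return sqrtf(a);
  }

  /* New shape: the caller picks the status and passes it in, so one helper
   * can serve both the FP16 and the normal context. */
  static float sqrt_new(float a, fp_status_sketch *st)
  {
      st->exception_flags |= (a < 0.0f);
      return sqrtf(a);
  }

  int main(void)
  {
      env_sketch env = { { 0 }, { 0 } };
      printf("%f\n", sqrt_old(2.0f, &env));
      printf("%f\n", sqrt_new(2.0f, &env.fp_status));      /* FPST_FPCR-like */
      printf("%f\n", sqrt_new(2.0f, &env.fp_status_f16));  /* FPST_FPCR_F16-like */
      return 0;
  }
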
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 1 +
target/arm/tcg/translate-a64.c | 72 ++++++++++++++++++++++------
2 files changed, 62 insertions(+), 11 deletions(-)

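The translate-a64.c hunk below adds do_fp1_scalar(), which drives the one-source scalar FP ops through a small table of per-precision helpers (FPScalar1) selected by element size. The following standalone sketch shows only that dispatch shape; the names and types are illustrative, not QEMU symbols:

  #include <math.h>
  #include <stdio.h>

  typedef enum { ESZ_H, ESZ_S, ESZ_D } esz_t;

  /* One entry per precision, mirroring the FPScalar1 table of helpers. */
  typedef struct {
      float  (*gen_h)(float);    /* half-precision stand-in */
      float  (*gen_s)(float);    /* single precision */
      double (*gen_d)(double);   /* double precision */
  } fp1_table;

  static float  sqrt_h(float a)  { return sqrtf(a); }
  static float  sqrt_s(float a)  { return sqrtf(a); }
  static double sqrt_d(double a) { return sqrt(a); }

  static const fp1_table table_sqrt = { sqrt_h, sqrt_s, sqrt_d };

  /* Single dispatcher keyed on element size, like do_fp1_scalar(). */
  static double do_fp1(const fp1_table *f, esz_t esz, double val)
  {
      switch (esz) {
      case ESZ_H: return f->gen_h((float)val);
      case ESZ_S: return f->gen_s((float)val);
      case ESZ_D: return f->gen_d(val);
      }
      return 0.0;
  }

  int main(void)
  {
      printf("%f\n", do_fp1(&table_sqrt, ESZ_D, 2.0));   /* 1.414214 */
      return 0;
  }

Wiring up a new operation then only needs another table plus one TRANS() line, which is the pattern the rest of this series repeats.
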
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
17
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
18
FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
19
FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
20
+FSQRT_s 00011110 .. 1 000011 10000 ..... ..... @rr_hsd
21
22
# Floating-point Immediate
23
24
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/tcg/translate-a64.c
27
+++ b/target/arm/tcg/translate-a64.c
28
@@ -XXX,XX +XXX,XX @@ static const FPScalar1Int f_scalar_fneg = {
29
};
30
TRANS(FNEG_s, do_fp1_scalar_int, a, &f_scalar_fneg)
31
32
+typedef struct FPScalar1 {
33
+ void (*gen_h)(TCGv_i32, TCGv_i32, TCGv_ptr);
34
+ void (*gen_s)(TCGv_i32, TCGv_i32, TCGv_ptr);
35
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_ptr);
36
+} FPScalar1;
37
+
38
+static bool do_fp1_scalar(DisasContext *s, arg_rr_e *a,
39
+ const FPScalar1 *f, int rmode)
40
+{
41
+ TCGv_i32 tcg_rmode = NULL;
42
+ TCGv_ptr fpst;
43
+ TCGv_i64 t64;
44
+ TCGv_i32 t32;
45
+ int check = fp_access_check_scalar_hsd(s, a->esz);
46
+
47
+ if (check <= 0) {
48
+ return check == 0;
49
+ }
50
+
51
+ fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
52
+ if (rmode >= 0) {
53
+ tcg_rmode = gen_set_rmode(rmode, fpst);
54
+ }
55
+
56
+ switch (a->esz) {
57
+ case MO_64:
58
+ t64 = read_fp_dreg(s, a->rn);
59
+ f->gen_d(t64, t64, fpst);
60
+ write_fp_dreg(s, a->rd, t64);
61
+ break;
62
+ case MO_32:
63
+ t32 = read_fp_sreg(s, a->rn);
64
+ f->gen_s(t32, t32, fpst);
65
+ write_fp_sreg(s, a->rd, t32);
66
+ break;
67
+ case MO_16:
68
+ t32 = read_fp_hreg(s, a->rn);
69
+ f->gen_h(t32, t32, fpst);
70
+ write_fp_sreg(s, a->rd, t32);
71
+ break;
72
+ default:
73
+ g_assert_not_reached();
74
+ }
75
+
76
+ if (rmode >= 0) {
77
+ gen_restore_rmode(tcg_rmode, fpst);
78
+ }
79
+ return true;
80
+}
81
+
82
+static const FPScalar1 f_scalar_fsqrt = {
83
+ gen_helper_vfp_sqrth,
84
+ gen_helper_vfp_sqrts,
85
+ gen_helper_vfp_sqrtd,
86
+};
87
+TRANS(FSQRT_s, do_fp1_scalar, a, &f_scalar_fsqrt, -1)
88
+
89
/* Floating-point data-processing (1 source) - half precision */
90
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
91
{
92
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
93
TCGv_i32 tcg_res = tcg_temp_new_i32();
94
95
switch (opcode) {
96
- case 0x3: /* FSQRT */
97
- fpst = fpstatus_ptr(FPST_FPCR_F16);
98
- gen_helper_vfp_sqrth(tcg_res, tcg_op, fpst);
99
- break;
100
case 0x8: /* FRINTN */
101
case 0x9: /* FRINTP */
102
case 0xa: /* FRINTM */
103
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
104
case 0x0: /* FMOV */
105
case 0x1: /* FABS */
106
case 0x2: /* FNEG */
107
+ case 0x3: /* FSQRT */
108
g_assert_not_reached();
109
}
110
111
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
112
tcg_res = tcg_temp_new_i32();
113
114
switch (opcode) {
115
- case 0x3: /* FSQRT */
116
- gen_fpst = gen_helper_vfp_sqrts;
117
- break;
118
case 0x6: /* BFCVT */
119
gen_fpst = gen_helper_bfcvt;
120
break;
121
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
122
case 0x0: /* FMOV */
123
case 0x1: /* FABS */
124
case 0x2: /* FNEG */
125
+ case 0x3: /* FSQRT */
126
g_assert_not_reached();
127
}
128
129
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
130
tcg_res = tcg_temp_new_i64();
131
132
switch (opcode) {
133
- case 0x3: /* FSQRT */
134
- gen_fpst = gen_helper_vfp_sqrtd;
135
- break;
136
case 0x8: /* FRINTN */
137
case 0x9: /* FRINTP */
138
case 0xa: /* FRINTM */
139
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
140
case 0x0: /* FMOV */
141
case 0x1: /* FABS */
142
case 0x2: /* FNEG */
143
+ case 0x3: /* FSQRT */
144
g_assert_not_reached();
145
}
146
147
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
148
goto do_unallocated;
149
}
150
/* fall through */
151
- case 0x3:
152
case 0x8 ... 0xc:
153
case 0xe ... 0xf:
154
/* 32-to-32 and 64-to-64 ops */
155
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
156
case 0x0: /* FMOV */
157
case 0x1: /* FABS */
158
case 0x2: /* FNEG */
159
+ case 0x3: /* FSQRT */
160
unallocated_encoding(s);
161
break;
162
}
163
--
164
2.34.1
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_fp_1src_half as these were the last insns
4
decoded by that function.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-29-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 8 +++
12
target/arm/tcg/translate-a64.c | 117 +++++++++++----------------------
13
2 files changed, 46 insertions(+), 79 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
20
FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
21
FSQRT_s 00011110 .. 1 000011 10000 ..... ..... @rr_hsd
22
23
+FRINTN_s 00011110 .. 1 001000 10000 ..... ..... @rr_hsd
24
+FRINTP_s 00011110 .. 1 001001 10000 ..... ..... @rr_hsd
25
+FRINTM_s 00011110 .. 1 001010 10000 ..... ..... @rr_hsd
26
+FRINTZ_s 00011110 .. 1 001011 10000 ..... ..... @rr_hsd
27
+FRINTA_s 00011110 .. 1 001100 10000 ..... ..... @rr_hsd
28
+FRINTX_s 00011110 .. 1 001110 10000 ..... ..... @rr_hsd
29
+FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
30
+
31
# Floating-point Immediate
32
33
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_fsqrt = {
39
};
40
TRANS(FSQRT_s, do_fp1_scalar, a, &f_scalar_fsqrt, -1)
41
42
-/* Floating-point data-processing (1 source) - half precision */
43
-static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
44
-{
45
- TCGv_ptr fpst = NULL;
46
- TCGv_i32 tcg_op = read_fp_hreg(s, rn);
47
- TCGv_i32 tcg_res = tcg_temp_new_i32();
48
+static const FPScalar1 f_scalar_frint = {
49
+ gen_helper_advsimd_rinth,
50
+ gen_helper_rints,
51
+ gen_helper_rintd,
52
+};
53
+TRANS(FRINTN_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_TIEEVEN)
54
+TRANS(FRINTP_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_POSINF)
55
+TRANS(FRINTM_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_NEGINF)
56
+TRANS(FRINTZ_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_ZERO)
57
+TRANS(FRINTA_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_TIEAWAY)
58
+TRANS(FRINTI_s, do_fp1_scalar, a, &f_scalar_frint, -1)
59
60
- switch (opcode) {
61
- case 0x8: /* FRINTN */
62
- case 0x9: /* FRINTP */
63
- case 0xa: /* FRINTM */
64
- case 0xb: /* FRINTZ */
65
- case 0xc: /* FRINTA */
66
- {
67
- TCGv_i32 tcg_rmode;
68
-
69
- fpst = fpstatus_ptr(FPST_FPCR_F16);
70
- tcg_rmode = gen_set_rmode(opcode & 7, fpst);
71
- gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
72
- gen_restore_rmode(tcg_rmode, fpst);
73
- break;
74
- }
75
- case 0xe: /* FRINTX */
76
- fpst = fpstatus_ptr(FPST_FPCR_F16);
77
- gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, fpst);
78
- break;
79
- case 0xf: /* FRINTI */
80
- fpst = fpstatus_ptr(FPST_FPCR_F16);
81
- gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
82
- break;
83
- default:
84
- case 0x0: /* FMOV */
85
- case 0x1: /* FABS */
86
- case 0x2: /* FNEG */
87
- case 0x3: /* FSQRT */
88
- g_assert_not_reached();
89
- }
90
-
91
- write_fp_sreg(s, rd, tcg_res);
92
-}
93
+static const FPScalar1 f_scalar_frintx = {
94
+ gen_helper_advsimd_rinth_exact,
95
+ gen_helper_rints_exact,
96
+ gen_helper_rintd_exact,
97
+};
98
+TRANS(FRINTX_s, do_fp1_scalar, a, &f_scalar_frintx, -1)
99
100
/* Floating-point data-processing (1 source) - single precision */
101
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
102
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
103
case 0x6: /* BFCVT */
104
gen_fpst = gen_helper_bfcvt;
105
break;
106
- case 0x8: /* FRINTN */
107
- case 0x9: /* FRINTP */
108
- case 0xa: /* FRINTM */
109
- case 0xb: /* FRINTZ */
110
- case 0xc: /* FRINTA */
111
- rmode = opcode & 7;
112
- gen_fpst = gen_helper_rints;
113
- break;
114
- case 0xe: /* FRINTX */
115
- gen_fpst = gen_helper_rints_exact;
116
- break;
117
- case 0xf: /* FRINTI */
118
- gen_fpst = gen_helper_rints;
119
- break;
120
case 0x10: /* FRINT32Z */
121
rmode = FPROUNDING_ZERO;
122
gen_fpst = gen_helper_frint32_s;
123
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
124
case 0x1: /* FABS */
125
case 0x2: /* FNEG */
126
case 0x3: /* FSQRT */
127
+ case 0x8: /* FRINTN */
128
+ case 0x9: /* FRINTP */
129
+ case 0xa: /* FRINTM */
130
+ case 0xb: /* FRINTZ */
131
+ case 0xc: /* FRINTA */
132
+ case 0xe: /* FRINTX */
133
+ case 0xf: /* FRINTI */
134
g_assert_not_reached();
135
}
136
137
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
138
tcg_res = tcg_temp_new_i64();
139
140
switch (opcode) {
141
- case 0x8: /* FRINTN */
142
- case 0x9: /* FRINTP */
143
- case 0xa: /* FRINTM */
144
- case 0xb: /* FRINTZ */
145
- case 0xc: /* FRINTA */
146
- rmode = opcode & 7;
147
- gen_fpst = gen_helper_rintd;
148
- break;
149
- case 0xe: /* FRINTX */
150
- gen_fpst = gen_helper_rintd_exact;
151
- break;
152
- case 0xf: /* FRINTI */
153
- gen_fpst = gen_helper_rintd;
154
- break;
155
case 0x10: /* FRINT32Z */
156
rmode = FPROUNDING_ZERO;
157
gen_fpst = gen_helper_frint32_d;
158
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
159
case 0x1: /* FABS */
160
case 0x2: /* FNEG */
161
case 0x3: /* FSQRT */
162
+ case 0x8: /* FRINTN */
163
+ case 0x9: /* FRINTP */
164
+ case 0xa: /* FRINTM */
165
+ case 0xb: /* FRINTZ */
166
+ case 0xc: /* FRINTA */
167
+ case 0xe: /* FRINTX */
168
+ case 0xf: /* FRINTI */
169
g_assert_not_reached();
170
}
171
172
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
173
if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
174
goto do_unallocated;
175
}
176
- /* fall through */
177
- case 0x8 ... 0xc:
178
- case 0xe ... 0xf:
179
/* 32-to-32 and 64-to-64 ops */
180
switch (type) {
181
case 0:
182
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
183
handle_fp_1src_double(s, opcode, rd, rn);
184
break;
185
case 3:
186
- if (!dc_isar_feature(aa64_fp16, s)) {
187
- goto do_unallocated;
188
- }
189
-
190
- if (!fp_access_check(s)) {
191
- return;
192
- }
193
- handle_fp_1src_half(s, opcode, rd, rn);
194
- break;
195
default:
196
goto do_unallocated;
197
}
198
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
199
case 0x1: /* FABS */
200
case 0x2: /* FNEG */
201
case 0x3: /* FSQRT */
202
+ case 0x8: /* FRINTN */
203
+ case 0x9: /* FRINTP */
204
+ case 0xa: /* FRINTM */
205
+ case 0xb: /* FRINTZ */
206
+ case 0xc: /* FRINTA */
207
+ case 0xe: /* FRINTX */
208
+ case 0xf: /* FRINTI */
209
unallocated_encoding(s);
210
break;
211
}
212
--
213
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-30-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 3 +++
9
target/arm/tcg/translate-a64.c | 26 +++++++-------------------
10
2 files changed, 10 insertions(+), 19 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
&qrrrr_e q rd rn rm ra esz
18
19
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
20
+@rr_s ........ ... ..... ...... rn:5 rd:5 &rr_e esz=2
21
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
22
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
23
@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
24
@@ -XXX,XX +XXX,XX @@ FRINTA_s 00011110 .. 1 001100 10000 ..... ..... @rr_hsd
25
FRINTX_s 00011110 .. 1 001110 10000 ..... ..... @rr_hsd
26
FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
27
28
+BFCVT_s 00011110 01 1 000110 10000 ..... ..... @rr_s
29
+
30
# Floating-point Immediate
31
32
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_frintx = {
38
};
39
TRANS(FRINTX_s, do_fp1_scalar, a, &f_scalar_frintx, -1)
40
41
+static const FPScalar1 f_scalar_bfcvt = {
42
+ .gen_s = gen_helper_bfcvt,
43
+};
44
+TRANS_FEAT(BFCVT_s, aa64_bf16, do_fp1_scalar, a, &f_scalar_bfcvt, -1)
45
+
46
/* Floating-point data-processing (1 source) - single precision */
47
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
48
{
49
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
50
tcg_res = tcg_temp_new_i32();
51
52
switch (opcode) {
53
- case 0x6: /* BFCVT */
54
- gen_fpst = gen_helper_bfcvt;
55
- break;
56
case 0x10: /* FRINT32Z */
57
rmode = FPROUNDING_ZERO;
58
gen_fpst = gen_helper_frint32_s;
59
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
60
case 0x1: /* FABS */
61
case 0x2: /* FNEG */
62
case 0x3: /* FSQRT */
63
+ case 0x6: /* BFCVT */
64
case 0x8: /* FRINTN */
65
case 0x9: /* FRINTP */
66
case 0xa: /* FRINTM */
67
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
68
}
69
break;
70
71
- case 0x6:
72
- switch (type) {
73
- case 1: /* BFCVT */
74
- if (!dc_isar_feature(aa64_bf16, s)) {
75
- goto do_unallocated;
76
- }
77
- if (!fp_access_check(s)) {
78
- return;
79
- }
80
- handle_fp_1src_single(s, opcode, rd, rn);
81
- break;
82
- default:
83
- goto do_unallocated;
84
- }
85
- break;
86
-
87
default:
88
do_unallocated:
89
case 0x0: /* FMOV */
90
case 0x1: /* FABS */
91
case 0x2: /* FNEG */
92
case 0x3: /* FSQRT */
93
+ case 0x6: /* BFCVT */
94
case 0x8: /* FRINTN */
95
case 0x9: /* FRINTP */
96
case 0xa: /* FRINTM */
97
--
98
2.34.1
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_fp_1src_single and handle_fp_1src_double as
4
these were the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-31-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 5 ++
12
target/arm/tcg/translate-a64.c | 146 ++++-----------------------------
13
2 files changed, 22 insertions(+), 129 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
20
21
BFCVT_s 00011110 01 1 000110 10000 ..... ..... @rr_s
22
23
+FRINT32Z_s 00011110 0. 1 010000 10000 ..... ..... @rr_sd
24
+FRINT32X_s 00011110 0. 1 010001 10000 ..... ..... @rr_sd
25
+FRINT64Z_s 00011110 0. 1 010010 10000 ..... ..... @rr_sd
26
+FRINT64X_s 00011110 0. 1 010011 10000 ..... ..... @rr_sd
27
+
28
# Floating-point Immediate
29
30
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_bfcvt = {
36
};
37
TRANS_FEAT(BFCVT_s, aa64_bf16, do_fp1_scalar, a, &f_scalar_bfcvt, -1)
38
39
-/* Floating-point data-processing (1 source) - single precision */
40
-static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
41
-{
42
- void (*gen_fpst)(TCGv_i32, TCGv_i32, TCGv_ptr);
43
- TCGv_i32 tcg_op, tcg_res;
44
- TCGv_ptr fpst;
45
- int rmode = -1;
46
+static const FPScalar1 f_scalar_frint32 = {
47
+ NULL,
48
+ gen_helper_frint32_s,
49
+ gen_helper_frint32_d,
50
+};
51
+TRANS_FEAT(FRINT32Z_s, aa64_frint, do_fp1_scalar, a,
52
+ &f_scalar_frint32, FPROUNDING_ZERO)
53
+TRANS_FEAT(FRINT32X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint32, -1)
54
55
- tcg_op = read_fp_sreg(s, rn);
56
- tcg_res = tcg_temp_new_i32();
57
-
58
- switch (opcode) {
59
- case 0x10: /* FRINT32Z */
60
- rmode = FPROUNDING_ZERO;
61
- gen_fpst = gen_helper_frint32_s;
62
- break;
63
- case 0x11: /* FRINT32X */
64
- gen_fpst = gen_helper_frint32_s;
65
- break;
66
- case 0x12: /* FRINT64Z */
67
- rmode = FPROUNDING_ZERO;
68
- gen_fpst = gen_helper_frint64_s;
69
- break;
70
- case 0x13: /* FRINT64X */
71
- gen_fpst = gen_helper_frint64_s;
72
- break;
73
- default:
74
- case 0x0: /* FMOV */
75
- case 0x1: /* FABS */
76
- case 0x2: /* FNEG */
77
- case 0x3: /* FSQRT */
78
- case 0x6: /* BFCVT */
79
- case 0x8: /* FRINTN */
80
- case 0x9: /* FRINTP */
81
- case 0xa: /* FRINTM */
82
- case 0xb: /* FRINTZ */
83
- case 0xc: /* FRINTA */
84
- case 0xe: /* FRINTX */
85
- case 0xf: /* FRINTI */
86
- g_assert_not_reached();
87
- }
88
-
89
- fpst = fpstatus_ptr(FPST_FPCR);
90
- if (rmode >= 0) {
91
- TCGv_i32 tcg_rmode = gen_set_rmode(rmode, fpst);
92
- gen_fpst(tcg_res, tcg_op, fpst);
93
- gen_restore_rmode(tcg_rmode, fpst);
94
- } else {
95
- gen_fpst(tcg_res, tcg_op, fpst);
96
- }
97
-
98
- write_fp_sreg(s, rd, tcg_res);
99
-}
100
-
101
-/* Floating-point data-processing (1 source) - double precision */
102
-static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
103
-{
104
- void (*gen_fpst)(TCGv_i64, TCGv_i64, TCGv_ptr);
105
- TCGv_i64 tcg_op, tcg_res;
106
- TCGv_ptr fpst;
107
- int rmode = -1;
108
-
109
- tcg_op = read_fp_dreg(s, rn);
110
- tcg_res = tcg_temp_new_i64();
111
-
112
- switch (opcode) {
113
- case 0x10: /* FRINT32Z */
114
- rmode = FPROUNDING_ZERO;
115
- gen_fpst = gen_helper_frint32_d;
116
- break;
117
- case 0x11: /* FRINT32X */
118
- gen_fpst = gen_helper_frint32_d;
119
- break;
120
- case 0x12: /* FRINT64Z */
121
- rmode = FPROUNDING_ZERO;
122
- gen_fpst = gen_helper_frint64_d;
123
- break;
124
- case 0x13: /* FRINT64X */
125
- gen_fpst = gen_helper_frint64_d;
126
- break;
127
- default:
128
- case 0x0: /* FMOV */
129
- case 0x1: /* FABS */
130
- case 0x2: /* FNEG */
131
- case 0x3: /* FSQRT */
132
- case 0x8: /* FRINTN */
133
- case 0x9: /* FRINTP */
134
- case 0xa: /* FRINTM */
135
- case 0xb: /* FRINTZ */
136
- case 0xc: /* FRINTA */
137
- case 0xe: /* FRINTX */
138
- case 0xf: /* FRINTI */
139
- g_assert_not_reached();
140
- }
141
-
142
- fpst = fpstatus_ptr(FPST_FPCR);
143
- if (rmode >= 0) {
144
- TCGv_i32 tcg_rmode = gen_set_rmode(rmode, fpst);
145
- gen_fpst(tcg_res, tcg_op, fpst);
146
- gen_restore_rmode(tcg_rmode, fpst);
147
- } else {
148
- gen_fpst(tcg_res, tcg_op, fpst);
149
- }
150
-
151
- write_fp_dreg(s, rd, tcg_res);
152
-}
153
+static const FPScalar1 f_scalar_frint64 = {
154
+ NULL,
155
+ gen_helper_frint64_s,
156
+ gen_helper_frint64_d,
157
+};
158
+TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
159
+ &f_scalar_frint64, FPROUNDING_ZERO)
160
+TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
161
162
static void handle_fp_fcvt(DisasContext *s, int opcode,
163
int rd, int rn, int dtype, int ntype)
164
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
165
break;
166
}
167
168
- case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
169
- if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
170
- goto do_unallocated;
171
- }
172
- /* 32-to-32 and 64-to-64 ops */
173
- switch (type) {
174
- case 0:
175
- if (!fp_access_check(s)) {
176
- return;
177
- }
178
- handle_fp_1src_single(s, opcode, rd, rn);
179
- break;
180
- case 1:
181
- if (!fp_access_check(s)) {
182
- return;
183
- }
184
- handle_fp_1src_double(s, opcode, rd, rn);
185
- break;
186
- case 3:
187
- default:
188
- goto do_unallocated;
189
- }
190
- break;
191
-
192
default:
193
do_unallocated:
194
case 0x0: /* FMOV */
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
196
case 0xc: /* FRINTA */
197
case 0xe: /* FRINTX */
198
case 0xf: /* FRINTI */
199
+ case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
200
unallocated_encoding(s);
201
break;
202
}
203
--
204
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_fp_fcvt and disas_fp_1src as these were
4
the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-32-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 7 ++
12
target/arm/tcg/translate-a64.c | 172 +++++++++++++--------------------
13
2 files changed, 74 insertions(+), 105 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRINT32X_s 00011110 0. 1 010001 10000 ..... ..... @rr_sd
20
FRINT64Z_s 00011110 0. 1 010010 10000 ..... ..... @rr_sd
21
FRINT64X_s 00011110 0. 1 010011 10000 ..... ..... @rr_sd
22
23
+FCVT_s_ds 00011110 00 1 000101 10000 ..... ..... @rr
24
+FCVT_s_hs 00011110 00 1 000111 10000 ..... ..... @rr
25
+FCVT_s_sd 00011110 01 1 000100 10000 ..... ..... @rr
26
+FCVT_s_hd 00011110 01 1 000111 10000 ..... ..... @rr
27
+FCVT_s_sh 00011110 11 1 000100 10000 ..... ..... @rr
28
+FCVT_s_dh 00011110 11 1 000101 10000 ..... ..... @rr
29
+
30
# Floating-point Immediate
31
32
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
38
&f_scalar_frint64, FPROUNDING_ZERO)
39
TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
40
41
-static void handle_fp_fcvt(DisasContext *s, int opcode,
42
- int rd, int rn, int dtype, int ntype)
43
+static bool trans_FCVT_s_ds(DisasContext *s, arg_rr *a)
44
{
45
- switch (ntype) {
46
- case 0x0:
47
- {
48
- TCGv_i32 tcg_rn = read_fp_sreg(s, rn);
49
- if (dtype == 1) {
50
- /* Single to double */
51
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
52
- gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, tcg_env);
53
- write_fp_dreg(s, rd, tcg_rd);
54
- } else {
55
- /* Single to half */
56
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
57
- TCGv_i32 ahp = get_ahp_flag();
58
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
59
+ if (fp_access_check(s)) {
60
+ TCGv_i32 tcg_rn = read_fp_sreg(s, a->rn);
61
+ TCGv_i64 tcg_rd = tcg_temp_new_i64();
62
63
- gen_helper_vfp_fcvt_f32_to_f16(tcg_rd, tcg_rn, fpst, ahp);
64
- /* write_fp_sreg is OK here because top half of tcg_rd is zero */
65
- write_fp_sreg(s, rd, tcg_rd);
66
- }
67
- break;
68
- }
69
- case 0x1:
70
- {
71
- TCGv_i64 tcg_rn = read_fp_dreg(s, rn);
72
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
73
- if (dtype == 0) {
74
- /* Double to single */
75
- gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, tcg_env);
76
- } else {
77
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
78
- TCGv_i32 ahp = get_ahp_flag();
79
- /* Double to half */
80
- gen_helper_vfp_fcvt_f64_to_f16(tcg_rd, tcg_rn, fpst, ahp);
81
- /* write_fp_sreg is OK here because top half of tcg_rd is zero */
82
- }
83
- write_fp_sreg(s, rd, tcg_rd);
84
- break;
85
- }
86
- case 0x3:
87
- {
88
- TCGv_i32 tcg_rn = read_fp_sreg(s, rn);
89
- TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
90
- TCGv_i32 tcg_ahp = get_ahp_flag();
91
- tcg_gen_ext16u_i32(tcg_rn, tcg_rn);
92
- if (dtype == 0) {
93
- /* Half to single */
94
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
95
- gen_helper_vfp_fcvt_f16_to_f32(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
96
- write_fp_sreg(s, rd, tcg_rd);
97
- } else {
98
- /* Half to double */
99
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
100
- gen_helper_vfp_fcvt_f16_to_f64(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
101
- write_fp_dreg(s, rd, tcg_rd);
102
- }
103
- break;
104
- }
105
- default:
106
- g_assert_not_reached();
107
+ gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, tcg_env);
108
+ write_fp_dreg(s, a->rd, tcg_rd);
109
}
110
+ return true;
111
}
112
113
-/* Floating point data-processing (1 source)
114
- * 31 30 29 28 24 23 22 21 20 15 14 10 9 5 4 0
115
- * +---+---+---+-----------+------+---+--------+-----------+------+------+
116
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | opcode | 1 0 0 0 0 | Rn | Rd |
117
- * +---+---+---+-----------+------+---+--------+-----------+------+------+
118
- */
119
-static void disas_fp_1src(DisasContext *s, uint32_t insn)
120
+static bool trans_FCVT_s_hs(DisasContext *s, arg_rr *a)
121
{
122
- int mos = extract32(insn, 29, 3);
123
- int type = extract32(insn, 22, 2);
124
- int opcode = extract32(insn, 15, 6);
125
- int rn = extract32(insn, 5, 5);
126
- int rd = extract32(insn, 0, 5);
127
+ if (fp_access_check(s)) {
128
+ TCGv_i32 tmp = read_fp_sreg(s, a->rn);
129
+ TCGv_i32 ahp = get_ahp_flag();
130
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
131
132
- if (mos) {
133
- goto do_unallocated;
134
+ gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
135
+ /* write_fp_sreg is OK here because top half of result is zero */
136
+ write_fp_sreg(s, a->rd, tmp);
137
}
138
+ return true;
139
+}
140
141
- switch (opcode) {
142
- case 0x4: case 0x5: case 0x7:
143
- {
144
- /* FCVT between half, single and double precision */
145
- int dtype = extract32(opcode, 0, 2);
146
- if (type == 2 || dtype == type) {
147
- goto do_unallocated;
148
- }
149
- if (!fp_access_check(s)) {
150
- return;
151
- }
152
+static bool trans_FCVT_s_sd(DisasContext *s, arg_rr *a)
153
+{
154
+ if (fp_access_check(s)) {
155
+ TCGv_i64 tcg_rn = read_fp_dreg(s, a->rn);
156
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
157
158
- handle_fp_fcvt(s, opcode, rd, rn, dtype, type);
159
- break;
160
+ gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, tcg_env);
161
+ write_fp_sreg(s, a->rd, tcg_rd);
162
}
163
+ return true;
164
+}
165
166
- default:
167
- do_unallocated:
168
- case 0x0: /* FMOV */
169
- case 0x1: /* FABS */
170
- case 0x2: /* FNEG */
171
- case 0x3: /* FSQRT */
172
- case 0x6: /* BFCVT */
173
- case 0x8: /* FRINTN */
174
- case 0x9: /* FRINTP */
175
- case 0xa: /* FRINTM */
176
- case 0xb: /* FRINTZ */
177
- case 0xc: /* FRINTA */
178
- case 0xe: /* FRINTX */
179
- case 0xf: /* FRINTI */
180
- case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
181
- unallocated_encoding(s);
182
- break;
183
+static bool trans_FCVT_s_hd(DisasContext *s, arg_rr *a)
184
+{
185
+ if (fp_access_check(s)) {
186
+ TCGv_i64 tcg_rn = read_fp_dreg(s, a->rn);
187
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
188
+ TCGv_i32 ahp = get_ahp_flag();
189
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
190
+
191
+ gen_helper_vfp_fcvt_f64_to_f16(tcg_rd, tcg_rn, fpst, ahp);
192
+ /* write_fp_sreg is OK here because top half of tcg_rd is zero */
193
+ write_fp_sreg(s, a->rd, tcg_rd);
194
}
195
+ return true;
196
+}
197
+
198
+static bool trans_FCVT_s_sh(DisasContext *s, arg_rr *a)
199
+{
200
+ if (fp_access_check(s)) {
201
+ TCGv_i32 tcg_rn = read_fp_hreg(s, a->rn);
202
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
203
+ TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
204
+ TCGv_i32 tcg_ahp = get_ahp_flag();
205
+
206
+ gen_helper_vfp_fcvt_f16_to_f32(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
207
+ write_fp_sreg(s, a->rd, tcg_rd);
208
+ }
209
+ return true;
210
+}
211
+
212
+static bool trans_FCVT_s_dh(DisasContext *s, arg_rr *a)
213
+{
214
+ if (fp_access_check(s)) {
215
+ TCGv_i32 tcg_rn = read_fp_hreg(s, a->rn);
216
+ TCGv_i64 tcg_rd = tcg_temp_new_i64();
217
+ TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
218
+ TCGv_i32 tcg_ahp = get_ahp_flag();
219
+
220
+ gen_helper_vfp_fcvt_f16_to_f64(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
221
+ write_fp_dreg(s, a->rd, tcg_rd);
222
+ }
223
+ return true;
224
}
225
226
/* Handle floating point <=> fixed point conversions. Note that we can
227
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
228
break;
229
case 2: /* [15:12] == x100 */
230
/* Floating point data-processing (1 source) */
231
- disas_fp_1src(s, insn);
232
+ unallocated_encoding(s); /* in decodetree */
233
break;
234
case 3: /* [15:12] == 1000 */
235
unallocated_encoding(s);
236
--
237
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Model gen_gvec_fn_zzz on gen_gvec_fn3 in translate-a64.c, but
indicating which kind of register and in which order.

Model do_zzz_fn on the other do_foo functions that take an
argument set and verify sve enabled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200815013145.539409-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 43 +++++++++++++++++++++-----------------
1 file changed, 24 insertions(+), 19 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This includes SCVTF, UCVTF, FCVT{N,P,M,Z,A}{S,U}.
Remove disas_fp_fixed_conv as those were the last insns
decoded by that function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 40 ++++
target/arm/tcg/translate-a64.c | 391 ++++++++++++++-------------------
2 files changed, 209 insertions(+), 222 deletions(-)

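The a64.decode patterns below recover the fixed-point fraction width for SCVTF/UCVTF/FCVTZS/FCVTZU from the instruction's scale field through %fcvt_shift32/%fcvt_shift64. As a standalone worked example of that arithmetic only -- assuming the rsub_32/rsub_64 decode helpers compute 32 - x and 64 - x; this is not QEMU code:

  #include <math.h>
  #include <stdint.h>
  #include <stdio.h>

  static int rsub_64_sketch(int scale) { return 64 - scale; }

  int main(void)
  {
      int64_t reg = 300;                    /* integer source register value */
      int scale_field = 60;                 /* encoded scale field */
      int shift = rsub_64_sketch(scale_field);       /* 4 fractional bits */
      double result = ldexp((double)reg, -shift);    /* SCVTF-style: 300 / 2^4 */

      printf("shift=%d result=%f\n", shift, result); /* shift=4 result=18.750000 */
      return 0;
  }

The helpers the translator calls (vfp_sqtod and friends) apply the same power-of-two scaling through their shift argument; the old comment removed below mentions the scalbn call that implements it.
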
17
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate-sve.c
18
--- a/target/arm/tcg/a64.decode
20
+++ b/target/arm/translate-sve.c
19
+++ b/target/arm/tcg/a64.decode
21
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn_zz(DisasContext *s, GVecGen2Fn *gvec_fn,
20
@@ -XXX,XX +XXX,XX @@ FMAXV_s 0110 1110 00 11000 01111 10 ..... ..... @rr_q1e2
21
FMINV_h 0.00 1110 10 11000 01111 10 ..... ..... @qrr_h
22
FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
23
24
+# Conversion between floating-point and fixed-point (general register)
25
+
26
+&fcvt rd rn esz sf shift
27
+%fcvt_shift32 10:5 !function=rsub_32
28
+%fcvt_shift64 10:6 !function=rsub_64
29
+
30
+@fcvt32 0 ....... .. ...... 1..... rn:5 rd:5 \
31
+ &fcvt sf=0 esz=%esz_hsd shift=%fcvt_shift32
32
+@fcvt64 1 ....... .. ...... ...... rn:5 rd:5 \
33
+ &fcvt sf=1 esz=%esz_hsd shift=%fcvt_shift64
34
+
35
+SCVTF_g . 0011110 .. 000010 ...... ..... ..... @fcvt32
36
+SCVTF_g . 0011110 .. 000010 ...... ..... ..... @fcvt64
37
+UCVTF_g . 0011110 .. 000011 ...... ..... ..... @fcvt32
38
+UCVTF_g . 0011110 .. 000011 ...... ..... ..... @fcvt64
39
+
40
+FCVTZS_g . 0011110 .. 011000 ...... ..... ..... @fcvt32
41
+FCVTZS_g . 0011110 .. 011000 ...... ..... ..... @fcvt64
42
+FCVTZU_g . 0011110 .. 011001 ...... ..... ..... @fcvt32
43
+FCVTZU_g . 0011110 .. 011001 ...... ..... ..... @fcvt64
44
+
45
+# Conversion between floating-point and integer (general register)
46
+
47
+@icvt sf:1 ....... .. ...... ...... rn:5 rd:5 \
48
+ &fcvt esz=%esz_hsd shift=0
49
+
50
+SCVTF_g . 0011110 .. 100010 000000 ..... ..... @icvt
51
+UCVTF_g . 0011110 .. 100011 000000 ..... ..... @icvt
52
+
53
+FCVTNS_g . 0011110 .. 100000 000000 ..... ..... @icvt
54
+FCVTNU_g . 0011110 .. 100001 000000 ..... ..... @icvt
55
+FCVTPS_g . 0011110 .. 101000 000000 ..... ..... @icvt
56
+FCVTPU_g . 0011110 .. 101001 000000 ..... ..... @icvt
57
+FCVTMS_g . 0011110 .. 110000 000000 ..... ..... @icvt
58
+FCVTMU_g . 0011110 .. 110001 000000 ..... ..... @icvt
59
+FCVTZS_g . 0011110 .. 111000 000000 ..... ..... @icvt
60
+FCVTZU_g . 0011110 .. 111001 000000 ..... ..... @icvt
61
+FCVTAS_g . 0011110 .. 100100 000000 ..... ..... @icvt
62
+FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
63
+
64
# Floating-point data processing (1 source)
65
66
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
67
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/arm/tcg/translate-a64.c
70
+++ b/target/arm/tcg/translate-a64.c
71
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_s_dh(DisasContext *s, arg_rr *a)
72
return true;
22
}
73
}
23
74
24
/* Invoke a vector expander on three Zregs. */
75
-/* Handle floating point <=> fixed point conversions. Note that we can
25
-static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
76
- * also deal with fp <=> integer conversions as a special case (scale == 64)
26
- int esz, int rd, int rn, int rm)
77
- * OPTME: consider handling that special case specially or at least skipping
27
+static void gen_gvec_fn_zzz(DisasContext *s, GVecGen3Fn *gvec_fn,
78
- * the call to scalbn in the helpers for zero shifts.
28
+ int esz, int rd, int rn, int rm)
79
- */
80
-static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
81
- bool itof, int rmode, int scale, int sf, int type)
82
+static bool do_cvtf_scalar(DisasContext *s, MemOp esz, int rd, int shift,
83
+ TCGv_i64 tcg_int, bool is_signed)
29
{
84
{
30
- if (sve_access_check(s)) {
85
- bool is_signed = !(opcode & 1);
31
- unsigned vsz = vec_full_reg_size(s);
86
TCGv_ptr tcg_fpstatus;
32
- gvec_fn(esz, vec_full_reg_offset(s, rd),
87
TCGv_i32 tcg_shift, tcg_single;
33
- vec_full_reg_offset(s, rn),
88
TCGv_i64 tcg_double;
34
- vec_full_reg_offset(s, rm), vsz, vsz);
89
90
- tcg_fpstatus = fpstatus_ptr(type == 3 ? FPST_FPCR_F16 : FPST_FPCR);
91
+ tcg_fpstatus = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
92
+ tcg_shift = tcg_constant_i32(shift);
93
94
- tcg_shift = tcg_constant_i32(64 - scale);
95
-
96
- if (itof) {
97
- TCGv_i64 tcg_int = cpu_reg(s, rn);
98
- if (!sf) {
99
- TCGv_i64 tcg_extend = tcg_temp_new_i64();
100
-
101
- if (is_signed) {
102
- tcg_gen_ext32s_i64(tcg_extend, tcg_int);
103
- } else {
104
- tcg_gen_ext32u_i64(tcg_extend, tcg_int);
105
- }
106
-
107
- tcg_int = tcg_extend;
108
+ switch (esz) {
109
+ case MO_64:
110
+ tcg_double = tcg_temp_new_i64();
111
+ if (is_signed) {
112
+ gen_helper_vfp_sqtod(tcg_double, tcg_int, tcg_shift, tcg_fpstatus);
113
+ } else {
114
+ gen_helper_vfp_uqtod(tcg_double, tcg_int, tcg_shift, tcg_fpstatus);
115
}
116
+ write_fp_dreg(s, rd, tcg_double);
117
+ break;
118
119
- switch (type) {
120
- case 1: /* float64 */
121
- tcg_double = tcg_temp_new_i64();
122
- if (is_signed) {
123
- gen_helper_vfp_sqtod(tcg_double, tcg_int,
124
- tcg_shift, tcg_fpstatus);
125
- } else {
126
- gen_helper_vfp_uqtod(tcg_double, tcg_int,
127
- tcg_shift, tcg_fpstatus);
128
- }
129
- write_fp_dreg(s, rd, tcg_double);
130
- break;
131
-
132
- case 0: /* float32 */
133
- tcg_single = tcg_temp_new_i32();
134
- if (is_signed) {
135
- gen_helper_vfp_sqtos(tcg_single, tcg_int,
136
- tcg_shift, tcg_fpstatus);
137
- } else {
138
- gen_helper_vfp_uqtos(tcg_single, tcg_int,
139
- tcg_shift, tcg_fpstatus);
140
- }
141
- write_fp_sreg(s, rd, tcg_single);
142
- break;
143
-
144
- case 3: /* float16 */
145
- tcg_single = tcg_temp_new_i32();
146
- if (is_signed) {
147
- gen_helper_vfp_sqtoh(tcg_single, tcg_int,
148
- tcg_shift, tcg_fpstatus);
149
- } else {
150
- gen_helper_vfp_uqtoh(tcg_single, tcg_int,
151
- tcg_shift, tcg_fpstatus);
152
- }
153
- write_fp_sreg(s, rd, tcg_single);
154
- break;
155
-
156
- default:
157
- g_assert_not_reached();
158
+ case MO_32:
159
+ tcg_single = tcg_temp_new_i32();
160
+ if (is_signed) {
161
+ gen_helper_vfp_sqtos(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
162
+ } else {
163
+ gen_helper_vfp_uqtos(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
164
}
165
- } else {
166
- TCGv_i64 tcg_int = cpu_reg(s, rd);
167
- TCGv_i32 tcg_rmode;
168
+ write_fp_sreg(s, rd, tcg_single);
169
+ break;
170
171
- if (extract32(opcode, 2, 1)) {
172
- /* There are too many rounding modes to all fit into rmode,
173
- * so FCVTA[US] is a special case.
174
- */
175
- rmode = FPROUNDING_TIEAWAY;
176
+ case MO_16:
177
+ tcg_single = tcg_temp_new_i32();
178
+ if (is_signed) {
179
+ gen_helper_vfp_sqtoh(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
180
+ } else {
181
+ gen_helper_vfp_uqtoh(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
182
}
183
+ write_fp_sreg(s, rd, tcg_single);
184
+ break;
185
186
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
187
-
188
- switch (type) {
189
- case 1: /* float64 */
190
- tcg_double = read_fp_dreg(s, rn);
191
- if (is_signed) {
192
- if (!sf) {
193
- gen_helper_vfp_tosld(tcg_int, tcg_double,
194
- tcg_shift, tcg_fpstatus);
195
- } else {
196
- gen_helper_vfp_tosqd(tcg_int, tcg_double,
197
- tcg_shift, tcg_fpstatus);
198
- }
199
- } else {
200
- if (!sf) {
201
- gen_helper_vfp_tould(tcg_int, tcg_double,
202
- tcg_shift, tcg_fpstatus);
203
- } else {
204
- gen_helper_vfp_touqd(tcg_int, tcg_double,
205
- tcg_shift, tcg_fpstatus);
206
- }
207
- }
208
- if (!sf) {
209
- tcg_gen_ext32u_i64(tcg_int, tcg_int);
210
- }
211
- break;
212
-
213
- case 0: /* float32 */
214
- tcg_single = read_fp_sreg(s, rn);
215
- if (sf) {
216
- if (is_signed) {
217
- gen_helper_vfp_tosqs(tcg_int, tcg_single,
218
- tcg_shift, tcg_fpstatus);
219
- } else {
220
- gen_helper_vfp_touqs(tcg_int, tcg_single,
221
- tcg_shift, tcg_fpstatus);
222
- }
223
- } else {
224
- TCGv_i32 tcg_dest = tcg_temp_new_i32();
225
- if (is_signed) {
226
- gen_helper_vfp_tosls(tcg_dest, tcg_single,
227
- tcg_shift, tcg_fpstatus);
228
- } else {
229
- gen_helper_vfp_touls(tcg_dest, tcg_single,
230
- tcg_shift, tcg_fpstatus);
231
- }
232
- tcg_gen_extu_i32_i64(tcg_int, tcg_dest);
233
- }
234
- break;
235
-
236
- case 3: /* float16 */
237
- tcg_single = read_fp_sreg(s, rn);
238
- if (sf) {
239
- if (is_signed) {
240
- gen_helper_vfp_tosqh(tcg_int, tcg_single,
241
- tcg_shift, tcg_fpstatus);
242
- } else {
243
- gen_helper_vfp_touqh(tcg_int, tcg_single,
244
- tcg_shift, tcg_fpstatus);
245
- }
246
- } else {
247
- TCGv_i32 tcg_dest = tcg_temp_new_i32();
248
- if (is_signed) {
249
- gen_helper_vfp_toslh(tcg_dest, tcg_single,
250
- tcg_shift, tcg_fpstatus);
251
- } else {
252
- gen_helper_vfp_toulh(tcg_dest, tcg_single,
253
- tcg_shift, tcg_fpstatus);
254
- }
255
- tcg_gen_extu_i32_i64(tcg_int, tcg_dest);
256
- }
257
- break;
258
-
259
- default:
260
- g_assert_not_reached();
261
- }
262
-
263
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
264
+ default:
265
+ g_assert_not_reached();
266
}
267
+ return true;
268
}
269
270
-/* Floating point <-> fixed point conversions
271
- * 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
272
- * +----+---+---+-----------+------+---+-------+--------+-------+------+------+
273
- * | sf | 0 | S | 1 1 1 1 0 | type | 0 | rmode | opcode | scale | Rn | Rd |
274
- * +----+---+---+-----------+------+---+-------+--------+-------+------+------+
275
- */
276
-static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
277
+static bool do_cvtf_g(DisasContext *s, arg_fcvt *a, bool is_signed)
278
{
279
- int rd = extract32(insn, 0, 5);
280
- int rn = extract32(insn, 5, 5);
281
- int scale = extract32(insn, 10, 6);
282
- int opcode = extract32(insn, 16, 3);
283
- int rmode = extract32(insn, 19, 2);
284
- int type = extract32(insn, 22, 2);
285
- bool sbit = extract32(insn, 29, 1);
286
- bool sf = extract32(insn, 31, 1);
287
- bool itof;
288
+ TCGv_i64 tcg_int;
289
+ int check = fp_access_check_scalar_hsd(s, a->esz);
290
291
- if (sbit || (!sf && scale < 32)) {
292
- unallocated_encoding(s);
293
- return;
294
+ if (check <= 0) {
295
+ return check == 0;
296
}
297
298
- switch (type) {
299
- case 0: /* float32 */
300
- case 1: /* float64 */
301
- break;
302
- case 3: /* float16 */
303
- if (dc_isar_feature(aa64_fp16, s)) {
304
- break;
305
+ if (a->sf) {
306
+ tcg_int = cpu_reg(s, a->rn);
307
+ } else {
308
+ tcg_int = read_cpu_reg(s, a->rn, true);
309
+ if (is_signed) {
310
+ tcg_gen_ext32s_i64(tcg_int, tcg_int);
311
+ } else {
312
+ tcg_gen_ext32u_i64(tcg_int, tcg_int);
313
}
314
- /* fallthru */
315
- default:
316
- unallocated_encoding(s);
317
- return;
318
}
319
-
320
- switch ((rmode << 3) | opcode) {
321
- case 0x2: /* SCVTF */
322
- case 0x3: /* UCVTF */
323
- itof = true;
324
- break;
325
- case 0x18: /* FCVTZS */
326
- case 0x19: /* FCVTZU */
327
- itof = false;
328
- break;
329
- default:
330
- unallocated_encoding(s);
331
- return;
35
- }
332
- }
36
- return true;
333
-
37
+ unsigned vsz = vec_full_reg_size(s);
334
- if (!fp_access_check(s)) {
38
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
335
- return;
39
+ vec_full_reg_offset(s, rn),
336
- }
40
+ vec_full_reg_offset(s, rm), vsz, vsz);
337
-
338
- handle_fpfpcvt(s, rd, rn, opcode, itof, FPROUNDING_ZERO, scale, sf, type);
339
+ return do_cvtf_scalar(s, a->esz, a->rd, a->shift, tcg_int, is_signed);
41
}
340
}
42
341
43
/* Invoke a vector move on two Zregs. */
342
+TRANS(SCVTF_g, do_cvtf_g, a, true)
44
@@ -XXX,XX +XXX,XX @@ const uint64_t pred_esz_masks[4] = {
343
+TRANS(UCVTF_g, do_cvtf_g, a, false)
45
*** SVE Logical - Unpredicated Group
344
+
46
*/
345
+static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
47
346
+ TCGv_i64 tcg_out, int shift, int rn,
48
+static bool do_zzz_fn(DisasContext *s, arg_rrr_esz *a, GVecGen3Fn *gvec_fn)
347
+ ARMFPRounding rmode)
49
+{
348
+{
50
+ if (sve_access_check(s)) {
349
+ TCGv_ptr tcg_fpstatus;
51
+ gen_gvec_fn_zzz(s, gvec_fn, a->esz, a->rd, a->rn, a->rm);
350
+ TCGv_i32 tcg_shift, tcg_rmode, tcg_single;
351
+
352
+ tcg_fpstatus = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
353
+ tcg_shift = tcg_constant_i32(shift);
354
+ tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
355
+
356
+ switch (esz) {
357
+ case MO_64:
358
+ read_vec_element(s, tcg_out, rn, 0, MO_64);
359
+ switch (out) {
360
+ case MO_64 | MO_SIGN:
361
+ gen_helper_vfp_tosqd(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
362
+ break;
363
+ case MO_64:
364
+ gen_helper_vfp_touqd(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
365
+ break;
366
+ case MO_32 | MO_SIGN:
367
+ gen_helper_vfp_tosld(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
368
+ break;
369
+ case MO_32:
370
+ gen_helper_vfp_tould(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
371
+ break;
372
+ default:
373
+ g_assert_not_reached();
374
+ }
375
+ break;
376
+
377
+ case MO_32:
378
+ tcg_single = read_fp_sreg(s, rn);
379
+ switch (out) {
380
+ case MO_64 | MO_SIGN:
381
+ gen_helper_vfp_tosqs(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
382
+ break;
383
+ case MO_64:
384
+ gen_helper_vfp_touqs(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
385
+ break;
386
+ case MO_32 | MO_SIGN:
387
+ gen_helper_vfp_tosls(tcg_single, tcg_single,
388
+ tcg_shift, tcg_fpstatus);
389
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
390
+ break;
391
+ case MO_32:
392
+ gen_helper_vfp_touls(tcg_single, tcg_single,
393
+ tcg_shift, tcg_fpstatus);
394
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
395
+ break;
396
+ default:
397
+ g_assert_not_reached();
398
+ }
399
+ break;
400
+
401
+ case MO_16:
402
+ tcg_single = read_fp_hreg(s, rn);
403
+ switch (out) {
404
+ case MO_64 | MO_SIGN:
405
+ gen_helper_vfp_tosqh(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
406
+ break;
407
+ case MO_64:
408
+ gen_helper_vfp_touqh(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
409
+ break;
410
+ case MO_32 | MO_SIGN:
411
+ gen_helper_vfp_toslh(tcg_single, tcg_single,
412
+ tcg_shift, tcg_fpstatus);
413
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
414
+ break;
415
+ case MO_32:
416
+ gen_helper_vfp_toulh(tcg_single, tcg_single,
417
+ tcg_shift, tcg_fpstatus);
418
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
419
+ break;
420
+ default:
421
+ g_assert_not_reached();
422
+ }
423
+ break;
424
+
425
+ default:
426
+ g_assert_not_reached();
427
+ }
428
+
429
+ gen_restore_rmode(tcg_rmode, tcg_fpstatus);
430
+}
431
+
432
+static bool do_fcvt_g(DisasContext *s, arg_fcvt *a,
433
+ ARMFPRounding rmode, bool is_signed)
434
+{
435
+ TCGv_i64 tcg_int;
436
+ int check = fp_access_check_scalar_hsd(s, a->esz);
437
+
438
+ if (check <= 0) {
439
+ return check == 0;
440
+ }
441
+
442
+ tcg_int = cpu_reg(s, a->rd);
443
+ do_fcvt_scalar(s, (a->sf ? MO_64 : MO_32) | (is_signed ? MO_SIGN : 0),
444
+ a->esz, tcg_int, a->shift, a->rn, rmode);
445
+
446
+ if (!a->sf) {
447
+ tcg_gen_ext32u_i64(tcg_int, tcg_int);
52
+ }
448
+ }
53
+ return true;
449
+ return true;
54
+}
450
+}
55
+
451
+
56
static bool trans_AND_zzz(DisasContext *s, arg_rrr_esz *a)
452
+TRANS(FCVTNS_g, do_fcvt_g, a, FPROUNDING_TIEEVEN, true)
453
+TRANS(FCVTNU_g, do_fcvt_g, a, FPROUNDING_TIEEVEN, false)
454
+TRANS(FCVTPS_g, do_fcvt_g, a, FPROUNDING_POSINF, true)
455
+TRANS(FCVTPU_g, do_fcvt_g, a, FPROUNDING_POSINF, false)
456
+TRANS(FCVTMS_g, do_fcvt_g, a, FPROUNDING_NEGINF, true)
457
+TRANS(FCVTMU_g, do_fcvt_g, a, FPROUNDING_NEGINF, false)
458
+TRANS(FCVTZS_g, do_fcvt_g, a, FPROUNDING_ZERO, true)
459
+TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
460
+TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
461
+TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
462
+
463
static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
57
{
464
{
58
- return do_vector3_z(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
465
/* FMOV: gpr to or from float, double, or top half of quad fp reg,
59
+ return do_zzz_fn(s, a, tcg_gen_gvec_and);
466
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
60
}
467
switch (opcode) {
61
468
case 2: /* SCVTF */
62
static bool trans_ORR_zzz(DisasContext *s, arg_rrr_esz *a)
469
case 3: /* UCVTF */
63
{
470
- itof = true;
64
- return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
471
- /* fallthru */
65
+ return do_zzz_fn(s, a, tcg_gen_gvec_or);
472
case 4: /* FCVTAS */
66
}
473
case 5: /* FCVTAU */
67
474
- if (rmode != 0) {
68
static bool trans_EOR_zzz(DisasContext *s, arg_rrr_esz *a)
475
- goto do_unallocated;
69
{
476
- }
70
- return do_vector3_z(s, tcg_gen_gvec_xor, 0, a->rd, a->rn, a->rm);
477
- /* fallthru */
71
+ return do_zzz_fn(s, a, tcg_gen_gvec_xor);
478
case 0: /* FCVT[NPMZ]S */
72
}
479
case 1: /* FCVT[NPMZ]U */
73
480
- switch (type) {
74
static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
481
- case 0: /* float32 */
75
{
482
- case 1: /* float64 */
76
- return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
483
- break;
77
+ return do_zzz_fn(s, a, tcg_gen_gvec_andc);
484
- case 3: /* float16 */
78
}
485
- if (!dc_isar_feature(aa64_fp16, s)) {
79
486
- goto do_unallocated;
80
/*
487
- }
81
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a)
488
- break;
82
489
- default:
83
static bool trans_ADD_zzz(DisasContext *s, arg_rrr_esz *a)
490
- goto do_unallocated;
84
{
491
- }
85
- return do_vector3_z(s, tcg_gen_gvec_add, a->esz, a->rd, a->rn, a->rm);
492
- if (!fp_access_check(s)) {
86
+ return do_zzz_fn(s, a, tcg_gen_gvec_add);
493
- return;
87
}
494
- }
88
495
- handle_fpfpcvt(s, rd, rn, opcode, itof, rmode, 64, sf, type);
89
static bool trans_SUB_zzz(DisasContext *s, arg_rrr_esz *a)
496
- break;
90
{
497
+ goto do_unallocated;
91
- return do_vector3_z(s, tcg_gen_gvec_sub, a->esz, a->rd, a->rn, a->rm);
498
92
+ return do_zzz_fn(s, a, tcg_gen_gvec_sub);
499
default:
93
}
500
switch (sf << 7 | type << 5 | rmode << 3 | opcode) {
94
501
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
95
static bool trans_SQADD_zzz(DisasContext *s, arg_rrr_esz *a)
502
unallocated_encoding(s); /* in decodetree */
96
{
503
} else if (extract32(insn, 21, 1) == 0) {
97
- return do_vector3_z(s, tcg_gen_gvec_ssadd, a->esz, a->rd, a->rn, a->rm);
504
/* Floating point to fixed point conversions */
98
+ return do_zzz_fn(s, a, tcg_gen_gvec_ssadd);
505
- disas_fp_fixed_conv(s, insn);
99
}
506
+ unallocated_encoding(s); /* in decodetree */
100
507
} else {
101
static bool trans_SQSUB_zzz(DisasContext *s, arg_rrr_esz *a)
508
switch (extract32(insn, 10, 2)) {
102
{
509
case 1: /* Floating point conditional compare */
103
- return do_vector3_z(s, tcg_gen_gvec_sssub, a->esz, a->rd, a->rn, a->rm);
104
+ return do_zzz_fn(s, a, tcg_gen_gvec_sssub);
105
}
106
107
static bool trans_UQADD_zzz(DisasContext *s, arg_rrr_esz *a)
108
{
109
- return do_vector3_z(s, tcg_gen_gvec_usadd, a->esz, a->rd, a->rn, a->rm);
110
+ return do_zzz_fn(s, a, tcg_gen_gvec_usadd);
111
}
112
113
static bool trans_UQSUB_zzz(DisasContext *s, arg_rrr_esz *a)
114
{
115
- return do_vector3_z(s, tcg_gen_gvec_ussub, a->esz, a->rd, a->rn, a->rm);
116
+ return do_zzz_fn(s, a, tcg_gen_gvec_ussub);
117
}
118
119
/*
120
--
510
--
121
2.20.1
511
2.34.1
122
123
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  2 ++
 target/arm/tcg/translate-a64.c | 41 +++++++++++++++++-----------------
 2 files changed, 22 insertions(+), 21 deletions(-)

12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FCVTZU_g . 0011110 .. 111001 000000 ..... ..... @icvt
17
FCVTAS_g . 0011110 .. 100100 000000 ..... ..... @icvt
18
FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
19
20
+FJCVTZS 0 0011110 01 111110 000000 ..... ..... @rr
21
+
22
# Floating-point data processing (1 source)
23
24
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
25
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/tcg/translate-a64.c
28
+++ b/target/arm/tcg/translate-a64.c
29
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
30
TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
31
TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
32
33
+static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
34
+{
35
+ if (!dc_isar_feature(aa64_jscvt, s)) {
36
+ return false;
37
+ }
38
+ if (fp_access_check(s)) {
39
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
40
+ TCGv_ptr fpstatus = fpstatus_ptr(FPST_FPCR);
41
+
42
+ gen_helper_fjcvtzs(t, t, fpstatus);
43
+
44
+ tcg_gen_ext32u_i64(cpu_reg(s, a->rd), t);
45
+ tcg_gen_extrh_i64_i32(cpu_ZF, t);
46
+ tcg_gen_movi_i32(cpu_CF, 0);
47
+ tcg_gen_movi_i32(cpu_NF, 0);
48
+ tcg_gen_movi_i32(cpu_VF, 0);
49
+ }
50
+ return true;
51
+}
52
+
53
static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
54
{
55
/* FMOV: gpr to or from float, double, or top half of quad fp reg,
56
@@ -XXX,XX +XXX,XX @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
57
}
58
}
59
60
-static void handle_fjcvtzs(DisasContext *s, int rd, int rn)
61
-{
62
- TCGv_i64 t = read_fp_dreg(s, rn);
63
- TCGv_ptr fpstatus = fpstatus_ptr(FPST_FPCR);
64
-
65
- gen_helper_fjcvtzs(t, t, fpstatus);
66
-
67
- tcg_gen_ext32u_i64(cpu_reg(s, rd), t);
68
- tcg_gen_extrh_i64_i32(cpu_ZF, t);
69
- tcg_gen_movi_i32(cpu_CF, 0);
70
- tcg_gen_movi_i32(cpu_NF, 0);
71
- tcg_gen_movi_i32(cpu_VF, 0);
72
-}
73
-
74
/* Floating point <-> integer conversions
75
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
76
* +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
77
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
78
break;
79
80
case 0b00111110: /* FJCVTZS */
81
- if (!dc_isar_feature(aa64_jscvt, s)) {
82
- goto do_unallocated;
83
- } else if (fp_access_check(s)) {
84
- handle_fjcvtzs(s, rd, rn);
85
- }
86
- break;
87
-
88
default:
89
do_unallocated:
90
unallocated_encoding(s);
91
--
92
2.34.1
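
For anyone following the decodetree conversion: the pattern line added to
a64.decode above is what drives the generated decoder, and trans_FJCVTZS()
in the diff is the callback it ends up invoking. As a rough, standalone
illustration (not QEMU code -- the real decoder is generated by
scripts/decodetree.py at build time, and match_fjcvtzs/arg_rr below are only
stand-ins for its output), the pattern
"0 0011110 01 111110 000000 ..... ..... @rr" fixes bits [31:10] and extracts
rn from insn[9:5] and rd from insn[4:0]:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef struct { int rd, rn; } arg_rr;   /* shape of the generated arg struct */

static bool match_fjcvtzs(uint32_t insn, arg_rr *a)
{
    if ((insn & 0xfffffc00u) != 0x1e7e0000u) {  /* fixed bits from the pattern */
        return false;
    }
    a->rn = (insn >> 5) & 0x1f;
    a->rd = insn & 0x1f;
    return true;        /* the generated decoder would now call trans_FJCVTZS() */
}

int main(void)
{
    arg_rr a;
    uint32_t insn = 0x1e7e0000u | (3 << 5) | 7;   /* FJCVTZS w7, d3 */

    if (match_fjcvtzs(insn, &a)) {
        printf("FJCVTZS: rd=%d rn=%d\n", a.rd, a.rn);
    }
    return 0;
}
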
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Model the new function on gen_gvec_fn2 in translate-a64.c, but
indicating which kind of register and in which order. Since there
is only one user of do_vector2_z, fold it into do_mov_z.

Remove disas_fp_int_conv and disas_data_proc_fp as these
were the last insns decoded by those functions.
6
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241211163036.2297116-35-richard.henderson@linaro.org
9
Message-id: 20200815013145.539409-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/translate-sve.c | 19 ++++++++++---------
11
target/arm/tcg/a64.decode | 14 ++
13
1 file changed, 10 insertions(+), 9 deletions(-)
12
target/arm/tcg/translate-a64.c | 232 ++++++++++-----------------------
13
2 files changed, 86 insertions(+), 160 deletions(-)
14
14
15
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-sve.c
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/translate-sve.c
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ static int pred_gvec_reg_size(DisasContext *s)
19
@@ -XXX,XX +XXX,XX @@ FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
20
21
FJCVTZS 0 0011110 01 111110 000000 ..... ..... @rr
22
23
+FMOV_ws 0 0011110 00 100110 000000 ..... ..... @rr
24
+FMOV_sw 0 0011110 00 100111 000000 ..... ..... @rr
25
+
26
+FMOV_xd 1 0011110 01 100110 000000 ..... ..... @rr
27
+FMOV_dx 1 0011110 01 100111 000000 ..... ..... @rr
28
+
29
+# Move to/from upper half of 128-bit
30
+FMOV_xu 1 0011110 10 101110 000000 ..... ..... @rr
31
+FMOV_ux 1 0011110 10 101111 000000 ..... ..... @rr
32
+
33
+# Half-precision allows both sf=0 and sf=1 with identical results
34
+FMOV_xh - 0011110 11 100110 000000 ..... ..... @rr
35
+FMOV_hx - 0011110 11 100111 000000 ..... ..... @rr
36
+
37
# Floating-point data processing (1 source)
38
39
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
40
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/tcg/translate-a64.c
43
+++ b/target/arm/tcg/translate-a64.c
44
@@ -XXX,XX +XXX,XX @@ static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
45
return true;
20
}
46
}
21
47
22
/* Invoke a vector expander on two Zregs. */
48
-static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
23
-static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
49
+static bool trans_FMOV_hx(DisasContext *s, arg_rr *a)
24
- int esz, int rd, int rn)
25
+
26
+static void gen_gvec_fn_zz(DisasContext *s, GVecGen2Fn *gvec_fn,
27
+ int esz, int rd, int rn)
28
{
50
{
29
- if (sve_access_check(s)) {
51
- /* FMOV: gpr to or from float, double, or top half of quad fp reg,
30
- unsigned vsz = vec_full_reg_size(s);
52
- * without conversion.
31
- gvec_fn(esz, vec_full_reg_offset(s, rd),
53
- */
32
- vec_full_reg_offset(s, rn), vsz, vsz);
54
-
55
- if (itof) {
56
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
57
- TCGv_i64 tmp;
58
-
59
- switch (type) {
60
- case 0:
61
- /* 32 bit */
62
- tmp = tcg_temp_new_i64();
63
- tcg_gen_ext32u_i64(tmp, tcg_rn);
64
- write_fp_dreg(s, rd, tmp);
65
- break;
66
- case 1:
67
- /* 64 bit */
68
- write_fp_dreg(s, rd, tcg_rn);
69
- break;
70
- case 2:
71
- /* 64 bit to top half. */
72
- tcg_gen_st_i64(tcg_rn, tcg_env, fp_reg_hi_offset(s, rd));
73
- clear_vec_high(s, true, rd);
74
- break;
75
- case 3:
76
- /* 16 bit */
77
- tmp = tcg_temp_new_i64();
78
- tcg_gen_ext16u_i64(tmp, tcg_rn);
79
- write_fp_dreg(s, rd, tmp);
80
- break;
81
- default:
82
- g_assert_not_reached();
83
- }
84
- } else {
85
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
86
-
87
- switch (type) {
88
- case 0:
89
- /* 32 bit */
90
- tcg_gen_ld32u_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_32));
91
- break;
92
- case 1:
93
- /* 64 bit */
94
- tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_64));
95
- break;
96
- case 2:
97
- /* 64 bits from top half */
98
- tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_hi_offset(s, rn));
99
- break;
100
- case 3:
101
- /* 16 bit */
102
- tcg_gen_ld16u_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_16));
103
- break;
104
- default:
105
- g_assert_not_reached();
106
- }
107
+ if (!dc_isar_feature(aa64_fp16, s)) {
108
+ return false;
109
}
110
+ if (fp_access_check(s)) {
111
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
112
+ TCGv_i64 tmp = tcg_temp_new_i64();
113
+ tcg_gen_ext16u_i64(tmp, tcg_rn);
114
+ write_fp_dreg(s, a->rd, tmp);
115
+ }
116
+ return true;
117
}
118
119
-/* Floating point <-> integer conversions
120
- * 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
121
- * +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
122
- * | sf | 0 | S | 1 1 1 1 0 | type | 1 | rmode | opc | 0 0 0 0 0 0 | Rn | Rd |
123
- * +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
124
- */
125
-static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
126
+static bool trans_FMOV_sw(DisasContext *s, arg_rr *a)
127
{
128
- int rd = extract32(insn, 0, 5);
129
- int rn = extract32(insn, 5, 5);
130
- int opcode = extract32(insn, 16, 3);
131
- int rmode = extract32(insn, 19, 2);
132
- int type = extract32(insn, 22, 2);
133
- bool sbit = extract32(insn, 29, 1);
134
- bool sf = extract32(insn, 31, 1);
135
- bool itof = false;
136
-
137
- if (sbit) {
138
- goto do_unallocated;
33
- }
139
- }
34
- return true;
140
-
35
+ unsigned vsz = vec_full_reg_size(s);
141
- switch (opcode) {
36
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
142
- case 2: /* SCVTF */
37
+ vec_full_reg_offset(s, rn), vsz, vsz);
143
- case 3: /* UCVTF */
144
- case 4: /* FCVTAS */
145
- case 5: /* FCVTAU */
146
- case 0: /* FCVT[NPMZ]S */
147
- case 1: /* FCVT[NPMZ]U */
148
- goto do_unallocated;
149
-
150
- default:
151
- switch (sf << 7 | type << 5 | rmode << 3 | opcode) {
152
- case 0b01100110: /* FMOV half <-> 32-bit int */
153
- case 0b01100111:
154
- case 0b11100110: /* FMOV half <-> 64-bit int */
155
- case 0b11100111:
156
- if (!dc_isar_feature(aa64_fp16, s)) {
157
- goto do_unallocated;
158
- }
159
- /* fallthru */
160
- case 0b00000110: /* FMOV 32-bit */
161
- case 0b00000111:
162
- case 0b10100110: /* FMOV 64-bit */
163
- case 0b10100111:
164
- case 0b11001110: /* FMOV top half of 128-bit */
165
- case 0b11001111:
166
- if (!fp_access_check(s)) {
167
- return;
168
- }
169
- itof = opcode & 1;
170
- handle_fmov(s, rd, rn, type, itof);
171
- break;
172
-
173
- case 0b00111110: /* FJCVTZS */
174
- default:
175
- do_unallocated:
176
- unallocated_encoding(s);
177
- return;
178
- }
179
- break;
180
+ if (fp_access_check(s)) {
181
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
182
+ TCGv_i64 tmp = tcg_temp_new_i64();
183
+ tcg_gen_ext32u_i64(tmp, tcg_rn);
184
+ write_fp_dreg(s, a->rd, tmp);
185
}
186
+ return true;
38
}
187
}
39
188
40
/* Invoke a vector expander on three Zregs. */
189
-/* FP-specific subcases of table C3-6 (SIMD and FP data processing)
41
@@ -XXX,XX +XXX,XX @@ static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
190
- * 31 30 29 28 25 24 0
42
/* Invoke a vector move on two Zregs. */
191
- * +---+---+---+---------+-----------------------------+
43
static bool do_mov_z(DisasContext *s, int rd, int rn)
192
- * | | 0 | | 1 1 1 1 | |
193
- * +---+---+---+---------+-----------------------------+
194
- */
195
-static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
196
+static bool trans_FMOV_dx(DisasContext *s, arg_rr *a)
44
{
197
{
45
- return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
198
- if (extract32(insn, 24, 1)) {
46
+ if (sve_access_check(s)) {
199
- unallocated_encoding(s); /* in decodetree */
47
+ gen_gvec_fn_zz(s, tcg_gen_gvec_mov, MO_8, rd, rn);
200
- } else if (extract32(insn, 21, 1) == 0) {
201
- /* Floating point to fixed point conversions */
202
- unallocated_encoding(s); /* in decodetree */
203
- } else {
204
- switch (extract32(insn, 10, 2)) {
205
- case 1: /* Floating point conditional compare */
206
- case 2: /* Floating point data-processing (2 source) */
207
- case 3: /* Floating point conditional select */
208
- unallocated_encoding(s); /* in decodetree */
209
- break;
210
- case 0:
211
- switch (ctz32(extract32(insn, 12, 4))) {
212
- case 0: /* [15:12] == xxx1 */
213
- /* Floating point immediate */
214
- unallocated_encoding(s); /* in decodetree */
215
- break;
216
- case 1: /* [15:12] == xx10 */
217
- /* Floating point compare */
218
- unallocated_encoding(s); /* in decodetree */
219
- break;
220
- case 2: /* [15:12] == x100 */
221
- /* Floating point data-processing (1 source) */
222
- unallocated_encoding(s); /* in decodetree */
223
- break;
224
- case 3: /* [15:12] == 1000 */
225
- unallocated_encoding(s);
226
- break;
227
- default: /* [15:12] == 0000 */
228
- /* Floating point <-> integer conversions */
229
- disas_fp_int_conv(s, insn);
230
- break;
231
- }
232
- break;
233
- }
234
+ if (fp_access_check(s)) {
235
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
236
+ write_fp_dreg(s, a->rd, tcg_rn);
237
}
238
+ return true;
239
+}
240
+
241
+static bool trans_FMOV_ux(DisasContext *s, arg_rr *a)
242
+{
243
+ if (fp_access_check(s)) {
244
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
245
+ tcg_gen_st_i64(tcg_rn, tcg_env, fp_reg_hi_offset(s, a->rd));
246
+ clear_vec_high(s, true, a->rd);
247
+ }
248
+ return true;
249
+}
250
+
251
+static bool trans_FMOV_xh(DisasContext *s, arg_rr *a)
252
+{
253
+ if (!dc_isar_feature(aa64_fp16, s)) {
254
+ return false;
255
+ }
256
+ if (fp_access_check(s)) {
257
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
258
+ tcg_gen_ld16u_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_16));
259
+ }
260
+ return true;
261
+}
262
+
263
+static bool trans_FMOV_ws(DisasContext *s, arg_rr *a)
264
+{
265
+ if (fp_access_check(s)) {
266
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
267
+ tcg_gen_ld32u_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_32));
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_FMOV_xd(DisasContext *s, arg_rr *a)
273
+{
274
+ if (fp_access_check(s)) {
275
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
276
+ tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_64));
277
+ }
278
+ return true;
279
+}
280
+
281
+static bool trans_FMOV_xu(DisasContext *s, arg_rr *a)
282
+{
283
+ if (fp_access_check(s)) {
284
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
285
+ tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_hi_offset(s, a->rn));
48
+ }
286
+ }
49
+ return true;
287
+ return true;
50
}
288
}
51
289
52
/* Initialize a Zreg with replications of a 64-bit immediate. */
290
/* Common vector code for handling integer to FP conversion */
291
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd(DisasContext *s, uint32_t insn)
292
static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
293
{
294
if (extract32(insn, 28, 1) == 1 && extract32(insn, 30, 1) == 0) {
295
- disas_data_proc_fp(s, insn);
296
+ unallocated_encoding(s); /* in decodetree */
297
} else {
298
/* SIMD, including crypto */
299
disas_data_proc_simd(s, insn);
53
--
300
--
54
2.20.1
301
2.34.1
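
The point of all these FMOV forms is that no numeric conversion happens: the
bit pattern is copied as-is between the general register and the FP register
file, which is why the new trans_FMOV_* helpers above reduce to plain loads,
stores and zero-extensions. A host-side analogue, purely for illustration
(this is not QEMU code, just the same idea expressed with memcpy
type-punning):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    float s = 1.5f;               /* stand-in for Sn */
    uint32_t w;                   /* stand-in for Wd */

    memcpy(&w, &s, sizeof(w));    /* FMOV Wd, Sn: raw bit copy */
    printf("FMOV Wd, Sn -> 0x%08x\n", w);          /* 0x3fc00000 for 1.5f */

    uint32_t bits = 0x40490fdbu;  /* bit pattern of (float)pi */
    float back;
    memcpy(&back, &bits, sizeof(back));            /* FMOV Sd, Wn */
    printf("FMOV Sd, Wn -> %f\n", back);
    return 0;
}
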
55
56
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  11 +++
 target/arm/tcg/translate-a64.c | 123 +++++++++++++++++++++------------
 2 files changed, 89 insertions(+), 45 deletions(-)

12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
18
@rr_s ........ ... ..... ...... rn:5 rd:5 &rr_e esz=2
19
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
20
+@rr_e ........ esz:2 . ..... ...... rn:5 rd:5 &rr_e
21
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
22
@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
23
24
@@ -XXX,XX +XXX,XX @@ UQRSHRN_si 0111 11110 .... ... 10011 1 ..... ..... @shri_s
25
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_b
26
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_h
27
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_s
28
+
29
+# Advanced SIMD scalar two-register miscellaneous
30
+
31
+SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
32
+SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
33
+
34
+# Advanced SIMD two-register miscellaneous
35
+
36
+SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
37
+SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static bool trans_FMOV_xu(DisasContext *s, arg_rr *a)
43
return true;
44
}
45
46
+typedef struct ENVScalar1 {
47
+ NeonGenOneOpEnvFn *gen_bhs[3];
48
+ NeonGenOne64OpEnvFn *gen_d;
49
+} ENVScalar1;
50
+
51
+static bool do_env_scalar1(DisasContext *s, arg_rr_e *a, const ENVScalar1 *f)
52
+{
53
+ if (!fp_access_check(s)) {
54
+ return true;
55
+ }
56
+ if (a->esz == MO_64) {
57
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
58
+ f->gen_d(t, tcg_env, t);
59
+ write_fp_dreg(s, a->rd, t);
60
+ } else {
61
+ TCGv_i32 t = tcg_temp_new_i32();
62
+
63
+ read_vec_element_i32(s, t, a->rn, 0, a->esz);
64
+ f->gen_bhs[a->esz](t, tcg_env, t);
65
+ write_fp_sreg(s, a->rd, t);
66
+ }
67
+ return true;
68
+}
69
+
70
+static bool do_env_vector1(DisasContext *s, arg_qrr_e *a, const ENVScalar1 *f)
71
+{
72
+ if (a->esz == MO_64 && !a->q) {
73
+ return false;
74
+ }
75
+ if (!fp_access_check(s)) {
76
+ return true;
77
+ }
78
+ if (a->esz == MO_64) {
79
+ TCGv_i64 t = tcg_temp_new_i64();
80
+
81
+ for (int i = 0; i < 2; ++i) {
82
+ read_vec_element(s, t, a->rn, i, MO_64);
83
+ f->gen_d(t, tcg_env, t);
84
+ write_vec_element(s, t, a->rd, i, MO_64);
85
+ }
86
+ } else {
87
+ TCGv_i32 t = tcg_temp_new_i32();
88
+ int n = (a->q ? 16 : 8) >> a->esz;
89
+
90
+ for (int i = 0; i < n; ++i) {
91
+ read_vec_element_i32(s, t, a->rn, i, a->esz);
92
+ f->gen_bhs[a->esz](t, tcg_env, t);
93
+ write_vec_element_i32(s, t, a->rd, i, a->esz);
94
+ }
95
+ }
96
+ clear_vec_high(s, a->q, a->rd);
97
+ return true;
98
+}
99
+
100
+static const ENVScalar1 f_scalar_sqabs = {
101
+ { gen_helper_neon_qabs_s8,
102
+ gen_helper_neon_qabs_s16,
103
+ gen_helper_neon_qabs_s32 },
104
+ gen_helper_neon_qabs_s64,
105
+};
106
+TRANS(SQABS_s, do_env_scalar1, a, &f_scalar_sqabs)
107
+TRANS(SQABS_v, do_env_vector1, a, &f_scalar_sqabs)
108
+
109
+static const ENVScalar1 f_scalar_sqneg = {
110
+ { gen_helper_neon_qneg_s8,
111
+ gen_helper_neon_qneg_s16,
112
+ gen_helper_neon_qneg_s32 },
113
+ gen_helper_neon_qneg_s64,
114
+};
115
+TRANS(SQNEG_s, do_env_scalar1, a, &f_scalar_sqneg)
116
+TRANS(SQNEG_v, do_env_vector1, a, &f_scalar_sqneg)
117
+
118
/* Common vector code for handling integer to FP conversion */
119
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
120
int elements, int is_signed,
121
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
122
*/
123
tcg_gen_not_i64(tcg_rd, tcg_rn);
124
break;
125
- case 0x7: /* SQABS, SQNEG */
126
- if (u) {
127
- gen_helper_neon_qneg_s64(tcg_rd, tcg_env, tcg_rn);
128
- } else {
129
- gen_helper_neon_qabs_s64(tcg_rd, tcg_env, tcg_rn);
130
- }
131
- break;
132
case 0xa: /* CMLT */
133
cond = TCG_COND_LT;
134
do_cmop:
135
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
136
gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
137
break;
138
default:
139
+ case 0x7: /* SQABS, SQNEG */
140
g_assert_not_reached();
141
}
142
}
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
144
TCGv_ptr tcg_fpstatus;
145
146
switch (opcode) {
147
- case 0x7: /* SQABS / SQNEG */
148
- break;
149
case 0xa: /* CMLT */
150
if (u) {
151
unallocated_encoding(s);
152
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
153
break;
154
default:
155
case 0x3: /* USQADD / SUQADD */
156
+ case 0x7: /* SQABS / SQNEG */
157
unallocated_encoding(s);
158
return;
159
}
160
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
161
read_vec_element_i32(s, tcg_rn, rn, 0, size);
162
163
switch (opcode) {
164
- case 0x7: /* SQABS, SQNEG */
165
- {
166
- NeonGenOneOpEnvFn *genfn;
167
- static NeonGenOneOpEnvFn * const fns[3][2] = {
168
- { gen_helper_neon_qabs_s8, gen_helper_neon_qneg_s8 },
169
- { gen_helper_neon_qabs_s16, gen_helper_neon_qneg_s16 },
170
- { gen_helper_neon_qabs_s32, gen_helper_neon_qneg_s32 },
171
- };
172
- genfn = fns[size][u];
173
- genfn(tcg_rd, tcg_env, tcg_rn);
174
- break;
175
- }
176
case 0x1a: /* FCVTNS */
177
case 0x1b: /* FCVTMS */
178
case 0x1c: /* FCVTAS */
179
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
180
tcg_fpstatus);
181
break;
182
default:
183
+ case 0x7: /* SQABS, SQNEG */
184
g_assert_not_reached();
185
}
186
187
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
188
return;
189
}
190
break;
191
- case 0x7: /* SQABS, SQNEG */
192
- if (size == 3 && !is_q) {
193
- unallocated_encoding(s);
194
- return;
195
- }
196
- break;
197
case 0xc ... 0xf:
198
case 0x16 ... 0x1f:
199
{
200
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
201
}
202
default:
203
case 0x3: /* SUQADD, USQADD */
204
+ case 0x7: /* SQABS, SQNEG */
205
unallocated_encoding(s);
206
return;
207
}
208
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
209
tcg_gen_clrsb_i32(tcg_res, tcg_op);
210
}
211
break;
212
- case 0x7: /* SQABS, SQNEG */
213
- if (u) {
214
- gen_helper_neon_qneg_s32(tcg_res, tcg_env, tcg_op);
215
- } else {
216
- gen_helper_neon_qabs_s32(tcg_res, tcg_env, tcg_op);
217
- }
218
- break;
219
case 0x2f: /* FABS */
220
gen_vfp_abss(tcg_res, tcg_op);
221
break;
222
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
223
gen_helper_frint64_s(tcg_res, tcg_op, tcg_fpstatus);
224
break;
225
default:
226
+ case 0x7: /* SQABS, SQNEG */
227
g_assert_not_reached();
228
}
229
} else {
230
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
231
gen_helper_neon_cnt_u8(tcg_res, tcg_op);
232
}
233
break;
234
- case 0x7: /* SQABS, SQNEG */
235
- {
236
- NeonGenOneOpEnvFn *genfn;
237
- static NeonGenOneOpEnvFn * const fns[2][2] = {
238
- { gen_helper_neon_qabs_s8, gen_helper_neon_qneg_s8 },
239
- { gen_helper_neon_qabs_s16, gen_helper_neon_qneg_s16 },
240
- };
241
- genfn = fns[size][u];
242
- genfn(tcg_res, tcg_env, tcg_op);
243
- break;
244
- }
245
case 0x4: /* CLS, CLZ */
246
if (u) {
247
if (size == 0) {
248
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
249
}
250
break;
251
default:
252
+ case 0x7: /* SQABS, SQNEG */
253
g_assert_not_reached();
254
}
255
}
256
--
257
2.34.1
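
As a reminder of the semantics behind the neon_qabs/qneg helpers used above:
the only input that needs saturating for a signed absolute value or negation
is the most negative representable value, and the saturating case also sets
the sticky QC flag. A standalone sketch for the 8-bit lane (illustrative
only; the real helpers take CPUARMState and record saturation in FPSR.QC,
which the bool below merely stands in for):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static int8_t sqabs8(int8_t x, bool *sat)
{
    if (x == INT8_MIN) {       /* |-128| does not fit in int8_t */
        *sat = true;
        return INT8_MAX;
    }
    return x < 0 ? -x : x;
}

static int8_t sqneg8(int8_t x, bool *sat)
{
    if (x == INT8_MIN) {       /* -(-128) does not fit either */
        *sat = true;
        return INT8_MAX;
    }
    return -x;
}

int main(void)
{
    bool sat = false;
    int8_t r = sqabs8(INT8_MIN, &sat);

    printf("SQABS(-128) = %d, sat=%d\n", r, sat);
    printf("SQNEG(100)  = %d\n", sqneg8(100, &sat));
    return 0;
}
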
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-37-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  4 +++
 target/arm/tcg/translate-a64.c | 46 +++++++++++++++++++++++-----------
 2 files changed, 35 insertions(+), 15 deletions(-)

12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_s
17
18
SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
19
SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
20
+ABS_s 0101 1110 111 00000 10111 0 ..... ..... @rr
21
+NEG_s 0111 1110 111 00000 10111 0 ..... ..... @rr
22
23
# Advanced SIMD two-register miscellaneous
24
25
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
26
SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
27
+ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
28
+NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static const ENVScalar1 f_scalar_sqneg = {
34
TRANS(SQNEG_s, do_env_scalar1, a, &f_scalar_sqneg)
35
TRANS(SQNEG_v, do_env_vector1, a, &f_scalar_sqneg)
36
37
+static bool do_scalar1_d(DisasContext *s, arg_rr *a, ArithOneOp *f)
38
+{
39
+ if (fp_access_check(s)) {
40
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
41
+ f(t, t);
42
+ write_fp_dreg(s, a->rd, t);
43
+ }
44
+ return true;
45
+}
46
+
47
+TRANS(ABS_s, do_scalar1_d, a, tcg_gen_abs_i64)
48
+TRANS(NEG_s, do_scalar1_d, a, tcg_gen_neg_i64)
49
+
50
+static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
51
+{
52
+ if (!a->q && a->esz == MO_64) {
53
+ return false;
54
+ }
55
+ if (fp_access_check(s)) {
56
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
57
+ }
58
+ return true;
59
+}
60
+
61
+TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
62
+TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
63
+
64
/* Common vector code for handling integer to FP conversion */
65
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
66
int elements, int is_signed,
67
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
68
case 0x9: /* CMEQ, CMLE */
69
cond = u ? TCG_COND_LE : TCG_COND_EQ;
70
goto do_cmop;
71
- case 0xb: /* ABS, NEG */
72
- if (u) {
73
- tcg_gen_neg_i64(tcg_rd, tcg_rn);
74
- } else {
75
- tcg_gen_abs_i64(tcg_rd, tcg_rn);
76
- }
77
- break;
78
case 0x2f: /* FABS */
79
gen_vfp_absd(tcg_rd, tcg_rn);
80
break;
81
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
82
break;
83
default:
84
case 0x7: /* SQABS, SQNEG */
85
+ case 0xb: /* ABS, NEG */
86
g_assert_not_reached();
87
}
88
}
89
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
90
/* fall through */
91
case 0x8: /* CMGT, CMGE */
92
case 0x9: /* CMEQ, CMLE */
93
- case 0xb: /* ABS, NEG */
94
if (size != 3) {
95
unallocated_encoding(s);
96
return;
97
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
98
default:
99
case 0x3: /* USQADD / SUQADD */
100
case 0x7: /* SQABS / SQNEG */
101
+ case 0xb: /* ABS, NEG */
102
unallocated_encoding(s);
103
return;
104
}
105
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
106
/* fall through */
107
case 0x8: /* CMGT, CMGE */
108
case 0x9: /* CMEQ, CMLE */
109
- case 0xb: /* ABS, NEG */
110
if (size == 3 && !is_q) {
111
unallocated_encoding(s);
112
return;
113
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
114
default:
115
case 0x3: /* SUQADD, USQADD */
116
case 0x7: /* SQABS, SQNEG */
117
+ case 0xb: /* ABS, NEG */
118
unallocated_encoding(s);
119
return;
120
}
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
122
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
123
return;
124
case 0xb:
125
- if (u) { /* ABS, NEG */
126
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_neg, size);
127
- } else {
128
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_abs, size);
129
- }
130
- return;
131
+ g_assert_not_reached();
132
}
133
134
if (size == 3) {
135
--
136
2.34.1
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We want to assert the device is not realized. To avoid overloading
3
Add gvec interfaces for CLS and CLZ operations.
4
this header including "hw/qdev-core.h", uninline the function first.
5
4
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200803105647.22223-4-f4bug@amsat.org
7
Message-id: 20241211163036.2297116-38-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
include/hw/qdev-clock.h | 6 +-----
10
target/arm/tcg/translate.h | 5 +++++
12
hw/core/qdev-clock.c | 5 +++++
11
target/arm/tcg/gengvec.c | 35 +++++++++++++++++++++++++++++++++
13
2 files changed, 6 insertions(+), 5 deletions(-)
12
target/arm/tcg/translate-a64.c | 29 +++++++--------------------
13
target/arm/tcg/translate-neon.c | 29 ++-------------------------
14
4 files changed, 49 insertions(+), 49 deletions(-)
14
15
15
diff --git a/include/hw/qdev-clock.h b/include/hw/qdev-clock.h
16
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/qdev-clock.h
18
--- a/target/arm/tcg/translate.h
18
+++ b/include/hw/qdev-clock.h
19
+++ b/target/arm/tcg/translate.h
19
@@ -XXX,XX +XXX,XX @@ Clock *qdev_get_clock_out(DeviceState *dev, const char *name);
20
@@ -XXX,XX +XXX,XX @@ void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
20
* Set the source clock of input clock @name of device @dev to @source.
21
void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
21
* @source period update will be propagated to @name clock.
22
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
23
24
+void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
25
+ uint32_t opr_sz, uint32_t max_sz);
26
+void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
27
+ uint32_t opr_sz, uint32_t max_sz);
28
+
29
/*
30
* Forward to the isar_feature_* tests given a DisasContext pointer.
22
*/
31
*/
23
-static inline void qdev_connect_clock_in(DeviceState *dev, const char *name,
32
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
24
- Clock *source)
25
-{
26
- clock_set_source(qdev_get_clock_in(dev, name), source);
27
-}
28
+void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source);
29
30
/**
31
* qdev_alias_clock:
32
diff --git a/hw/core/qdev-clock.c b/hw/core/qdev-clock.c
33
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/core/qdev-clock.c
34
--- a/target/arm/tcg/gengvec.c
35
+++ b/hw/core/qdev-clock.c
35
+++ b/target/arm/tcg/gengvec.c
36
@@ -XXX,XX +XXX,XX @@ Clock *qdev_alias_clock(DeviceState *dev, const char *name,
36
@@ -XXX,XX +XXX,XX @@ void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
37
37
assert(vece <= MO_32);
38
return ncl->clock;
38
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
39
}
39
}
40
+
40
+
41
+void qdev_connect_clock_in(DeviceState *dev, const char *name, Clock *source)
41
+void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
42
+ uint32_t opr_sz, uint32_t max_sz)
42
+{
43
+{
43
+ clock_set_source(qdev_get_clock_in(dev, name), source);
44
+ static const GVecGen2 g[] = {
45
+ { .fni4 = gen_helper_neon_cls_s8,
46
+ .vece = MO_8 },
47
+ { .fni4 = gen_helper_neon_cls_s16,
48
+ .vece = MO_16 },
49
+ { .fni4 = tcg_gen_clrsb_i32,
50
+ .vece = MO_32 },
51
+ };
52
+ assert(vece <= MO_32);
53
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
44
+}
54
+}
55
+
56
+static void gen_clz32_i32(TCGv_i32 d, TCGv_i32 n)
57
+{
58
+ tcg_gen_clzi_i32(d, n, 32);
59
+}
60
+
61
+void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
62
+ uint32_t opr_sz, uint32_t max_sz)
63
+{
64
+ static const GVecGen2 g[] = {
65
+ { .fni4 = gen_helper_neon_clz_u8,
66
+ .vece = MO_8 },
67
+ { .fni4 = gen_helper_neon_clz_u16,
68
+ .vece = MO_16 },
69
+ { .fni4 = gen_clz32_i32,
70
+ .vece = MO_32 },
71
+ };
72
+ assert(vece <= MO_32);
73
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
74
+}
75
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/tcg/translate-a64.c
78
+++ b/target/arm/tcg/translate-a64.c
79
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
80
}
81
82
switch (opcode) {
83
+ case 0x4: /* CLZ, CLS */
84
+ if (u) {
85
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clz, size);
86
+ } else {
87
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cls, size);
88
+ }
89
+ return;
90
case 0x5:
91
if (u && size == 0) { /* NOT */
92
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
93
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
94
if (size == 2) {
95
/* Special cases for 32 bit elements */
96
switch (opcode) {
97
- case 0x4: /* CLS */
98
- if (u) {
99
- tcg_gen_clzi_i32(tcg_res, tcg_op, 32);
100
- } else {
101
- tcg_gen_clrsb_i32(tcg_res, tcg_op);
102
- }
103
- break;
104
case 0x2f: /* FABS */
105
gen_vfp_abss(tcg_res, tcg_op);
106
break;
107
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
108
gen_helper_neon_cnt_u8(tcg_res, tcg_op);
109
}
110
break;
111
- case 0x4: /* CLS, CLZ */
112
- if (u) {
113
- if (size == 0) {
114
- gen_helper_neon_clz_u8(tcg_res, tcg_op);
115
- } else {
116
- gen_helper_neon_clz_u16(tcg_res, tcg_op);
117
- }
118
- } else {
119
- if (size == 0) {
120
- gen_helper_neon_cls_s8(tcg_res, tcg_op);
121
- } else {
122
- gen_helper_neon_cls_s16(tcg_res, tcg_op);
123
- }
124
- }
125
- break;
126
default:
127
case 0x7: /* SQABS, SQNEG */
128
g_assert_not_reached();
129
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/target/arm/tcg/translate-neon.c
132
+++ b/target/arm/tcg/translate-neon.c
133
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCGT0, gen_gvec_cgt0)
134
DO_2MISC_VEC(VCLE0, gen_gvec_cle0)
135
DO_2MISC_VEC(VCGE0, gen_gvec_cge0)
136
DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
137
+DO_2MISC_VEC(VCLS, gen_gvec_cls)
138
+DO_2MISC_VEC(VCLZ, gen_gvec_clz)
139
140
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
141
{
142
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV16(DisasContext *s, arg_2misc *a)
143
return do_2misc(s, a, gen_rev16);
144
}
145
146
-static bool trans_VCLS(DisasContext *s, arg_2misc *a)
147
-{
148
- static NeonGenOneOpFn * const fn[] = {
149
- gen_helper_neon_cls_s8,
150
- gen_helper_neon_cls_s16,
151
- gen_helper_neon_cls_s32,
152
- NULL,
153
- };
154
- return do_2misc(s, a, fn[a->size]);
155
-}
156
-
157
-static void do_VCLZ_32(TCGv_i32 rd, TCGv_i32 rm)
158
-{
159
- tcg_gen_clzi_i32(rd, rm, 32);
160
-}
161
-
162
-static bool trans_VCLZ(DisasContext *s, arg_2misc *a)
163
-{
164
- static NeonGenOneOpFn * const fn[] = {
165
- gen_helper_neon_clz_u8,
166
- gen_helper_neon_clz_u16,
167
- do_VCLZ_32,
168
- NULL,
169
- };
170
- return do_2misc(s, a, fn[a->size]);
171
-}
172
-
173
static bool trans_VCNT(DisasContext *s, arg_2misc *a)
174
{
175
if (a->size != 0) {
45
--
176
--
46
2.20.1
177
2.34.1
47
178
48
179
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-39-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  2 ++
 target/arm/tcg/translate-a64.c | 37 ++++++++++++++++------------------
 2 files changed, 19 insertions(+), 20 deletions(-)

12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
17
SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
18
ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
19
NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
20
+CLS_v 0.00 1110 ..1 00000 01001 0 ..... ..... @qrr_e
21
+CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/tcg/translate-a64.c
25
+++ b/target/arm/tcg/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
27
TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
28
TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
29
30
+static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
31
+{
32
+ if (a->esz == MO_64) {
33
+ return false;
34
+ }
35
+ if (fp_access_check(s)) {
36
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
37
+ }
38
+ return true;
39
+}
40
+
41
+TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
42
+TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
43
+
44
/* Common vector code for handling integer to FP conversion */
45
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
46
int elements, int is_signed,
47
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
48
TCGCond cond;
49
50
switch (opcode) {
51
- case 0x4: /* CLS, CLZ */
52
- if (u) {
53
- tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
54
- } else {
55
- tcg_gen_clrsb_i64(tcg_rd, tcg_rn);
56
- }
57
- break;
58
case 0x5: /* NOT */
59
/* This opcode is shared with CNT and RBIT but we have earlier
60
* enforced that size == 3 if and only if this is the NOT insn.
61
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
62
gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
63
break;
64
default:
65
+ case 0x4: /* CLS, CLZ */
66
case 0x7: /* SQABS, SQNEG */
67
case 0xb: /* ABS, NEG */
68
g_assert_not_reached();
69
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
70
71
handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
72
return;
73
- case 0x4: /* CLS, CLZ */
74
- if (size == 3) {
75
- unallocated_encoding(s);
76
- return;
77
- }
78
- break;
79
case 0x2: /* SADDLP, UADDLP */
80
case 0x6: /* SADALP, UADALP */
81
if (size == 3) {
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
83
}
84
default:
85
case 0x3: /* SUQADD, USQADD */
86
+ case 0x4: /* CLS, CLZ */
87
case 0x7: /* SQABS, SQNEG */
88
case 0xb: /* ABS, NEG */
89
unallocated_encoding(s);
90
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
91
}
92
93
switch (opcode) {
94
- case 0x4: /* CLZ, CLS */
95
- if (u) {
96
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clz, size);
97
- } else {
98
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cls, size);
99
- }
100
- return;
101
case 0x5:
102
if (u && size == 0) { /* NOT */
103
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
105
case 0xa: /* CMLT */
106
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
107
return;
108
+ case 0x4: /* CLZ, CLS */
109
case 0xb:
110
g_assert_not_reached();
111
}
112
--
113
2.34.1
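
For reference on the two operations wired up here: CLZ counts the leading
zero bits of each element, while CLS counts how many bits after the sign bit
are copies of it, which is what tcg_gen_clrsb_i32 computes for the 32-bit
lane in gen_gvec_cls. A standalone sketch of both on a 32-bit element
(illustrative only; it assumes the usual arithmetic right shift for signed
values):

#include <stdint.h>
#include <stdio.h>

static int clz32(uint32_t x)
{
    return x ? __builtin_clz(x) : 32;
}

static int cls32(int32_t x)
{
    /* fold the sign bit into its neighbours, then count leading zeros */
    uint32_t y = (uint32_t)(x ^ (x >> 1));
    return y ? __builtin_clz(y) - 1 : 31;
}

int main(void)
{
    printf("CLZ(0x0000ffff) = %d\n", clz32(0x0000ffffu));          /* 16 */
    printf("CLS(0xffff0000) = %d\n", cls32((int32_t)0xffff0000u)); /* 15 */
    return 0;
}
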
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Add gvec interfaces for CNT and RBIT operations.
Use ctpop8 for CNT and revbit+bswap for RBIT.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-40-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h             |  4 ++--
 target/arm/tcg/translate.h      |  4 ++++
 target/arm/tcg/gengvec.c        | 16 ++++++++++++++++
 target/arm/tcg/neon_helper.c    | 21 ---------------------
 target/arm/tcg/translate-a64.c  | 32 +++++++++-----------------------
 target/arm/tcg/translate-neon.c | 16 ++++++++--------
 target/arm/tcg/vec_helper.c     | 24 ++++++++++++++++++++++++
 7 files changed, 63 insertions(+), 54 deletions(-)

20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_clz_u16, i32, i32)
25
DEF_HELPER_1(neon_cls_s8, i32, i32)
26
DEF_HELPER_1(neon_cls_s16, i32, i32)
27
DEF_HELPER_1(neon_cls_s32, i32, i32)
28
-DEF_HELPER_1(neon_cnt_u8, i32, i32)
29
-DEF_HELPER_FLAGS_1(neon_rbit_u8, TCG_CALL_NO_RWG_SE, i32, i32)
30
+DEF_HELPER_FLAGS_3(gvec_cnt_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(gvec_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
33
DEF_HELPER_3(neon_qdmulh_s16, i32, env, i32, i32)
34
DEF_HELPER_3(neon_qrdmulh_s16, i32, env, i32, i32)
35
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/tcg/translate.h
38
+++ b/target/arm/tcg/translate.h
39
@@ -XXX,XX +XXX,XX @@ void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
uint32_t opr_sz, uint32_t max_sz);
41
void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
42
uint32_t opr_sz, uint32_t max_sz);
43
+void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
44
+ uint32_t opr_sz, uint32_t max_sz);
45
+void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
46
+ uint32_t opr_sz, uint32_t max_sz);
47
48
/*
49
* Forward to the isar_feature_* tests given a DisasContext pointer.
50
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/gengvec.c
53
+++ b/target/arm/tcg/gengvec.c
54
@@ -XXX,XX +XXX,XX @@ void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
assert(vece <= MO_32);
56
tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
57
}
58
+
59
+void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
60
+ uint32_t opr_sz, uint32_t max_sz)
61
+{
62
+ assert(vece == MO_8);
63
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
64
+ gen_helper_gvec_cnt_b);
65
+}
66
+
67
+void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
68
+ uint32_t opr_sz, uint32_t max_sz)
69
+{
70
+ assert(vece == MO_8);
71
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
72
+ gen_helper_gvec_rbit_b);
73
+}
74
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/tcg/neon_helper.c
77
+++ b/target/arm/tcg/neon_helper.c
78
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_cls_s32)(uint32_t x)
79
return count - 1;
80
}
81
82
-/* Bit count. */
83
-uint32_t HELPER(neon_cnt_u8)(uint32_t x)
84
-{
85
- x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
86
- x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
87
- x = (x & 0x0f0f0f0f) + ((x >> 4) & 0x0f0f0f0f);
88
- return x;
89
-}
90
-
91
-/* Reverse bits in each 8 bit word */
92
-uint32_t HELPER(neon_rbit_u8)(uint32_t x)
93
-{
94
- x = ((x & 0xf0f0f0f0) >> 4)
95
- | ((x & 0x0f0f0f0f) << 4);
96
- x = ((x & 0x88888888) >> 3)
97
- | ((x & 0x44444444) >> 1)
98
- | ((x & 0x22222222) << 1)
99
- | ((x & 0x11111111) << 3);
100
- return x;
101
-}
102
-
103
#define NEON_QDMULH16(dest, src1, src2, round) do { \
104
uint32_t tmp = (int32_t)(int16_t) src1 * (int16_t) src2; \
105
if ((tmp ^ (tmp << 1)) & SIGNBIT) { \
106
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/tcg/translate-a64.c
109
+++ b/target/arm/tcg/translate-a64.c
110
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
111
}
112
113
switch (opcode) {
114
- case 0x5:
115
- if (u && size == 0) { /* NOT */
116
+ case 0x5: /* CNT, NOT, RBIT */
117
+ if (!u) {
118
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cnt, 0);
119
+ } else if (size) {
120
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_rbit, 0);
121
+ } else {
122
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
123
- return;
124
}
125
- break;
126
+ return;
127
case 0x8: /* CMGT, CMGE */
128
if (u) {
129
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
130
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
131
} else {
132
int pass;
133
134
+ assert(size == 2);
135
for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
136
TCGv_i32 tcg_op = tcg_temp_new_i32();
137
TCGv_i32 tcg_res = tcg_temp_new_i32();
138
139
read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
140
141
- if (size == 2) {
142
+ {
143
/* Special cases for 32 bit elements */
144
switch (opcode) {
145
case 0x2f: /* FABS */
146
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
147
case 0x7: /* SQABS, SQNEG */
148
g_assert_not_reached();
149
}
150
- } else {
151
- /* Use helpers for 8 and 16 bit elements */
152
- switch (opcode) {
153
- case 0x5: /* CNT, RBIT */
154
- /* For these two insns size is part of the opcode specifier
155
- * (handled earlier); they always operate on byte elements.
156
- */
157
- if (u) {
158
- gen_helper_neon_rbit_u8(tcg_res, tcg_op);
159
- } else {
160
- gen_helper_neon_cnt_u8(tcg_res, tcg_op);
161
- }
162
- break;
163
- default:
164
- case 0x7: /* SQABS, SQNEG */
165
- g_assert_not_reached();
166
- }
167
}
168
-
169
write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
170
}
171
}
172
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/target/arm/tcg/translate-neon.c
175
+++ b/target/arm/tcg/translate-neon.c
176
@@ -XXX,XX +XXX,XX @@ static bool trans_VMVN(DisasContext *s, arg_2misc *a)
177
return do_2misc_vec(s, a, tcg_gen_gvec_not);
178
}
179
180
+static bool trans_VCNT(DisasContext *s, arg_2misc *a)
181
+{
182
+ if (a->size != 0) {
183
+ return false;
184
+ }
185
+ return do_2misc_vec(s, a, gen_gvec_cnt);
186
+}
187
+
188
#define WRAP_2M_3_OOL_FN(WRAPNAME, FUNC, DATA) \
189
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \
190
uint32_t rm_ofs, uint32_t oprsz, \
191
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV16(DisasContext *s, arg_2misc *a)
192
return do_2misc(s, a, gen_rev16);
193
}
194
195
-static bool trans_VCNT(DisasContext *s, arg_2misc *a)
196
-{
197
- if (a->size != 0) {
198
- return false;
199
- }
200
- return do_2misc(s, a, gen_helper_neon_cnt_u8);
201
-}
202
-
203
static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
204
uint32_t oprsz, uint32_t maxsz)
205
{
206
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/tcg/vec_helper.c
209
+++ b/target/arm/tcg/vec_helper.c
210
@@ -XXX,XX +XXX,XX @@ DO_CLAMP(gvec_uclamp_b, uint8_t)
211
DO_CLAMP(gvec_uclamp_h, uint16_t)
212
DO_CLAMP(gvec_uclamp_s, uint32_t)
213
DO_CLAMP(gvec_uclamp_d, uint64_t)
214
+
215
+/* Bit count in each 8-bit word. */
216
+void HELPER(gvec_cnt_b)(void *vd, void *vn, uint32_t desc)
217
+{
218
+ intptr_t i, opr_sz = simd_oprsz(desc);
219
+ uint8_t *d = vd, *n = vn;
220
+
221
+ for (i = 0; i < opr_sz; ++i) {
222
+ d[i] = ctpop8(n[i]);
223
+ }
224
+ clear_tail(d, opr_sz, simd_maxsz(desc));
225
+}
226
+
227
+/* Reverse bits in each 8 bit word */
228
+void HELPER(gvec_rbit_b)(void *vd, void *vn, uint32_t desc)
229
+{
230
+ intptr_t i, opr_sz = simd_oprsz(desc);
231
+ uint64_t *d = vd, *n = vn;
232
+
233
+ for (i = 0; i < opr_sz / 8; ++i) {
234
+ d[i] = revbit64(bswap64(n[i]));
235
+ }
236
+ clear_tail(d, opr_sz, simd_maxsz(desc));
237
+}
238
--
239
2.34.1
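
The "revbit+bswap" trick from the commit message works because byte-swapping
the value and then reversing all 64 bits puts every byte back in its original
position with its own bits reversed, which is exactly RBIT applied to a
vector of bytes. A quick standalone check (bswap64/revbit64 below are local
stand-ins for the versions QEMU gets from its utility headers, and rbit8 is
only a reference implementation):

#include <stdint.h>
#include <stdio.h>

static uint64_t bswap64(uint64_t x)
{
    return __builtin_bswap64(x);
}

static uint64_t revbit64(uint64_t x)    /* reverse all 64 bits */
{
    uint64_t r = 0;
    for (int i = 0; i < 64; i++) {
        r = (r << 1) | ((x >> i) & 1);
    }
    return r;
}

static uint8_t rbit8(uint8_t b)         /* reference: reverse one byte */
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (r << 1) | ((b >> i) & 1);
    }
    return r;
}

int main(void)
{
    uint64_t x = 0x0123456789abcdefull;
    uint64_t fast = revbit64(bswap64(x));
    uint64_t ref = 0;

    for (int i = 0; i < 8; i++) {
        ref |= (uint64_t)rbit8(x >> (i * 8)) << (i * 8);
    }
    printf("fast=%016llx ref=%016llx %s\n",
           (unsigned long long)fast, (unsigned long long)ref,
           fast == ref ? "match" : "MISMATCH");
    return 0;
}
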
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-41-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  4 ++++
 target/arm/tcg/translate-a64.c | 34 ++++++----------------------------
 2 files changed, 10 insertions(+), 28 deletions(-)

12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
18
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
19
20
+@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
21
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
22
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
23
24
@@ -XXX,XX +XXX,XX @@ ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
25
NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
26
CLS_v 0.00 1110 ..1 00000 01001 0 ..... ..... @qrr_e
27
CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
28
+CNT_v 0.00 1110 001 00000 01011 0 ..... ..... @qrr_b
29
+NOT_v 0.10 1110 001 00000 01011 0 ..... ..... @qrr_b
30
+RBIT_v 0.10 1110 011 00000 01011 0 ..... ..... @qrr_b
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
36
37
TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
38
TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
39
+TRANS(NOT_v, do_gvec_fn2, a, tcg_gen_gvec_not)
40
+TRANS(CNT_v, do_gvec_fn2, a, gen_gvec_cnt)
41
+TRANS(RBIT_v, do_gvec_fn2, a, gen_gvec_rbit)
42
43
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
44
{
45
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
46
TCGCond cond;
47
48
switch (opcode) {
49
- case 0x5: /* NOT */
50
- /* This opcode is shared with CNT and RBIT but we have earlier
51
- * enforced that size == 3 if and only if this is the NOT insn.
52
- */
53
- tcg_gen_not_i64(tcg_rd, tcg_rn);
54
- break;
55
case 0xa: /* CMLT */
56
cond = TCG_COND_LT;
57
do_cmop:
58
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
59
break;
60
default:
61
case 0x4: /* CLS, CLZ */
62
+ case 0x5: /* NOT */
63
case 0x7: /* SQABS, SQNEG */
64
case 0xb: /* ABS, NEG */
65
g_assert_not_reached();
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
67
case 0x1: /* REV16 */
68
handle_rev(s, opcode, u, is_q, size, rn, rd);
69
return;
70
- case 0x5: /* CNT, NOT, RBIT */
71
- if (u && size == 0) {
72
- /* NOT */
73
- break;
74
- } else if (u && size == 1) {
75
- /* RBIT */
76
- break;
77
- } else if (!u && size == 0) {
78
- /* CNT */
79
- break;
80
- }
81
- unallocated_encoding(s);
82
- return;
83
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
84
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
85
if (size == 3) {
86
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
87
default:
88
case 0x3: /* SUQADD, USQADD */
89
case 0x4: /* CLS, CLZ */
90
+ case 0x5: /* CNT, NOT, RBIT */
91
case 0x7: /* SQABS, SQNEG */
92
case 0xb: /* ABS, NEG */
93
unallocated_encoding(s);
94
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
95
}
96
97
switch (opcode) {
98
- case 0x5: /* CNT, NOT, RBIT */
99
- if (!u) {
100
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cnt, 0);
101
- } else if (size) {
102
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_rbit, 0);
103
- } else {
104
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
105
- }
106
- return;
107
case 0x8: /* CMGT, CMGE */
108
if (u) {
109
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
110
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
111
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
112
return;
113
case 0x4: /* CLZ, CLS */
114
+ case 0x5: /* CNT, NOT, RBIT */
115
case 0xb:
116
g_assert_not_reached();
117
}
118
--
119
2.34.1
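
For reference, a decodetree pattern line such as
  NOT_v 0.10 1110 001 00000 01011 0 ..... ..... @qrr_b
boils down to a fixed-bit match plus field extraction. Below is a minimal hand-written C sketch of that check; the struct and function names are illustrative, not the generated decoder's:

#include <stdint.h>
#include <stdbool.h>

typedef struct { int q, esz, rn, rd; } arg_qrr_e;    /* illustrative stand-in */

static bool sketch_decode_NOT_v(uint32_t insn, arg_qrr_e *a)
{
    /* Fixed bits of the pattern: 0q10 1110 0010 0000 0101 10nn nnnd dddd */
    if ((insn & 0xbffffc00) != 0x2e205800) {
        return false;
    }
    a->q   = (insn >> 30) & 1;    /* the q:1 field */
    a->esz = 0;                   /* @qrr_b pins esz=0 (byte elements) */
    a->rn  = (insn >> 5) & 0x1f;
    a->rd  = insn & 0x1f;
    return true;                  /* the real decoder would now call trans_NOT_v() */
}

The generated decoder merges all such patterns into shared switch statements; the sketch only shows the matching for one line.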
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-42-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 10 ++++
9
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
10
2 files changed, 40 insertions(+), 64 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
17
SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
18
ABS_s 0101 1110 111 00000 10111 0 ..... ..... @rr
19
NEG_s 0111 1110 111 00000 10111 0 ..... ..... @rr
20
+CMGT0_s 0101 1110 111 00000 10001 0 ..... ..... @rr
21
+CMGE0_s 0111 1110 111 00000 10001 0 ..... ..... @rr
22
+CMEQ0_s 0101 1110 111 00000 10011 0 ..... ..... @rr
23
+CMLE0_s 0111 1110 111 00000 10011 0 ..... ..... @rr
24
+CMLT0_s 0101 1110 111 00000 10101 0 ..... ..... @rr
25
26
# Advanced SIMD two-register miscellaneous
27
28
@@ -XXX,XX +XXX,XX @@ CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
29
CNT_v 0.00 1110 001 00000 01011 0 ..... ..... @qrr_b
30
NOT_v 0.10 1110 001 00000 01011 0 ..... ..... @qrr_b
31
RBIT_v 0.10 1110 011 00000 01011 0 ..... ..... @qrr_b
32
+CMGT0_v 0.00 1110 ..1 00000 10001 0 ..... ..... @qrr_e
33
+CMGE0_v 0.10 1110 ..1 00000 10001 0 ..... ..... @qrr_e
34
+CMEQ0_v 0.00 1110 ..1 00000 10011 0 ..... ..... @qrr_e
35
+CMLE0_v 0.10 1110 ..1 00000 10011 0 ..... ..... @qrr_e
36
+CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool do_scalar1_d(DisasContext *s, arg_rr *a, ArithOneOp *f)
42
TRANS(ABS_s, do_scalar1_d, a, tcg_gen_abs_i64)
43
TRANS(NEG_s, do_scalar1_d, a, tcg_gen_neg_i64)
44
45
+static bool do_cmop0_d(DisasContext *s, arg_rr *a, TCGCond cond)
46
+{
47
+ if (fp_access_check(s)) {
48
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
49
+ tcg_gen_negsetcond_i64(cond, t, t, tcg_constant_i64(0));
50
+ write_fp_dreg(s, a->rd, t);
51
+ }
52
+ return true;
53
+}
54
+
55
+TRANS(CMGT0_s, do_cmop0_d, a, TCG_COND_GT)
56
+TRANS(CMGE0_s, do_cmop0_d, a, TCG_COND_GE)
57
+TRANS(CMLE0_s, do_cmop0_d, a, TCG_COND_LE)
58
+TRANS(CMLT0_s, do_cmop0_d, a, TCG_COND_LT)
59
+TRANS(CMEQ0_s, do_cmop0_d, a, TCG_COND_EQ)
60
+
61
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
62
{
63
if (!a->q && a->esz == MO_64) {
64
@@ -XXX,XX +XXX,XX @@ TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
65
TRANS(NOT_v, do_gvec_fn2, a, tcg_gen_gvec_not)
66
TRANS(CNT_v, do_gvec_fn2, a, gen_gvec_cnt)
67
TRANS(RBIT_v, do_gvec_fn2, a, gen_gvec_rbit)
68
+TRANS(CMGT0_v, do_gvec_fn2, a, gen_gvec_cgt0)
69
+TRANS(CMGE0_v, do_gvec_fn2, a, gen_gvec_cge0)
70
+TRANS(CMLT0_v, do_gvec_fn2, a, gen_gvec_clt0)
71
+TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
72
+TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
73
74
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
75
{
76
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
77
* The caller only need provide tcg_rmode and tcg_fpstatus if the op
78
* requires them.
79
*/
80
- TCGCond cond;
81
-
82
switch (opcode) {
83
- case 0xa: /* CMLT */
84
- cond = TCG_COND_LT;
85
- do_cmop:
86
- /* 64 bit integer comparison against zero, result is test ? -1 : 0. */
87
- tcg_gen_negsetcond_i64(cond, tcg_rd, tcg_rn, tcg_constant_i64(0));
88
- break;
89
- case 0x8: /* CMGT, CMGE */
90
- cond = u ? TCG_COND_GE : TCG_COND_GT;
91
- goto do_cmop;
92
- case 0x9: /* CMEQ, CMLE */
93
- cond = u ? TCG_COND_LE : TCG_COND_EQ;
94
- goto do_cmop;
95
case 0x2f: /* FABS */
96
gen_vfp_absd(tcg_rd, tcg_rn);
97
break;
98
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
99
case 0x4: /* CLS, CLZ */
100
case 0x5: /* NOT */
101
case 0x7: /* SQABS, SQNEG */
102
+ case 0x8: /* CMGT, CMGE */
103
+ case 0x9: /* CMEQ, CMLE */
104
+ case 0xa: /* CMLT */
105
case 0xb: /* ABS, NEG */
106
g_assert_not_reached();
107
}
108
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
109
TCGv_ptr tcg_fpstatus;
110
111
switch (opcode) {
112
- case 0xa: /* CMLT */
113
- if (u) {
114
- unallocated_encoding(s);
115
- return;
116
- }
117
- /* fall through */
118
- case 0x8: /* CMGT, CMGE */
119
- case 0x9: /* CMEQ, CMLE */
120
- if (size != 3) {
121
- unallocated_encoding(s);
122
- return;
123
- }
124
- break;
125
case 0x12: /* SQXTUN */
126
if (!u) {
127
unallocated_encoding(s);
128
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
129
default:
130
case 0x3: /* USQADD / SUQADD */
131
case 0x7: /* SQABS / SQNEG */
132
+ case 0x8: /* CMGT, CMGE */
133
+ case 0x9: /* CMEQ, CMLE */
134
+ case 0xa: /* CMLT */
135
case 0xb: /* ABS, NEG */
136
unallocated_encoding(s);
137
return;
138
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
139
}
140
handle_shll(s, is_q, size, rn, rd);
141
return;
142
- case 0xa: /* CMLT */
143
- if (u == 1) {
144
- unallocated_encoding(s);
145
- return;
146
- }
147
- /* fall through */
148
- case 0x8: /* CMGT, CMGE */
149
- case 0x9: /* CMEQ, CMLE */
150
- if (size == 3 && !is_q) {
151
- unallocated_encoding(s);
152
- return;
153
- }
154
- break;
155
case 0xc ... 0xf:
156
case 0x16 ... 0x1f:
157
{
158
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
159
case 0x4: /* CLS, CLZ */
160
case 0x5: /* CNT, NOT, RBIT */
161
case 0x7: /* SQABS, SQNEG */
162
+ case 0x8: /* CMGT, CMGE */
163
+ case 0x9: /* CMEQ, CMLE */
164
+ case 0xa: /* CMLT */
165
case 0xb: /* ABS, NEG */
166
unallocated_encoding(s);
167
return;
168
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
169
tcg_rmode = NULL;
170
}
171
172
- switch (opcode) {
173
- case 0x8: /* CMGT, CMGE */
174
- if (u) {
175
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
176
- } else {
177
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cgt0, size);
178
- }
179
- return;
180
- case 0x9: /* CMEQ, CMLE */
181
- if (u) {
182
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cle0, size);
183
- } else {
184
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_ceq0, size);
185
- }
186
- return;
187
- case 0xa: /* CMLT */
188
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
189
- return;
190
- case 0x4: /* CLZ, CLS */
191
- case 0x5: /* CNT, NOT, RBIT */
192
- case 0xb:
193
- g_assert_not_reached();
194
- }
195
-
196
if (size == 3) {
197
/* All 64-bit element operations can be shared with scalar 2misc */
198
int pass;
199
--
200
2.34.1
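
As a rough scalar model of what the new CM*0 helpers compute per element: tcg_gen_negsetcond_i64 produces -(cond), i.e. an all-ones or all-zero mask, which is exactly the AdvSIMD compare-against-zero result. A minimal sketch, illustrative only and not QEMU code:

#include <stdint.h>
#include <assert.h>

static uint64_t cmgt0_d(int64_t n)        /* one 64-bit element of CMGT (zero) */
{
    return -(uint64_t)(n > 0);            /* true -> 0xffff...ffff, false -> 0 */
}

int main(void)
{
    assert(cmgt0_d(5) == UINT64_MAX);
    assert(cmgt0_d(0) == 0);
    assert(cmgt0_d(-5) == 0);
    return 0;
}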
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-43-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/translate.h | 6 +++
9
target/arm/tcg/gengvec.c | 58 ++++++++++++++++++++++
10
target/arm/tcg/translate-neon.c | 88 +++++++--------------------------
11
3 files changed, 81 insertions(+), 71 deletions(-)
12
13
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/tcg/translate.h
16
+++ b/target/arm/tcg/translate.h
17
@@ -XXX,XX +XXX,XX @@ void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
18
uint32_t opr_sz, uint32_t max_sz);
19
void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
20
uint32_t opr_sz, uint32_t max_sz);
21
+void gen_gvec_rev16(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
22
+ uint32_t opr_sz, uint32_t max_sz);
23
+void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
24
+ uint32_t opr_sz, uint32_t max_sz);
25
+void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
26
+ uint32_t opr_sz, uint32_t max_sz);
27
28
/*
29
* Forward to the isar_feature_* tests given a DisasContext pointer.
30
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/gengvec.c
33
+++ b/target/arm/tcg/gengvec.c
34
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
35
tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
36
gen_helper_gvec_rbit_b);
37
}
38
+
39
+void gen_gvec_rev16(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
+ uint32_t opr_sz, uint32_t max_sz)
41
+{
42
+ assert(vece == MO_8);
43
+ tcg_gen_gvec_rotli(MO_16, rd_ofs, rn_ofs, 8, opr_sz, max_sz);
44
+}
45
+
46
+static void gen_bswap32_i64(TCGv_i64 d, TCGv_i64 n)
47
+{
48
+ tcg_gen_bswap64_i64(d, n);
49
+ tcg_gen_rotli_i64(d, d, 32);
50
+}
51
+
52
+void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
53
+ uint32_t opr_sz, uint32_t max_sz)
54
+{
55
+ static const GVecGen2 g = {
56
+ .fni8 = gen_bswap32_i64,
57
+ .fni4 = tcg_gen_bswap32_i32,
58
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
59
+ .vece = MO_32
60
+ };
61
+
62
+ switch (vece) {
63
+ case MO_16:
64
+ tcg_gen_gvec_rotli(MO_32, rd_ofs, rn_ofs, 16, opr_sz, max_sz);
65
+ break;
66
+ case MO_8:
67
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g);
68
+ break;
69
+ default:
70
+ g_assert_not_reached();
71
+ }
72
+}
73
+
74
+void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
75
+ uint32_t opr_sz, uint32_t max_sz)
76
+{
77
+ static const GVecGen2 g[] = {
78
+ { .fni8 = tcg_gen_bswap64_i64,
79
+ .vece = MO_64 },
80
+ { .fni8 = tcg_gen_hswap_i64,
81
+ .vece = MO_64 },
82
+ };
83
+
84
+ switch (vece) {
85
+ case MO_32:
86
+ tcg_gen_gvec_rotli(MO_64, rd_ofs, rn_ofs, 32, opr_sz, max_sz);
87
+ break;
88
+ case MO_8:
89
+ case MO_16:
90
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
91
+ break;
92
+ default:
93
+ g_assert_not_reached();
94
+ }
95
+}
96
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/tcg/translate-neon.c
99
+++ b/target/arm/tcg/translate-neon.c
100
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
101
return true;
102
}
103
104
-static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
105
-{
106
- int pass, half;
107
- TCGv_i32 tmp[2];
108
-
109
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
110
- return false;
111
- }
112
-
113
- /* UNDEF accesses to D16-D31 if they don't exist. */
114
- if (!dc_isar_feature(aa32_simd_r32, s) &&
115
- ((a->vd | a->vm) & 0x10)) {
116
- return false;
117
- }
118
-
119
- if ((a->vd | a->vm) & a->q) {
120
- return false;
121
- }
122
-
123
- if (a->size == 3) {
124
- return false;
125
- }
126
-
127
- if (!vfp_access_check(s)) {
128
- return true;
129
- }
130
-
131
- tmp[0] = tcg_temp_new_i32();
132
- tmp[1] = tcg_temp_new_i32();
133
-
134
- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
135
- for (half = 0; half < 2; half++) {
136
- read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
137
- switch (a->size) {
138
- case 0:
139
- tcg_gen_bswap32_i32(tmp[half], tmp[half]);
140
- break;
141
- case 1:
142
- gen_swap_half(tmp[half], tmp[half]);
143
- break;
144
- case 2:
145
- break;
146
- default:
147
- g_assert_not_reached();
148
- }
149
- }
150
- write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
151
- write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
152
- }
153
- return true;
154
-}
155
-
156
static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
157
NeonGenWidenFn *widenfn,
158
NeonGenTwo64OpFn *opfn,
159
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCGE0, gen_gvec_cge0)
160
DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
161
DO_2MISC_VEC(VCLS, gen_gvec_cls)
162
DO_2MISC_VEC(VCLZ, gen_gvec_clz)
163
+DO_2MISC_VEC(VREV64, gen_gvec_rev64)
164
165
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
166
{
167
@@ -XXX,XX +XXX,XX @@ static bool trans_VCNT(DisasContext *s, arg_2misc *a)
168
return do_2misc_vec(s, a, gen_gvec_cnt);
169
}
170
171
+static bool trans_VREV16(DisasContext *s, arg_2misc *a)
172
+{
173
+ if (a->size != 0) {
174
+ return false;
175
+ }
176
+ return do_2misc_vec(s, a, gen_gvec_rev16);
177
+}
178
+
179
+static bool trans_VREV32(DisasContext *s, arg_2misc *a)
180
+{
181
+ if (a->size != 0 && a->size != 1) {
182
+ return false;
183
+ }
184
+ return do_2misc_vec(s, a, gen_gvec_rev32);
185
+}
186
+
187
#define WRAP_2M_3_OOL_FN(WRAPNAME, FUNC, DATA) \
188
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \
189
uint32_t rm_ofs, uint32_t oprsz, \
190
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
191
return true;
192
}
193
194
-static bool trans_VREV32(DisasContext *s, arg_2misc *a)
195
-{
196
- static NeonGenOneOpFn * const fn[] = {
197
- tcg_gen_bswap32_i32,
198
- gen_swap_half,
199
- NULL,
200
- NULL,
201
- };
202
- return do_2misc(s, a, fn[a->size]);
203
-}
204
-
205
-static bool trans_VREV16(DisasContext *s, arg_2misc *a)
206
-{
207
- if (a->size != 0) {
208
- return false;
209
- }
210
- return do_2misc(s, a, gen_rev16);
211
-}
212
-
213
static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
214
uint32_t oprsz, uint32_t maxsz)
215
{
216
--
217
2.34.1
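
A quick sanity check of the gen_bswap32_i64 trick above: bswap64 followed by a 32-bit rotate byte-reverses each 32-bit half of the word without swapping the halves. The sketch below assumes a GCC/Clang-style __builtin_bswap64 and is not part of the patch:

#include <stdint.h>
#include <assert.h>

static uint64_t bswap32_each_half(uint64_t x)
{
    x = __builtin_bswap64(x);             /* reverses all 8 bytes, swapping the halves */
    return (x << 32) | (x >> 32);         /* rotating by 32 puts the halves back */
}

int main(void)
{
    assert(bswap32_each_half(0x0011223344556677ull) == 0x3322110077665544ull);
    return 0;
}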
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes REV16, REV32, REV64.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-44-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 5 +++
11
target/arm/tcg/translate-a64.c | 79 +++-------------------------------
12
2 files changed, 10 insertions(+), 74 deletions(-)
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@
19
20
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
21
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
22
+@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
23
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
24
25
@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
26
@@ -XXX,XX +XXX,XX @@ CMGE0_v 0.10 1110 ..1 00000 10001 0 ..... ..... @qrr_e
27
CMEQ0_v 0.00 1110 ..1 00000 10011 0 ..... ..... @qrr_e
28
CMLE0_v 0.10 1110 ..1 00000 10011 0 ..... ..... @qrr_e
29
CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
30
+
31
+REV16_v 0.00 1110 001 00000 00011 0 ..... ..... @qrr_b
32
+REV32_v 0.10 1110 0.1 00000 00001 0 ..... ..... @qrr_bh
33
+REV64_v 0.00 1110 ..1 00000 00001 0 ..... ..... @qrr_e
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(CMGE0_v, do_gvec_fn2, a, gen_gvec_cge0)
39
TRANS(CMLT0_v, do_gvec_fn2, a, gen_gvec_clt0)
40
TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
41
TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
42
+TRANS(REV16_v, do_gvec_fn2, a, gen_gvec_rev16)
43
+TRANS(REV32_v, do_gvec_fn2, a, gen_gvec_rev32)
44
45
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
46
{
47
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
48
49
TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
50
TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
51
+TRANS(REV64_v, do_gvec_fn2_bhs, a, gen_gvec_rev64)
52
53
/* Common vector code for handling integer to FP conversion */
54
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
55
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
56
}
57
}
58
59
-static void handle_rev(DisasContext *s, int opcode, bool u,
60
- bool is_q, int size, int rn, int rd)
61
-{
62
- int op = (opcode << 1) | u;
63
- int opsz = op + size;
64
- int grp_size = 3 - opsz;
65
- int dsize = is_q ? 128 : 64;
66
- int i;
67
-
68
- if (opsz >= 3) {
69
- unallocated_encoding(s);
70
- return;
71
- }
72
-
73
- if (!fp_access_check(s)) {
74
- return;
75
- }
76
-
77
- if (size == 0) {
78
- /* Special case bytes, use bswap op on each group of elements */
79
- int groups = dsize / (8 << grp_size);
80
-
81
- for (i = 0; i < groups; i++) {
82
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
83
-
84
- read_vec_element(s, tcg_tmp, rn, i, grp_size);
85
- switch (grp_size) {
86
- case MO_16:
87
- tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp, TCG_BSWAP_IZ);
88
- break;
89
- case MO_32:
90
- tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp, TCG_BSWAP_IZ);
91
- break;
92
- case MO_64:
93
- tcg_gen_bswap64_i64(tcg_tmp, tcg_tmp);
94
- break;
95
- default:
96
- g_assert_not_reached();
97
- }
98
- write_vec_element(s, tcg_tmp, rd, i, grp_size);
99
- }
100
- clear_vec_high(s, is_q, rd);
101
- } else {
102
- int revmask = (1 << grp_size) - 1;
103
- int esize = 8 << size;
104
- int elements = dsize / esize;
105
- TCGv_i64 tcg_rn = tcg_temp_new_i64();
106
- TCGv_i64 tcg_rd[2];
107
-
108
- for (i = 0; i < 2; i++) {
109
- tcg_rd[i] = tcg_temp_new_i64();
110
- tcg_gen_movi_i64(tcg_rd[i], 0);
111
- }
112
-
113
- for (i = 0; i < elements; i++) {
114
- int e_rev = (i & 0xf) ^ revmask;
115
- int w = (e_rev * esize) / 64;
116
- int o = (e_rev * esize) % 64;
117
-
118
- read_vec_element(s, tcg_rn, rn, i, size);
119
- tcg_gen_deposit_i64(tcg_rd[w], tcg_rd[w], tcg_rn, o, esize);
120
- }
121
-
122
- for (i = 0; i < 2; i++) {
123
- write_vec_element(s, tcg_rd[i], rd, i, MO_64);
124
- }
125
- clear_vec_high(s, true, rd);
126
- }
127
-}
128
-
129
static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
130
bool is_q, int size, int rn, int rd)
131
{
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
133
TCGv_ptr tcg_fpstatus;
134
135
switch (opcode) {
136
- case 0x0: /* REV64, REV32 */
137
- case 0x1: /* REV16 */
138
- handle_rev(s, opcode, u, is_q, size, rn, rd);
139
- return;
140
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
141
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
142
if (size == 3) {
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
144
break;
145
}
146
default:
147
+ case 0x0: /* REV64, REV32 */
148
+ case 0x1: /* REV16 */
149
case 0x3: /* SUQADD, USQADD */
150
case 0x4: /* CLS, CLZ */
151
case 0x5: /* CNT, NOT, RBIT */
152
--
153
2.34.1
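
For reference, the index arithmetic the removed handle_rev relied on: reversing elements within each group of 2^k elements is just XOR-ing the element index with (2^k - 1). A small standalone sketch for REV64 on 16-bit elements, i.e. groups of four halfwords (illustrative only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t src[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    uint16_t dst[8];
    int revmask = (1 << 2) - 1;           /* four 16-bit elements per 64-bit group */

    for (int i = 0; i < 8; i++) {
        dst[i ^ revmask] = src[i];        /* reverse within each group of four */
    }
    for (int i = 0; i < 8; i++) {
        printf("%d ", dst[i]);            /* prints: 3 2 1 0 7 6 5 4 */
    }
    printf("\n");
    return 0;
}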
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Unify add/sub helpers and add a parameter for rounding.
3
Move from helper-a64.c to neon_helper.c so that these
4
This will allow the saturating non-rounding variants to reuse this code.
4
functions are available for arm32 code as well.
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
[PMM: fixed accidental use of '=' rather than '+=' in do_sqrdmlah_s]
8
Message-id: 20241211163036.2297116-45-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20200815013145.539409-15-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/vec_helper.c | 80 +++++++++++++++--------------------------
11
target/arm/helper.h | 2 ++
13
1 file changed, 29 insertions(+), 51 deletions(-)
12
target/arm/tcg/helper-a64.h | 2 --
13
target/arm/tcg/helper-a64.c | 43 ------------------------------------
14
target/arm/tcg/neon_helper.c | 43 ++++++++++++++++++++++++++++++++++++
15
4 files changed, 45 insertions(+), 45 deletions(-)
14
16
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_helper.c
19
--- a/target/arm/helper.h
18
+++ b/target/arm/vec_helper.c
20
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
20
#endif
22
DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
21
23
DEF_HELPER_2(neon_paddl_u16, i64, i64, i64)
22
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
24
DEF_HELPER_2(neon_paddl_u32, i64, i64, i64)
23
-static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
25
+DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
24
- int16_t src3, uint32_t *sat)
26
+DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
25
+static int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
27
DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
26
+ bool neg, bool round, uint32_t *sat)
28
DEF_HELPER_2(neon_subl_u32, i64, i64, i64)
27
{
29
DEF_HELPER_3(neon_addl_saturate_s32, i64, env, i64, i64)
28
- /* Simplify:
30
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
29
+ /*
31
index XXXXXXX..XXXXXXX 100644
30
+ * Simplify:
32
--- a/target/arm/tcg/helper-a64.h
31
* = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
33
+++ b/target/arm/tcg/helper-a64.h
32
* = ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15
34
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
33
*/
35
DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
34
int32_t ret = (int32_t)src1 * src2;
36
DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
35
- ret = ((int32_t)src3 << 15) + ret + (1 << 14);
37
DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
36
+ if (neg) {
38
-DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
37
+ ret = -ret;
39
DEF_HELPER_FLAGS_1(neon_addlp_u8, TCG_CALL_NO_RWG_SE, i64, i64)
38
+ }
40
-DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
39
+ ret += ((int32_t)src3 << 15) + (round << 14);
41
DEF_HELPER_FLAGS_1(neon_addlp_u16, TCG_CALL_NO_RWG_SE, i64, i64)
40
ret >>= 15;
42
DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
41
+
43
DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
42
if (ret != (int16_t)ret) {
44
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
43
*sat = 1;
45
index XXXXXXX..XXXXXXX 100644
44
- ret = (ret < 0 ? -0x8000 : 0x7fff);
46
--- a/target/arm/tcg/helper-a64.c
45
+ ret = (ret < 0 ? INT16_MIN : INT16_MAX);
47
+++ b/target/arm/tcg/helper-a64.c
46
}
48
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
47
return ret;
49
return float64_muladd(a, b, float64_three, float_muladd_halve_result, fpst);
48
}
50
}
49
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
51
50
uint32_t src2, uint32_t src3)
52
-/* Pairwise long add: add pairs of adjacent elements into
51
{
53
- * double-width elements in the result (eg _s8 is an 8x8->16 op)
52
uint32_t *sat = &env->vfp.qc[0];
54
- */
53
- uint16_t e1 = inl_qrdmlah_s16(src1, src2, src3, sat);
55
-uint64_t HELPER(neon_addlp_s8)(uint64_t a)
54
- uint16_t e2 = inl_qrdmlah_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
55
+ uint16_t e1 = do_sqrdmlah_h(src1, src2, src3, false, true, sat);
56
+ uint16_t e2 = do_sqrdmlah_h(src1 >> 16, src2 >> 16, src3 >> 16,
57
+ false, true, sat);
58
return deposit32(e1, 16, 16, e2);
59
}
60
61
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlah_s16)(void *vd, void *vn, void *vm,
62
uintptr_t i;
63
64
for (i = 0; i < opr_sz / 2; ++i) {
65
- d[i] = inl_qrdmlah_s16(n[i], m[i], d[i], vq);
66
+ d[i] = do_sqrdmlah_h(n[i], m[i], d[i], false, true, vq);
67
}
68
clear_tail(d, opr_sz, simd_maxsz(desc));
69
}
70
71
-/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
72
-static int16_t inl_qrdmlsh_s16(int16_t src1, int16_t src2,
73
- int16_t src3, uint32_t *sat)
74
-{
56
-{
75
- /* Similarly, using subtraction:
57
- uint64_t nsignmask = 0x0080008000800080ULL;
76
- * = ((a3 << 16) - ((e1 * e2) << 1) + (1 << 15)) >> 16
58
- uint64_t wsignmask = 0x8000800080008000ULL;
77
- * = ((a3 << 15) - (e1 * e2) + (1 << 14)) >> 15
59
- uint64_t elementmask = 0x00ff00ff00ff00ffULL;
60
- uint64_t tmp1, tmp2;
61
- uint64_t res, signres;
62
-
63
- /* Extract odd elements, sign extend each to a 16 bit field */
64
- tmp1 = a & elementmask;
65
- tmp1 ^= nsignmask;
66
- tmp1 |= wsignmask;
67
- tmp1 = (tmp1 - nsignmask) ^ wsignmask;
68
- /* Ditto for the even elements */
69
- tmp2 = (a >> 8) & elementmask;
70
- tmp2 ^= nsignmask;
71
- tmp2 |= wsignmask;
72
- tmp2 = (tmp2 - nsignmask) ^ wsignmask;
73
-
74
- /* calculate the result by summing bits 0..14, 16..22, etc,
75
- * and then adjusting the sign bits 15, 23, etc manually.
76
- * This ensures the addition can't overflow the 16 bit field.
78
- */
77
- */
79
- int32_t ret = (int32_t)src1 * src2;
78
- signres = (tmp1 ^ tmp2) & wsignmask;
80
- ret = ((int32_t)src3 << 15) - ret + (1 << 14);
79
- res = (tmp1 & ~wsignmask) + (tmp2 & ~wsignmask);
81
- ret >>= 15;
80
- res ^= signres;
82
- if (ret != (int16_t)ret) {
81
-
83
- *sat = 1;
82
- return res;
84
- ret = (ret < 0 ? -0x8000 : 0x7fff);
85
- }
86
- return ret;
87
-}
83
-}
88
-
84
-
89
uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
85
uint64_t HELPER(neon_addlp_u8)(uint64_t a)
90
uint32_t src2, uint32_t src3)
91
{
86
{
92
uint32_t *sat = &env->vfp.qc[0];
87
uint64_t tmp;
93
- uint16_t e1 = inl_qrdmlsh_s16(src1, src2, src3, sat);
88
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u8)(uint64_t a)
94
- uint16_t e2 = inl_qrdmlsh_s16(src1 >> 16, src2 >> 16, src3 >> 16, sat);
89
return tmp;
95
+ uint16_t e1 = do_sqrdmlah_h(src1, src2, src3, true, true, sat);
96
+ uint16_t e2 = do_sqrdmlah_h(src1 >> 16, src2 >> 16, src3 >> 16,
97
+ true, true, sat);
98
return deposit32(e1, 16, 16, e2);
99
}
90
}
100
91
101
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
92
-uint64_t HELPER(neon_addlp_s16)(uint64_t a)
102
uintptr_t i;
103
104
for (i = 0; i < opr_sz / 2; ++i) {
105
- d[i] = inl_qrdmlsh_s16(n[i], m[i], d[i], vq);
106
+ d[i] = do_sqrdmlah_h(n[i], m[i], d[i], true, true, vq);
107
}
108
clear_tail(d, opr_sz, simd_maxsz(desc));
109
}
110
111
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
112
-static int32_t inl_qrdmlah_s32(int32_t src1, int32_t src2,
113
- int32_t src3, uint32_t *sat)
114
+static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
115
+ bool neg, bool round, uint32_t *sat)
116
{
117
/* Simplify similarly to int_qrdmlah_s16 above. */
118
int64_t ret = (int64_t)src1 * src2;
119
- ret = ((int64_t)src3 << 31) + ret + (1 << 30);
120
+ if (neg) {
121
+ ret = -ret;
122
+ }
123
+ ret += ((int64_t)src3 << 31) + (round << 30);
124
ret >>= 31;
125
+
126
if (ret != (int32_t)ret) {
127
*sat = 1;
128
ret = (ret < 0 ? INT32_MIN : INT32_MAX);
129
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
130
int32_t src2, int32_t src3)
131
{
132
uint32_t *sat = &env->vfp.qc[0];
133
- return inl_qrdmlah_s32(src1, src2, src3, sat);
134
+ return do_sqrdmlah_s(src1, src2, src3, false, true, sat);
135
}
136
137
void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
138
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
139
uintptr_t i;
140
141
for (i = 0; i < opr_sz / 4; ++i) {
142
- d[i] = inl_qrdmlah_s32(n[i], m[i], d[i], vq);
143
+ d[i] = do_sqrdmlah_s(n[i], m[i], d[i], false, true, vq);
144
}
145
clear_tail(d, opr_sz, simd_maxsz(desc));
146
}
147
148
-/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
149
-static int32_t inl_qrdmlsh_s32(int32_t src1, int32_t src2,
150
- int32_t src3, uint32_t *sat)
151
-{
93
-{
152
- /* Simplify similarly to int_qrdmlsh_s16 above. */
94
- int32_t reslo, reshi;
153
- int64_t ret = (int64_t)src1 * src2;
95
-
154
- ret = ((int64_t)src3 << 31) - ret + (1 << 30);
96
- reslo = (int32_t)(int16_t)a + (int32_t)(int16_t)(a >> 16);
155
- ret >>= 31;
97
- reshi = (int32_t)(int16_t)(a >> 32) + (int32_t)(int16_t)(a >> 48);
156
- if (ret != (int32_t)ret) {
98
-
157
- *sat = 1;
99
- return (uint32_t)reslo | (((uint64_t)reshi) << 32);
158
- ret = (ret < 0 ? INT32_MIN : INT32_MAX);
159
- }
160
- return ret;
161
-}
100
-}
162
-
101
-
163
uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
102
uint64_t HELPER(neon_addlp_u16)(uint64_t a)
164
int32_t src2, int32_t src3)
165
{
103
{
166
uint32_t *sat = &env->vfp.qc[0];
104
uint64_t tmp;
167
- return inl_qrdmlsh_s32(src1, src2, src3, sat);
105
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
168
+ return do_sqrdmlah_s(src1, src2, src3, true, true, sat);
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/tcg/neon_helper.c
108
+++ b/target/arm/tcg/neon_helper.c
109
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_paddl_u32)(uint64_t a, uint64_t b)
110
return low + ((uint64_t)high << 32);
169
}
111
}
170
112
171
void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
113
+/* Pairwise long add: add pairs of adjacent elements into
172
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
114
+ * double-width elements in the result (eg _s8 is an 8x8->16 op)
173
uintptr_t i;
115
+ */
174
116
+uint64_t HELPER(neon_addlp_s8)(uint64_t a)
175
for (i = 0; i < opr_sz / 4; ++i) {
117
+{
176
- d[i] = inl_qrdmlsh_s32(n[i], m[i], d[i], vq);
118
+ uint64_t nsignmask = 0x0080008000800080ULL;
177
+ d[i] = do_sqrdmlah_s(n[i], m[i], d[i], true, true, vq);
119
+ uint64_t wsignmask = 0x8000800080008000ULL;
178
}
120
+ uint64_t elementmask = 0x00ff00ff00ff00ffULL;
179
clear_tail(d, opr_sz, simd_maxsz(desc));
121
+ uint64_t tmp1, tmp2;
180
}
122
+ uint64_t res, signres;
123
+
124
+ /* Extract odd elements, sign extend each to a 16 bit field */
125
+ tmp1 = a & elementmask;
126
+ tmp1 ^= nsignmask;
127
+ tmp1 |= wsignmask;
128
+ tmp1 = (tmp1 - nsignmask) ^ wsignmask;
129
+ /* Ditto for the even elements */
130
+ tmp2 = (a >> 8) & elementmask;
131
+ tmp2 ^= nsignmask;
132
+ tmp2 |= wsignmask;
133
+ tmp2 = (tmp2 - nsignmask) ^ wsignmask;
134
+
135
+ /* calculate the result by summing bits 0..14, 16..22, etc,
136
+ * and then adjusting the sign bits 15, 23, etc manually.
137
+ * This ensures the addition can't overflow the 16 bit field.
138
+ */
139
+ signres = (tmp1 ^ tmp2) & wsignmask;
140
+ res = (tmp1 & ~wsignmask) + (tmp2 & ~wsignmask);
141
+ res ^= signres;
142
+
143
+ return res;
144
+}
145
+
146
+uint64_t HELPER(neon_addlp_s16)(uint64_t a)
147
+{
148
+ int32_t reslo, reshi;
149
+
150
+ reslo = (int32_t)(int16_t)a + (int32_t)(int16_t)(a >> 16);
151
+ reshi = (int32_t)(int16_t)(a >> 32) + (int32_t)(int16_t)(a >> 48);
152
+
153
+ return (uint32_t)reslo | (((uint64_t)reshi) << 32);
154
+}
155
+
156
uint64_t HELPER(neon_subl_u16)(uint64_t a, uint64_t b)
157
{
158
uint64_t mask;
181
--
159
--
182
2.20.1
160
2.34.1
183
184
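
A scalar reference for the 16-bit rounding doubling multiply-accumulate-high formula quoted in the do_sqrdmlah_h comment above; the function name and test values are illustrative, not taken from the tree:

#include <stdint.h>
#include <stdio.h>

static int16_t ref_sqrdmlah_h(int16_t a3, int16_t e1, int16_t e2, int *sat)
{
    /* ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
     * == ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15 */
    int32_t r = (int32_t)e1 * e2;
    r += ((int32_t)a3 << 15) + (1 << 14);
    r >>= 15;
    if (r != (int16_t)r) {                /* saturate on overflow */
        *sat = 1;
        r = r < 0 ? INT16_MIN : INT16_MAX;
    }
    return r;
}

int main(void)
{
    int sat = 0;
    /* 2 * 0x4000 * 0x4000 = 0x2000_0000, so the rounded high half is 0x2000 */
    printf("%d\n", ref_sqrdmlah_h(0, 0x4000, 0x4000, &sat));   /* 8192 */
    /* 0x8000 * 0x8000 doubled overflows 16 bits and saturates */
    printf("%d sat=%d\n", ref_sqrdmlah_h(0, INT16_MIN, INT16_MIN, &sat), sat);
    return 0;
}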
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Pairwise addition with and without accumulation.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-46-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 2 -
11
target/arm/tcg/translate.h | 9 ++
12
target/arm/tcg/gengvec.c | 230 ++++++++++++++++++++++++++++++++
13
target/arm/tcg/neon_helper.c | 22 ---
14
target/arm/tcg/translate-neon.c | 150 +--------------------
15
5 files changed, 243 insertions(+), 170 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_widen_s16, i64, i32)
22
23
DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
24
DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
25
-DEF_HELPER_2(neon_paddl_u16, i64, i64, i64)
26
-DEF_HELPER_2(neon_paddl_u32, i64, i64, i64)
27
DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
28
DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
29
DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
30
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/translate.h
33
+++ b/target/arm/tcg/translate.h
34
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
35
void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
36
uint32_t opr_sz, uint32_t max_sz);
37
38
+void gen_gvec_saddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
39
+ uint32_t opr_sz, uint32_t max_sz);
40
+void gen_gvec_sadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
41
+ uint32_t opr_sz, uint32_t max_sz);
42
+void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
43
+ uint32_t opr_sz, uint32_t max_sz);
44
+void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
45
+ uint32_t opr_sz, uint32_t max_sz);
46
+
47
/*
48
* Forward to the isar_feature_* tests given a DisasContext pointer.
49
*/
50
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/gengvec.c
53
+++ b/target/arm/tcg/gengvec.c
54
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
g_assert_not_reached();
56
}
57
}
58
+
59
+static void gen_saddlp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
60
+{
61
+ int half = 4 << vece;
62
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
63
+
64
+ tcg_gen_shli_vec(vece, t, n, half);
65
+ tcg_gen_sari_vec(vece, d, n, half);
66
+ tcg_gen_sari_vec(vece, t, t, half);
67
+ tcg_gen_add_vec(vece, d, d, t);
68
+}
69
+
70
+static void gen_saddlp_s_i64(TCGv_i64 d, TCGv_i64 n)
71
+{
72
+ TCGv_i64 t = tcg_temp_new_i64();
73
+
74
+ tcg_gen_ext32s_i64(t, n);
75
+ tcg_gen_sari_i64(d, n, 32);
76
+ tcg_gen_add_i64(d, d, t);
77
+}
78
+
79
+void gen_gvec_saddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
80
+ uint32_t opr_sz, uint32_t max_sz)
81
+{
82
+ static const TCGOpcode vecop_list[] = {
83
+ INDEX_op_sari_vec, INDEX_op_shli_vec, INDEX_op_add_vec, 0
84
+ };
85
+ static const GVecGen2 g[] = {
86
+ { .fniv = gen_saddlp_vec,
87
+ .fni8 = gen_helper_neon_addlp_s8,
88
+ .opt_opc = vecop_list,
89
+ .vece = MO_16 },
90
+ { .fniv = gen_saddlp_vec,
91
+ .fni8 = gen_helper_neon_addlp_s16,
92
+ .opt_opc = vecop_list,
93
+ .vece = MO_32 },
94
+ { .fniv = gen_saddlp_vec,
95
+ .fni8 = gen_saddlp_s_i64,
96
+ .opt_opc = vecop_list,
97
+ .vece = MO_64 },
98
+ };
99
+ assert(vece <= MO_32);
100
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
101
+}
102
+
103
+static void gen_sadalp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
104
+{
105
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
106
+
107
+ gen_saddlp_vec(vece, t, n);
108
+ tcg_gen_add_vec(vece, d, d, t);
109
+}
110
+
111
+static void gen_sadalp_b_i64(TCGv_i64 d, TCGv_i64 n)
112
+{
113
+ TCGv_i64 t = tcg_temp_new_i64();
114
+
115
+ gen_helper_neon_addlp_s8(t, n);
116
+ tcg_gen_vec_add16_i64(d, d, t);
117
+}
118
+
119
+static void gen_sadalp_h_i64(TCGv_i64 d, TCGv_i64 n)
120
+{
121
+ TCGv_i64 t = tcg_temp_new_i64();
122
+
123
+ gen_helper_neon_addlp_s16(t, n);
124
+ tcg_gen_vec_add32_i64(d, d, t);
125
+}
126
+
127
+static void gen_sadalp_s_i64(TCGv_i64 d, TCGv_i64 n)
128
+{
129
+ TCGv_i64 t = tcg_temp_new_i64();
130
+
131
+ gen_saddlp_s_i64(t, n);
132
+ tcg_gen_add_i64(d, d, t);
133
+}
134
+
135
+void gen_gvec_sadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
136
+ uint32_t opr_sz, uint32_t max_sz)
137
+{
138
+ static const TCGOpcode vecop_list[] = {
139
+ INDEX_op_sari_vec, INDEX_op_shli_vec, INDEX_op_add_vec, 0
140
+ };
141
+ static const GVecGen2 g[] = {
142
+ { .fniv = gen_sadalp_vec,
143
+ .fni8 = gen_sadalp_b_i64,
144
+ .opt_opc = vecop_list,
145
+ .load_dest = true,
146
+ .vece = MO_16 },
147
+ { .fniv = gen_sadalp_vec,
148
+ .fni8 = gen_sadalp_h_i64,
149
+ .opt_opc = vecop_list,
150
+ .load_dest = true,
151
+ .vece = MO_32 },
152
+ { .fniv = gen_sadalp_vec,
153
+ .fni8 = gen_sadalp_s_i64,
154
+ .opt_opc = vecop_list,
155
+ .load_dest = true,
156
+ .vece = MO_64 },
157
+ };
158
+ assert(vece <= MO_32);
159
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
160
+}
161
+
162
+static void gen_uaddlp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
163
+{
164
+ int half = 4 << vece;
165
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
166
+ TCGv_vec m = tcg_constant_vec_matching(d, vece, MAKE_64BIT_MASK(0, half));
167
+
168
+ tcg_gen_shri_vec(vece, t, n, half);
169
+ tcg_gen_and_vec(vece, d, n, m);
170
+ tcg_gen_add_vec(vece, d, d, t);
171
+}
172
+
173
+static void gen_uaddlp_b_i64(TCGv_i64 d, TCGv_i64 n)
174
+{
175
+ TCGv_i64 t = tcg_temp_new_i64();
176
+ TCGv_i64 m = tcg_constant_i64(dup_const(MO_16, 0xff));
177
+
178
+ tcg_gen_shri_i64(t, n, 8);
179
+ tcg_gen_and_i64(d, n, m);
180
+ tcg_gen_and_i64(t, t, m);
181
+ /* No carry between widened unsigned elements. */
182
+ tcg_gen_add_i64(d, d, t);
183
+}
184
+
185
+static void gen_uaddlp_h_i64(TCGv_i64 d, TCGv_i64 n)
186
+{
187
+ TCGv_i64 t = tcg_temp_new_i64();
188
+ TCGv_i64 m = tcg_constant_i64(dup_const(MO_32, 0xffff));
189
+
190
+ tcg_gen_shri_i64(t, n, 16);
191
+ tcg_gen_and_i64(d, n, m);
192
+ tcg_gen_and_i64(t, t, m);
193
+ /* No carry between widened unsigned elements. */
194
+ tcg_gen_add_i64(d, d, t);
195
+}
196
+
197
+static void gen_uaddlp_s_i64(TCGv_i64 d, TCGv_i64 n)
198
+{
199
+ TCGv_i64 t = tcg_temp_new_i64();
200
+
201
+ tcg_gen_ext32u_i64(t, n);
202
+ tcg_gen_shri_i64(d, n, 32);
203
+ tcg_gen_add_i64(d, d, t);
204
+}
205
+
206
+void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
207
+ uint32_t opr_sz, uint32_t max_sz)
208
+{
209
+ static const TCGOpcode vecop_list[] = {
210
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
211
+ };
212
+ static const GVecGen2 g[] = {
213
+ { .fniv = gen_uaddlp_vec,
214
+ .fni8 = gen_uaddlp_b_i64,
215
+ .opt_opc = vecop_list,
216
+ .vece = MO_16 },
217
+ { .fniv = gen_uaddlp_vec,
218
+ .fni8 = gen_uaddlp_h_i64,
219
+ .opt_opc = vecop_list,
220
+ .vece = MO_32 },
221
+ { .fniv = gen_uaddlp_vec,
222
+ .fni8 = gen_uaddlp_s_i64,
223
+ .opt_opc = vecop_list,
224
+ .vece = MO_64 },
225
+ };
226
+ assert(vece <= MO_32);
227
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
228
+}
229
+
230
+static void gen_uadalp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
231
+{
232
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
233
+
234
+ gen_uaddlp_vec(vece, t, n);
235
+ tcg_gen_add_vec(vece, d, d, t);
236
+}
237
+
238
+static void gen_uadalp_b_i64(TCGv_i64 d, TCGv_i64 n)
239
+{
240
+ TCGv_i64 t = tcg_temp_new_i64();
241
+
242
+ gen_uaddlp_b_i64(t, n);
243
+ tcg_gen_vec_add16_i64(d, d, t);
244
+}
245
+
246
+static void gen_uadalp_h_i64(TCGv_i64 d, TCGv_i64 n)
247
+{
248
+ TCGv_i64 t = tcg_temp_new_i64();
249
+
250
+ gen_uaddlp_h_i64(t, n);
251
+ tcg_gen_vec_add32_i64(d, d, t);
252
+}
253
+
254
+static void gen_uadalp_s_i64(TCGv_i64 d, TCGv_i64 n)
255
+{
256
+ TCGv_i64 t = tcg_temp_new_i64();
257
+
258
+ gen_uaddlp_s_i64(t, n);
259
+ tcg_gen_add_i64(d, d, t);
260
+}
261
+
262
+void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
263
+ uint32_t opr_sz, uint32_t max_sz)
264
+{
265
+ static const TCGOpcode vecop_list[] = {
266
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
267
+ };
268
+ static const GVecGen2 g[] = {
269
+ { .fniv = gen_uadalp_vec,
270
+ .fni8 = gen_uadalp_b_i64,
271
+ .load_dest = true,
272
+ .opt_opc = vecop_list,
273
+ .vece = MO_16 },
274
+ { .fniv = gen_uadalp_vec,
275
+ .fni8 = gen_uadalp_h_i64,
276
+ .load_dest = true,
277
+ .opt_opc = vecop_list,
278
+ .vece = MO_32 },
279
+ { .fniv = gen_uadalp_vec,
280
+ .fni8 = gen_uadalp_s_i64,
281
+ .load_dest = true,
282
+ .opt_opc = vecop_list,
283
+ .vece = MO_64 },
284
+ };
285
+ assert(vece <= MO_32);
286
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
287
+}
288
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
289
index XXXXXXX..XXXXXXX 100644
290
--- a/target/arm/tcg/neon_helper.c
291
+++ b/target/arm/tcg/neon_helper.c
292
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addl_u32)(uint64_t a, uint64_t b)
293
return (a + b) ^ mask;
294
}
295
296
-uint64_t HELPER(neon_paddl_u16)(uint64_t a, uint64_t b)
297
-{
298
- uint64_t tmp;
299
- uint64_t tmp2;
300
-
301
- tmp = a & 0x0000ffff0000ffffull;
302
- tmp += (a >> 16) & 0x0000ffff0000ffffull;
303
- tmp2 = b & 0xffff0000ffff0000ull;
304
- tmp2 += (b << 16) & 0xffff0000ffff0000ull;
305
- return ( tmp & 0xffff)
306
- | ((tmp >> 16) & 0xffff0000ull)
307
- | ((tmp2 << 16) & 0xffff00000000ull)
308
- | ( tmp2 & 0xffff000000000000ull);
309
-}
310
-
311
-uint64_t HELPER(neon_paddl_u32)(uint64_t a, uint64_t b)
312
-{
313
- uint32_t low = a + (a >> 32);
314
- uint32_t high = b + (b >> 32);
315
- return low + ((uint64_t)high << 32);
316
-}
317
-
318
/* Pairwise long add: add pairs of adjacent elements into
319
* double-width elements in the result (eg _s8 is an 8x8->16 op)
320
*/
321
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/tcg/translate-neon.c
324
+++ b/target/arm/tcg/translate-neon.c
325
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
326
return true;
327
}
328
329
-static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
330
- NeonGenWidenFn *widenfn,
331
- NeonGenTwo64OpFn *opfn,
332
- NeonGenTwo64OpFn *accfn)
333
-{
334
- /*
335
- * Pairwise long operations: widen both halves of the pair,
336
- * combine the pairs with the opfn, and then possibly accumulate
337
- * into the destination with the accfn.
338
- */
339
- int pass;
340
-
341
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
342
- return false;
343
- }
344
-
345
- /* UNDEF accesses to D16-D31 if they don't exist. */
346
- if (!dc_isar_feature(aa32_simd_r32, s) &&
347
- ((a->vd | a->vm) & 0x10)) {
348
- return false;
349
- }
350
-
351
- if ((a->vd | a->vm) & a->q) {
352
- return false;
353
- }
354
-
355
- if (!widenfn) {
356
- return false;
357
- }
358
-
359
- if (!vfp_access_check(s)) {
360
- return true;
361
- }
362
-
363
- for (pass = 0; pass < a->q + 1; pass++) {
364
- TCGv_i32 tmp;
365
- TCGv_i64 rm0_64, rm1_64, rd_64;
366
-
367
- rm0_64 = tcg_temp_new_i64();
368
- rm1_64 = tcg_temp_new_i64();
369
- rd_64 = tcg_temp_new_i64();
370
-
371
- tmp = tcg_temp_new_i32();
372
- read_neon_element32(tmp, a->vm, pass * 2, MO_32);
373
- widenfn(rm0_64, tmp);
374
- read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
375
- widenfn(rm1_64, tmp);
376
-
377
- opfn(rd_64, rm0_64, rm1_64);
378
-
379
- if (accfn) {
380
- TCGv_i64 tmp64 = tcg_temp_new_i64();
381
- read_neon_element64(tmp64, a->vd, pass, MO_64);
382
- accfn(rd_64, tmp64, rd_64);
383
- }
384
- write_neon_element64(rd_64, a->vd, pass, MO_64);
385
- }
386
- return true;
387
-}
388
-
389
-static bool trans_VPADDL_S(DisasContext *s, arg_2misc *a)
390
-{
391
- static NeonGenWidenFn * const widenfn[] = {
392
- gen_helper_neon_widen_s8,
393
- gen_helper_neon_widen_s16,
394
- tcg_gen_ext_i32_i64,
395
- NULL,
396
- };
397
- static NeonGenTwo64OpFn * const opfn[] = {
398
- gen_helper_neon_paddl_u16,
399
- gen_helper_neon_paddl_u32,
400
- tcg_gen_add_i64,
401
- NULL,
402
- };
403
-
404
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size], NULL);
405
-}
406
-
407
-static bool trans_VPADDL_U(DisasContext *s, arg_2misc *a)
408
-{
409
- static NeonGenWidenFn * const widenfn[] = {
410
- gen_helper_neon_widen_u8,
411
- gen_helper_neon_widen_u16,
412
- tcg_gen_extu_i32_i64,
413
- NULL,
414
- };
415
- static NeonGenTwo64OpFn * const opfn[] = {
416
- gen_helper_neon_paddl_u16,
417
- gen_helper_neon_paddl_u32,
418
- tcg_gen_add_i64,
419
- NULL,
420
- };
421
-
422
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size], NULL);
423
-}
424
-
425
-static bool trans_VPADAL_S(DisasContext *s, arg_2misc *a)
426
-{
427
- static NeonGenWidenFn * const widenfn[] = {
428
- gen_helper_neon_widen_s8,
429
- gen_helper_neon_widen_s16,
430
- tcg_gen_ext_i32_i64,
431
- NULL,
432
- };
433
- static NeonGenTwo64OpFn * const opfn[] = {
434
- gen_helper_neon_paddl_u16,
435
- gen_helper_neon_paddl_u32,
436
- tcg_gen_add_i64,
437
- NULL,
438
- };
439
- static NeonGenTwo64OpFn * const accfn[] = {
440
- gen_helper_neon_addl_u16,
441
- gen_helper_neon_addl_u32,
442
- tcg_gen_add_i64,
443
- NULL,
444
- };
445
-
446
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size],
447
- accfn[a->size]);
448
-}
449
-
450
-static bool trans_VPADAL_U(DisasContext *s, arg_2misc *a)
451
-{
452
- static NeonGenWidenFn * const widenfn[] = {
453
- gen_helper_neon_widen_u8,
454
- gen_helper_neon_widen_u16,
455
- tcg_gen_extu_i32_i64,
456
- NULL,
457
- };
458
- static NeonGenTwo64OpFn * const opfn[] = {
459
- gen_helper_neon_paddl_u16,
460
- gen_helper_neon_paddl_u32,
461
- tcg_gen_add_i64,
462
- NULL,
463
- };
464
- static NeonGenTwo64OpFn * const accfn[] = {
465
- gen_helper_neon_addl_u16,
466
- gen_helper_neon_addl_u32,
467
- tcg_gen_add_i64,
468
- NULL,
469
- };
470
-
471
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size],
472
- accfn[a->size]);
473
-}
474
-
475
typedef void ZipFn(TCGv_ptr, TCGv_ptr);
476
477
static bool do_zip_uzp(DisasContext *s, arg_2misc *a,
478
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
479
DO_2MISC_VEC(VCLS, gen_gvec_cls)
480
DO_2MISC_VEC(VCLZ, gen_gvec_clz)
481
DO_2MISC_VEC(VREV64, gen_gvec_rev64)
482
+DO_2MISC_VEC(VPADDL_S, gen_gvec_saddlp)
483
+DO_2MISC_VEC(VPADDL_U, gen_gvec_uaddlp)
484
+DO_2MISC_VEC(VPADAL_S, gen_gvec_sadalp)
485
+DO_2MISC_VEC(VPADAL_U, gen_gvec_uadalp)
486
487
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
488
{
489
--
490
2.34.1
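
What gen_saddlp_vec above computes, written out as a scalar sketch for one 16-bit result built from a pair of signed bytes: shift left by half the element size then shift right arithmetically to sign-extend the low half, arithmetic-shift the element for the high half, then add. The int16_t casts below stand in for the element-local vector shifts; illustrative only, not QEMU code:

#include <stdint.h>
#include <assert.h>

static int16_t saddlp_b_pair(uint16_t pair)   /* two adjacent s8 elements */
{
    int16_t lo = (int16_t)(pair << 8) >> 8;   /* shli by 8, then sari by 8 */
    int16_t hi = (int16_t)pair >> 8;          /* sari by 8 */
    return lo + hi;
}

int main(void)
{
    assert(saddlp_b_pair(0x03ff) == 2);       /* bytes -1 and 3 */
    assert(saddlp_b_pair(0x7f80) == -1);      /* bytes -128 and 127 */
    return 0;
}

The accumulating SADALP form simply adds this per-pair sum into the existing destination element, which is what the load_dest variants above express.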
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes SADDLP, UADDLP, SADALP, UADALP.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-47-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/helper-a64.h | 2 -
11
target/arm/tcg/a64.decode | 5 ++
12
target/arm/tcg/helper-a64.c | 18 --------
13
target/arm/tcg/translate-a64.c | 84 +++-------------------------------
14
4 files changed, 11 insertions(+), 98 deletions(-)
15
16
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/helper-a64.h
19
+++ b/target/arm/tcg/helper-a64.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
21
DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
22
DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
23
DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
24
-DEF_HELPER_FLAGS_1(neon_addlp_u8, TCG_CALL_NO_RWG_SE, i64, i64)
25
-DEF_HELPER_FLAGS_1(neon_addlp_u16, TCG_CALL_NO_RWG_SE, i64, i64)
26
DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
27
DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
28
DEF_HELPER_FLAGS_2(frecpx_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
29
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/a64.decode
32
+++ b/target/arm/tcg/a64.decode
33
@@ -XXX,XX +XXX,XX @@ CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
34
REV16_v 0.00 1110 001 00000 00011 0 ..... ..... @qrr_b
35
REV32_v 0.10 1110 0.1 00000 00001 0 ..... ..... @qrr_bh
36
REV64_v 0.00 1110 ..1 00000 00001 0 ..... ..... @qrr_e
37
+
38
+SADDLP_v 0.00 1110 ..1 00000 00101 0 ..... ..... @qrr_e
39
+UADDLP_v 0.10 1110 ..1 00000 00101 0 ..... ..... @qrr_e
40
+SADALP_v 0.00 1110 ..1 00000 01101 0 ..... ..... @qrr_e
41
+UADALP_v 0.10 1110 ..1 00000 01101 0 ..... ..... @qrr_e
42
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/helper-a64.c
45
+++ b/target/arm/tcg/helper-a64.c
46
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
47
return float64_muladd(a, b, float64_three, float_muladd_halve_result, fpst);
48
}
49
50
-uint64_t HELPER(neon_addlp_u8)(uint64_t a)
51
-{
52
- uint64_t tmp;
53
-
54
- tmp = a & 0x00ff00ff00ff00ffULL;
55
- tmp += (a >> 8) & 0x00ff00ff00ff00ffULL;
56
- return tmp;
57
-}
58
-
59
-uint64_t HELPER(neon_addlp_u16)(uint64_t a)
60
-{
61
- uint64_t tmp;
62
-
63
- tmp = a & 0x0000ffff0000ffffULL;
64
- tmp += (a >> 16) & 0x0000ffff0000ffffULL;
65
- return tmp;
66
-}
67
-
68
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
69
uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
70
{
71
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/tcg/translate-a64.c
74
+++ b/target/arm/tcg/translate-a64.c
75
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
76
TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
77
TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
78
TRANS(REV64_v, do_gvec_fn2_bhs, a, gen_gvec_rev64)
79
+TRANS(SADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_saddlp)
80
+TRANS(UADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_uaddlp)
81
+TRANS(SADALP_v, do_gvec_fn2_bhs, a, gen_gvec_sadalp)
82
+TRANS(UADALP_v, do_gvec_fn2_bhs, a, gen_gvec_uadalp)
83
84
/* Common vector code for handling integer to FP conversion */
85
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
86
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
87
}
88
}
89
90
-static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
91
- bool is_q, int size, int rn, int rd)
92
-{
93
- /* Implement the pairwise operations from 2-misc:
94
- * SADDLP, UADDLP, SADALP, UADALP.
95
- * These all add pairs of elements in the input to produce a
96
- * double-width result element in the output (possibly accumulating).
97
- */
98
- bool accum = (opcode == 0x6);
99
- int maxpass = is_q ? 2 : 1;
100
- int pass;
101
- TCGv_i64 tcg_res[2];
102
-
103
- if (size == 2) {
104
- /* 32 + 32 -> 64 op */
105
- MemOp memop = size + (u ? 0 : MO_SIGN);
106
-
107
- for (pass = 0; pass < maxpass; pass++) {
108
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
109
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
110
-
111
- tcg_res[pass] = tcg_temp_new_i64();
112
-
113
- read_vec_element(s, tcg_op1, rn, pass * 2, memop);
114
- read_vec_element(s, tcg_op2, rn, pass * 2 + 1, memop);
115
- tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
116
- if (accum) {
117
- read_vec_element(s, tcg_op1, rd, pass, MO_64);
118
- tcg_gen_add_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
119
- }
120
- }
121
- } else {
122
- for (pass = 0; pass < maxpass; pass++) {
123
- TCGv_i64 tcg_op = tcg_temp_new_i64();
124
- NeonGenOne64OpFn *genfn;
125
- static NeonGenOne64OpFn * const fns[2][2] = {
126
- { gen_helper_neon_addlp_s8, gen_helper_neon_addlp_u8 },
127
- { gen_helper_neon_addlp_s16, gen_helper_neon_addlp_u16 },
128
- };
129
-
130
- genfn = fns[size][u];
131
-
132
- tcg_res[pass] = tcg_temp_new_i64();
133
-
134
- read_vec_element(s, tcg_op, rn, pass, MO_64);
135
- genfn(tcg_res[pass], tcg_op);
136
-
137
- if (accum) {
138
- read_vec_element(s, tcg_op, rd, pass, MO_64);
139
- if (size == 0) {
140
- gen_helper_neon_addl_u16(tcg_res[pass],
141
- tcg_res[pass], tcg_op);
142
- } else {
143
- gen_helper_neon_addl_u32(tcg_res[pass],
144
- tcg_res[pass], tcg_op);
145
- }
146
- }
147
- }
148
- }
149
- if (!is_q) {
150
- tcg_res[1] = tcg_constant_i64(0);
151
- }
152
- for (pass = 0; pass < 2; pass++) {
153
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
154
- }
155
-}
156
-
157
static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
158
{
159
/* Implement SHLL and SHLL2 */
160
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
161
162
handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
163
return;
164
- case 0x2: /* SADDLP, UADDLP */
165
- case 0x6: /* SADALP, UADALP */
166
- if (size == 3) {
167
- unallocated_encoding(s);
168
- return;
169
- }
170
- if (!fp_access_check(s)) {
171
- return;
172
- }
173
- handle_2misc_pairwise(s, opcode, u, is_q, size, rn, rd);
174
- return;
175
case 0x13: /* SHLL, SHLL2 */
176
if (u == 0 || size == 3) {
177
unallocated_encoding(s);
178
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
179
default:
180
case 0x0: /* REV64, REV32 */
181
case 0x1: /* REV16 */
182
+ case 0x2: /* SADDLP, UADDLP */
183
case 0x3: /* SUQADD, USQADD */
184
case 0x4: /* CLS, CLZ */
185
case 0x5: /* CNT, NOT, RBIT */
186
+ case 0x6: /* SADALP, UADALP */
187
case 0x7: /* SQABS, SQNEG */
188
case 0x8: /* CMGT, CMGE */
189
case 0x9: /* CMEQ, CMLE */
190
--
191
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>

These have generic equivalents: tcg_gen_vec_{add,sub}{16,32}_i64.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-48-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 4 ----
target/arm/tcg/neon_helper.c | 36 ---------------------------------
target/arm/tcg/translate-neon.c | 22 ++++++++++----------
3 files changed, 11 insertions(+), 51 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_widen_s8, i64, i32)
DEF_HELPER_1(neon_widen_u16, i64, i32)
DEF_HELPER_1(neon_widen_s16, i64, i32)

-DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
-DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
-DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
-DEF_HELPER_2(neon_subl_u32, i64, i64, i64)
DEF_HELPER_3(neon_addl_saturate_s32, i64, env, i64, i64)
DEF_HELPER_3(neon_addl_saturate_s64, i64, env, i64, i64)
DEF_HELPER_2(neon_abdl_u16, i64, i32, i32)
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_widen_s16)(uint32_t x)
return ((uint32_t)(int16_t)x) | (high << 32);
}

-uint64_t HELPER(neon_addl_u16)(uint64_t a, uint64_t b)
-{
- uint64_t mask;
- mask = (a ^ b) & 0x8000800080008000ull;
- a &= ~0x8000800080008000ull;
- b &= ~0x8000800080008000ull;
- return (a + b) ^ mask;
-}
-
-uint64_t HELPER(neon_addl_u32)(uint64_t a, uint64_t b)
-{
- uint64_t mask;
- mask = (a ^ b) & 0x8000000080000000ull;
- a &= ~0x8000000080000000ull;
- b &= ~0x8000000080000000ull;
- return (a + b) ^ mask;
-}
-
/* Pairwise long add: add pairs of adjacent elements into
 * double-width elements in the result (eg _s8 is an 8x8->16 op)
 */
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_s16)(uint64_t a)
return (uint32_t)reslo | (((uint64_t)reshi) << 32);
}

-uint64_t HELPER(neon_subl_u16)(uint64_t a, uint64_t b)
-{
- uint64_t mask;
- mask = (a ^ ~b) & 0x8000800080008000ull;
- a |= 0x8000800080008000ull;
- b &= ~0x8000800080008000ull;
- return (a - b) ^ mask;
-}
-
-uint64_t HELPER(neon_subl_u32)(uint64_t a, uint64_t b)
-{
- uint64_t mask;
- mask = (a ^ ~b) & 0x8000000080000000ull;
- a |= 0x8000000080000000ull;
- b &= ~0x8000000080000000ull;
- return (a - b) ^ mask;
-}
-
uint64_t HELPER(neon_addl_saturate_s32)(CPUARMState *env, uint64_t a, uint64_t b)
{
uint32_t x, y;
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
NULL, NULL, \
}; \
static NeonGenTwo64OpFn * const addfn[] = { \
- gen_helper_neon_##OP##l_u16, \
- gen_helper_neon_##OP##l_u32, \
+ tcg_gen_vec_##OP##16_i64, \
+ tcg_gen_vec_##OP##32_i64, \
tcg_gen_##OP##_i64, \
NULL, \
}; \
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
{ \
static NeonGenTwo64OpFn * const addfn[] = { \
- gen_helper_neon_##OP##l_u16, \
- gen_helper_neon_##OP##l_u32, \
+ tcg_gen_vec_##OP##16_i64, \
+ tcg_gen_vec_##OP##32_i64, \
tcg_gen_##OP##_i64, \
NULL, \
}; \
@@ -XXX,XX +XXX,XX @@ static bool trans_VABAL_S_3d(DisasContext *s, arg_3diff *a)
NULL,
};
static NeonGenTwo64OpFn * const addfn[] = {
- gen_helper_neon_addl_u16,
- gen_helper_neon_addl_u32,
+ tcg_gen_vec_add16_i64,
+ tcg_gen_vec_add32_i64,
tcg_gen_add_i64,
NULL,
};
@@ -XXX,XX +XXX,XX @@ static bool trans_VABAL_U_3d(DisasContext *s, arg_3diff *a)
NULL,
};
static NeonGenTwo64OpFn * const addfn[] = {
- gen_helper_neon_addl_u16,
- gen_helper_neon_addl_u32,
+ tcg_gen_vec_add16_i64,
+ tcg_gen_vec_add32_i64,
tcg_gen_add_i64,
NULL,
};
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_U_3d(DisasContext *s, arg_3diff *a)
NULL, \
}; \
static NeonGenTwo64OpFn * const accfn[] = { \
- gen_helper_neon_##ACC##l_u16, \
- gen_helper_neon_##ACC##l_u32, \
+ tcg_gen_vec_##ACC##16_i64, \
+ tcg_gen_vec_##ACC##32_i64, \
tcg_gen_##ACC##_i64, \
NULL, \
}; \
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_U_2sc(DisasContext *s, arg_2scalar *a)
}; \
static NeonGenTwo64OpFn * const accfn[] = { \
NULL, \
- gen_helper_neon_##ACC##l_u32, \
+ tcg_gen_vec_##ACC##32_i64, \
tcg_gen_##ACC##_i64, \
NULL, \
}; \
--
2.34.1

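A short aside on the conversion above, as an illustration only (this sketch is not part of the series): the generic tcg_gen_vec_add16_i64()/tcg_gen_vec_sub16_i64() expanders emit inline TCG ops with the same lane-wise semantics that the removed neon_addl_u16()/neon_subl_u16() helpers computed with the sign-bit masking trick, that is, four independent 16-bit additions packed in one 64-bit value:

#include <stdint.h>

/* Illustrative sketch, not QEMU code: lane-wise add of four 16-bit
 * elements held in a 64-bit value; no carry crosses a lane boundary.
 */
static uint64_t add16x4(uint64_t a, uint64_t b)
{
    uint64_t r = 0;
    for (int i = 0; i < 64; i += 16) {
        r |= (((a >> i) + (b >> i)) & 0xffffull) << i;
    }
    return r;
}

Using the generic expanders avoids an out-of-line helper call per operation; the 32-bit variants work the same way with two lanes.
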
From: Richard Henderson <richard.henderson@linaro.org>

In a couple of places, clearing the entire vector before storing one
element is the easiest solution. Wrap that into a helper function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-49-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
return v;
}

-/* Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
+static void clear_vec(DisasContext *s, int rd)
+{
+ unsigned ofs = fp_reg_offset(s, rd, MO_64);
+ unsigned vsz = vec_full_reg_size(s);
+
+ tcg_gen_gvec_dup_imm(MO_64, ofs, vsz, vsz, 0);
+}
+
+/*
+ * Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
 * If SVE is not enabled, then there are only 128 bits in the vector.
 */
static void clear_vec_high(DisasContext *s, bool is_q, int rd)
@@ -XXX,XX +XXX,XX @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
TCGv_i32 tcg_op2 = tcg_temp_new_i32();
TCGv_i32 tcg_op3 = tcg_temp_new_i32();
TCGv_i32 tcg_res = tcg_temp_new_i32();
- unsigned vsz, dofs;

read_vec_element_i32(s, tcg_op1, a->rn, 3, MO_32);
read_vec_element_i32(s, tcg_op2, a->rm, 3, MO_32);
@@ -XXX,XX +XXX,XX @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
tcg_gen_rotri_i32(tcg_res, tcg_res, 25);

/* Clear the whole register first, then store bits [127:96]. */
- vsz = vec_full_reg_size(s);
- dofs = vec_full_reg_offset(s, a->rd);
- tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
+ clear_vec(s, a->rd);
write_vec_element_i32(s, tcg_res, a->rd, 3, MO_32);
}
return true;
@@ -XXX,XX +XXX,XX @@ static bool do_scalar_muladd_widening_idx(DisasContext *s, arg_rrx_e *a,
TCGv_i64 t0 = tcg_temp_new_i64();
TCGv_i64 t1 = tcg_temp_new_i64();
TCGv_i64 t2 = tcg_temp_new_i64();
- unsigned vsz, dofs;

if (acc) {
read_vec_element(s, t0, a->rd, 0, a->esz + 1);
@@ -XXX,XX +XXX,XX @@ static bool do_scalar_muladd_widening_idx(DisasContext *s, arg_rrx_e *a,
fn(t0, t1, t2);

/* Clear the whole register first, then store scalar. */
- vsz = vec_full_reg_size(s);
- dofs = vec_full_reg_offset(s, a->rd);
- tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
+ clear_vec(s, a->rd);
write_vec_element(s, t0, a->rd, 0, a->esz + 1);
}
return true;
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-50-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 9 ++
9
target/arm/tcg/translate-a64.c | 153 ++++++++++++++++++++-------------
10
2 files changed, 102 insertions(+), 60 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ CMEQ0_s 0101 1110 111 00000 10011 0 ..... ..... @rr
17
CMLE0_s 0111 1110 111 00000 10011 0 ..... ..... @rr
18
CMLT0_s 0101 1110 111 00000 10101 0 ..... ..... @rr
19
20
+SQXTUN_s 0111 1110 ..1 00001 00101 0 ..... ..... @rr_e
21
+SQXTN_s 0101 1110 ..1 00001 01001 0 ..... ..... @rr_e
22
+UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
23
+
24
# Advanced SIMD two-register miscellaneous
25
26
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
27
@@ -XXX,XX +XXX,XX @@ SADDLP_v 0.00 1110 ..1 00000 00101 0 ..... ..... @qrr_e
28
UADDLP_v 0.10 1110 ..1 00000 00101 0 ..... ..... @qrr_e
29
SADALP_v 0.00 1110 ..1 00000 01101 0 ..... ..... @qrr_e
30
UADALP_v 0.10 1110 ..1 00000 01101 0 ..... ..... @qrr_e
31
+
32
+XTN 0.00 1110 ..1 00001 00101 0 ..... ..... @qrr_e
33
+SQXTUN_v 0.10 1110 ..1 00001 00101 0 ..... ..... @qrr_e
34
+SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
35
+UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
36
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/tcg/translate-a64.c
39
+++ b/target/arm/tcg/translate-a64.c
40
@@ -XXX,XX +XXX,XX @@ TRANS(CMLE0_s, do_cmop0_d, a, TCG_COND_LE)
41
TRANS(CMLT0_s, do_cmop0_d, a, TCG_COND_LT)
42
TRANS(CMEQ0_s, do_cmop0_d, a, TCG_COND_EQ)
43
44
+static bool do_2misc_narrow_scalar(DisasContext *s, arg_rr_e *a,
45
+ ArithOneOp * const fn[3])
46
+{
47
+ if (a->esz == MO_64) {
48
+ return false;
49
+ }
50
+ if (fp_access_check(s)) {
51
+ TCGv_i64 t = tcg_temp_new_i64();
52
+
53
+ read_vec_element(s, t, a->rn, 0, a->esz + 1);
54
+ fn[a->esz](t, t);
55
+ clear_vec(s, a->rd);
56
+ write_vec_element(s, t, a->rd, 0, a->esz);
57
+ }
58
+ return true;
59
+}
60
+
61
+#define WRAP_ENV(NAME) \
62
+ static void gen_##NAME(TCGv_i64 d, TCGv_i64 n) \
63
+ { gen_helper_##NAME(d, tcg_env, n); }
64
+
65
+WRAP_ENV(neon_unarrow_sat8)
66
+WRAP_ENV(neon_unarrow_sat16)
67
+WRAP_ENV(neon_unarrow_sat32)
68
+
69
+static ArithOneOp * const f_scalar_sqxtun[] = {
70
+ gen_neon_unarrow_sat8,
71
+ gen_neon_unarrow_sat16,
72
+ gen_neon_unarrow_sat32,
73
+};
74
+TRANS(SQXTUN_s, do_2misc_narrow_scalar, a, f_scalar_sqxtun)
75
+
76
+WRAP_ENV(neon_narrow_sat_s8)
77
+WRAP_ENV(neon_narrow_sat_s16)
78
+WRAP_ENV(neon_narrow_sat_s32)
79
+
80
+static ArithOneOp * const f_scalar_sqxtn[] = {
81
+ gen_neon_narrow_sat_s8,
82
+ gen_neon_narrow_sat_s16,
83
+ gen_neon_narrow_sat_s32,
84
+};
85
+TRANS(SQXTN_s, do_2misc_narrow_scalar, a, f_scalar_sqxtn)
86
+
87
+WRAP_ENV(neon_narrow_sat_u8)
88
+WRAP_ENV(neon_narrow_sat_u16)
89
+WRAP_ENV(neon_narrow_sat_u32)
90
+
91
+static ArithOneOp * const f_scalar_uqxtn[] = {
92
+ gen_neon_narrow_sat_u8,
93
+ gen_neon_narrow_sat_u16,
94
+ gen_neon_narrow_sat_u32,
95
+};
96
+TRANS(UQXTN_s, do_2misc_narrow_scalar, a, f_scalar_uqxtn)
97
+
98
+#undef WRAP_ENV
99
+
100
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
101
{
102
if (!a->q && a->esz == MO_64) {
103
@@ -XXX,XX +XXX,XX @@ TRANS(UADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_uaddlp)
104
TRANS(SADALP_v, do_gvec_fn2_bhs, a, gen_gvec_sadalp)
105
TRANS(UADALP_v, do_gvec_fn2_bhs, a, gen_gvec_uadalp)
106
107
+static bool do_2misc_narrow_vector(DisasContext *s, arg_qrr_e *a,
108
+ ArithOneOp * const fn[3])
109
+{
110
+ if (a->esz == MO_64) {
111
+ return false;
112
+ }
113
+ if (fp_access_check(s)) {
114
+ TCGv_i64 t0 = tcg_temp_new_i64();
115
+ TCGv_i64 t1 = tcg_temp_new_i64();
116
+
117
+ read_vec_element(s, t0, a->rn, 0, MO_64);
118
+ read_vec_element(s, t1, a->rn, 1, MO_64);
119
+ fn[a->esz](t0, t0);
120
+ fn[a->esz](t1, t1);
121
+ write_vec_element(s, t0, a->rd, a->q ? 2 : 0, MO_32);
122
+ write_vec_element(s, t1, a->rd, a->q ? 3 : 1, MO_32);
123
+ clear_vec_high(s, a->q, a->rd);
124
+ }
125
+ return true;
126
+}
127
+
128
+static ArithOneOp * const f_scalar_xtn[] = {
129
+ gen_helper_neon_narrow_u8,
130
+ gen_helper_neon_narrow_u16,
131
+ tcg_gen_ext32u_i64,
132
+};
133
+TRANS(XTN, do_2misc_narrow_vector, a, f_scalar_xtn)
134
+TRANS(SQXTUN_v, do_2misc_narrow_vector, a, f_scalar_sqxtun)
135
+TRANS(SQXTN_v, do_2misc_narrow_vector, a, f_scalar_sqxtn)
136
+TRANS(UQXTN_v, do_2misc_narrow_vector, a, f_scalar_uqxtn)
137
+
138
/* Common vector code for handling integer to FP conversion */
139
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
140
int elements, int is_signed,
141
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
142
tcg_res[pass] = tcg_temp_new_i64();
143
144
switch (opcode) {
145
- case 0x12: /* XTN, SQXTUN */
146
- {
147
- static NeonGenOne64OpFn * const xtnfns[3] = {
148
- gen_helper_neon_narrow_u8,
149
- gen_helper_neon_narrow_u16,
150
- tcg_gen_ext32u_i64,
151
- };
152
- static NeonGenOne64OpEnvFn * const sqxtunfns[3] = {
153
- gen_helper_neon_unarrow_sat8,
154
- gen_helper_neon_unarrow_sat16,
155
- gen_helper_neon_unarrow_sat32,
156
- };
157
- if (u) {
158
- genenvfn = sqxtunfns[size];
159
- } else {
160
- genfn = xtnfns[size];
161
- }
162
- break;
163
- }
164
- case 0x14: /* SQXTN, UQXTN */
165
- {
166
- static NeonGenOne64OpEnvFn * const fns[3][2] = {
167
- { gen_helper_neon_narrow_sat_s8,
168
- gen_helper_neon_narrow_sat_u8 },
169
- { gen_helper_neon_narrow_sat_s16,
170
- gen_helper_neon_narrow_sat_u16 },
171
- { gen_helper_neon_narrow_sat_s32,
172
- gen_helper_neon_narrow_sat_u32 },
173
- };
174
- genenvfn = fns[size][u];
175
- break;
176
- }
177
case 0x16: /* FCVTN, FCVTN2 */
178
/* 32 bit to 16 bit or 64 bit to 32 bit float conversion */
179
if (size == 2) {
180
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
181
}
182
break;
183
default:
184
+ case 0x12: /* XTN, SQXTUN */
185
+ case 0x14: /* SQXTN, UQXTN */
186
g_assert_not_reached();
187
}
188
189
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
190
TCGv_ptr tcg_fpstatus;
191
192
switch (opcode) {
193
- case 0x12: /* SQXTUN */
194
- if (!u) {
195
- unallocated_encoding(s);
196
- return;
197
- }
198
- /* fall through */
199
- case 0x14: /* SQXTN, UQXTN */
200
- if (size == 3) {
201
- unallocated_encoding(s);
202
- return;
203
- }
204
- if (!fp_access_check(s)) {
205
- return;
206
- }
207
- handle_2misc_narrow(s, true, opcode, u, false, size, rn, rd);
208
- return;
209
case 0xc ... 0xf:
210
case 0x16 ... 0x1d:
211
case 0x1f:
212
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
213
case 0x9: /* CMEQ, CMLE */
214
case 0xa: /* CMLT */
215
case 0xb: /* ABS, NEG */
216
+ case 0x12: /* SQXTUN */
217
+ case 0x14: /* SQXTN, UQXTN */
218
unallocated_encoding(s);
219
return;
220
}
221
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
222
TCGv_ptr tcg_fpstatus;
223
224
switch (opcode) {
225
- case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
226
- case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
227
- if (size == 3) {
228
- unallocated_encoding(s);
229
- return;
230
- }
231
- if (!fp_access_check(s)) {
232
- return;
233
- }
234
-
235
- handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
236
- return;
237
case 0x13: /* SHLL, SHLL2 */
238
if (u == 0 || size == 3) {
239
unallocated_encoding(s);
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
241
case 0x9: /* CMEQ, CMLE */
242
case 0xa: /* CMLT */
243
case 0xb: /* ABS, NEG */
244
+ case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
245
+ case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
246
unallocated_encoding(s);
247
return;
248
}
249
--
250
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-51-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 5 ++
9
target/arm/tcg/translate-a64.c | 89 ++++++++++++++++++----------------
10
2 files changed, 52 insertions(+), 42 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
18
%rd 0:5
19
%esz_sd 22:1 !function=plus_2
20
+%esz_hs 22:1 !function=plus_1
21
%esz_hsd 22:2 !function=xor_2
22
%hl 11:1 21:1
23
%hlm 11:1 20:2
24
@@ -XXX,XX +XXX,XX @@
25
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
26
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
27
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
28
+@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
29
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
30
31
@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
32
@@ -XXX,XX +XXX,XX @@ XTN 0.00 1110 ..1 00001 00101 0 ..... ..... @qrr_e
33
SQXTUN_v 0.10 1110 ..1 00001 00101 0 ..... ..... @qrr_e
34
SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
35
UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
36
+
37
+FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
38
+BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ TRANS(SQXTUN_v, do_2misc_narrow_vector, a, f_scalar_sqxtun)
44
TRANS(SQXTN_v, do_2misc_narrow_vector, a, f_scalar_sqxtn)
45
TRANS(UQXTN_v, do_2misc_narrow_vector, a, f_scalar_uqxtn)
46
47
+static void gen_fcvtn_hs(TCGv_i64 d, TCGv_i64 n)
48
+{
49
+ TCGv_i32 tcg_lo = tcg_temp_new_i32();
50
+ TCGv_i32 tcg_hi = tcg_temp_new_i32();
51
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
52
+ TCGv_i32 ahp = get_ahp_flag();
53
+
54
+ tcg_gen_extr_i64_i32(tcg_lo, tcg_hi, n);
55
+ gen_helper_vfp_fcvt_f32_to_f16(tcg_lo, tcg_lo, fpst, ahp);
56
+ gen_helper_vfp_fcvt_f32_to_f16(tcg_hi, tcg_hi, fpst, ahp);
57
+ tcg_gen_deposit_i32(tcg_lo, tcg_lo, tcg_hi, 16, 16);
58
+ tcg_gen_extu_i32_i64(d, tcg_lo);
59
+}
60
+
61
+static void gen_fcvtn_sd(TCGv_i64 d, TCGv_i64 n)
62
+{
63
+ TCGv_i32 tmp = tcg_temp_new_i32();
64
+ gen_helper_vfp_fcvtsd(tmp, n, tcg_env);
65
+ tcg_gen_extu_i32_i64(d, tmp);
66
+}
67
+
68
+static ArithOneOp * const f_vector_fcvtn[] = {
69
+ NULL,
70
+ gen_fcvtn_hs,
71
+ gen_fcvtn_sd,
72
+};
73
+TRANS(FCVTN_v, do_2misc_narrow_vector, a, f_vector_fcvtn)
74
+
75
+static void gen_bfcvtn_hs(TCGv_i64 d, TCGv_i64 n)
76
+{
77
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
78
+ TCGv_i32 tmp = tcg_temp_new_i32();
79
+ gen_helper_bfcvt_pair(tmp, n, fpst);
80
+ tcg_gen_extu_i32_i64(d, tmp);
81
+}
82
+
83
+static ArithOneOp * const f_vector_bfcvtn[] = {
84
+ NULL,
85
+ gen_bfcvtn_hs,
86
+ NULL,
87
+};
88
+TRANS_FEAT(BFCVTN_v, aa64_bf16, do_2misc_narrow_vector, a, f_vector_bfcvtn)
89
+
90
/* Common vector code for handling integer to FP conversion */
91
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
92
int elements, int is_signed,
93
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
94
tcg_res[pass] = tcg_temp_new_i64();
95
96
switch (opcode) {
97
- case 0x16: /* FCVTN, FCVTN2 */
98
- /* 32 bit to 16 bit or 64 bit to 32 bit float conversion */
99
- if (size == 2) {
100
- TCGv_i32 tmp = tcg_temp_new_i32();
101
- gen_helper_vfp_fcvtsd(tmp, tcg_op, tcg_env);
102
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
103
- } else {
104
- TCGv_i32 tcg_lo = tcg_temp_new_i32();
105
- TCGv_i32 tcg_hi = tcg_temp_new_i32();
106
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
107
- TCGv_i32 ahp = get_ahp_flag();
108
-
109
- tcg_gen_extr_i64_i32(tcg_lo, tcg_hi, tcg_op);
110
- gen_helper_vfp_fcvt_f32_to_f16(tcg_lo, tcg_lo, fpst, ahp);
111
- gen_helper_vfp_fcvt_f32_to_f16(tcg_hi, tcg_hi, fpst, ahp);
112
- tcg_gen_deposit_i32(tcg_lo, tcg_lo, tcg_hi, 16, 16);
113
- tcg_gen_extu_i32_i64(tcg_res[pass], tcg_lo);
114
- }
115
- break;
116
- case 0x36: /* BFCVTN, BFCVTN2 */
117
- {
118
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
119
- TCGv_i32 tmp = tcg_temp_new_i32();
120
- gen_helper_bfcvt_pair(tmp, tcg_op, fpst);
121
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
122
- }
123
- break;
124
case 0x56: /* FCVTXN, FCVTXN2 */
125
{
126
/*
127
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
128
default:
129
case 0x12: /* XTN, SQXTUN */
130
case 0x14: /* SQXTN, UQXTN */
131
+ case 0x16: /* FCVTN, FCVTN2 */
132
+ case 0x36: /* BFCVTN, BFCVTN2 */
133
g_assert_not_reached();
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
137
unallocated_encoding(s);
138
return;
139
}
140
- /* fall through */
141
- case 0x16: /* FCVTN, FCVTN2 */
142
- /* handle_2misc_narrow does a 2*size -> size operation, but these
143
- * instructions encode the source size rather than dest size.
144
- */
145
- if (!fp_access_check(s)) {
146
- return;
147
- }
148
- handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
149
- return;
150
- case 0x36: /* BFCVTN, BFCVTN2 */
151
- if (!dc_isar_feature(aa64_bf16, s) || size != 2) {
152
- unallocated_encoding(s);
153
- return;
154
- }
155
if (!fp_access_check(s)) {
156
return;
157
}
158
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
159
}
160
break;
161
default:
162
+ case 0x16: /* FCVTN, FCVTN2 */
163
+ case 0x36: /* BFCVTN, BFCVTN2 */
164
unallocated_encoding(s);
165
return;
166
}
167
--
168
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_2misc_narrow as this was the last insn decoded
4
by that function.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-52-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 4 ++
12
target/arm/tcg/translate-a64.c | 101 +++++++--------------------------
13
2 files changed, 24 insertions(+), 81 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@
20
21
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
22
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
23
+@qrr_s . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=2
24
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
25
@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
26
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
27
@@ -XXX,XX +XXX,XX @@ SQXTUN_s 0111 1110 ..1 00001 00101 0 ..... ..... @rr_e
28
SQXTN_s 0101 1110 ..1 00001 01001 0 ..... ..... @rr_e
29
UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
30
31
+FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
32
+
33
# Advanced SIMD two-register miscellaneous
34
35
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
36
@@ -XXX,XX +XXX,XX @@ SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
37
UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
38
39
FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
40
+FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
41
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
42
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-a64.c
45
+++ b/target/arm/tcg/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_scalar_uqxtn[] = {
47
};
48
TRANS(UQXTN_s, do_2misc_narrow_scalar, a, f_scalar_uqxtn)
49
50
+static void gen_fcvtxn_sd(TCGv_i64 d, TCGv_i64 n)
51
+{
52
+ /*
53
+ * 64 bit to 32 bit float conversion
54
+ * with von Neumann rounding (round to odd)
55
+ */
56
+ TCGv_i32 tmp = tcg_temp_new_i32();
57
+ gen_helper_fcvtx_f64_to_f32(tmp, n, tcg_env);
58
+ tcg_gen_extu_i32_i64(d, tmp);
59
+}
60
+
61
+static ArithOneOp * const f_scalar_fcvtxn[] = {
62
+ NULL,
63
+ NULL,
64
+ gen_fcvtxn_sd,
65
+};
66
+TRANS(FCVTXN_s, do_2misc_narrow_scalar, a, f_scalar_fcvtxn)
67
+
68
#undef WRAP_ENV
69
70
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
71
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_vector_fcvtn[] = {
72
gen_fcvtn_sd,
73
};
74
TRANS(FCVTN_v, do_2misc_narrow_vector, a, f_vector_fcvtn)
75
+TRANS(FCVTXN_v, do_2misc_narrow_vector, a, f_scalar_fcvtxn)
76
77
static void gen_bfcvtn_hs(TCGv_i64 d, TCGv_i64 n)
78
{
79
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
80
}
81
}
82
83
-static void handle_2misc_narrow(DisasContext *s, bool scalar,
84
- int opcode, bool u, bool is_q,
85
- int size, int rn, int rd)
86
-{
87
- /* Handle 2-reg-misc ops which are narrowing (so each 2*size element
88
- * in the source becomes a size element in the destination).
89
- */
90
- int pass;
91
- TCGv_i64 tcg_res[2];
92
- int destelt = is_q ? 2 : 0;
93
- int passes = scalar ? 1 : 2;
94
-
95
- if (scalar) {
96
- tcg_res[1] = tcg_constant_i64(0);
97
- }
98
-
99
- for (pass = 0; pass < passes; pass++) {
100
- TCGv_i64 tcg_op = tcg_temp_new_i64();
101
- NeonGenOne64OpFn *genfn = NULL;
102
- NeonGenOne64OpEnvFn *genenvfn = NULL;
103
-
104
- if (scalar) {
105
- read_vec_element(s, tcg_op, rn, pass, size + 1);
106
- } else {
107
- read_vec_element(s, tcg_op, rn, pass, MO_64);
108
- }
109
- tcg_res[pass] = tcg_temp_new_i64();
110
-
111
- switch (opcode) {
112
- case 0x56: /* FCVTXN, FCVTXN2 */
113
- {
114
- /*
115
- * 64 bit to 32 bit float conversion
116
- * with von Neumann rounding (round to odd)
117
- */
118
- TCGv_i32 tmp = tcg_temp_new_i32();
119
- assert(size == 2);
120
- gen_helper_fcvtx_f64_to_f32(tmp, tcg_op, tcg_env);
121
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
122
- }
123
- break;
124
- default:
125
- case 0x12: /* XTN, SQXTUN */
126
- case 0x14: /* SQXTN, UQXTN */
127
- case 0x16: /* FCVTN, FCVTN2 */
128
- case 0x36: /* BFCVTN, BFCVTN2 */
129
- g_assert_not_reached();
130
- }
131
-
132
- if (genfn) {
133
- genfn(tcg_res[pass], tcg_op);
134
- } else if (genenvfn) {
135
- genenvfn(tcg_res[pass], tcg_env, tcg_op);
136
- }
137
- }
138
-
139
- for (pass = 0; pass < 2; pass++) {
140
- write_vec_element(s, tcg_res[pass], rd, destelt + pass, MO_32);
141
- }
142
- clear_vec_high(s, is_q, rd);
143
-}
144
-
145
/* AdvSIMD scalar two reg misc
146
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
147
* +-----+---+-----------+------+-----------+--------+-----+------+------+
148
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
149
rmode = FPROUNDING_TIEAWAY;
150
break;
151
case 0x56: /* FCVTXN, FCVTXN2 */
152
- if (size == 2) {
153
- unallocated_encoding(s);
154
- return;
155
- }
156
- if (!fp_access_check(s)) {
157
- return;
158
- }
159
- handle_2misc_narrow(s, true, opcode, u, false, size - 1, rn, rd);
160
- return;
161
default:
162
unallocated_encoding(s);
163
return;
164
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
165
}
166
handle_2misc_reciprocal(s, opcode, false, u, is_q, size, rn, rd);
167
return;
168
- case 0x56: /* FCVTXN, FCVTXN2 */
169
- if (size == 2) {
170
- unallocated_encoding(s);
171
- return;
172
- }
173
- if (!fp_access_check(s)) {
174
- return;
175
- }
176
- handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
177
- return;
178
case 0x17: /* FCVTL, FCVTL2 */
179
if (!fp_access_check(s)) {
180
return;
181
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
182
default:
183
case 0x16: /* FCVTN, FCVTN2 */
184
case 0x36: /* BFCVTN, BFCVTN2 */
185
+ case 0x56: /* FCVTXN, FCVTXN2 */
186
unallocated_encoding(s);
187
return;
188
}
189
--
190
2.34.1
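One note on the "von Neumann rounding (round to odd)" mentioned in the FCVTXN code above, since the term is easy to miss: after truncation, the result's least significant bit is forced to 1 whenever any discarded bits were non-zero, so the discarded information survives as a sticky bit and a later rounding step cannot silently double-round. A minimal integer sketch of the idea (illustrative only, not the floating-point helper itself):

#include <stdint.h>

/* Illustrative sketch: shift right with round-to-odd. The dropped
 * bits are ORed into the result LSB as a sticky bit.
 */
static uint32_t shr_round_to_odd(uint64_t x, unsigned sh)
{
    uint32_t r = (uint32_t)(x >> sh);
    if (sh && (x & ((1ull << sh) - 1))) {
        r |= 1;
    }
    return r;
}
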
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-53-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 2 +
9
target/arm/tcg/translate-a64.c | 75 +++++++++++++++++-----------------
10
2 files changed, 40 insertions(+), 37 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
17
FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
18
FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
19
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
20
+
21
+SHLL_v 0.10 1110 ..1 00001 00111 0 ..... ..... @qrr_e
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/tcg/translate-a64.c
25
+++ b/target/arm/tcg/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_vector_bfcvtn[] = {
27
};
28
TRANS_FEAT(BFCVTN_v, aa64_bf16, do_2misc_narrow_vector, a, f_vector_bfcvtn)
29
30
+static bool trans_SHLL_v(DisasContext *s, arg_qrr_e *a)
31
+{
32
+ static NeonGenWidenFn * const widenfns[3] = {
33
+ gen_helper_neon_widen_u8,
34
+ gen_helper_neon_widen_u16,
35
+ tcg_gen_extu_i32_i64,
36
+ };
37
+ NeonGenWidenFn *widenfn;
38
+ TCGv_i64 tcg_res[2];
39
+ TCGv_i32 tcg_op;
40
+ int part, pass;
41
+
42
+ if (a->esz == MO_64) {
43
+ return false;
44
+ }
45
+ if (!fp_access_check(s)) {
46
+ return true;
47
+ }
48
+
49
+ tcg_op = tcg_temp_new_i32();
50
+ widenfn = widenfns[a->esz];
51
+ part = a->q ? 2 : 0;
52
+
53
+ for (pass = 0; pass < 2; pass++) {
54
+ read_vec_element_i32(s, tcg_op, a->rn, part + pass, MO_32);
55
+ tcg_res[pass] = tcg_temp_new_i64();
56
+ widenfn(tcg_res[pass], tcg_op);
57
+ tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << a->esz);
58
+ }
59
+
60
+ for (pass = 0; pass < 2; pass++) {
61
+ write_vec_element(s, tcg_res[pass], a->rd, pass, MO_64);
62
+ }
63
+ return true;
64
+}
65
+
66
+
67
/* Common vector code for handling integer to FP conversion */
68
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
69
int elements, int is_signed,
70
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
71
}
72
}
73
74
-static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
75
-{
76
- /* Implement SHLL and SHLL2 */
77
- int pass;
78
- int part = is_q ? 2 : 0;
79
- TCGv_i64 tcg_res[2];
80
-
81
- for (pass = 0; pass < 2; pass++) {
82
- static NeonGenWidenFn * const widenfns[3] = {
83
- gen_helper_neon_widen_u8,
84
- gen_helper_neon_widen_u16,
85
- tcg_gen_extu_i32_i64,
86
- };
87
- NeonGenWidenFn *widenfn = widenfns[size];
88
- TCGv_i32 tcg_op = tcg_temp_new_i32();
89
-
90
- read_vec_element_i32(s, tcg_op, rn, part + pass, MO_32);
91
- tcg_res[pass] = tcg_temp_new_i64();
92
- widenfn(tcg_res[pass], tcg_op);
93
- tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << size);
94
- }
95
-
96
- for (pass = 0; pass < 2; pass++) {
97
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
98
- }
99
-}
100
-
101
/* AdvSIMD two reg misc
102
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
103
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
105
TCGv_ptr tcg_fpstatus;
106
107
switch (opcode) {
108
- case 0x13: /* SHLL, SHLL2 */
109
- if (u == 0 || size == 3) {
110
- unallocated_encoding(s);
111
- return;
112
- }
113
- if (!fp_access_check(s)) {
114
- return;
115
- }
116
- handle_shll(s, is_q, size, rn, rd);
117
- return;
118
case 0xc ... 0xf:
119
case 0x16 ... 0x1f:
120
{
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
122
case 0xa: /* CMLT */
123
case 0xb: /* ABS, NEG */
124
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
125
+ case 0x13: /* SHLL, SHLL2 */
126
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
127
unallocated_encoding(s);
128
return;
129
--
130
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Move the current implementation out of translate-neon.c,
and extend to handle all element sizes.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-54-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate.h | 6 ++++++
target/arm/tcg/gengvec.c | 14 ++++++++++++++
target/arm/tcg/translate-neon.c | 20 ++------------------
3 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t opr_sz, uint32_t max_sz);

+/* These exclusively manipulate the sign bit. */
+void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
+ uint32_t oprsz, uint32_t maxsz);
+void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
+ uint32_t oprsz, uint32_t maxsz);
+
/*
 * Forward to the isar_feature_* tests given a DisasContext pointer.
 */
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
assert(vece <= MO_32);
tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
}
+
+void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
+ uint32_t oprsz, uint32_t maxsz)
+{
+ uint64_t s_bit = 1ull << ((8 << vece) - 1);
+ tcg_gen_gvec_andi(vece, dofs, aofs, s_bit - 1, oprsz, maxsz);
+}
+
+void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
+ uint32_t oprsz, uint32_t maxsz)
+{
+ uint64_t s_bit = 1ull << ((8 << vece) - 1);
+ tcg_gen_gvec_xori(vece, dofs, aofs, s_bit, oprsz, maxsz);
+}
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
return true;
}

-static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- uint32_t oprsz, uint32_t maxsz)
-{
- tcg_gen_gvec_andi(vece, rd_ofs, rm_ofs,
- vece == MO_16 ? 0x7fff : 0x7fffffff,
- oprsz, maxsz);
-}
-
static bool trans_VABS_F(DisasContext *s, arg_2misc *a)
{
if (a->size == MO_16) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VABS_F(DisasContext *s, arg_2misc *a)
} else if (a->size != MO_32) {
return false;
}
- return do_2misc_vec(s, a, gen_VABS_F);
-}
-
-static void gen_VNEG_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- uint32_t oprsz, uint32_t maxsz)
-{
- tcg_gen_gvec_xori(vece, rd_ofs, rm_ofs,
- vece == MO_16 ? 0x8000 : 0x80000000,
- oprsz, maxsz);
+ return do_2misc_vec(s, a, gen_gvec_fabs);
}

static bool trans_VNEG_F(DisasContext *s, arg_2misc *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_F(DisasContext *s, arg_2misc *a)
} else if (a->size != MO_32) {
return false;
}
- return do_2misc_vec(s, a, gen_VNEG_F);
+ return do_2misc_vec(s, a, gen_gvec_fneg);
}

static bool trans_VRECPE(DisasContext *s, arg_2misc *a)
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-55-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 7 +++++
target/arm/tcg/translate-a64.c | 54 +++++++++++++++-------------------
2 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
@qrr_s . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=2
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
+@qrr_sd . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_sd
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e

@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
@@ -XXX,XX +XXX,XX @@ FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h

SHLL_v 0.10 1110 ..1 00001 00111 0 ..... ..... @qrr_e
+
+FABS_v 0.00 1110 111 11000 11111 0 ..... ..... @qrr_h
+FABS_v 0.00 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
+
+FNEG_v 0.10 1110 111 11000 11111 0 ..... ..... @qrr_h
+FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SHLL_v(DisasContext *s, arg_qrr_e *a)
return true;
}

+static bool do_fabs_fneg_v(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
+{
+ int check = fp_access_check_vector_hsd(s, a->q, a->esz);
+
+ if (check <= 0) {
+ return check == 0;
+ }
+
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
+ return true;
+}
+
+TRANS(FABS_v, do_fabs_fneg_v, a, gen_gvec_fabs)
+TRANS(FNEG_v, do_fabs_fneg_v, a, gen_gvec_fneg)

/* Common vector code for handling integer to FP conversion */
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
int elements, int is_signed,
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
* requires them.
*/
switch (opcode) {
- case 0x2f: /* FABS */
- gen_vfp_absd(tcg_rd, tcg_rn);
- break;
- case 0x6f: /* FNEG */
- gen_vfp_negd(tcg_rd, tcg_rn);
- break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
break;
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
case 0x9: /* CMEQ, CMLE */
case 0xa: /* CMLT */
case 0xb: /* ABS, NEG */
+ case 0x2f: /* FABS */
+ case 0x6f: /* FNEG */
g_assert_not_reached();
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
size = is_double ? 3 : 2;
switch (opcode) {
- case 0x2f: /* FABS */
- case 0x6f: /* FNEG */
- if (size == 3 && !is_q) {
- unallocated_encoding(s);
- return;
- }
- break;
case 0x1d: /* SCVTF */
case 0x5d: /* UCVTF */
{
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
case 0x16: /* FCVTN, FCVTN2 */
case 0x36: /* BFCVTN, BFCVTN2 */
case 0x56: /* FCVTXN, FCVTXN2 */
+ case 0x2f: /* FABS */
+ case 0x6f: /* FNEG */
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
{
/* Special cases for 32 bit elements */
switch (opcode) {
- case 0x2f: /* FABS */
- gen_vfp_abss(tcg_res, tcg_op);
- break;
- case 0x6f: /* FNEG */
- gen_vfp_negs(tcg_res, tcg_op);
- break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
break;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
break;
default:
case 0x7: /* SQABS, SQNEG */
+ case 0x2f: /* FABS */
+ case 0x6f: /* FNEG */
g_assert_not_reached();
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
127
case 0x7b: /* FCVTZU */
128
rmode = FPROUNDING_ZERO;
129
break;
130
- case 0x2f: /* FABS */
131
- case 0x6f: /* FNEG */
132
- only_in_vector = true;
133
- need_fpst = false;
134
- break;
135
case 0x7d: /* FRSQRTE */
136
break;
137
case 0x7f: /* FSQRT (vector) */
138
only_in_vector = true;
139
break;
140
default:
141
+ case 0x2f: /* FABS */
142
+ case 0x6f: /* FNEG */
143
unallocated_encoding(s);
144
return;
145
}
146
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
147
case 0x59: /* FRINTX */
148
gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, tcg_fpstatus);
149
break;
150
- case 0x2f: /* FABS */
151
- tcg_gen_andi_i32(tcg_res, tcg_op, 0x7fff);
152
- break;
153
- case 0x6f: /* FNEG */
154
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
155
- break;
156
case 0x7d: /* FRSQRTE */
157
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
158
break;
159
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
160
gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
161
break;
162
default:
163
+ case 0x2f: /* FABS */
164
+ case 0x6f: /* FNEG */
165
g_assert_not_reached();
166
}
167
163
--
168
--
164
2.20.1
169
2.34.1
165
166
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-56-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 3 ++
9
target/arm/tcg/translate-a64.c | 69 ++++++++++++++++++++++++----------
10
2 files changed, 53 insertions(+), 19 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FABS_v 0.00 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
17
18
FNEG_v 0.10 1110 111 11000 11111 0 ..... ..... @qrr_h
19
FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
20
+
21
+FSQRT_v 0.10 1110 111 11001 11111 0 ..... ..... @qrr_h
22
+FSQRT_v 0.10 1110 1.1 00001 11111 0 ..... ..... @qrr_sd
23
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/tcg/translate-a64.c
26
+++ b/target/arm/tcg/translate-a64.c
27
@@ -XXX,XX +XXX,XX @@ static bool do_fabs_fneg_v(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
28
TRANS(FABS_v, do_fabs_fneg_v, a, gen_gvec_fabs)
29
TRANS(FNEG_v, do_fabs_fneg_v, a, gen_gvec_fneg)
30
31
+static bool do_fp1_vector(DisasContext *s, arg_qrr_e *a,
32
+ const FPScalar1 *f, int rmode)
33
+{
34
+ TCGv_i32 tcg_rmode = NULL;
35
+ TCGv_ptr fpst;
36
+ int check = fp_access_check_vector_hsd(s, a->q, a->esz);
37
+
38
+ if (check <= 0) {
39
+ return check == 0;
40
+ }
41
+
42
+ fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
43
+ if (rmode >= 0) {
44
+ tcg_rmode = gen_set_rmode(rmode, fpst);
45
+ }
46
+
47
+ if (a->esz == MO_64) {
48
+ TCGv_i64 t64 = tcg_temp_new_i64();
49
+
50
+ for (int pass = 0; pass < 2; ++pass) {
51
+ read_vec_element(s, t64, a->rn, pass, MO_64);
52
+ f->gen_d(t64, t64, fpst);
53
+ write_vec_element(s, t64, a->rd, pass, MO_64);
54
+ }
55
+ } else {
56
+ TCGv_i32 t32 = tcg_temp_new_i32();
57
+ void (*gen)(TCGv_i32, TCGv_i32, TCGv_ptr)
58
+ = (a->esz == MO_16 ? f->gen_h : f->gen_s);
59
+
60
+ for (int pass = 0, n = (a->q ? 16 : 8) >> a->esz; pass < n; ++pass) {
61
+ read_vec_element_i32(s, t32, a->rn, pass, a->esz);
62
+ gen(t32, t32, fpst);
63
+ write_vec_element_i32(s, t32, a->rd, pass, a->esz);
64
+ }
65
+ }
66
+ clear_vec_high(s, a->q, a->rd);
67
+
68
+ if (rmode >= 0) {
69
+ gen_restore_rmode(tcg_rmode, fpst);
70
+ }
71
+ return true;
72
+}
73
+
74
+TRANS(FSQRT_v, do_fp1_vector, a, &f_scalar_fsqrt, -1)
75
+
76
/* Common vector code for handling integer to FP conversion */
77
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
78
int elements, int is_signed,
79
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
80
* requires them.
81
*/
82
switch (opcode) {
83
- case 0x7f: /* FSQRT */
84
- gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
85
- break;
86
case 0x1a: /* FCVTNS */
87
case 0x1b: /* FCVTMS */
88
case 0x1c: /* FCVTAS */
89
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
90
case 0xb: /* ABS, NEG */
91
case 0x2f: /* FABS */
92
case 0x6f: /* FNEG */
93
+ case 0x7f: /* FSQRT */
94
g_assert_not_reached();
95
}
96
}
97
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
98
}
99
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
100
return;
101
- case 0x7f: /* FSQRT */
102
- need_fpstatus = true;
103
- if (size == 3 && !is_q) {
104
- unallocated_encoding(s);
105
- return;
106
- }
107
- break;
108
case 0x1a: /* FCVTNS */
109
case 0x1b: /* FCVTMS */
110
case 0x3a: /* FCVTPS */
111
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
112
case 0x56: /* FCVTXN, FCVTXN2 */
113
case 0x2f: /* FABS */
114
case 0x6f: /* FNEG */
115
+ case 0x7f: /* FSQRT */
116
unallocated_encoding(s);
117
return;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
120
{
121
/* Special cases for 32 bit elements */
122
switch (opcode) {
123
- case 0x7f: /* FSQRT */
124
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
125
- break;
126
case 0x1a: /* FCVTNS */
127
case 0x1b: /* FCVTMS */
128
case 0x1c: /* FCVTAS */
129
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
130
case 0x7: /* SQABS, SQNEG */
131
case 0x2f: /* FABS */
132
case 0x6f: /* FNEG */
133
+ case 0x7f: /* FSQRT */
134
g_assert_not_reached();
135
}
136
}
137
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
138
break;
139
case 0x7d: /* FRSQRTE */
140
break;
141
- case 0x7f: /* FSQRT (vector) */
142
- only_in_vector = true;
143
- break;
144
default:
145
case 0x2f: /* FABS */
146
case 0x6f: /* FNEG */
147
+ case 0x7f: /* FSQRT (vector) */
148
unallocated_encoding(s);
149
return;
150
}
151
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
152
case 0x7d: /* FRSQRTE */
153
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
154
break;
155
- case 0x7f: /* FSQRT */
156
- gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
157
- break;
158
default:
159
case 0x2f: /* FABS */
160
case 0x6f: /* FNEG */
161
+ case 0x7f: /* FSQRT */
162
g_assert_not_reached();
163
}
164
165
--
166
2.34.1
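The decode lines added above use decodetree pattern syntax: the fixed opcode bits are written out, '.' marks an unconstrained bit picked up by the format, and the trailing dot groups are the register fields. As a rough standalone sketch (plain C for illustration only, not the decoder that decodetree generates; the field layout is the usual A64 one assumed by the @qrr_* formats):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: the fields a @qrr_sd-style pattern names. */
    static void show_qrr_fields(uint32_t insn)
    {
        unsigned rd = insn & 0x1f;         /* destination register, bits [4:0] */
        unsigned rn = (insn >> 5) & 0x1f;  /* source register, bits [9:5] */
        unsigned q  = (insn >> 30) & 1;    /* 64-bit vs 128-bit vector */
        unsigned sz = (insn >> 22) & 1;    /* single vs double elements */

        printf("rd=%u rn=%u q=%u sz=%u\n", rd, rn, q, sz);
    }

    int main(void)
    {
        show_qrr_fields(0x6ea1f820);   /* should correspond to FSQRT v0.4s, v1.4s */
        return 0;
    }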
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-57-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 26 +++++
9
target/arm/tcg/translate-a64.c | 176 ++++++++++++---------------------
10
2 files changed, 88 insertions(+), 114 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
17
18
FSQRT_v 0.10 1110 111 11001 11111 0 ..... ..... @qrr_h
19
FSQRT_v 0.10 1110 1.1 00001 11111 0 ..... ..... @qrr_sd
20
+
21
+FRINTN_v 0.00 1110 011 11001 10001 0 ..... ..... @qrr_h
22
+FRINTN_v 0.00 1110 0.1 00001 10001 0 ..... ..... @qrr_sd
23
+
24
+FRINTM_v 0.00 1110 011 11001 10011 0 ..... ..... @qrr_h
25
+FRINTM_v 0.00 1110 0.1 00001 10011 0 ..... ..... @qrr_sd
26
+
27
+FRINTP_v 0.00 1110 111 11001 10001 0 ..... ..... @qrr_h
28
+FRINTP_v 0.00 1110 1.1 00001 10001 0 ..... ..... @qrr_sd
29
+
30
+FRINTZ_v 0.00 1110 111 11001 10011 0 ..... ..... @qrr_h
31
+FRINTZ_v 0.00 1110 1.1 00001 10011 0 ..... ..... @qrr_sd
32
+
33
+FRINTA_v 0.10 1110 011 11001 10001 0 ..... ..... @qrr_h
34
+FRINTA_v 0.10 1110 0.1 00001 10001 0 ..... ..... @qrr_sd
35
+
36
+FRINTX_v 0.10 1110 011 11001 10011 0 ..... ..... @qrr_h
37
+FRINTX_v 0.10 1110 0.1 00001 10011 0 ..... ..... @qrr_sd
38
+
39
+FRINTI_v 0.10 1110 111 11001 10011 0 ..... ..... @qrr_h
40
+FRINTI_v 0.10 1110 1.1 00001 10011 0 ..... ..... @qrr_sd
41
+
42
+FRINT32Z_v 0.00 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
43
+FRINT32X_v 0.10 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
44
+FRINT64Z_v 0.00 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
45
+FRINT64X_v 0.10 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
46
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/tcg/translate-a64.c
49
+++ b/target/arm/tcg/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ static bool do_fp1_vector(DisasContext *s, arg_qrr_e *a,
51
52
TRANS(FSQRT_v, do_fp1_vector, a, &f_scalar_fsqrt, -1)
53
54
+TRANS(FRINTN_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_TIEEVEN)
55
+TRANS(FRINTP_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_POSINF)
56
+TRANS(FRINTM_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_NEGINF)
57
+TRANS(FRINTZ_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_ZERO)
58
+TRANS(FRINTA_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_TIEAWAY)
59
+TRANS(FRINTI_v, do_fp1_vector, a, &f_scalar_frint, -1)
60
+TRANS(FRINTX_v, do_fp1_vector, a, &f_scalar_frintx, -1)
61
+
62
+TRANS_FEAT(FRINT32Z_v, aa64_frint, do_fp1_vector, a,
63
+ &f_scalar_frint32, FPROUNDING_ZERO)
64
+TRANS_FEAT(FRINT32X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint32, -1)
65
+TRANS_FEAT(FRINT64Z_v, aa64_frint, do_fp1_vector, a,
66
+ &f_scalar_frint64, FPROUNDING_ZERO)
67
+TRANS_FEAT(FRINT64X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint64, -1)
68
+
69
/* Common vector code for handling integer to FP conversion */
70
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
71
int elements, int is_signed,
72
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
73
case 0x7b: /* FCVTZU */
74
gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
75
break;
76
- case 0x18: /* FRINTN */
77
- case 0x19: /* FRINTM */
78
- case 0x38: /* FRINTP */
79
- case 0x39: /* FRINTZ */
80
- case 0x58: /* FRINTA */
81
- case 0x79: /* FRINTI */
82
- gen_helper_rintd(tcg_rd, tcg_rn, tcg_fpstatus);
83
- break;
84
- case 0x59: /* FRINTX */
85
- gen_helper_rintd_exact(tcg_rd, tcg_rn, tcg_fpstatus);
86
- break;
87
- case 0x1e: /* FRINT32Z */
88
- case 0x5e: /* FRINT32X */
89
- gen_helper_frint32_d(tcg_rd, tcg_rn, tcg_fpstatus);
90
- break;
91
- case 0x1f: /* FRINT64Z */
92
- case 0x5f: /* FRINT64X */
93
- gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
94
- break;
95
default:
96
case 0x4: /* CLS, CLZ */
97
case 0x5: /* NOT */
98
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
99
case 0x2f: /* FABS */
100
case 0x6f: /* FNEG */
101
case 0x7f: /* FSQRT */
102
+ case 0x18: /* FRINTN */
103
+ case 0x19: /* FRINTM */
104
+ case 0x38: /* FRINTP */
105
+ case 0x39: /* FRINTZ */
106
+ case 0x58: /* FRINTA */
107
+ case 0x79: /* FRINTI */
108
+ case 0x59: /* FRINTX */
109
+ case 0x1e: /* FRINT32Z */
110
+ case 0x5e: /* FRINT32X */
111
+ case 0x1f: /* FRINT64Z */
112
+ case 0x5f: /* FRINT64X */
113
g_assert_not_reached();
114
}
115
}
116
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
117
}
118
handle_2misc_widening(s, opcode, is_q, size, rn, rd);
119
return;
120
- case 0x18: /* FRINTN */
121
- case 0x19: /* FRINTM */
122
- case 0x38: /* FRINTP */
123
- case 0x39: /* FRINTZ */
124
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
125
- /* fall through */
126
- case 0x59: /* FRINTX */
127
- case 0x79: /* FRINTI */
128
- need_fpstatus = true;
129
- if (size == 3 && !is_q) {
130
- unallocated_encoding(s);
131
- return;
132
- }
133
- break;
134
- case 0x58: /* FRINTA */
135
- rmode = FPROUNDING_TIEAWAY;
136
- need_fpstatus = true;
137
- if (size == 3 && !is_q) {
138
- unallocated_encoding(s);
139
- return;
140
- }
141
- break;
142
case 0x7c: /* URSQRTE */
143
if (size == 3) {
144
unallocated_encoding(s);
145
return;
146
}
147
break;
148
- case 0x1e: /* FRINT32Z */
149
- case 0x1f: /* FRINT64Z */
150
- rmode = FPROUNDING_ZERO;
151
- /* fall through */
152
- case 0x5e: /* FRINT32X */
153
- case 0x5f: /* FRINT64X */
154
- need_fpstatus = true;
155
- if ((size == 3 && !is_q) || !dc_isar_feature(aa64_frint, s)) {
156
- unallocated_encoding(s);
157
- return;
158
- }
159
- break;
160
default:
161
case 0x16: /* FCVTN, FCVTN2 */
162
case 0x36: /* BFCVTN, BFCVTN2 */
163
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
164
case 0x2f: /* FABS */
165
case 0x6f: /* FNEG */
166
case 0x7f: /* FSQRT */
167
+ case 0x18: /* FRINTN */
168
+ case 0x19: /* FRINTM */
169
+ case 0x38: /* FRINTP */
170
+ case 0x39: /* FRINTZ */
171
+ case 0x59: /* FRINTX */
172
+ case 0x79: /* FRINTI */
173
+ case 0x58: /* FRINTA */
174
+ case 0x1e: /* FRINT32Z */
175
+ case 0x1f: /* FRINT64Z */
176
+ case 0x5e: /* FRINT32X */
177
+ case 0x5f: /* FRINT64X */
178
unallocated_encoding(s);
179
return;
180
}
181
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
182
gen_helper_vfp_touls(tcg_res, tcg_op,
183
tcg_constant_i32(0), tcg_fpstatus);
184
break;
185
- case 0x18: /* FRINTN */
186
- case 0x19: /* FRINTM */
187
- case 0x38: /* FRINTP */
188
- case 0x39: /* FRINTZ */
189
- case 0x58: /* FRINTA */
190
- case 0x79: /* FRINTI */
191
- gen_helper_rints(tcg_res, tcg_op, tcg_fpstatus);
192
- break;
193
- case 0x59: /* FRINTX */
194
- gen_helper_rints_exact(tcg_res, tcg_op, tcg_fpstatus);
195
- break;
196
case 0x7c: /* URSQRTE */
197
gen_helper_rsqrte_u32(tcg_res, tcg_op);
198
break;
199
- case 0x1e: /* FRINT32Z */
200
- case 0x5e: /* FRINT32X */
201
- gen_helper_frint32_s(tcg_res, tcg_op, tcg_fpstatus);
202
- break;
203
- case 0x1f: /* FRINT64Z */
204
- case 0x5f: /* FRINT64X */
205
- gen_helper_frint64_s(tcg_res, tcg_op, tcg_fpstatus);
206
- break;
207
default:
208
case 0x7: /* SQABS, SQNEG */
209
case 0x2f: /* FABS */
210
case 0x6f: /* FNEG */
211
case 0x7f: /* FSQRT */
212
+ case 0x18: /* FRINTN */
213
+ case 0x19: /* FRINTM */
214
+ case 0x38: /* FRINTP */
215
+ case 0x39: /* FRINTZ */
216
+ case 0x58: /* FRINTA */
217
+ case 0x79: /* FRINTI */
218
+ case 0x59: /* FRINTX */
219
+ case 0x1e: /* FRINT32Z */
220
+ case 0x5e: /* FRINT32X */
221
+ case 0x1f: /* FRINT64Z */
222
+ case 0x5f: /* FRINT64X */
223
g_assert_not_reached();
224
}
225
}
226
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
227
int rn, rd;
228
bool is_q;
229
bool is_scalar;
230
- bool only_in_vector = false;
231
232
int pass;
233
TCGv_i32 tcg_rmode = NULL;
234
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
235
case 0x3d: /* FRECPE */
236
case 0x3f: /* FRECPX */
237
break;
238
- case 0x18: /* FRINTN */
239
- only_in_vector = true;
240
- rmode = FPROUNDING_TIEEVEN;
241
- break;
242
- case 0x19: /* FRINTM */
243
- only_in_vector = true;
244
- rmode = FPROUNDING_NEGINF;
245
- break;
246
- case 0x38: /* FRINTP */
247
- only_in_vector = true;
248
- rmode = FPROUNDING_POSINF;
249
- break;
250
- case 0x39: /* FRINTZ */
251
- only_in_vector = true;
252
- rmode = FPROUNDING_ZERO;
253
- break;
254
- case 0x58: /* FRINTA */
255
- only_in_vector = true;
256
- rmode = FPROUNDING_TIEAWAY;
257
- break;
258
- case 0x59: /* FRINTX */
259
- case 0x79: /* FRINTI */
260
- only_in_vector = true;
261
- /* current rounding mode */
262
- break;
263
case 0x1a: /* FCVTNS */
264
rmode = FPROUNDING_TIEEVEN;
265
break;
266
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
267
case 0x2f: /* FABS */
268
case 0x6f: /* FNEG */
269
case 0x7f: /* FSQRT (vector) */
270
+ case 0x18: /* FRINTN */
271
+ case 0x19: /* FRINTM */
272
+ case 0x38: /* FRINTP */
273
+ case 0x39: /* FRINTZ */
274
+ case 0x58: /* FRINTA */
275
+ case 0x59: /* FRINTX */
276
+ case 0x79: /* FRINTI */
277
unallocated_encoding(s);
278
return;
279
}
280
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
281
unallocated_encoding(s);
282
return;
283
}
284
- /* FRINTxx is only in the vector form */
285
- if (only_in_vector) {
286
- unallocated_encoding(s);
287
- return;
288
- }
289
}
290
291
if (!fp_access_check(s)) {
292
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
293
case 0x7b: /* FCVTZU */
294
gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
295
break;
296
- case 0x18: /* FRINTN */
297
- case 0x19: /* FRINTM */
298
- case 0x38: /* FRINTP */
299
- case 0x39: /* FRINTZ */
300
- case 0x58: /* FRINTA */
301
- case 0x79: /* FRINTI */
302
- gen_helper_advsimd_rinth(tcg_res, tcg_op, tcg_fpstatus);
303
- break;
304
- case 0x59: /* FRINTX */
305
- gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, tcg_fpstatus);
306
- break;
307
case 0x7d: /* FRSQRTE */
308
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
309
break;
310
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
311
case 0x2f: /* FABS */
312
case 0x6f: /* FNEG */
313
case 0x7f: /* FSQRT */
314
+ case 0x18: /* FRINTN */
315
+ case 0x19: /* FRINTM */
316
+ case 0x38: /* FRINTP */
317
+ case 0x39: /* FRINTZ */
318
+ case 0x58: /* FRINTA */
319
+ case 0x79: /* FRINTI */
320
+ case 0x59: /* FRINTX */
321
g_assert_not_reached();
322
}
323
324
--
325
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Arm silliness with naming: the scalar insns are described
as part of the vector instructions, as separate from
the "regular" scalar insns which output to general registers.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20241211163036.2297116-58-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/tcg/a64.decode | 30 ++++++++
13
target/arm/tcg/translate-a64.c | 133 ++++++++++++++-------------------
14
2 files changed, 86 insertions(+), 77 deletions(-)
15
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
21
22
FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
23
24
+@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
25
+ &fcvt sf=0 esz=1 shift=0
26
+@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
27
+ &fcvt sf=0 esz=%esz_sd shift=0
28
+
29
+FCVTNS_f 0101 1110 011 11001 10101 0 ..... ..... @icvt_h
30
+FCVTNS_f 0101 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
31
+FCVTNU_f 0111 1110 011 11001 10101 0 ..... ..... @icvt_h
32
+FCVTNU_f 0111 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
33
+
34
+FCVTPS_f 0101 1110 111 11001 10101 0 ..... ..... @icvt_h
35
+FCVTPS_f 0101 1110 1.1 00001 10101 0 ..... ..... @icvt_sd
36
+FCVTPU_f 0111 1110 111 11001 10101 0 ..... ..... @icvt_h
37
+FCVTPU_f 0111 1110 1.1 00001 10101 0 ..... ..... @icvt_sd
38
+
39
+FCVTMS_f 0101 1110 011 11001 10111 0 ..... ..... @icvt_h
40
+FCVTMS_f 0101 1110 0.1 00001 10111 0 ..... ..... @icvt_sd
41
+FCVTMU_f 0111 1110 011 11001 10111 0 ..... ..... @icvt_h
42
+FCVTMU_f 0111 1110 0.1 00001 10111 0 ..... ..... @icvt_sd
43
+
44
+FCVTZS_f 0101 1110 111 11001 10111 0 ..... ..... @icvt_h
45
+FCVTZS_f 0101 1110 1.1 00001 10111 0 ..... ..... @icvt_sd
46
+FCVTZU_f 0111 1110 111 11001 10111 0 ..... ..... @icvt_h
47
+FCVTZU_f 0111 1110 1.1 00001 10111 0 ..... ..... @icvt_sd
48
+
49
+FCVTAS_f 0101 1110 011 11001 11001 0 ..... ..... @icvt_h
50
+FCVTAS_f 0101 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
51
+FCVTAU_f 0111 1110 011 11001 11001 0 ..... ..... @icvt_h
52
+FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
53
+
54
# Advanced SIMD two-register miscellaneous
55
56
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
57
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/tcg/translate-a64.c
60
+++ b/target/arm/tcg/translate-a64.c
61
@@ -XXX,XX +XXX,XX @@ static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
62
tcg_shift, tcg_fpstatus);
63
tcg_gen_extu_i32_i64(tcg_out, tcg_single);
64
break;
65
+ case MO_16 | MO_SIGN:
66
+ gen_helper_vfp_toshh(tcg_single, tcg_single,
67
+ tcg_shift, tcg_fpstatus);
68
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
69
+ break;
70
+ case MO_16:
71
+ gen_helper_vfp_touhh(tcg_single, tcg_single,
72
+ tcg_shift, tcg_fpstatus);
73
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
74
+ break;
75
default:
76
g_assert_not_reached();
77
}
78
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
79
TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
80
TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
81
82
+/*
83
+ * FCVT* (vector), scalar version.
84
+ * Which sounds weird, but really just means output to fp register
85
+ * instead of output to general register. Input and output element
86
+ * size are always equal.
87
+ */
88
+static bool do_fcvt_f(DisasContext *s, arg_fcvt *a,
89
+ ARMFPRounding rmode, bool is_signed)
90
+{
91
+ TCGv_i64 tcg_int;
92
+ int check = fp_access_check_scalar_hsd(s, a->esz);
93
+
94
+ if (check <= 0) {
95
+ return check == 0;
96
+ }
97
+
98
+ tcg_int = tcg_temp_new_i64();
99
+ do_fcvt_scalar(s, a->esz | (is_signed ? MO_SIGN : 0),
100
+ a->esz, tcg_int, a->shift, a->rn, rmode);
101
+
102
+ clear_vec(s, a->rd);
103
+ write_vec_element(s, tcg_int, a->rd, 0, a->esz);
104
+ return true;
105
+}
106
+
107
+TRANS(FCVTNS_f, do_fcvt_f, a, FPROUNDING_TIEEVEN, true)
108
+TRANS(FCVTNU_f, do_fcvt_f, a, FPROUNDING_TIEEVEN, false)
109
+TRANS(FCVTPS_f, do_fcvt_f, a, FPROUNDING_POSINF, true)
110
+TRANS(FCVTPU_f, do_fcvt_f, a, FPROUNDING_POSINF, false)
111
+TRANS(FCVTMS_f, do_fcvt_f, a, FPROUNDING_NEGINF, true)
112
+TRANS(FCVTMU_f, do_fcvt_f, a, FPROUNDING_NEGINF, false)
113
+TRANS(FCVTZS_f, do_fcvt_f, a, FPROUNDING_ZERO, true)
114
+TRANS(FCVTZU_f, do_fcvt_f, a, FPROUNDING_ZERO, false)
115
+TRANS(FCVTAS_f, do_fcvt_f, a, FPROUNDING_TIEAWAY, true)
116
+TRANS(FCVTAU_f, do_fcvt_f, a, FPROUNDING_TIEAWAY, false)
117
+
118
static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
119
{
120
if (!dc_isar_feature(aa64_jscvt, s)) {
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
122
int opcode = extract32(insn, 12, 5);
123
int size = extract32(insn, 22, 2);
124
bool u = extract32(insn, 29, 1);
125
- bool is_fcvt = false;
126
- int rmode;
127
- TCGv_i32 tcg_rmode;
128
- TCGv_ptr tcg_fpstatus;
129
130
switch (opcode) {
131
case 0xc ... 0xf:
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
133
case 0x5b: /* FCVTMU */
134
case 0x7a: /* FCVTPU */
135
case 0x7b: /* FCVTZU */
136
- is_fcvt = true;
137
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
138
- break;
139
case 0x1c: /* FCVTAS */
140
case 0x5c: /* FCVTAU */
141
- /* TIEAWAY doesn't fit in the usual rounding mode encoding */
142
- is_fcvt = true;
143
- rmode = FPROUNDING_TIEAWAY;
144
- break;
145
case 0x56: /* FCVTXN, FCVTXN2 */
146
default:
147
unallocated_encoding(s);
148
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
149
unallocated_encoding(s);
150
return;
151
}
152
-
153
- if (!fp_access_check(s)) {
154
- return;
155
- }
156
-
157
- if (is_fcvt) {
158
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
159
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
160
- } else {
161
- tcg_fpstatus = NULL;
162
- tcg_rmode = NULL;
163
- }
164
-
165
- if (size == 3) {
166
- TCGv_i64 tcg_rn = read_fp_dreg(s, rn);
167
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
168
-
169
- handle_2misc_64(s, opcode, u, tcg_rd, tcg_rn, tcg_rmode, tcg_fpstatus);
170
- write_fp_dreg(s, rd, tcg_rd);
171
- } else {
172
- TCGv_i32 tcg_rn = tcg_temp_new_i32();
173
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
174
-
175
- read_vec_element_i32(s, tcg_rn, rn, 0, size);
176
-
177
- switch (opcode) {
178
- case 0x1a: /* FCVTNS */
179
- case 0x1b: /* FCVTMS */
180
- case 0x1c: /* FCVTAS */
181
- case 0x3a: /* FCVTPS */
182
- case 0x3b: /* FCVTZS */
183
- gen_helper_vfp_tosls(tcg_rd, tcg_rn, tcg_constant_i32(0),
184
- tcg_fpstatus);
185
- break;
186
- case 0x5a: /* FCVTNU */
187
- case 0x5b: /* FCVTMU */
188
- case 0x5c: /* FCVTAU */
189
- case 0x7a: /* FCVTPU */
190
- case 0x7b: /* FCVTZU */
191
- gen_helper_vfp_touls(tcg_rd, tcg_rn, tcg_constant_i32(0),
192
- tcg_fpstatus);
193
- break;
194
- default:
195
- case 0x7: /* SQABS, SQNEG */
196
- g_assert_not_reached();
197
- }
198
-
199
- write_fp_sreg(s, rd, tcg_rd);
200
- }
201
-
202
- if (is_fcvt) {
203
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
204
- }
205
+ g_assert_not_reached();
206
}
207
208
/* AdvSIMD shift by immediate
209
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
210
TCGv_i32 tcg_res = tcg_temp_new_i32();
211
212
switch (fpop) {
213
- case 0x1a: /* FCVTNS */
214
- case 0x1b: /* FCVTMS */
215
- case 0x1c: /* FCVTAS */
216
- case 0x3a: /* FCVTPS */
217
- case 0x3b: /* FCVTZS */
218
- gen_helper_advsimd_f16tosinth(tcg_res, tcg_op, tcg_fpstatus);
219
- break;
220
case 0x3d: /* FRECPE */
221
gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
222
break;
223
case 0x3f: /* FRECPX */
224
gen_helper_frecpx_f16(tcg_res, tcg_op, tcg_fpstatus);
225
break;
226
+ case 0x7d: /* FRSQRTE */
227
+ gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
228
+ break;
229
+ default:
230
+ case 0x1a: /* FCVTNS */
231
+ case 0x1b: /* FCVTMS */
232
+ case 0x1c: /* FCVTAS */
233
+ case 0x3a: /* FCVTPS */
234
+ case 0x3b: /* FCVTZS */
235
case 0x5a: /* FCVTNU */
236
case 0x5b: /* FCVTMU */
237
case 0x5c: /* FCVTAU */
238
case 0x7a: /* FCVTPU */
239
case 0x7b: /* FCVTZU */
240
- gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
241
- break;
242
- case 0x7d: /* FRSQRTE */
243
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
244
- break;
245
- default:
246
g_assert_not_reached();
247
}
248
249
--
250
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-59-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 19 +++++++++++++++++++
9
target/arm/tcg/translate-a64.c | 4 +---
10
2 files changed, 20 insertions(+), 3 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FCVTAS_f 0101 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
17
FCVTAU_f 0111 1110 011 11001 11001 0 ..... ..... @icvt_h
18
FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
19
20
+%fcvt_f_sh_h 16:4 !function=rsub_16
21
+%fcvt_f_sh_s 16:5 !function=rsub_32
22
+%fcvt_f_sh_d 16:6 !function=rsub_64
23
+
24
+@fcvt_fixed_h .... .... . 001 .... ...... rn:5 rd:5 \
25
+ &fcvt sf=0 esz=1 shift=%fcvt_f_sh_h
26
+@fcvt_fixed_s .... .... . 01 ..... ...... rn:5 rd:5 \
27
+ &fcvt sf=0 esz=2 shift=%fcvt_f_sh_s
28
+@fcvt_fixed_d .... .... . 1 ...... ...... rn:5 rd:5 \
29
+ &fcvt sf=0 esz=3 shift=%fcvt_f_sh_d
30
+
31
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
32
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
33
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
34
+
35
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
36
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
37
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
38
+
39
# Advanced SIMD two-register miscellaneous
40
41
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
42
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-a64.c
45
+++ b/target/arm/tcg/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
47
handle_simd_shift_intfp_conv(s, true, false, is_u, immh, immb,
48
opcode, rn, rd);
49
break;
50
- case 0x1f: /* FCVTZS, FCVTZU */
51
- handle_simd_shift_fpint_conv(s, true, false, is_u, immh, immb, rn, rd);
52
- break;
53
default:
54
case 0x00: /* SSHR / USHR */
55
case 0x02: /* SSRA / USRA */
56
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
57
case 0x11: /* SQRSHRUN */
58
case 0x12: /* SQSHRN, UQSHRN */
59
case 0x13: /* SQRSHRN, UQRSHRN */
60
+ case 0x1f: /* FCVTZS, FCVTZU */
61
unallocated_encoding(s);
62
break;
63
}
64
--
65
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-60-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 6 ++++++
9
target/arm/tcg/translate-a64.c | 35 ++++++++++++++++++++++++----------
10
2 files changed, 31 insertions(+), 10 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
17
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
18
&fcvt sf=0 esz=%esz_sd shift=0
19
20
+SCVTF_f 0101 1110 011 11001 11011 0 ..... ..... @icvt_h
21
+SCVTF_f 0101 1110 0.1 00001 11011 0 ..... ..... @icvt_sd
22
+
23
+UCVTF_f 0111 1110 011 11001 11011 0 ..... ..... @icvt_h
24
+UCVTF_f 0111 1110 0.1 00001 11011 0 ..... ..... @icvt_sd
25
+
26
FCVTNS_f 0101 1110 011 11001 10101 0 ..... ..... @icvt_h
27
FCVTNS_f 0101 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
28
FCVTNU_f 0111 1110 011 11001 10101 0 ..... ..... @icvt_h
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool do_cvtf_g(DisasContext *s, arg_fcvt *a, bool is_signed)
34
TRANS(SCVTF_g, do_cvtf_g, a, true)
35
TRANS(UCVTF_g, do_cvtf_g, a, false)
36
37
+/*
38
+ * [US]CVTF (vector), scalar version.
39
+ * Which sounds weird, but really just means input from fp register
40
+ * instead of input from general register. Input and output element
41
+ * size are always equal.
42
+ */
43
+static bool do_cvtf_f(DisasContext *s, arg_fcvt *a, bool is_signed)
44
+{
45
+ TCGv_i64 tcg_int;
46
+ int check = fp_access_check_scalar_hsd(s, a->esz);
47
+
48
+ if (check <= 0) {
49
+ return check == 0;
50
+ }
51
+
52
+ tcg_int = tcg_temp_new_i64();
53
+ read_vec_element(s, tcg_int, a->rn, 0, a->esz | (is_signed ? MO_SIGN : 0));
54
+ return do_cvtf_scalar(s, a->esz, a->rd, a->shift, tcg_int, is_signed);
55
+}
56
+
57
+TRANS(SCVTF_f, do_cvtf_f, a, true)
58
+TRANS(UCVTF_f, do_cvtf_f, a, false)
59
+
60
static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
61
TCGv_i64 tcg_out, int shift, int rn,
62
ARMFPRounding rmode)
63
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
64
case 0x6d: /* FCMLE (zero) */
65
handle_2misc_fcmp_zero(s, opcode, true, u, true, size, rn, rd);
66
return;
67
- case 0x1d: /* SCVTF */
68
- case 0x5d: /* UCVTF */
69
- {
70
- bool is_signed = (opcode == 0x1d);
71
- if (!fp_access_check(s)) {
72
- return;
73
- }
74
- handle_simd_intfp_conv(s, rd, rn, 1, is_signed, 0, size);
75
- return;
76
- }
77
case 0x3d: /* FRECPE */
78
case 0x3f: /* FRECPX */
79
case 0x7d: /* FRSQRTE */
80
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
81
case 0x1c: /* FCVTAS */
82
case 0x5c: /* FCVTAU */
83
case 0x56: /* FCVTXN, FCVTXN2 */
84
+ case 0x1d: /* SCVTF */
85
+ case 0x5d: /* UCVTF */
86
default:
87
unallocated_encoding(s);
88
return;
89
--
90
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove disas_simd_scalar_shift_imm as these were the
4
last insns decoded by that function.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-61-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 8 ++++++
12
target/arm/tcg/translate-a64.c | 47 ----------------------------------
13
2 files changed, 8 insertions(+), 47 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
20
@fcvt_fixed_d .... .... . 1 ...... ...... rn:5 rd:5 \
21
&fcvt sf=0 esz=3 shift=%fcvt_f_sh_d
22
23
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_h
24
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_s
25
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_d
26
+
27
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_h
28
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_s
29
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_d
30
+
31
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
32
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
33
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
39
gen_restore_rmode(tcg_rmode, tcg_fpstatus);
40
}
41
42
-/* AdvSIMD scalar shift by immediate
43
- * 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
44
- * +-----+---+-------------+------+------+--------+---+------+------+
45
- * | 0 1 | U | 1 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
46
- * +-----+---+-------------+------+------+--------+---+------+------+
47
- *
48
- * This is the scalar version so it works on a fixed sized registers
49
- */
50
-static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
51
-{
52
- int rd = extract32(insn, 0, 5);
53
- int rn = extract32(insn, 5, 5);
54
- int opcode = extract32(insn, 11, 5);
55
- int immb = extract32(insn, 16, 3);
56
- int immh = extract32(insn, 19, 4);
57
- bool is_u = extract32(insn, 29, 1);
58
-
59
- if (immh == 0) {
60
- unallocated_encoding(s);
61
- return;
62
- }
63
-
64
- switch (opcode) {
65
- case 0x1c: /* SCVTF, UCVTF */
66
- handle_simd_shift_intfp_conv(s, true, false, is_u, immh, immb,
67
- opcode, rn, rd);
68
- break;
69
- default:
70
- case 0x00: /* SSHR / USHR */
71
- case 0x02: /* SSRA / USRA */
72
- case 0x04: /* SRSHR / URSHR */
73
- case 0x06: /* SRSRA / URSRA */
74
- case 0x08: /* SRI */
75
- case 0x0a: /* SHL / SLI */
76
- case 0x0c: /* SQSHLU */
77
- case 0x0e: /* SQSHL, UQSHL */
78
- case 0x10: /* SQSHRUN */
79
- case 0x11: /* SQRSHRUN */
80
- case 0x12: /* SQSHRN, UQSHRN */
81
- case 0x13: /* SQRSHRN, UQRSHRN */
82
- case 0x1f: /* FCVTZS, FCVTZU */
83
- unallocated_encoding(s);
84
- break;
85
- }
86
-}
87
-
88
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
89
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
90
TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
91
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
92
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
93
{ 0x0f000400, 0x9f800400, disas_simd_shift_imm },
94
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
95
- { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
96
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
97
{ 0x00000000, 0x00000000, NULL }
98
};
99
--
100
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Emphasize that these functions use round-to-zero mode.
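As a quick standalone illustration of what round-to-zero means here (plain C, not QEMU code): a float-to-int cast in C also truncates toward zero, whereas rounding to nearest can give a different integer.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float x = -2.7f;

        /* Round toward zero (truncate), the behaviour the _rz_ names flag up. */
        printf("round to zero:    %d\n", (int)x);               /* -2 */
        /* Round to nearest (default mode), for contrast. */
        printf("round to nearest: %d\n", (int)nearbyintf(x));   /* -3 */
        return 0;
    }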
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-62-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 8 ++++----
11
target/arm/tcg/translate-neon.c | 8 ++++----
12
target/arm/tcg/vec_helper.c | 8 ++++----
13
3 files changed, 12 insertions(+), 12 deletions(-)
14
15
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
18
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_touizs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
21
DEF_HELPER_FLAGS_4(gvec_vcvt_sf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(gvec_vcvt_uf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
-DEF_HELPER_FLAGS_4(gvec_vcvt_fs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
-DEF_HELPER_FLAGS_4(gvec_vcvt_fu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
28
DEF_HELPER_FLAGS_4(gvec_vcvt_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
-DEF_HELPER_FLAGS_4(gvec_vcvt_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
-DEF_HELPER_FLAGS_4(gvec_vcvt_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-neon.c
40
+++ b/target/arm/tcg/translate-neon.c
41
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
42
43
DO_FP_2SH(VCVT_SF, gen_helper_gvec_vcvt_sf)
44
DO_FP_2SH(VCVT_UF, gen_helper_gvec_vcvt_uf)
45
-DO_FP_2SH(VCVT_FS, gen_helper_gvec_vcvt_fs)
46
-DO_FP_2SH(VCVT_FU, gen_helper_gvec_vcvt_fu)
47
+DO_FP_2SH(VCVT_FS, gen_helper_gvec_vcvt_rz_fs)
48
+DO_FP_2SH(VCVT_FU, gen_helper_gvec_vcvt_rz_fu)
49
50
DO_FP_2SH(VCVT_SH, gen_helper_gvec_vcvt_sh)
51
DO_FP_2SH(VCVT_UH, gen_helper_gvec_vcvt_uh)
52
-DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_hs)
53
-DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_hu)
54
+DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_rz_hs)
55
+DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_rz_hu)
56
57
static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
58
GVecGen2iFn *fn)
59
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/tcg/vec_helper.c
62
+++ b/target/arm/tcg/vec_helper.c
63
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
64
65
DO_VCVT_FIXED(gvec_vcvt_sf, helper_vfp_sltos, uint32_t)
66
DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
67
-DO_VCVT_FIXED(gvec_vcvt_fs, helper_vfp_tosls_round_to_zero, uint32_t)
68
-DO_VCVT_FIXED(gvec_vcvt_fu, helper_vfp_touls_round_to_zero, uint32_t)
69
+DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
70
+DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
71
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
72
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
73
-DO_VCVT_FIXED(gvec_vcvt_hs, helper_vfp_toshh_round_to_zero, uint16_t)
74
-DO_VCVT_FIXED(gvec_vcvt_hu, helper_vfp_touhh_round_to_zero, uint16_t)
75
+DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
76
+DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
77
78
#undef DO_VCVT_FIXED
79
80
--
81
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_simd_intfp_conv and handle_simd_shift_intfp_conv
4
as these were the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-63-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 3 +
12
target/arm/tcg/a64.decode | 22 ++++
13
target/arm/tcg/translate-a64.c | 201 ++++++---------------------------
14
target/arm/tcg/vec_helper.c | 7 +-
15
4 files changed, 66 insertions(+), 167 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+
28
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/a64.decode
34
+++ b/target/arm/tcg/a64.decode
35
@@ -XXX,XX +XXX,XX @@ FRINT32Z_v 0.00 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
36
FRINT32X_v 0.10 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
37
FRINT64Z_v 0.00 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
38
FRINT64X_v 0.10 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
39
+
40
+SCVTF_vi 0.00 1110 011 11001 11011 0 ..... ..... @qrr_h
41
+SCVTF_vi 0.00 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
42
+
43
+UCVTF_vi 0.10 1110 011 11001 11011 0 ..... ..... @qrr_h
44
+UCVTF_vi 0.10 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
45
+
46
+&fcvt_q rd rn esz q shift
47
+@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
48
+ &fcvt_q esz=1 shift=%fcvt_f_sh_h
49
+@fcvtq_s . q:1 . ...... 01 ..... ...... rn:5 rd:5 \
50
+ &fcvt_q esz=2 shift=%fcvt_f_sh_s
51
+@fcvtq_d . q:1 . ...... 1 ...... ...... rn:5 rd:5 \
52
+ &fcvt_q esz=3 shift=%fcvt_f_sh_d
53
+
54
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_h
55
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_s
56
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_d
57
+
58
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_h
59
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_s
60
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_d
61
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/tcg/translate-a64.c
64
+++ b/target/arm/tcg/translate-a64.c
65
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_v, aa64_frint, do_fp1_vector, a,
66
&f_scalar_frint64, FPROUNDING_ZERO)
67
TRANS_FEAT(FRINT64X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint64, -1)
68
69
-/* Common vector code for handling integer to FP conversion */
70
-static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
71
- int elements, int is_signed,
72
- int fracbits, int size)
73
+static bool do_gvec_op2_fpst(DisasContext *s, MemOp esz, bool is_q,
74
+ int rd, int rn, int data,
75
+ gen_helper_gvec_2_ptr * const fns[3])
76
{
77
- TCGv_ptr tcg_fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
78
- TCGv_i32 tcg_shift = NULL;
79
+ int check = fp_access_check_vector_hsd(s, is_q, esz);
80
+ TCGv_ptr fpst;
81
82
- MemOp mop = size | (is_signed ? MO_SIGN : 0);
83
- int pass;
84
-
85
- if (fracbits || size == MO_64) {
86
- tcg_shift = tcg_constant_i32(fracbits);
87
+ if (check <= 0) {
88
+ return check == 0;
89
}
90
91
- if (size == MO_64) {
92
- TCGv_i64 tcg_int64 = tcg_temp_new_i64();
93
- TCGv_i64 tcg_double = tcg_temp_new_i64();
94
-
95
- for (pass = 0; pass < elements; pass++) {
96
- read_vec_element(s, tcg_int64, rn, pass, mop);
97
-
98
- if (is_signed) {
99
- gen_helper_vfp_sqtod(tcg_double, tcg_int64,
100
- tcg_shift, tcg_fpst);
101
- } else {
102
- gen_helper_vfp_uqtod(tcg_double, tcg_int64,
103
- tcg_shift, tcg_fpst);
104
- }
105
- if (elements == 1) {
106
- write_fp_dreg(s, rd, tcg_double);
107
- } else {
108
- write_vec_element(s, tcg_double, rd, pass, MO_64);
109
- }
110
- }
111
- } else {
112
- TCGv_i32 tcg_int32 = tcg_temp_new_i32();
113
- TCGv_i32 tcg_float = tcg_temp_new_i32();
114
-
115
- for (pass = 0; pass < elements; pass++) {
116
- read_vec_element_i32(s, tcg_int32, rn, pass, mop);
117
-
118
- switch (size) {
119
- case MO_32:
120
- if (fracbits) {
121
- if (is_signed) {
122
- gen_helper_vfp_sltos(tcg_float, tcg_int32,
123
- tcg_shift, tcg_fpst);
124
- } else {
125
- gen_helper_vfp_ultos(tcg_float, tcg_int32,
126
- tcg_shift, tcg_fpst);
127
- }
128
- } else {
129
- if (is_signed) {
130
- gen_helper_vfp_sitos(tcg_float, tcg_int32, tcg_fpst);
131
- } else {
132
- gen_helper_vfp_uitos(tcg_float, tcg_int32, tcg_fpst);
133
- }
134
- }
135
- break;
136
- case MO_16:
137
- if (fracbits) {
138
- if (is_signed) {
139
- gen_helper_vfp_sltoh(tcg_float, tcg_int32,
140
- tcg_shift, tcg_fpst);
141
- } else {
142
- gen_helper_vfp_ultoh(tcg_float, tcg_int32,
143
- tcg_shift, tcg_fpst);
144
- }
145
- } else {
146
- if (is_signed) {
147
- gen_helper_vfp_sitoh(tcg_float, tcg_int32, tcg_fpst);
148
- } else {
149
- gen_helper_vfp_uitoh(tcg_float, tcg_int32, tcg_fpst);
150
- }
151
- }
152
- break;
153
- default:
154
- g_assert_not_reached();
155
- }
156
-
157
- if (elements == 1) {
158
- write_fp_sreg(s, rd, tcg_float);
159
- } else {
160
- write_vec_element_i32(s, tcg_float, rd, pass, size);
161
- }
162
- }
163
- }
164
-
165
- clear_vec_high(s, elements << size == 16, rd);
166
+ fpst = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
167
+ tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, rd),
168
+ vec_full_reg_offset(s, rn), fpst,
169
+ is_q ? 16 : 8, vec_full_reg_size(s),
170
+ data, fns[esz - 1]);
171
+ return true;
172
}
173
174
-/* UCVTF/SCVTF - Integer to FP conversion */
175
-static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
176
- bool is_q, bool is_u,
177
- int immh, int immb, int opcode,
178
- int rn, int rd)
179
-{
180
- int size, elements, fracbits;
181
- int immhb = immh << 3 | immb;
182
+static gen_helper_gvec_2_ptr * const f_scvtf_v[] = {
183
+ gen_helper_gvec_vcvt_sh,
184
+ gen_helper_gvec_vcvt_sf,
185
+ gen_helper_gvec_vcvt_sd,
186
+};
187
+TRANS(SCVTF_vi, do_gvec_op2_fpst,
188
+ a->esz, a->q, a->rd, a->rn, 0, f_scvtf_v)
189
+TRANS(SCVTF_vf, do_gvec_op2_fpst,
190
+ a->esz, a->q, a->rd, a->rn, a->shift, f_scvtf_v)
191
192
- if (immh & 8) {
193
- size = MO_64;
194
- if (!is_scalar && !is_q) {
195
- unallocated_encoding(s);
196
- return;
197
- }
198
- } else if (immh & 4) {
199
- size = MO_32;
200
- } else if (immh & 2) {
201
- size = MO_16;
202
- if (!dc_isar_feature(aa64_fp16, s)) {
203
- unallocated_encoding(s);
204
- return;
205
- }
206
- } else {
207
- /* immh == 0 would be a failure of the decode logic */
208
- g_assert(immh == 1);
209
- unallocated_encoding(s);
210
- return;
211
- }
212
-
213
- if (is_scalar) {
214
- elements = 1;
215
- } else {
216
- elements = (8 << is_q) >> size;
217
- }
218
- fracbits = (16 << size) - immhb;
219
-
220
- if (!fp_access_check(s)) {
221
- return;
222
- }
223
-
224
- handle_simd_intfp_conv(s, rd, rn, elements, !is_u, fracbits, size);
225
-}
226
+static gen_helper_gvec_2_ptr * const f_ucvtf_v[] = {
227
+ gen_helper_gvec_vcvt_uh,
228
+ gen_helper_gvec_vcvt_uf,
229
+ gen_helper_gvec_vcvt_ud,
230
+};
231
+TRANS(UCVTF_vi, do_gvec_op2_fpst,
232
+ a->esz, a->q, a->rd, a->rn, 0, f_ucvtf_v)
233
+TRANS(UCVTF_vf, do_gvec_op2_fpst,
234
+ a->esz, a->q, a->rd, a->rn, a->shift, f_ucvtf_v)
235
236
/* FCVTZS, FVCVTZU - FP to fixedpoint conversion */
237
static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
238
@@ -XXX,XX +XXX,XX @@ static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
239
}
240
241
switch (opcode) {
242
- case 0x1c: /* SCVTF / UCVTF */
243
- handle_simd_shift_intfp_conv(s, false, is_q, is_u, immh, immb,
244
- opcode, rn, rd);
245
- break;
246
case 0x1f: /* FCVTZS/ FCVTZU */
247
handle_simd_shift_fpint_conv(s, false, is_q, is_u, immh, immb, rn, rd);
248
return;
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
250
case 0x12: /* SQSHRN / UQSHRN */
251
case 0x13: /* SQRSHRN / UQRSHRN */
252
case 0x14: /* SSHLL / USHLL */
253
+ case 0x1c: /* SCVTF / UCVTF */
254
unallocated_encoding(s);
255
return;
256
}
257
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
258
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
259
size = is_double ? 3 : 2;
260
switch (opcode) {
261
- case 0x1d: /* SCVTF */
262
- case 0x5d: /* UCVTF */
263
- {
264
- bool is_signed = (opcode == 0x1d) ? true : false;
265
- int elements = is_double ? 2 : is_q ? 4 : 2;
266
- if (is_double && !is_q) {
267
- unallocated_encoding(s);
268
- return;
269
- }
270
- if (!fp_access_check(s)) {
271
- return;
272
- }
273
- handle_simd_intfp_conv(s, rd, rn, elements, is_signed, 0, size);
274
- return;
275
- }
276
case 0x2c: /* FCMGT (zero) */
277
case 0x2d: /* FCMEQ (zero) */
278
case 0x2e: /* FCMLT (zero) */
279
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
280
case 0x1f: /* FRINT64Z */
281
case 0x5e: /* FRINT32X */
282
case 0x5f: /* FRINT64X */
283
+ case 0x1d: /* SCVTF */
284
+ case 0x5d: /* UCVTF */
285
unallocated_encoding(s);
286
return;
287
}
288
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
289
fpop = deposit32(fpop, 6, 1, u);
290
291
switch (fpop) {
292
- case 0x1d: /* SCVTF */
293
- case 0x5d: /* UCVTF */
294
- {
295
- int elements;
296
-
297
- if (is_scalar) {
298
- elements = 1;
299
- } else {
300
- elements = (is_q ? 8 : 4);
301
- }
302
-
303
- if (!fp_access_check(s)) {
304
- return;
305
- }
306
- handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
307
- return;
308
- }
309
- break;
310
case 0x2c: /* FCMGT (zero) */
311
case 0x2d: /* FCMEQ (zero) */
312
case 0x2e: /* FCMLT (zero) */
313
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
314
case 0x58: /* FRINTA */
315
case 0x59: /* FRINTX */
316
case 0x79: /* FRINTI */
317
+ case 0x1d: /* SCVTF */
318
+ case 0x5d: /* UCVTF */
319
unallocated_encoding(s);
320
return;
321
}
322
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
323
index XXXXXXX..XXXXXXX 100644
324
--- a/target/arm/tcg/vec_helper.c
325
+++ b/target/arm/tcg/vec_helper.c
326
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
327
clear_tail(d, oprsz, simd_maxsz(desc)); \
328
}
329
330
+DO_VCVT_FIXED(gvec_vcvt_sd, helper_vfp_sqtod, uint64_t)
331
+DO_VCVT_FIXED(gvec_vcvt_ud, helper_vfp_uqtod, uint64_t)
332
DO_VCVT_FIXED(gvec_vcvt_sf, helper_vfp_sltos, uint32_t)
333
DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
334
-DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
335
-DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
336
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
337
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
338
+
339
+DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
340
+DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
341
DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
342
DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
343
344
--
345
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200815013145.539409-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 4 ++++
target/arm/translate-a64.c | 16 ++++++++++++++++
target/arm/vec_helper.c | 29 +++++++++++++++++++++++++----
3 files changed, 45 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_simd_shift_fpint_conv and disas_simd_shift_imm
as these were the last insns decoded by those functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-64-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 4 +
target/arm/tcg/a64.decode | 8 ++
target/arm/tcg/translate-a64.c | 160 +++------------------------
target/arm/tcg/vec_helper.c | 2 +
target/arm/vfp_helper.c | 4 +
5 files changed, 32 insertions(+), 146 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
20
--- a/target/arm/helper.h
16
+++ b/target/arm/helper.h
21
+++ b/target/arm/helper.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_uaba_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_touhs_round_to_zero, i32, f32, i32, ptr)
18
DEF_HELPER_FLAGS_4(gvec_uaba_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_3(vfp_touls_round_to_zero, i32, f32, i32, ptr)
19
DEF_HELPER_FLAGS_4(gvec_uaba_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
DEF_HELPER_3(vfp_toshd_round_to_zero, i64, f64, i32, ptr)
20
25
DEF_HELPER_3(vfp_tosld_round_to_zero, i64, f64, i32, ptr)
21
+DEF_HELPER_FLAGS_4(gvec_mul_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_3(vfp_tosqd_round_to_zero, i64, f64, i32, ptr)
22
+DEF_HELPER_FLAGS_4(gvec_mul_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
DEF_HELPER_3(vfp_touhd_round_to_zero, i64, f64, i32, ptr)
23
+DEF_HELPER_FLAGS_4(gvec_mul_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
DEF_HELPER_3(vfp_tould_round_to_zero, i64, f64, i32, ptr)
29
+DEF_HELPER_3(vfp_touqd_round_to_zero, i64, f64, i32, ptr)
30
DEF_HELPER_3(vfp_touhh, i32, f16, i32, ptr)
31
DEF_HELPER_3(vfp_toshh, i32, f16, i32, ptr)
32
DEF_HELPER_3(vfp_toulh, i32, f16, i32, ptr)
33
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
40
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/a64.decode
45
+++ b/target/arm/tcg/a64.decode
46
@@ -XXX,XX +XXX,XX @@ SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_d
47
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_h
48
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_s
49
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_d
24
+
50
+
25
#ifdef TARGET_AARCH64
51
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_h
26
#include "helper-a64.h"
52
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_s
27
#include "helper-sve.h"
53
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_d
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
33
data, gen_helper_gvec_fmlal_idx_a64);
34
}
35
return;
36
+
54
+
37
+ case 0x08: /* MUL */
55
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_h
38
+ if (!is_long && !is_scalar) {
56
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_s
39
+ static gen_helper_gvec_3 * const fns[3] = {
57
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_d
40
+ gen_helper_gvec_mul_idx_h,
58
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
41
+ gen_helper_gvec_mul_idx_s,
59
index XXXXXXX..XXXXXXX 100644
42
+ gen_helper_gvec_mul_idx_d,
60
--- a/target/arm/tcg/translate-a64.c
43
+ };
61
+++ b/target/arm/tcg/translate-a64.c
44
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
62
@@ -XXX,XX +XXX,XX @@ TRANS(UCVTF_vi, do_gvec_op2_fpst,
45
+ vec_full_reg_offset(s, rn),
63
TRANS(UCVTF_vf, do_gvec_op2_fpst,
46
+ vec_full_reg_offset(s, rm),
64
a->esz, a->q, a->rd, a->rn, a->shift, f_ucvtf_v)
47
+ is_q ? 16 : 8, vec_full_reg_size(s),
65
48
+ index, fns[size - 1]);
66
-/* FCVTZS, FVCVTZU - FP to fixedpoint conversion */
49
+ return;
67
-static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
50
+ }
68
- bool is_q, bool is_u,
51
+ break;
69
- int immh, int immb, int rn, int rd)
52
}
70
-{
53
71
- int immhb = immh << 3 | immb;
54
if (size == 3) {
72
- int pass, size, fracbits;
55
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
73
- TCGv_ptr tcg_fpstatus;
56
index XXXXXXX..XXXXXXX 100644
74
- TCGv_i32 tcg_rmode, tcg_shift;
57
--- a/target/arm/vec_helper.c
75
+static gen_helper_gvec_2_ptr * const f_fcvtzs_vf[] = {
58
+++ b/target/arm/vec_helper.c
76
+ gen_helper_gvec_vcvt_rz_hs,
59
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
77
+ gen_helper_gvec_vcvt_rz_fs,
60
*/
78
+ gen_helper_gvec_vcvt_rz_ds,
61
79
+};
62
#define DO_MUL_IDX(NAME, TYPE, H) \
80
+TRANS(FCVTZS_vf, do_gvec_op2_fpst,
63
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
81
+ a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzs_vf)
64
+{ \
82
65
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
83
- if (immh & 0x8) {
66
+ intptr_t idx = simd_data(desc); \
84
- size = MO_64;
67
+ TYPE *d = vd, *n = vn, *m = vm; \
85
- if (!is_scalar && !is_q) {
68
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
86
- unallocated_encoding(s);
69
+ TYPE mm = m[H(i + idx)]; \
87
- return;
70
+ for (j = 0; j < segment; j++) { \
88
- }
71
+ d[i + j] = n[i + j] * mm; \
89
- } else if (immh & 0x4) {
72
+ } \
90
- size = MO_32;
73
+ } \
91
- } else if (immh & 0x2) {
74
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
92
- size = MO_16;
75
+}
93
- if (!dc_isar_feature(aa64_fp16, s)) {
76
+
94
- unallocated_encoding(s);
77
+DO_MUL_IDX(gvec_mul_idx_h, uint16_t, H2)
95
- return;
78
+DO_MUL_IDX(gvec_mul_idx_s, uint32_t, H4)
96
- }
79
+DO_MUL_IDX(gvec_mul_idx_d, uint64_t, )
97
- } else {
80
+
98
- /* Should have split out AdvSIMD modified immediate earlier. */
81
+#undef DO_MUL_IDX
99
- assert(immh == 1);
82
+
100
- unallocated_encoding(s);
83
+#define DO_FMUL_IDX(NAME, TYPE, H) \
101
- return;
84
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
102
- }
85
{ \
103
-
86
intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
104
- if (!fp_access_check(s)) {
87
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
105
- return;
88
clear_tail(d, oprsz, simd_maxsz(desc)); \
106
- }
107
-
108
- assert(!(is_scalar && is_q));
109
-
110
- tcg_fpstatus = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
111
- tcg_rmode = gen_set_rmode(FPROUNDING_ZERO, tcg_fpstatus);
112
- fracbits = (16 << size) - immhb;
113
- tcg_shift = tcg_constant_i32(fracbits);
114
-
115
- if (size == MO_64) {
116
- int maxpass = is_scalar ? 1 : 2;
117
-
118
- for (pass = 0; pass < maxpass; pass++) {
119
- TCGv_i64 tcg_op = tcg_temp_new_i64();
120
-
121
- read_vec_element(s, tcg_op, rn, pass, MO_64);
122
- if (is_u) {
123
- gen_helper_vfp_touqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
124
- } else {
125
- gen_helper_vfp_tosqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
126
- }
127
- write_vec_element(s, tcg_op, rd, pass, MO_64);
128
- }
129
- clear_vec_high(s, is_q, rd);
130
- } else {
131
- void (*fn)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
132
- int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);
133
-
134
- switch (size) {
135
- case MO_16:
136
- if (is_u) {
137
- fn = gen_helper_vfp_touhh;
138
- } else {
139
- fn = gen_helper_vfp_toshh;
140
- }
141
- break;
142
- case MO_32:
143
- if (is_u) {
144
- fn = gen_helper_vfp_touls;
145
- } else {
146
- fn = gen_helper_vfp_tosls;
147
- }
148
- break;
149
- default:
150
- g_assert_not_reached();
151
- }
152
-
153
- for (pass = 0; pass < maxpass; pass++) {
154
- TCGv_i32 tcg_op = tcg_temp_new_i32();
155
-
156
- read_vec_element_i32(s, tcg_op, rn, pass, size);
157
- fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
158
- if (is_scalar) {
159
- if (size == MO_16 && !is_u) {
160
- tcg_gen_ext16u_i32(tcg_op, tcg_op);
161
- }
162
- write_fp_sreg(s, rd, tcg_op);
163
- } else {
164
- write_vec_element_i32(s, tcg_op, rd, pass, size);
165
- }
166
- }
167
- if (!is_scalar) {
168
- clear_vec_high(s, is_q, rd);
169
- }
170
- }
171
-
172
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
173
-}
174
+static gen_helper_gvec_2_ptr * const f_fcvtzu_vf[] = {
175
+ gen_helper_gvec_vcvt_rz_hu,
176
+ gen_helper_gvec_vcvt_rz_fu,
177
+ gen_helper_gvec_vcvt_rz_du,
178
+};
179
+TRANS(FCVTZU_vf, do_gvec_op2_fpst,
180
+ a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzu_vf)
181
182
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
183
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
184
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
185
g_assert_not_reached();
89
}
186
}
90
187
91
-DO_MUL_IDX(gvec_fmul_idx_h, float16, H2)
188
-/* AdvSIMD shift by immediate
92
-DO_MUL_IDX(gvec_fmul_idx_s, float32, H4)
189
- * 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
93
-DO_MUL_IDX(gvec_fmul_idx_d, float64, )
190
- * +---+---+---+-------------+------+------+--------+---+------+------+
94
+DO_FMUL_IDX(gvec_fmul_idx_h, float16, H2)
191
- * | 0 | Q | U | 0 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
95
+DO_FMUL_IDX(gvec_fmul_idx_s, float32, H4)
192
- * +---+---+---+-------------+------+------+--------+---+------+------+
96
+DO_FMUL_IDX(gvec_fmul_idx_d, float64, )
193
- */
97
194
-static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
98
-#undef DO_MUL_IDX
195
-{
99
+#undef DO_FMUL_IDX
196
- int rd = extract32(insn, 0, 5);
100
197
- int rn = extract32(insn, 5, 5);
101
#define DO_FMLA_IDX(NAME, TYPE, H) \
198
- int opcode = extract32(insn, 11, 5);
102
void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, \
199
- int immb = extract32(insn, 16, 3);
200
- int immh = extract32(insn, 19, 4);
201
- bool is_u = extract32(insn, 29, 1);
202
- bool is_q = extract32(insn, 30, 1);
203
-
204
- if (immh == 0) {
205
- unallocated_encoding(s);
206
- return;
207
- }
208
-
209
- switch (opcode) {
210
- case 0x1f: /* FCVTZS/ FCVTZU */
211
- handle_simd_shift_fpint_conv(s, false, is_q, is_u, immh, immb, rn, rd);
212
- return;
213
- default:
214
- case 0x00: /* SSHR / USHR */
215
- case 0x02: /* SSRA / USRA (accumulate) */
216
- case 0x04: /* SRSHR / URSHR (rounding) */
217
- case 0x06: /* SRSRA / URSRA (accum + rounding) */
218
- case 0x08: /* SRI */
219
- case 0x0a: /* SHL / SLI */
220
- case 0x0c: /* SQSHLU */
221
- case 0x0e: /* SQSHL, UQSHL */
222
- case 0x10: /* SHRN / SQSHRUN */
223
- case 0x11: /* RSHRN / SQRSHRUN */
224
- case 0x12: /* SQSHRN / UQSHRN */
225
- case 0x13: /* SQRSHRN / UQRSHRN */
226
- case 0x14: /* SSHLL / USHLL */
227
- case 0x1c: /* SCVTF / UCVTF */
228
- unallocated_encoding(s);
229
- return;
230
- }
231
-}
232
-
233
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
234
int size, int rn, int rd)
235
{
236
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
237
static const AArch64DecodeTable data_proc_simd[] = {
238
/* pattern , mask , fn */
239
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
240
- { 0x0f000400, 0x9f800400, disas_simd_shift_imm },
241
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
242
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
243
{ 0x00000000, 0x00000000, NULL }
244
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
245
index XXXXXXX..XXXXXXX 100644
246
--- a/target/arm/tcg/vec_helper.c
247
+++ b/target/arm/tcg/vec_helper.c
248
@@ -XXX,XX +XXX,XX @@ DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
249
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
250
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
251
252
+DO_VCVT_FIXED(gvec_vcvt_rz_ds, helper_vfp_tosqd_round_to_zero, uint64_t)
253
+DO_VCVT_FIXED(gvec_vcvt_rz_du, helper_vfp_touqd_round_to_zero, uint64_t)
254
DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
255
DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
256
DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
257
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
258
index XXXXXXX..XXXXXXX 100644
259
--- a/target/arm/vfp_helper.c
260
+++ b/target/arm/vfp_helper.c
261
@@ -XXX,XX +XXX,XX @@ VFP_CONV_FIX_A64(sq, h, 16, dh_ctype_f16, 64, int64)
262
VFP_CONV_FIX(uh, h, 16, dh_ctype_f16, 32, uint16)
263
VFP_CONV_FIX(ul, h, 16, dh_ctype_f16, 32, uint32)
264
VFP_CONV_FIX_A64(uq, h, 16, dh_ctype_f16, 64, uint64)
265
+VFP_CONV_FLOAT_FIX_ROUND(sq, d, 64, float64, 64, int64,
266
+ float_round_to_zero, _round_to_zero)
267
+VFP_CONV_FLOAT_FIX_ROUND(uq, d, 64, float64, 64, uint64,
268
+ float_round_to_zero, _round_to_zero)
269
270
#undef VFP_CONV_FIX
271
#undef VFP_CONV_FIX_FLOAT
103
--
2.20.1

--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

We need more information than just the mmu_idx in order
to create the proper exception syndrome. Only change the
function signature so far.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200813200816.3037186-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/mte_helper.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_2misc_64 as these were the last insns decoded
by that function. Remove helper_advsimd_f16to[su]inth as unused;
we now always go through helper_vfp_to[su]hh or a specialized
vector function instead.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-65-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 +
target/arm/tcg/helper-a64.h | 2 -
target/arm/tcg/a64.decode | 25 ++++
target/arm/tcg/helper-a64.c | 32 -----
target/arm/tcg/translate-a64.c | 227 +++++++++++----------------------
target/arm/tcg/vec_helper.c | 2 +
6 files changed, 102 insertions(+), 188 deletions(-)
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
21
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/mte_helper.c
23
--- a/target/arm/helper.h
18
+++ b/target/arm/mte_helper.c
24
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
25
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
29
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/helper-a64.h
37
+++ b/target/arm/tcg/helper-a64.h
38
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(advsimd_mulx2h, i32, i32, i32, ptr)
39
DEF_HELPER_4(advsimd_muladd2h, i32, i32, i32, i32, ptr)
40
DEF_HELPER_2(advsimd_rinth_exact, f16, f16, ptr)
41
DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
42
-DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
43
-DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
44
45
DEF_HELPER_2(exception_return, void, env, i64)
46
DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
47
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/tcg/a64.decode
50
+++ b/target/arm/tcg/a64.decode
51
@@ -XXX,XX +XXX,XX @@ SCVTF_vi 0.00 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
52
UCVTF_vi 0.10 1110 011 11001 11011 0 ..... ..... @qrr_h
53
UCVTF_vi 0.10 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
54
55
+FCVTNS_vi 0.00 1110 011 11001 10101 0 ..... ..... @qrr_h
56
+FCVTNS_vi 0.00 1110 0.1 00001 10101 0 ..... ..... @qrr_sd
57
+FCVTNU_vi 0.10 1110 011 11001 10101 0 ..... ..... @qrr_h
58
+FCVTNU_vi 0.10 1110 0.1 00001 10101 0 ..... ..... @qrr_sd
59
+
60
+FCVTPS_vi 0.00 1110 111 11001 10101 0 ..... ..... @qrr_h
61
+FCVTPS_vi 0.00 1110 1.1 00001 10101 0 ..... ..... @qrr_sd
62
+FCVTPU_vi 0.10 1110 111 11001 10101 0 ..... ..... @qrr_h
63
+FCVTPU_vi 0.10 1110 1.1 00001 10101 0 ..... ..... @qrr_sd
64
+
65
+FCVTMS_vi 0.00 1110 011 11001 10111 0 ..... ..... @qrr_h
66
+FCVTMS_vi 0.00 1110 0.1 00001 10111 0 ..... ..... @qrr_sd
67
+FCVTMU_vi 0.10 1110 011 11001 10111 0 ..... ..... @qrr_h
68
+FCVTMU_vi 0.10 1110 0.1 00001 10111 0 ..... ..... @qrr_sd
69
+
70
+FCVTZS_vi 0.00 1110 111 11001 10111 0 ..... ..... @qrr_h
71
+FCVTZS_vi 0.00 1110 1.1 00001 10111 0 ..... ..... @qrr_sd
72
+FCVTZU_vi 0.10 1110 111 11001 10111 0 ..... ..... @qrr_h
73
+FCVTZU_vi 0.10 1110 1.1 00001 10111 0 ..... ..... @qrr_sd
74
+
75
+FCVTAS_vi 0.00 1110 011 11001 11001 0 ..... ..... @qrr_h
76
+FCVTAS_vi 0.00 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
77
+FCVTAU_vi 0.10 1110 011 11001 11001 0 ..... ..... @qrr_h
78
+FCVTAU_vi 0.10 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
79
+
80
&fcvt_q rd rn esz q shift
81
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
82
&fcvt_q esz=1 shift=%fcvt_f_sh_h
83
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/tcg/helper-a64.c
86
+++ b/target/arm/tcg/helper-a64.c
87
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
88
return ret;
20
}
89
}
21
90
22
/* Record a tag check failure. */
91
-/*
23
-static void mte_check_fail(CPUARMState *env, int mmu_idx,
92
- * Half-precision floating point conversion functions
24
+static void mte_check_fail(CPUARMState *env, uint32_t desc,
93
- *
25
uint64_t dirty_ptr, uintptr_t ra)
94
- * There are a multitude of conversion functions with various
95
- * different rounding modes. This is dealt with by the calling code
96
- * setting the mode appropriately before calling the helper.
97
- */
98
-
99
-uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
100
-{
101
- float_status *fpst = fpstp;
102
-
103
- /* Invalid if we are passed a NaN */
104
- if (float16_is_any_nan(a)) {
105
- float_raise(float_flag_invalid, fpst);
106
- return 0;
107
- }
108
- return float16_to_int16(a, fpst);
109
-}
110
-
111
-uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
112
-{
113
- float_status *fpst = fpstp;
114
-
115
- /* Invalid if we are passed a NaN */
116
- if (float16_is_any_nan(a)) {
117
- float_raise(float_flag_invalid, fpst);
118
- return 0;
119
- }
120
- return float16_to_uint16(a, fpst);
121
-}
122
-
123
static int el_from_spsr(uint32_t spsr)
26
{
124
{
27
+ int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
125
/* Return the exception level that this SPSR is requesting a return to,
28
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
126
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
int el, reg_el, tcf, select;
127
index XXXXXXX..XXXXXXX 100644
30
uint64_t sctlr;
128
--- a/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check1(CPUARMState *env, uint32_t desc,
129
+++ b/target/arm/tcg/translate-a64.c
130
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_fcvtzu_vf[] = {
131
TRANS(FCVTZU_vf, do_gvec_op2_fpst,
132
a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzu_vf)
133
134
-static void handle_2misc_64(DisasContext *s, int opcode, bool u,
135
- TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
136
- TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
137
-{
138
- /* Handle 64->64 opcodes which are shared between the scalar and
139
- * vector 2-reg-misc groups. We cover every integer opcode where size == 3
140
- * is valid in either group and also the double-precision fp ops.
141
- * The caller only need provide tcg_rmode and tcg_fpstatus if the op
142
- * requires them.
143
- */
144
- switch (opcode) {
145
- case 0x1a: /* FCVTNS */
146
- case 0x1b: /* FCVTMS */
147
- case 0x1c: /* FCVTAS */
148
- case 0x3a: /* FCVTPS */
149
- case 0x3b: /* FCVTZS */
150
- gen_helper_vfp_tosqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
151
- break;
152
- case 0x5a: /* FCVTNU */
153
- case 0x5b: /* FCVTMU */
154
- case 0x5c: /* FCVTAU */
155
- case 0x7a: /* FCVTPU */
156
- case 0x7b: /* FCVTZU */
157
- gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
158
- break;
159
- default:
160
- case 0x4: /* CLS, CLZ */
161
- case 0x5: /* NOT */
162
- case 0x7: /* SQABS, SQNEG */
163
- case 0x8: /* CMGT, CMGE */
164
- case 0x9: /* CMEQ, CMLE */
165
- case 0xa: /* CMLT */
166
- case 0xb: /* ABS, NEG */
167
- case 0x2f: /* FABS */
168
- case 0x6f: /* FNEG */
169
- case 0x7f: /* FSQRT */
170
- case 0x18: /* FRINTN */
171
- case 0x19: /* FRINTM */
172
- case 0x38: /* FRINTP */
173
- case 0x39: /* FRINTZ */
174
- case 0x58: /* FRINTA */
175
- case 0x79: /* FRINTI */
176
- case 0x59: /* FRINTX */
177
- case 0x1e: /* FRINT32Z */
178
- case 0x5e: /* FRINT32X */
179
- case 0x1f: /* FRINT64Z */
180
- case 0x5f: /* FRINT64X */
181
- g_assert_not_reached();
182
- }
183
-}
184
+static gen_helper_gvec_2_ptr * const f_fcvt_s_vi[] = {
185
+ gen_helper_gvec_vcvt_rm_sh,
186
+ gen_helper_gvec_vcvt_rm_ss,
187
+ gen_helper_gvec_vcvt_rm_sd,
188
+};
189
+
190
+static gen_helper_gvec_2_ptr * const f_fcvt_u_vi[] = {
191
+ gen_helper_gvec_vcvt_rm_uh,
192
+ gen_helper_gvec_vcvt_rm_us,
193
+ gen_helper_gvec_vcvt_rm_ud,
194
+};
195
+
196
+TRANS(FCVTNS_vi, do_gvec_op2_fpst,
197
+ a->esz, a->q, a->rd, a->rn, float_round_nearest_even, f_fcvt_s_vi)
198
+TRANS(FCVTNU_vi, do_gvec_op2_fpst,
199
+ a->esz, a->q, a->rd, a->rn, float_round_nearest_even, f_fcvt_u_vi)
200
+TRANS(FCVTPS_vi, do_gvec_op2_fpst,
201
+ a->esz, a->q, a->rd, a->rn, float_round_up, f_fcvt_s_vi)
202
+TRANS(FCVTPU_vi, do_gvec_op2_fpst,
203
+ a->esz, a->q, a->rd, a->rn, float_round_up, f_fcvt_u_vi)
204
+TRANS(FCVTMS_vi, do_gvec_op2_fpst,
205
+ a->esz, a->q, a->rd, a->rn, float_round_down, f_fcvt_s_vi)
206
+TRANS(FCVTMU_vi, do_gvec_op2_fpst,
207
+ a->esz, a->q, a->rd, a->rn, float_round_down, f_fcvt_u_vi)
208
+TRANS(FCVTZS_vi, do_gvec_op2_fpst,
209
+ a->esz, a->q, a->rd, a->rn, float_round_to_zero, f_fcvt_s_vi)
210
+TRANS(FCVTZU_vi, do_gvec_op2_fpst,
211
+ a->esz, a->q, a->rd, a->rn, float_round_to_zero, f_fcvt_u_vi)
212
+TRANS(FCVTAS_vi, do_gvec_op2_fpst,
213
+ a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_s_vi)
214
+TRANS(FCVTAU_vi, do_gvec_op2_fpst,
215
+ a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_u_vi)
216
217
static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
218
bool is_scalar, bool is_u, bool is_q,
219
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
220
}
221
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
222
return;
223
- case 0x1a: /* FCVTNS */
224
- case 0x1b: /* FCVTMS */
225
- case 0x3a: /* FCVTPS */
226
- case 0x3b: /* FCVTZS */
227
- case 0x5a: /* FCVTNU */
228
- case 0x5b: /* FCVTMU */
229
- case 0x7a: /* FCVTPU */
230
- case 0x7b: /* FCVTZU */
231
- need_fpstatus = true;
232
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
233
- if (size == 3 && !is_q) {
234
- unallocated_encoding(s);
235
- return;
236
- }
237
- break;
238
- case 0x5c: /* FCVTAU */
239
- case 0x1c: /* FCVTAS */
240
- need_fpstatus = true;
241
- rmode = FPROUNDING_TIEAWAY;
242
- if (size == 3 && !is_q) {
243
- unallocated_encoding(s);
244
- return;
245
- }
246
- break;
247
case 0x3c: /* URECPE */
248
if (size == 3) {
249
unallocated_encoding(s);
250
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
251
case 0x5f: /* FRINT64X */
252
case 0x1d: /* SCVTF */
253
case 0x5d: /* UCVTF */
254
+ case 0x1a: /* FCVTNS */
255
+ case 0x1b: /* FCVTMS */
256
+ case 0x3a: /* FCVTPS */
257
+ case 0x3b: /* FCVTZS */
258
+ case 0x5a: /* FCVTNU */
259
+ case 0x5b: /* FCVTMU */
260
+ case 0x7a: /* FCVTPU */
261
+ case 0x7b: /* FCVTZU */
262
+ case 0x5c: /* FCVTAU */
263
+ case 0x1c: /* FCVTAS */
264
unallocated_encoding(s);
265
return;
266
}
267
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
268
tcg_rmode = NULL;
32
}
269
}
33
270
34
if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
271
- if (size == 3) {
35
- int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
272
- /* All 64-bit element operations can be shared with scalar 2misc */
36
- mte_check_fail(env, mmu_idx, ptr, ra);
273
- int pass;
37
+ mte_check_fail(env, desc, ptr, ra);
274
-
275
- /* Coverity claims (size == 3 && !is_q) has been eliminated
276
- * from all paths leading to here.
277
- */
278
- tcg_debug_assert(is_q);
279
- for (pass = 0; pass < 2; pass++) {
280
- TCGv_i64 tcg_op = tcg_temp_new_i64();
281
- TCGv_i64 tcg_res = tcg_temp_new_i64();
282
-
283
- read_vec_element(s, tcg_op, rn, pass, MO_64);
284
-
285
- handle_2misc_64(s, opcode, u, tcg_res, tcg_op,
286
- tcg_rmode, tcg_fpstatus);
287
-
288
- write_vec_element(s, tcg_res, rd, pass, MO_64);
289
- }
290
- } else {
291
+ {
292
int pass;
293
294
assert(size == 2);
295
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
296
{
297
/* Special cases for 32 bit elements */
298
switch (opcode) {
299
- case 0x1a: /* FCVTNS */
300
- case 0x1b: /* FCVTMS */
301
- case 0x1c: /* FCVTAS */
302
- case 0x3a: /* FCVTPS */
303
- case 0x3b: /* FCVTZS */
304
- gen_helper_vfp_tosls(tcg_res, tcg_op,
305
- tcg_constant_i32(0), tcg_fpstatus);
306
- break;
307
- case 0x5a: /* FCVTNU */
308
- case 0x5b: /* FCVTMU */
309
- case 0x5c: /* FCVTAU */
310
- case 0x7a: /* FCVTPU */
311
- case 0x7b: /* FCVTZU */
312
- gen_helper_vfp_touls(tcg_res, tcg_op,
313
- tcg_constant_i32(0), tcg_fpstatus);
314
- break;
315
case 0x7c: /* URSQRTE */
316
gen_helper_rsqrte_u32(tcg_res, tcg_op);
317
break;
318
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
319
case 0x5e: /* FRINT32X */
320
case 0x1f: /* FRINT64Z */
321
case 0x5f: /* FRINT64X */
322
+ case 0x1a: /* FCVTNS */
323
+ case 0x1b: /* FCVTMS */
324
+ case 0x1c: /* FCVTAS */
325
+ case 0x3a: /* FCVTPS */
326
+ case 0x3b: /* FCVTZS */
327
+ case 0x5a: /* FCVTNU */
328
+ case 0x5b: /* FCVTMU */
329
+ case 0x5c: /* FCVTAU */
330
+ case 0x7a: /* FCVTPU */
331
+ case 0x7b: /* FCVTZU */
332
g_assert_not_reached();
333
}
334
}
335
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
336
case 0x3d: /* FRECPE */
337
case 0x3f: /* FRECPX */
338
break;
339
- case 0x1a: /* FCVTNS */
340
- rmode = FPROUNDING_TIEEVEN;
341
- break;
342
- case 0x1b: /* FCVTMS */
343
- rmode = FPROUNDING_NEGINF;
344
- break;
345
- case 0x1c: /* FCVTAS */
346
- rmode = FPROUNDING_TIEAWAY;
347
- break;
348
- case 0x3a: /* FCVTPS */
349
- rmode = FPROUNDING_POSINF;
350
- break;
351
- case 0x3b: /* FCVTZS */
352
- rmode = FPROUNDING_ZERO;
353
- break;
354
- case 0x5a: /* FCVTNU */
355
- rmode = FPROUNDING_TIEEVEN;
356
- break;
357
- case 0x5b: /* FCVTMU */
358
- rmode = FPROUNDING_NEGINF;
359
- break;
360
- case 0x5c: /* FCVTAU */
361
- rmode = FPROUNDING_TIEAWAY;
362
- break;
363
- case 0x7a: /* FCVTPU */
364
- rmode = FPROUNDING_POSINF;
365
- break;
366
- case 0x7b: /* FCVTZU */
367
- rmode = FPROUNDING_ZERO;
368
- break;
369
case 0x7d: /* FRSQRTE */
370
break;
371
default:
372
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
373
case 0x79: /* FRINTI */
374
case 0x1d: /* SCVTF */
375
case 0x5d: /* UCVTF */
376
+ case 0x1a: /* FCVTNS */
377
+ case 0x1b: /* FCVTMS */
378
+ case 0x1c: /* FCVTAS */
379
+ case 0x3a: /* FCVTPS */
380
+ case 0x3b: /* FCVTZS */
381
+ case 0x5a: /* FCVTNU */
382
+ case 0x5b: /* FCVTMU */
383
+ case 0x5c: /* FCVTAU */
384
+ case 0x7a: /* FCVTPU */
385
+ case 0x7b: /* FCVTZU */
386
unallocated_encoding(s);
387
return;
38
}
388
}
39
389
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
40
return useronly_clean_ptr(ptr);
390
read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
41
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
391
42
392
switch (fpop) {
43
fail_ofs = tag_first + n * TAG_GRANULE - ptr;
393
- case 0x1a: /* FCVTNS */
44
fail_ofs = ROUND_UP(fail_ofs, esize);
394
- case 0x1b: /* FCVTMS */
45
- mte_check_fail(env, mmu_idx, ptr + fail_ofs, ra);
395
- case 0x1c: /* FCVTAS */
46
+ mte_check_fail(env, desc, ptr + fail_ofs, ra);
396
- case 0x3a: /* FCVTPS */
397
- case 0x3b: /* FCVTZS */
398
- gen_helper_advsimd_f16tosinth(tcg_res, tcg_op, tcg_fpstatus);
399
- break;
400
case 0x3d: /* FRECPE */
401
gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
402
break;
403
- case 0x5a: /* FCVTNU */
404
- case 0x5b: /* FCVTMU */
405
- case 0x5c: /* FCVTAU */
406
- case 0x7a: /* FCVTPU */
407
- case 0x7b: /* FCVTZU */
408
- gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
409
- break;
410
case 0x7d: /* FRSQRTE */
411
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
412
break;
413
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
414
case 0x58: /* FRINTA */
415
case 0x79: /* FRINTI */
416
case 0x59: /* FRINTX */
417
+ case 0x1a: /* FCVTNS */
418
+ case 0x1b: /* FCVTMS */
419
+ case 0x1c: /* FCVTAS */
420
+ case 0x3a: /* FCVTPS */
421
+ case 0x3b: /* FCVTZS */
422
+ case 0x5a: /* FCVTNU */
423
+ case 0x5b: /* FCVTMU */
424
+ case 0x5c: /* FCVTAU */
425
+ case 0x7a: /* FCVTPU */
426
+ case 0x7b: /* FCVTZU */
427
g_assert_not_reached();
428
}
429
430
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
431
index XXXXXXX..XXXXXXX 100644
432
--- a/target/arm/tcg/vec_helper.c
433
+++ b/target/arm/tcg/vec_helper.c
434
@@ -XXX,XX +XXX,XX @@ DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
435
clear_tail(d, oprsz, simd_maxsz(desc)); \
47
}
436
}
48
437
49
done:
438
+DO_VCVT_RMODE(gvec_vcvt_rm_sd, helper_vfp_tosqd, uint64_t)
50
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
439
+DO_VCVT_RMODE(gvec_vcvt_rm_ud, helper_vfp_touqd, uint64_t)
51
fail:
440
DO_VCVT_RMODE(gvec_vcvt_rm_ss, helper_vfp_tosls, uint32_t)
52
/* Locate the first nibble that differs. */
441
DO_VCVT_RMODE(gvec_vcvt_rm_us, helper_vfp_touls, uint32_t)
53
i = ctz64(mem_tag ^ ptr_tag) >> 4;
442
DO_VCVT_RMODE(gvec_vcvt_rm_sh, helper_vfp_toshh, uint16_t)
54
- mte_check_fail(env, mmu_idx, align_ptr + i * TAG_GRANULE, ra);
55
+ mte_check_fail(env, desc, align_ptr + i * TAG_GRANULE, ra);
56
57
done:
58
return useronly_clean_ptr(ptr);
59
--
2.20.1

--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200815013145.539409-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 14 ++++++++++++++
target/arm/translate-a64.c | 34 ++++++++++++++++++++++++++++++++++
target/arm/vec_helper.c | 25 +++++++++++++++++++++++++
3 files changed, 73 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This includes FCMEQ, FCMGT, FCMGE, FCMLT, FCMLE.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-66-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 5 +
target/arm/tcg/a64.decode | 30 ++++
target/arm/tcg/translate-a64.c | 249 +++++++++++++--------------------
target/arm/tcg/vec_helper.c | 4 +-
4 files changed, 138 insertions(+), 150 deletions(-)
12
15
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
18
--- a/target/arm/helper.h
16
+++ b/target/arm/helper.h
19
+++ b/target/arm/helper.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_mul_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_frsqrte_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
DEF_HELPER_FLAGS_4(gvec_mul_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
19
DEF_HELPER_FLAGS_4(gvec_mul_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(gvec_fcgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
23
DEF_HELPER_FLAGS_4(gvec_fcgt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
+DEF_HELPER_FLAGS_5(gvec_mla_idx_h, TCG_CALL_NO_RWG,
24
+DEF_HELPER_FLAGS_4(gvec_fcgt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+ void, ptr, ptr, ptr, ptr, i32)
25
23
+DEF_HELPER_FLAGS_5(gvec_mla_idx_s, TCG_CALL_NO_RWG,
26
DEF_HELPER_FLAGS_4(gvec_fcge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+ void, ptr, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(gvec_fcge0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(gvec_mla_idx_d, TCG_CALL_NO_RWG,
28
+DEF_HELPER_FLAGS_4(gvec_fcge0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+ void, ptr, ptr, ptr, ptr, i32)
29
27
+
30
DEF_HELPER_FLAGS_4(gvec_fceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(gvec_mls_idx_h, TCG_CALL_NO_RWG,
31
DEF_HELPER_FLAGS_4(gvec_fceq0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_fceq0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(gvec_mls_idx_s, TCG_CALL_NO_RWG,
33
31
+ void, ptr, ptr, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_4(gvec_fcle0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(gvec_mls_idx_d, TCG_CALL_NO_RWG,
35
DEF_HELPER_FLAGS_4(gvec_fcle0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+ void, ptr, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(gvec_fcle0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
37
35
#ifdef TARGET_AARCH64
38
DEF_HELPER_FLAGS_4(gvec_fclt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
#include "helper-a64.h"
39
DEF_HELPER_FLAGS_4(gvec_fclt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
#include "helper-sve.h"
40
+DEF_HELPER_FLAGS_4(gvec_fclt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
41
42
DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
39
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-a64.c
46
--- a/target/arm/tcg/a64.decode
41
+++ b/target/arm/translate-a64.c
47
+++ b/target/arm/tcg/a64.decode
42
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
48
@@ -XXX,XX +XXX,XX @@ UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
49
50
FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
51
52
+FCMGT0_s 0101 1110 111 11000 11001 0 ..... ..... @rr_h
53
+FCMGT0_s 0101 1110 1.1 00000 11001 0 ..... ..... @rr_sd
54
+
55
+FCMGE0_s 0111 1110 111 11000 11001 0 ..... ..... @rr_h
56
+FCMGE0_s 0111 1110 1.1 00000 11001 0 ..... ..... @rr_sd
57
+
58
+FCMEQ0_s 0101 1110 111 11000 11011 0 ..... ..... @rr_h
59
+FCMEQ0_s 0101 1110 1.1 00000 11011 0 ..... ..... @rr_sd
60
+
61
+FCMLE0_s 0111 1110 111 11000 11011 0 ..... ..... @rr_h
62
+FCMLE0_s 0111 1110 1.1 00000 11011 0 ..... ..... @rr_sd
63
+
64
+FCMLT0_s 0101 1110 111 11000 11101 0 ..... ..... @rr_h
65
+FCMLT0_s 0101 1110 1.1 00000 11101 0 ..... ..... @rr_sd
66
+
67
@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
68
&fcvt sf=0 esz=1 shift=0
69
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
70
@@ -XXX,XX +XXX,XX @@ FCVTAS_vi 0.00 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
71
FCVTAU_vi 0.10 1110 011 11001 11001 0 ..... ..... @qrr_h
72
FCVTAU_vi 0.10 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
73
74
+FCMGT0_v 0.00 1110 111 11000 11001 0 ..... ..... @qrr_h
75
+FCMGT0_v 0.00 1110 1.1 00000 11001 0 ..... ..... @qrr_sd
76
+
77
+FCMGE0_v 0.10 1110 111 11000 11001 0 ..... ..... @qrr_h
78
+FCMGE0_v 0.10 1110 1.1 00000 11001 0 ..... ..... @qrr_sd
79
+
80
+FCMEQ0_v 0.00 1110 111 11000 11011 0 ..... ..... @qrr_h
81
+FCMEQ0_v 0.00 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
82
+
83
+FCMLE0_v 0.10 1110 111 11000 11011 0 ..... ..... @qrr_h
84
+FCMLE0_v 0.10 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
85
+
86
+FCMLT0_v 0.00 1110 111 11000 11101 0 ..... ..... @qrr_h
87
+FCMLT0_v 0.00 1110 1.1 00000 11101 0 ..... ..... @qrr_sd
88
+
89
&fcvt_q rd rn esz q shift
90
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
91
&fcvt_q esz=1 shift=%fcvt_f_sh_h
92
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/tcg/translate-a64.c
95
+++ b/target/arm/tcg/translate-a64.c
96
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_frsqrts = {
97
};
98
TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
99
100
+static bool do_fcmp0_s(DisasContext *s, arg_rr_e *a,
101
+ const FPScalar *f, bool swap)
102
+{
103
+ switch (a->esz) {
104
+ case MO_64:
105
+ if (fp_access_check(s)) {
106
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
107
+ TCGv_i64 t1 = tcg_constant_i64(0);
108
+ if (swap) {
109
+ f->gen_d(t0, t1, t0, fpstatus_ptr(FPST_FPCR));
110
+ } else {
111
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
112
+ }
113
+ write_fp_dreg(s, a->rd, t0);
114
+ }
115
+ break;
116
+ case MO_32:
117
+ if (fp_access_check(s)) {
118
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
119
+ TCGv_i32 t1 = tcg_constant_i32(0);
120
+ if (swap) {
121
+ f->gen_s(t0, t1, t0, fpstatus_ptr(FPST_FPCR));
122
+ } else {
123
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
124
+ }
125
+ write_fp_sreg(s, a->rd, t0);
126
+ }
127
+ break;
128
+ case MO_16:
129
+ if (!dc_isar_feature(aa64_fp16, s)) {
130
+ return false;
131
+ }
132
+ if (fp_access_check(s)) {
133
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
134
+ TCGv_i32 t1 = tcg_constant_i32(0);
135
+ if (swap) {
136
+ f->gen_h(t0, t1, t0, fpstatus_ptr(FPST_FPCR_F16));
137
+ } else {
138
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
139
+ }
140
+ write_fp_sreg(s, a->rd, t0);
141
+ }
142
+ break;
143
+ default:
144
+ return false;
145
+ }
146
+ return true;
147
+}
148
+
149
+TRANS(FCMEQ0_s, do_fcmp0_s, a, &f_scalar_fcmeq, false)
150
+TRANS(FCMGT0_s, do_fcmp0_s, a, &f_scalar_fcmgt, false)
151
+TRANS(FCMGE0_s, do_fcmp0_s, a, &f_scalar_fcmge, false)
152
+TRANS(FCMLT0_s, do_fcmp0_s, a, &f_scalar_fcmgt, true)
153
+TRANS(FCMLE0_s, do_fcmp0_s, a, &f_scalar_fcmge, true)
154
+
155
static bool do_satacc_s(DisasContext *s, arg_rrr_e *a,
156
MemOp sgn_n, MemOp sgn_m,
157
void (*gen_bhs)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp),
158
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTAS_vi, do_gvec_op2_fpst,
159
TRANS(FCVTAU_vi, do_gvec_op2_fpst,
160
a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_u_vi)
161
162
-static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
163
- bool is_scalar, bool is_u, bool is_q,
164
- int size, int rn, int rd)
165
-{
166
- bool is_double = (size == MO_64);
167
- TCGv_ptr fpst;
168
+static gen_helper_gvec_2_ptr * const f_fceq0[] = {
169
+ gen_helper_gvec_fceq0_h,
170
+ gen_helper_gvec_fceq0_s,
171
+ gen_helper_gvec_fceq0_d,
172
+};
173
+TRANS(FCMEQ0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fceq0)
174
175
- if (!fp_access_check(s)) {
176
- return;
177
- }
178
+static gen_helper_gvec_2_ptr * const f_fcgt0[] = {
179
+ gen_helper_gvec_fcgt0_h,
180
+ gen_helper_gvec_fcgt0_s,
181
+ gen_helper_gvec_fcgt0_d,
182
+};
183
+TRANS(FCMGT0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcgt0)
184
185
- fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
186
+static gen_helper_gvec_2_ptr * const f_fcge0[] = {
187
+ gen_helper_gvec_fcge0_h,
188
+ gen_helper_gvec_fcge0_s,
189
+ gen_helper_gvec_fcge0_d,
190
+};
191
+TRANS(FCMGE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcge0)
192
193
- if (is_double) {
194
- TCGv_i64 tcg_op = tcg_temp_new_i64();
195
- TCGv_i64 tcg_zero = tcg_constant_i64(0);
196
- TCGv_i64 tcg_res = tcg_temp_new_i64();
197
- NeonGenTwoDoubleOpFn *genfn;
198
- bool swap = false;
199
- int pass;
200
+static gen_helper_gvec_2_ptr * const f_fclt0[] = {
201
+ gen_helper_gvec_fclt0_h,
202
+ gen_helper_gvec_fclt0_s,
203
+ gen_helper_gvec_fclt0_d,
204
+};
205
+TRANS(FCMLT0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fclt0)
206
207
- switch (opcode) {
208
- case 0x2e: /* FCMLT (zero) */
209
- swap = true;
210
- /* fallthrough */
211
- case 0x2c: /* FCMGT (zero) */
212
- genfn = gen_helper_neon_cgt_f64;
213
- break;
214
- case 0x2d: /* FCMEQ (zero) */
215
- genfn = gen_helper_neon_ceq_f64;
216
- break;
217
- case 0x6d: /* FCMLE (zero) */
218
- swap = true;
219
- /* fall through */
220
- case 0x6c: /* FCMGE (zero) */
221
- genfn = gen_helper_neon_cge_f64;
222
- break;
223
- default:
224
- g_assert_not_reached();
225
- }
226
-
227
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
228
- read_vec_element(s, tcg_op, rn, pass, MO_64);
229
- if (swap) {
230
- genfn(tcg_res, tcg_zero, tcg_op, fpst);
231
- } else {
232
- genfn(tcg_res, tcg_op, tcg_zero, fpst);
233
- }
234
- write_vec_element(s, tcg_res, rd, pass, MO_64);
235
- }
236
-
237
- clear_vec_high(s, !is_scalar, rd);
238
- } else {
239
- TCGv_i32 tcg_op = tcg_temp_new_i32();
240
- TCGv_i32 tcg_zero = tcg_constant_i32(0);
241
- TCGv_i32 tcg_res = tcg_temp_new_i32();
242
- NeonGenTwoSingleOpFn *genfn;
243
- bool swap = false;
244
- int pass, maxpasses;
245
-
246
- if (size == MO_16) {
247
- switch (opcode) {
248
- case 0x2e: /* FCMLT (zero) */
249
- swap = true;
250
- /* fall through */
251
- case 0x2c: /* FCMGT (zero) */
252
- genfn = gen_helper_advsimd_cgt_f16;
253
- break;
254
- case 0x2d: /* FCMEQ (zero) */
255
- genfn = gen_helper_advsimd_ceq_f16;
256
- break;
257
- case 0x6d: /* FCMLE (zero) */
258
- swap = true;
259
- /* fall through */
260
- case 0x6c: /* FCMGE (zero) */
261
- genfn = gen_helper_advsimd_cge_f16;
262
- break;
263
- default:
264
- g_assert_not_reached();
265
- }
266
- } else {
267
- switch (opcode) {
268
- case 0x2e: /* FCMLT (zero) */
269
- swap = true;
270
- /* fall through */
271
- case 0x2c: /* FCMGT (zero) */
272
- genfn = gen_helper_neon_cgt_f32;
273
- break;
274
- case 0x2d: /* FCMEQ (zero) */
275
- genfn = gen_helper_neon_ceq_f32;
276
- break;
277
- case 0x6d: /* FCMLE (zero) */
278
- swap = true;
279
- /* fall through */
280
- case 0x6c: /* FCMGE (zero) */
281
- genfn = gen_helper_neon_cge_f32;
282
- break;
283
- default:
284
- g_assert_not_reached();
285
- }
286
- }
287
-
288
- if (is_scalar) {
289
- maxpasses = 1;
290
- } else {
291
- int vector_size = 8 << is_q;
292
- maxpasses = vector_size >> size;
293
- }
294
-
295
- for (pass = 0; pass < maxpasses; pass++) {
296
- read_vec_element_i32(s, tcg_op, rn, pass, size);
297
- if (swap) {
298
- genfn(tcg_res, tcg_zero, tcg_op, fpst);
299
- } else {
300
- genfn(tcg_res, tcg_op, tcg_zero, fpst);
301
- }
302
- if (is_scalar) {
303
- write_fp_sreg(s, rd, tcg_res);
304
- } else {
305
- write_vec_element_i32(s, tcg_res, rd, pass, size);
306
- }
307
- }
308
-
309
- if (!is_scalar) {
310
- clear_vec_high(s, is_q, rd);
311
- }
312
- }
313
-}
314
+static gen_helper_gvec_2_ptr * const f_fcle0[] = {
315
+ gen_helper_gvec_fcle0_h,
316
+ gen_helper_gvec_fcle0_s,
317
+ gen_helper_gvec_fcle0_d,
318
+};
319
+TRANS(FCMLE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcle0)
320
321
static void handle_2misc_reciprocal(DisasContext *s, int opcode,
322
bool is_scalar, bool is_u, bool is_q,
323
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
324
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
325
size = extract32(size, 0, 1) ? 3 : 2;
326
switch (opcode) {
327
- case 0x2c: /* FCMGT (zero) */
328
- case 0x2d: /* FCMEQ (zero) */
329
- case 0x2e: /* FCMLT (zero) */
330
- case 0x6c: /* FCMGE (zero) */
331
- case 0x6d: /* FCMLE (zero) */
332
- handle_2misc_fcmp_zero(s, opcode, true, u, true, size, rn, rd);
333
- return;
334
case 0x3d: /* FRECPE */
335
case 0x3f: /* FRECPX */
336
case 0x7d: /* FRSQRTE */
337
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
338
case 0x56: /* FCVTXN, FCVTXN2 */
339
case 0x1d: /* SCVTF */
340
case 0x5d: /* UCVTF */
341
+ case 0x2c: /* FCMGT (zero) */
342
+ case 0x2d: /* FCMEQ (zero) */
343
+ case 0x2e: /* FCMLT (zero) */
344
+ case 0x6c: /* FCMGE (zero) */
345
+ case 0x6d: /* FCMLE (zero) */
346
default:
347
unallocated_encoding(s);
348
return;
349
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
350
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
351
size = is_double ? 3 : 2;
352
switch (opcode) {
353
- case 0x2c: /* FCMGT (zero) */
354
- case 0x2d: /* FCMEQ (zero) */
355
- case 0x2e: /* FCMLT (zero) */
356
- case 0x6c: /* FCMGE (zero) */
357
- case 0x6d: /* FCMLE (zero) */
358
- if (size == 3 && !is_q) {
359
- unallocated_encoding(s);
360
- return;
361
- }
362
- handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
363
- return;
364
case 0x3c: /* URECPE */
365
if (size == 3) {
366
unallocated_encoding(s);
367
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
368
case 0x7b: /* FCVTZU */
369
case 0x5c: /* FCVTAU */
370
case 0x1c: /* FCVTAS */
371
+ case 0x2c: /* FCMGT (zero) */
372
+ case 0x2d: /* FCMEQ (zero) */
373
+ case 0x2e: /* FCMLT (zero) */
374
+ case 0x6c: /* FCMGE (zero) */
375
+ case 0x6d: /* FCMLE (zero) */
376
unallocated_encoding(s);
43
return;
377
return;
44
}
378
}
379
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
380
fpop = deposit32(fpop, 6, 1, u);
381
382
switch (fpop) {
383
- case 0x2c: /* FCMGT (zero) */
384
- case 0x2d: /* FCMEQ (zero) */
385
- case 0x2e: /* FCMLT (zero) */
386
- case 0x6c: /* FCMGE (zero) */
387
- case 0x6d: /* FCMLE (zero) */
388
- handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
389
- return;
390
case 0x3d: /* FRECPE */
391
case 0x3f: /* FRECPX */
45
break;
392
break;
46
+
393
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
47
+ case 0x10: /* MLA */
394
case 0x5c: /* FCVTAU */
48
+ if (!is_long && !is_scalar) {
395
case 0x7a: /* FCVTPU */
49
+ static gen_helper_gvec_4 * const fns[3] = {
396
case 0x7b: /* FCVTZU */
50
+ gen_helper_gvec_mla_idx_h,
397
+ case 0x2c: /* FCMGT (zero) */
51
+ gen_helper_gvec_mla_idx_s,
398
+ case 0x2d: /* FCMEQ (zero) */
52
+ gen_helper_gvec_mla_idx_d,
399
+ case 0x2e: /* FCMLT (zero) */
53
+ };
400
+ case 0x6c: /* FCMGE (zero) */
54
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
401
+ case 0x6d: /* FCMLE (zero) */
55
+ vec_full_reg_offset(s, rn),
402
unallocated_encoding(s);
56
+ vec_full_reg_offset(s, rm),
403
return;
57
+ vec_full_reg_offset(s, rd),
58
+ is_q ? 16 : 8, vec_full_reg_size(s),
59
+ index, fns[size - 1]);
60
+ return;
61
+ }
62
+ break;
63
+
64
+ case 0x14: /* MLS */
65
+ if (!is_long && !is_scalar) {
66
+ static gen_helper_gvec_4 * const fns[3] = {
67
+ gen_helper_gvec_mls_idx_h,
68
+ gen_helper_gvec_mls_idx_s,
69
+ gen_helper_gvec_mls_idx_d,
70
+ };
71
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
72
+ vec_full_reg_offset(s, rn),
73
+ vec_full_reg_offset(s, rm),
74
+ vec_full_reg_offset(s, rd),
75
+ is_q ? 16 : 8, vec_full_reg_size(s),
76
+ index, fns[size - 1]);
77
+ return;
78
+ }
79
+ break;
80
}
404
}
81
405
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
82
if (size == 3) {
83
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
84
index XXXXXXX..XXXXXXX 100644
406
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/vec_helper.c
407
--- a/target/arm/tcg/vec_helper.c
86
+++ b/target/arm/vec_helper.c
408
+++ b/target/arm/tcg/vec_helper.c
87
@@ -XXX,XX +XXX,XX @@ DO_MUL_IDX(gvec_mul_idx_d, uint64_t, )
409
@@ -XXX,XX +XXX,XX @@ DO_2OP(gvec_touszh, vfp_touszh, float16)
88
410
#define DO_2OP_CMP0(FN, CMPOP, DIRN) \
89
#undef DO_MUL_IDX
411
WRAP_CMP0_##DIRN(FN, CMPOP, float16) \
90
412
WRAP_CMP0_##DIRN(FN, CMPOP, float32) \
91
+#define DO_MLA_IDX(NAME, TYPE, OP, H) \
413
+ WRAP_CMP0_##DIRN(FN, CMPOP, float64) \
92
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
414
DO_2OP(gvec_f##FN##0_h, float16_##FN##0, float16) \
93
+{ \
415
- DO_2OP(gvec_f##FN##0_s, float32_##FN##0, float32)
94
+ intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
416
+ DO_2OP(gvec_f##FN##0_s, float32_##FN##0, float32) \
95
+ intptr_t idx = simd_data(desc); \
417
+ DO_2OP(gvec_f##FN##0_d, float64_##FN##0, float64)
96
+ TYPE *d = vd, *n = vn, *m = vm, *a = va; \
418
97
+ for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
419
DO_2OP_CMP0(cgt, cgt, FWD)
98
+ TYPE mm = m[H(i + idx)]; \
420
DO_2OP_CMP0(cge, cge, FWD)
99
+ for (j = 0; j < segment; j++) { \
100
+ d[i + j] = a[i + j] OP n[i + j] * mm; \
101
+ } \
102
+ } \
103
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
104
+}
105
+
106
+DO_MLA_IDX(gvec_mla_idx_h, uint16_t, +, H2)
107
+DO_MLA_IDX(gvec_mla_idx_s, uint32_t, +, H4)
108
+DO_MLA_IDX(gvec_mla_idx_d, uint64_t, +, )
109
+
110
+DO_MLA_IDX(gvec_mls_idx_h, uint16_t, -, H2)
111
+DO_MLA_IDX(gvec_mls_idx_s, uint32_t, -, H4)
112
+DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, )
113
+
114
+#undef DO_MLA_IDX
115
+
116
#define DO_FMUL_IDX(NAME, TYPE, H) \
117
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
118
{ \
119
--
2.20.1

--
2.34.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Allow the device to execute the DMA transfers in a different
AddressSpace.

The H3 SoC keeps using the system_memory address space,
but via the proper dma_memory_access() API.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200814122907.27732-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/net/allwinner-sun8i-emac.h | 6 ++++
hw/arm/allwinner-h3.c | 2 ++
hw/net/allwinner-sun8i-emac.c | 46 +++++++++++++++++----------
3 files changed, 38 insertions(+), 16 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Remove disas_simd_scalar_two_reg_misc and
disas_simd_two_reg_misc_fp16 as these were the
last insns decoded by those functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-67-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 15 ++
target/arm/tcg/translate-a64.c | 329 ++++-----------------------------
2 files changed, 53 insertions(+), 291 deletions(-)
20
15
21
diff --git a/include/hw/net/allwinner-sun8i-emac.h b/include/hw/net/allwinner-sun8i-emac.h
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/net/allwinner-sun8i-emac.h
18
--- a/target/arm/tcg/a64.decode
24
+++ b/include/hw/net/allwinner-sun8i-emac.h
19
+++ b/target/arm/tcg/a64.decode
25
@@ -XXX,XX +XXX,XX @@ typedef struct AwSun8iEmacState {
20
@@ -XXX,XX +XXX,XX @@ FCMLE0_s 0111 1110 1.1 00000 11011 0 ..... ..... @rr_sd
26
/** Interrupt output signal to notify CPU */
21
FCMLT0_s 0101 1110 111 11000 11101 0 ..... ..... @rr_h
27
qemu_irq irq;
22
FCMLT0_s 0101 1110 1.1 00000 11101 0 ..... ..... @rr_sd
28
23
29
+ /** Memory region where DMA transfers are done */
24
+FRECPE_s 0101 1110 111 11001 11011 0 ..... ..... @rr_h
30
+ MemoryRegion *dma_mr;
25
+FRECPE_s 0101 1110 1.1 00001 11011 0 ..... ..... @rr_sd
31
+
26
+
32
+ /** Address space used internally for DMA transfers */
27
+FRECPX_s 0101 1110 111 11001 11111 0 ..... ..... @rr_h
33
+ AddressSpace dma_as;
28
+FRECPX_s 0101 1110 1.1 00001 11111 0 ..... ..... @rr_sd
34
+
29
+
35
/** Generic Network Interface Controller (NIC) for networking API */
30
+FRSQRTE_s 0111 1110 111 11001 11011 0 ..... ..... @rr_h
36
NICState *nic;
31
+FRSQRTE_s 0111 1110 1.1 00001 11011 0 ..... ..... @rr_sd
37
32
+
38
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
33
@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
34
&fcvt sf=0 esz=1 shift=0
35
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
36
@@ -XXX,XX +XXX,XX @@ FCMLE0_v 0.10 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
37
FCMLT0_v 0.00 1110 111 11000 11101 0 ..... ..... @qrr_h
38
FCMLT0_v 0.00 1110 1.1 00000 11101 0 ..... ..... @qrr_sd
39
40
+FRECPE_v 0.00 1110 111 11001 11011 0 ..... ..... @qrr_h
41
+FRECPE_v 0.00 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
42
+
43
+FRSQRTE_v 0.10 1110 111 11001 11011 0 ..... ..... @qrr_h
44
+FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
45
+
46
&fcvt_q rd rn esz q shift
47
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
48
&fcvt_q esz=1 shift=%fcvt_f_sh_h
49
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/arm/allwinner-h3.c
51
--- a/target/arm/tcg/translate-a64.c
41
+++ b/hw/arm/allwinner-h3.c
52
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
53
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
43
qemu_check_nic_model(&nd_table[0], TYPE_AW_SUN8I_EMAC);
54
&f_scalar_frint64, FPROUNDING_ZERO)
44
qdev_set_nic_properties(DEVICE(&s->emac), &nd_table[0]);
55
TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
45
}
56
46
+ object_property_set_link(OBJECT(&s->emac), "dma-memory",
57
+static const FPScalar1 f_scalar_frecpe = {
47
+ OBJECT(get_system_memory()), &error_fatal);
58
+ gen_helper_recpe_f16,
48
sysbus_realize(SYS_BUS_DEVICE(&s->emac), &error_fatal);
59
+ gen_helper_recpe_f32,
49
sysbus_mmio_map(SYS_BUS_DEVICE(&s->emac), 0, s->memmap[AW_H3_EMAC]);
60
+ gen_helper_recpe_f64,
50
sysbus_connect_irq(SYS_BUS_DEVICE(&s->emac), 0,
61
+};
51
diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
62
+TRANS(FRECPE_s, do_fp1_scalar, a, &f_scalar_frecpe, -1)
52
index XXXXXXX..XXXXXXX 100644
63
+
53
--- a/hw/net/allwinner-sun8i-emac.c
64
+static const FPScalar1 f_scalar_frecpx = {
54
+++ b/hw/net/allwinner-sun8i-emac.c
65
+ gen_helper_frecpx_f16,
55
@@ -XXX,XX +XXX,XX @@
66
+ gen_helper_frecpx_f32,
56
67
+ gen_helper_frecpx_f64,
57
#include "qemu/osdep.h"
68
+};
58
#include "qemu/units.h"
69
+TRANS(FRECPX_s, do_fp1_scalar, a, &f_scalar_frecpx, -1)
59
+#include "qapi/error.h"
70
+
60
#include "hw/sysbus.h"
71
+static const FPScalar1 f_scalar_frsqrte = {
61
#include "migration/vmstate.h"
72
+ gen_helper_rsqrte_f16,
62
#include "net/net.h"
73
+ gen_helper_rsqrte_f32,
63
@@ -XXX,XX +XXX,XX @@
74
+ gen_helper_rsqrte_f64,
64
#include "net/checksum.h"
75
+};
65
#include "qemu/module.h"
76
+TRANS(FRSQRTE_s, do_fp1_scalar, a, &f_scalar_frsqrte, -1)
66
#include "exec/cpu-common.h"
77
+
67
+#include "sysemu/dma.h"
78
static bool trans_FCVT_s_ds(DisasContext *s, arg_rr *a)
68
#include "hw/net/allwinner-sun8i-emac.h"
69
70
/* EMAC register offsets */
71
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_update_irq(AwSun8iEmacState *s)
72
qemu_set_irq(s->irq, (s->int_sta & s->int_en) != 0);
73
}
74
75
-static uint32_t allwinner_sun8i_emac_next_desc(FrameDescriptor *desc,
76
+static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
77
+ FrameDescriptor *desc,
78
size_t min_size)
79
{
79
{
80
uint32_t paddr = desc->next;
80
if (fp_access_check(s)) {
81
81
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_fcle0[] = {
82
- cpu_physical_memory_read(paddr, desc, sizeof(*desc));
82
};
83
+ dma_memory_read(&s->dma_as, paddr, desc, sizeof(*desc));
83
TRANS(FCMLE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcle0)
84
84
85
if ((desc->status & DESC_STATUS_CTL) &&
85
+static gen_helper_gvec_2_ptr * const f_frecpe[] = {
86
(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
86
+ gen_helper_gvec_frecpe_h,
87
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_next_desc(FrameDescriptor *desc,
87
+ gen_helper_gvec_frecpe_s,
88
+ gen_helper_gvec_frecpe_d,
89
+};
90
+TRANS(FRECPE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frecpe)
91
+
92
+static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
93
+ gen_helper_gvec_frsqrte_h,
94
+ gen_helper_gvec_frsqrte_s,
95
+ gen_helper_gvec_frsqrte_d,
96
+};
97
+TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
98
+
99
static void handle_2misc_reciprocal(DisasContext *s, int opcode,
100
bool is_scalar, bool is_u, bool is_q,
101
int size, int rn, int rd)
102
{
103
bool is_double = (size == 3);
104
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
105
106
if (is_double) {
107
- TCGv_i64 tcg_op = tcg_temp_new_i64();
108
- TCGv_i64 tcg_res = tcg_temp_new_i64();
109
- int pass;
110
-
111
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
112
- read_vec_element(s, tcg_op, rn, pass, MO_64);
113
- switch (opcode) {
114
- case 0x3d: /* FRECPE */
115
- gen_helper_recpe_f64(tcg_res, tcg_op, fpst);
116
- break;
117
- case 0x3f: /* FRECPX */
118
- gen_helper_frecpx_f64(tcg_res, tcg_op, fpst);
119
- break;
120
- case 0x7d: /* FRSQRTE */
121
- gen_helper_rsqrte_f64(tcg_res, tcg_op, fpst);
122
- break;
123
- default:
124
- g_assert_not_reached();
125
- }
126
- write_vec_element(s, tcg_res, rd, pass, MO_64);
127
- }
128
- clear_vec_high(s, !is_scalar, rd);
129
+ g_assert_not_reached();
130
} else {
131
TCGv_i32 tcg_op = tcg_temp_new_i32();
132
TCGv_i32 tcg_res = tcg_temp_new_i32();
133
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
134
gen_helper_recpe_u32(tcg_res, tcg_op);
135
break;
136
case 0x3d: /* FRECPE */
137
- gen_helper_recpe_f32(tcg_res, tcg_op, fpst);
138
- break;
139
case 0x3f: /* FRECPX */
140
- gen_helper_frecpx_f32(tcg_res, tcg_op, fpst);
141
- break;
142
case 0x7d: /* FRSQRTE */
143
- gen_helper_rsqrte_f32(tcg_res, tcg_op, fpst);
144
- break;
145
default:
146
g_assert_not_reached();
147
}
148
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
88
}
149
}
89
}
150
}
90
151
91
-static uint32_t allwinner_sun8i_emac_get_desc(FrameDescriptor *desc,
152
-/* AdvSIMD scalar two reg misc
92
+static uint32_t allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
153
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
93
+ FrameDescriptor *desc,
154
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
94
uint32_t start_addr,
155
- * | 0 1 | U | 1 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
95
size_t min_size)
156
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
157
- */
158
-static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
159
-{
160
- int rd = extract32(insn, 0, 5);
161
- int rn = extract32(insn, 5, 5);
162
- int opcode = extract32(insn, 12, 5);
163
- int size = extract32(insn, 22, 2);
164
- bool u = extract32(insn, 29, 1);
165
-
166
- switch (opcode) {
167
- case 0xc ... 0xf:
168
- case 0x16 ... 0x1d:
169
- case 0x1f:
170
- /* Floating point: U, size[1] and opcode indicate operation;
171
- * size[0] indicates single or double precision.
172
- */
173
- opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
174
- size = extract32(size, 0, 1) ? 3 : 2;
175
- switch (opcode) {
176
- case 0x3d: /* FRECPE */
177
- case 0x3f: /* FRECPX */
178
- case 0x7d: /* FRSQRTE */
179
- if (!fp_access_check(s)) {
180
- return;
181
- }
182
- handle_2misc_reciprocal(s, opcode, true, u, true, size, rn, rd);
183
- return;
184
- case 0x1a: /* FCVTNS */
185
- case 0x1b: /* FCVTMS */
186
- case 0x3a: /* FCVTPS */
187
- case 0x3b: /* FCVTZS */
188
- case 0x5a: /* FCVTNU */
189
- case 0x5b: /* FCVTMU */
190
- case 0x7a: /* FCVTPU */
191
- case 0x7b: /* FCVTZU */
192
- case 0x1c: /* FCVTAS */
193
- case 0x5c: /* FCVTAU */
194
- case 0x56: /* FCVTXN, FCVTXN2 */
195
- case 0x1d: /* SCVTF */
196
- case 0x5d: /* UCVTF */
197
- case 0x2c: /* FCMGT (zero) */
198
- case 0x2d: /* FCMEQ (zero) */
199
- case 0x2e: /* FCMLT (zero) */
200
- case 0x6c: /* FCMGE (zero) */
201
- case 0x6d: /* FCMLE (zero) */
202
- default:
203
- unallocated_encoding(s);
204
- return;
205
- }
206
- break;
207
- default:
208
- case 0x3: /* USQADD / SUQADD */
209
- case 0x7: /* SQABS / SQNEG */
210
- case 0x8: /* CMGT, CMGE */
211
- case 0x9: /* CMEQ, CMLE */
212
- case 0xa: /* CMLT */
213
- case 0xb: /* ABS, NEG */
214
- case 0x12: /* SQXTUN */
215
- case 0x14: /* SQXTN, UQXTN */
216
- unallocated_encoding(s);
217
- return;
218
- }
219
- g_assert_not_reached();
220
-}
221
-
222
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
223
int size, int rn, int rd)
96
{
224
{
97
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_get_desc(FrameDescriptor *desc,
225
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
98
226
unallocated_encoding(s);
99
/* Note that the list is a cycle. Last entry points back to the head. */
227
return;
100
while (desc_addr != 0) {
228
}
101
- cpu_physical_memory_read(desc_addr, desc, sizeof(*desc));
229
- /* fall through */
102
+ dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
230
- case 0x3d: /* FRECPE */
103
231
- case 0x7d: /* FRSQRTE */
104
if ((desc->status & DESC_STATUS_CTL) &&
232
- if (size == 3 && !is_q) {
105
(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
233
- unallocated_encoding(s);
106
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_rx_desc(AwSun8iEmacState *s,
234
- return;
107
FrameDescriptor *desc,
235
- }
108
size_t min_size)
236
if (!fp_access_check(s)) {
109
{
237
return;
110
- return allwinner_sun8i_emac_get_desc(desc, s->rx_desc_curr, min_size);
238
}
111
+ return allwinner_sun8i_emac_get_desc(s, desc, s->rx_desc_curr, min_size);
239
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
240
case 0x2e: /* FCMLT (zero) */
241
case 0x6c: /* FCMGE (zero) */
242
case 0x6d: /* FCMLE (zero) */
243
+ case 0x3d: /* FRECPE */
244
+ case 0x7d: /* FRSQRTE */
245
unallocated_encoding(s);
246
return;
247
}
248
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
249
}
112
}
250
}
113
251
114
static uint32_t allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
252
-/* AdvSIMD [scalar] two register miscellaneous (FP16)
115
FrameDescriptor *desc,
253
- *
116
size_t min_size)
254
- * 31 30 29 28 27 24 23 22 21 17 16 12 11 10 9 5 4 0
117
{
255
- * +---+---+---+---+---------+---+-------------+--------+-----+------+------+
118
- return allwinner_sun8i_emac_get_desc(desc, s->tx_desc_head, min_size);
256
- * | 0 | Q | U | S | 1 1 1 0 | a | 1 1 1 1 0 0 | opcode | 1 0 | Rn | Rd |
119
+ return allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_head, min_size);
257
- * +---+---+---+---+---------+---+-------------+--------+-----+------+------+
120
}
258
- * mask: 1000 1111 0111 1110 0000 1100 0000 0000 0x8f7e 0c00
121
259
- * val: 0000 1110 0111 1000 0000 1000 0000 0000 0x0e78 0800
122
-static void allwinner_sun8i_emac_flush_desc(FrameDescriptor *desc,
260
- *
123
+static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,
261
- * This actually covers two groups where scalar access is governed by
124
+ FrameDescriptor *desc,
262
- * bit 28. A bunch of the instructions (float to integral) only exist
125
uint32_t phys_addr)
263
- * in the vector form and are un-allocated for the scalar decode. Also
126
{
264
- * in the scalar decode Q is always 1.
127
- cpu_physical_memory_write(phys_addr, desc, sizeof(*desc));
265
- */
128
+ dma_memory_write(&s->dma_as, phys_addr, desc, sizeof(*desc));
266
-static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
129
}
267
-{
130
268
- int fpop, opcode, a, u;
131
static bool allwinner_sun8i_emac_can_receive(NetClientState *nc)
269
- int rn, rd;
132
@@ -XXX,XX +XXX,XX @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState *nc,
270
- bool is_q;
133
<< RX_DESC_STATUS_FRM_LEN_SHIFT;
271
- bool is_scalar;
134
}
272
-
135
273
- int pass;
136
- cpu_physical_memory_write(desc.addr, buf, desc_bytes);
274
- TCGv_i32 tcg_rmode = NULL;
137
- allwinner_sun8i_emac_flush_desc(&desc, s->rx_desc_curr);
275
- TCGv_ptr tcg_fpstatus = NULL;
138
+ dma_memory_write(&s->dma_as, desc.addr, buf, desc_bytes);
276
- bool need_fpst = true;
139
+ allwinner_sun8i_emac_flush_desc(s, &desc, s->rx_desc_curr);
277
- int rmode = -1;
140
trace_allwinner_sun8i_emac_receive(s->rx_desc_curr, desc.addr,
278
-
141
desc_bytes);
279
- if (!dc_isar_feature(aa64_fp16, s)) {
142
280
- unallocated_encoding(s);
143
@@ -XXX,XX +XXX,XX @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState *nc,
281
- return;
144
bytes_left -= desc_bytes;
282
- }
145
283
-
146
/* Move to the next descriptor */
284
- rd = extract32(insn, 0, 5);
147
- s->rx_desc_curr = allwinner_sun8i_emac_next_desc(&desc, 64);
285
- rn = extract32(insn, 5, 5);
148
+ s->rx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 64);
286
-
149
if (!s->rx_desc_curr) {
287
- a = extract32(insn, 23, 1);
150
/* Not enough buffer space available */
288
- u = extract32(insn, 29, 1);
151
s->int_sta |= INT_STA_RX_BUF_UA;
289
- is_scalar = extract32(insn, 28, 1);
152
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
290
- is_q = extract32(insn, 30, 1);
153
desc.status |= TX_DESC_STATUS_LENGTH_ERR;
291
-
154
break;
292
- opcode = extract32(insn, 12, 5);
155
}
293
- fpop = deposit32(opcode, 5, 1, a);
156
- cpu_physical_memory_read(desc.addr, packet_buf + packet_bytes, bytes);
294
- fpop = deposit32(fpop, 6, 1, u);
157
+ dma_memory_read(&s->dma_as, desc.addr, packet_buf + packet_bytes, bytes);
295
-
158
packet_bytes += bytes;
296
- switch (fpop) {
159
desc.status &= ~DESC_STATUS_CTL;
297
- case 0x3d: /* FRECPE */
160
- allwinner_sun8i_emac_flush_desc(&desc, s->tx_desc_curr);
298
- case 0x3f: /* FRECPX */
161
+ allwinner_sun8i_emac_flush_desc(s, &desc, s->tx_desc_curr);
299
- break;
162
300
- case 0x7d: /* FRSQRTE */
163
/* After the last descriptor, send the packet */
301
- break;
164
if (desc.status2 & TX_DESC_STATUS2_LAST_DESC) {
302
- default:
165
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
303
- case 0x2f: /* FABS */
166
packet_bytes = 0;
304
- case 0x6f: /* FNEG */
167
transmitted++;
305
- case 0x7f: /* FSQRT (vector) */
168
}
306
- case 0x18: /* FRINTN */
169
- s->tx_desc_curr = allwinner_sun8i_emac_next_desc(&desc, 0);
307
- case 0x19: /* FRINTM */
170
+ s->tx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 0);
308
- case 0x38: /* FRINTP */
171
}
309
- case 0x39: /* FRINTZ */
172
310
- case 0x58: /* FRINTA */
173
/* Raise transmit completed interrupt */
311
- case 0x59: /* FRINTX */
174
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,
312
- case 0x79: /* FRINTI */
175
break;
313
- case 0x1d: /* SCVTF */
176
case REG_TX_CUR_BUF: /* Transmit Current Buffer */
314
- case 0x5d: /* UCVTF */
177
if (s->tx_desc_curr != 0) {
315
- case 0x1a: /* FCVTNS */
178
- cpu_physical_memory_read(s->tx_desc_curr, &desc, sizeof(desc));
316
- case 0x1b: /* FCVTMS */
179
+ dma_memory_read(&s->dma_as, s->tx_desc_curr, &desc, sizeof(desc));
317
- case 0x1c: /* FCVTAS */
180
value = desc.addr;
318
- case 0x3a: /* FCVTPS */
181
} else {
319
- case 0x3b: /* FCVTZS */
182
value = 0;
320
- case 0x5a: /* FCVTNU */
183
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,
321
- case 0x5b: /* FCVTMU */
184
break;
322
- case 0x5c: /* FCVTAU */
185
case REG_RX_CUR_BUF: /* Receive Current Buffer */
323
- case 0x7a: /* FCVTPU */
186
if (s->rx_desc_curr != 0) {
324
- case 0x7b: /* FCVTZU */
187
- cpu_physical_memory_read(s->rx_desc_curr, &desc, sizeof(desc));
325
- case 0x2c: /* FCMGT (zero) */
188
+ dma_memory_read(&s->dma_as, s->rx_desc_curr, &desc, sizeof(desc));
326
- case 0x2d: /* FCMEQ (zero) */
189
value = desc.addr;
327
- case 0x2e: /* FCMLT (zero) */
190
} else {
328
- case 0x6c: /* FCMGE (zero) */
191
value = 0;
329
- case 0x6d: /* FCMLE (zero) */
192
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_realize(DeviceState *dev, Error **errp)
330
- unallocated_encoding(s);
193
{
331
- return;
194
AwSun8iEmacState *s = AW_SUN8I_EMAC(dev);
332
- }
195
333
-
196
+ if (!s->dma_mr) {
334
-
197
+ error_setg(errp, TYPE_AW_SUN8I_EMAC " 'dma-memory' link not set");
335
- /* Check additional constraints for the scalar encoding */
198
+ return;
336
- if (is_scalar) {
199
+ }
337
- if (!is_q) {
200
+
338
- unallocated_encoding(s);
201
+ address_space_init(&s->dma_as, s->dma_mr, "emac-dma");
339
- return;
202
+
340
- }
203
qemu_macaddr_default_if_unset(&s->conf.macaddr);
341
- }
204
s->nic = qemu_new_nic(&net_allwinner_sun8i_emac_info, &s->conf,
342
-
205
object_get_typename(OBJECT(dev)), dev->id, s);
343
- if (!fp_access_check(s)) {
206
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_realize(DeviceState *dev, Error **errp)
344
- return;
207
static Property allwinner_sun8i_emac_properties[] = {
345
- }
208
DEFINE_NIC_PROPERTIES(AwSun8iEmacState, conf),
346
-
209
DEFINE_PROP_UINT8("phy-addr", AwSun8iEmacState, mii_phy_addr, 0),
347
- if (rmode >= 0 || need_fpst) {
210
+ DEFINE_PROP_LINK("dma-memory", AwSun8iEmacState, dma_mr,
348
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR_F16);
211
+ TYPE_MEMORY_REGION, MemoryRegion *),
349
- }
212
DEFINE_PROP_END_OF_LIST(),
350
-
351
- if (rmode >= 0) {
352
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
353
- }
354
-
355
- if (is_scalar) {
356
- TCGv_i32 tcg_op = read_fp_hreg(s, rn);
357
- TCGv_i32 tcg_res = tcg_temp_new_i32();
358
-
359
- switch (fpop) {
360
- case 0x3d: /* FRECPE */
361
- gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
362
- break;
363
- case 0x3f: /* FRECPX */
364
- gen_helper_frecpx_f16(tcg_res, tcg_op, tcg_fpstatus);
365
- break;
366
- case 0x7d: /* FRSQRTE */
367
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
368
- break;
369
- default:
370
- case 0x1a: /* FCVTNS */
371
- case 0x1b: /* FCVTMS */
372
- case 0x1c: /* FCVTAS */
373
- case 0x3a: /* FCVTPS */
374
- case 0x3b: /* FCVTZS */
375
- case 0x5a: /* FCVTNU */
376
- case 0x5b: /* FCVTMU */
377
- case 0x5c: /* FCVTAU */
378
- case 0x7a: /* FCVTPU */
379
- case 0x7b: /* FCVTZU */
380
- g_assert_not_reached();
381
- }
382
-
383
- /* limit any sign extension going on */
384
- tcg_gen_andi_i32(tcg_res, tcg_res, 0xffff);
385
- write_fp_sreg(s, rd, tcg_res);
386
- } else {
387
- for (pass = 0; pass < (is_q ? 8 : 4); pass++) {
388
- TCGv_i32 tcg_op = tcg_temp_new_i32();
389
- TCGv_i32 tcg_res = tcg_temp_new_i32();
390
-
391
- read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
392
-
393
- switch (fpop) {
394
- case 0x3d: /* FRECPE */
395
- gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
396
- break;
397
- case 0x7d: /* FRSQRTE */
398
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
399
- break;
400
- default:
401
- case 0x2f: /* FABS */
402
- case 0x6f: /* FNEG */
403
- case 0x7f: /* FSQRT */
404
- case 0x18: /* FRINTN */
405
- case 0x19: /* FRINTM */
406
- case 0x38: /* FRINTP */
407
- case 0x39: /* FRINTZ */
408
- case 0x58: /* FRINTA */
409
- case 0x79: /* FRINTI */
410
- case 0x59: /* FRINTX */
411
- case 0x1a: /* FCVTNS */
412
- case 0x1b: /* FCVTMS */
413
- case 0x1c: /* FCVTAS */
414
- case 0x3a: /* FCVTPS */
415
- case 0x3b: /* FCVTZS */
416
- case 0x5a: /* FCVTNU */
417
- case 0x5b: /* FCVTMU */
418
- case 0x5c: /* FCVTAU */
419
- case 0x7a: /* FCVTPU */
420
- case 0x7b: /* FCVTZU */
421
- g_assert_not_reached();
422
- }
423
-
424
- write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
425
- }
426
-
427
- clear_vec_high(s, is_q, rd);
428
- }
429
-
430
- if (tcg_rmode) {
431
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
432
- }
433
-}
434
-
435
/* C3.6 Data processing - SIMD, inc Crypto
436
*
437
* As the decode gets a little complex we are using a table based
438
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
439
static const AArch64DecodeTable data_proc_simd[] = {
440
/* pattern , mask , fn */
441
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
442
- { 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
443
- { 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
444
{ 0x00000000, 0x00000000, NULL }
213
};
445
};
214
446
215
--
447
--
216
2.20.1
448
2.34.1
217
218
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20241211163036.2297116-68-richard.henderson@linaro.org
5
Message-id: 20200815013145.539409-21-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/helper.h | 10 ++++++++
8
target/arm/helper.h | 3 +++
9
target/arm/translate-a64.c | 33 ++++++++++++++++++--------
9
target/arm/tcg/translate.h | 5 +++++
10
target/arm/vec_helper.c | 48 ++++++++++++++++++++++++++++++++++++++
10
target/arm/tcg/gengvec.c | 16 ++++++++++++++++
11
3 files changed, 81 insertions(+), 10 deletions(-)
11
target/arm/tcg/translate-neon.c | 4 ++--
12
target/arm/tcg/vec_helper.c | 22 ++++++++++++++++++++++
13
5 files changed, 48 insertions(+), 2 deletions(-)
12
14
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
15
diff --git a/target/arm/helper.h b/target/arm/helper.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
17
--- a/target/arm/helper.h
16
+++ b/target/arm/helper.h
18
+++ b/target/arm/helper.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_mls_idx_s, TCG_CALL_NO_RWG,
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_uminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
DEF_HELPER_FLAGS_5(gvec_mls_idx_d, TCG_CALL_NO_RWG,
20
DEF_HELPER_FLAGS_4(gvec_uminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
void, ptr, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(gvec_uminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
22
21
+DEF_HELPER_FLAGS_5(neon_sqdmulh_h, TCG_CALL_NO_RWG,
23
+DEF_HELPER_FLAGS_3(gvec_urecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
+ void, ptr, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(gvec_ursqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_5(neon_sqdmulh_s, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
25
+
31
#ifdef TARGET_AARCH64
26
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
27
#include "tcg/helper-a64.h"
33
#include "helper-sve.h"
28
#include "tcg/helper-sve.h"
34
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
35
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-a64.c
31
--- a/target/arm/tcg/translate.h
37
+++ b/target/arm/translate-a64.c
32
+++ b/target/arm/tcg/translate.h
38
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
33
@@ -XXX,XX +XXX,XX @@ void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
39
tcg_temp_free_ptr(fpst);
34
void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
35
uint32_t oprsz, uint32_t maxsz);
36
37
+void gen_gvec_urecpe(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
38
+ uint32_t opr_sz, uint32_t max_sz);
39
+void gen_gvec_ursqrte(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
+ uint32_t opr_sz, uint32_t max_sz);
41
+
42
/*
43
* Forward to the isar_feature_* tests given a DisasContext pointer.
44
*/
45
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/tcg/gengvec.c
48
+++ b/target/arm/tcg/gengvec.c
49
@@ -XXX,XX +XXX,XX @@ void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
50
uint64_t s_bit = 1ull << ((8 << vece) - 1);
51
tcg_gen_gvec_xori(vece, dofs, aofs, s_bit, oprsz, maxsz);
40
}
52
}
41
53
+
42
+/* Expand a 3-operand + qc + operation using an out-of-line helper. */
54
+void gen_gvec_urecpe(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
43
+static void gen_gvec_op3_qc(DisasContext *s, bool is_q, int rd, int rn,
55
+ uint32_t opr_sz, uint32_t max_sz)
44
+ int rm, gen_helper_gvec_3_ptr *fn)
45
+{
56
+{
46
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
57
+ assert(vece == MO_32);
47
+
58
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
48
+ tcg_gen_addi_ptr(qc_ptr, cpu_env, offsetof(CPUARMState, vfp.qc));
59
+ gen_helper_gvec_urecpe_s);
49
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
50
+ vec_full_reg_offset(s, rn),
51
+ vec_full_reg_offset(s, rm), qc_ptr,
52
+ is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
53
+ tcg_temp_free_ptr(qc_ptr);
54
+}
60
+}
55
+
61
+
56
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
62
+void gen_gvec_ursqrte(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
57
* than the 32 bit equivalent.
63
+ uint32_t opr_sz, uint32_t max_sz)
58
*/
64
+{
59
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
65
+ assert(vece == MO_32);
60
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mla, size);
66
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
61
}
67
+ gen_helper_gvec_ursqrte_s);
62
return;
68
+}
63
+ case 0x16: /* SQDMULH, SQRDMULH */
69
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
64
+ {
65
+ static gen_helper_gvec_3_ptr * const fns[2][2] = {
66
+ { gen_helper_neon_sqdmulh_h, gen_helper_neon_sqrdmulh_h },
67
+ { gen_helper_neon_sqdmulh_s, gen_helper_neon_sqrdmulh_s },
68
+ };
69
+ gen_gvec_op3_qc(s, is_q, rd, rn, rm, fns[size - 1][u]);
70
+ }
71
+ return;
72
case 0x11:
73
if (!u) { /* CMTST */
74
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_cmtst, size);
75
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
76
genenvfn = fns[size][u];
77
break;
78
}
79
- case 0x16: /* SQDMULH, SQRDMULH */
80
- {
81
- static NeonGenTwoOpEnvFn * const fns[2][2] = {
82
- { gen_helper_neon_qdmulh_s16, gen_helper_neon_qrdmulh_s16 },
83
- { gen_helper_neon_qdmulh_s32, gen_helper_neon_qrdmulh_s32 },
84
- };
85
- assert(size == 1 || size == 2);
86
- genenvfn = fns[size - 1][u];
87
- break;
88
- }
89
default:
90
g_assert_not_reached();
91
}
92
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
93
index XXXXXXX..XXXXXXX 100644
70
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/vec_helper.c
71
--- a/target/arm/tcg/translate-neon.c
95
+++ b/target/arm/vec_helper.c
72
+++ b/target/arm/tcg/translate-neon.c
96
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
73
@@ -XXX,XX +XXX,XX @@ static bool trans_VRECPE(DisasContext *s, arg_2misc *a)
74
if (a->size != 2) {
75
return false;
76
}
77
- return do_2misc(s, a, gen_helper_recpe_u32);
78
+ return do_2misc_vec(s, a, gen_gvec_urecpe);
79
}
80
81
static bool trans_VRSQRTE(DisasContext *s, arg_2misc *a)
82
@@ -XXX,XX +XXX,XX @@ static bool trans_VRSQRTE(DisasContext *s, arg_2misc *a)
83
if (a->size != 2) {
84
return false;
85
}
86
- return do_2misc(s, a, gen_helper_rsqrte_u32);
87
+ return do_2misc_vec(s, a, gen_gvec_ursqrte);
88
}
89
90
#define WRAP_1OP_ENV_FN(WRAPNAME, FUNC) \
91
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/tcg/vec_helper.c
94
+++ b/target/arm/tcg/vec_helper.c
95
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_rbit_b)(void *vd, void *vn, uint32_t desc)
96
}
97
clear_tail(d, opr_sz, simd_maxsz(desc));
97
clear_tail(d, opr_sz, simd_maxsz(desc));
98
}
98
}
99
99
+
100
+void HELPER(neon_sqdmulh_h)(void *vd, void *vn, void *vm,
100
+void HELPER(gvec_urecpe_s)(void *vd, void *vn, uint32_t desc)
101
+ void *vq, uint32_t desc)
102
+{
101
+{
103
+ intptr_t i, opr_sz = simd_oprsz(desc);
102
+ intptr_t i, opr_sz = simd_oprsz(desc);
104
+ int16_t *d = vd, *n = vn, *m = vm;
103
+ uint32_t *d = vd, *n = vn;
105
+
104
+
106
+ for (i = 0; i < opr_sz / 2; ++i) {
105
+ for (i = 0; i < opr_sz / 4; ++i) {
107
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, false, vq);
106
+ d[i] = helper_recpe_u32(n[i]);
108
+ }
107
+ }
109
+ clear_tail(d, opr_sz, simd_maxsz(desc));
108
+ clear_tail(d, opr_sz, simd_maxsz(desc));
110
+}
109
+}
111
+
110
+
112
+void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
111
+void HELPER(gvec_ursqrte_s)(void *vd, void *vn, uint32_t desc)
113
+ void *vq, uint32_t desc)
114
+{
112
+{
115
+ intptr_t i, opr_sz = simd_oprsz(desc);
113
+ intptr_t i, opr_sz = simd_oprsz(desc);
116
+ int16_t *d = vd, *n = vn, *m = vm;
114
+ uint32_t *d = vd, *n = vn;
117
+
115
+
118
+ for (i = 0; i < opr_sz / 2; ++i) {
116
+ for (i = 0; i < opr_sz / 4; ++i) {
119
+ d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, true, vq);
117
+ d[i] = helper_rsqrte_u32(n[i]);
120
+ }
118
+ }
121
+ clear_tail(d, opr_sz, simd_maxsz(desc));
119
+ clear_tail(d, opr_sz, simd_maxsz(desc));
122
+}
120
+}
123
+
124
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
125
static int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
126
bool neg, bool round, uint32_t *sat)
127
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
128
clear_tail(d, opr_sz, simd_maxsz(desc));
129
}
130
131
+void HELPER(neon_sqdmulh_s)(void *vd, void *vn, void *vm,
132
+ void *vq, uint32_t desc)
133
+{
134
+ intptr_t i, opr_sz = simd_oprsz(desc);
135
+ int32_t *d = vd, *n = vn, *m = vm;
136
+
137
+ for (i = 0; i < opr_sz / 4; ++i) {
138
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, false, vq);
139
+ }
140
+ clear_tail(d, opr_sz, simd_maxsz(desc));
141
+}
142
+
143
+void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
144
+ void *vq, uint32_t desc)
145
+{
146
+ intptr_t i, opr_sz = simd_oprsz(desc);
147
+ int32_t *d = vd, *n = vn, *m = vm;
148
+
149
+ for (i = 0; i < opr_sz / 4; ++i) {
150
+ d[i] = do_sqrdmlah_s(n[i], m[i], 0, false, true, vq);
151
+ }
152
+ clear_tail(d, opr_sz, simd_maxsz(desc));
153
+}
154
+
155
/* Integer 8 and 16-bit dot-product.
156
*
157
* Note that for the loops herein, host endianness does not matter
158
--
121
--
159
2.20.1
122
2.34.1
160
161
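For readers less familiar with the gvec helper shape used in the vec_helper.c hunks above, here is a minimal sketch of the pattern; the per-element operation is a placeholder, not code from either patch:

    /* Hypothetical element-wise helper following the pattern above:
     * simd_oprsz(desc) gives the number of bytes to operate on, and
     * clear_tail() zeroes everything between that and simd_maxsz(desc)
     * so the unused tail of the vector register is cleared.
     */
    void HELPER(gvec_example_s)(void *vd, void *vn, uint32_t desc)
    {
        intptr_t i, opr_sz = simd_oprsz(desc);
        uint32_t *d = vd, *n = vn;

        for (i = 0; i < opr_sz / 4; ++i) {
            d[i] = n[i] + 1;    /* placeholder per-element operation */
        }
        clear_tail(d, opr_sz, simd_maxsz(desc));
    }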
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_2misc_reciprocal as these were the last
4
insns decoded by that function.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-69-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 3 +
12
target/arm/tcg/translate-a64.c | 139 ++-------------------------------
13
2 files changed, 8 insertions(+), 134 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRECPE_v 0.00 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
20
FRSQRTE_v 0.10 1110 111 11001 11011 0 ..... ..... @qrr_h
21
FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
22
23
+URECPE_v 0.00 1110 101 00001 11001 0 ..... ..... @qrr_s
24
+URSQRTE_v 0.10 1110 101 00001 11001 0 ..... ..... @qrr_s
25
+
26
&fcvt_q rd rn esz q shift
27
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
28
&fcvt_q esz=1 shift=%fcvt_f_sh_h
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
34
TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
35
TRANS(REV16_v, do_gvec_fn2, a, gen_gvec_rev16)
36
TRANS(REV32_v, do_gvec_fn2, a, gen_gvec_rev32)
37
+TRANS(URECPE_v, do_gvec_fn2, a, gen_gvec_urecpe)
38
+TRANS(URSQRTE_v, do_gvec_fn2, a, gen_gvec_ursqrte)
39
40
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
41
{
42
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
43
};
44
TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
45
46
-static void handle_2misc_reciprocal(DisasContext *s, int opcode,
47
- bool is_scalar, bool is_u, bool is_q,
48
- int size, int rn, int rd)
49
-{
50
- bool is_double = (size == 3);
51
-
52
- if (is_double) {
53
- g_assert_not_reached();
54
- } else {
55
- TCGv_i32 tcg_op = tcg_temp_new_i32();
56
- TCGv_i32 tcg_res = tcg_temp_new_i32();
57
- int pass, maxpasses;
58
-
59
- if (is_scalar) {
60
- maxpasses = 1;
61
- } else {
62
- maxpasses = is_q ? 4 : 2;
63
- }
64
-
65
- for (pass = 0; pass < maxpasses; pass++) {
66
- read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
67
-
68
- switch (opcode) {
69
- case 0x3c: /* URECPE */
70
- gen_helper_recpe_u32(tcg_res, tcg_op);
71
- break;
72
- case 0x3d: /* FRECPE */
73
- case 0x3f: /* FRECPX */
74
- case 0x7d: /* FRSQRTE */
75
- default:
76
- g_assert_not_reached();
77
- }
78
-
79
- if (is_scalar) {
80
- write_fp_sreg(s, rd, tcg_res);
81
- } else {
82
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
83
- }
84
- }
85
- if (!is_scalar) {
86
- clear_vec_high(s, is_q, rd);
87
- }
88
- }
89
-}
90
-
91
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
92
int size, int rn, int rd)
93
{
94
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
95
bool is_q = extract32(insn, 30, 1);
96
int rn = extract32(insn, 5, 5);
97
int rd = extract32(insn, 0, 5);
98
- bool need_fpstatus = false;
99
- int rmode = -1;
100
- TCGv_i32 tcg_rmode;
101
- TCGv_ptr tcg_fpstatus;
102
103
switch (opcode) {
104
case 0xc ... 0xf:
105
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
106
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
107
size = is_double ? 3 : 2;
108
switch (opcode) {
109
- case 0x3c: /* URECPE */
110
- if (size == 3) {
111
- unallocated_encoding(s);
112
- return;
113
- }
114
- if (!fp_access_check(s)) {
115
- return;
116
- }
117
- handle_2misc_reciprocal(s, opcode, false, u, is_q, size, rn, rd);
118
- return;
119
case 0x17: /* FCVTL, FCVTL2 */
120
if (!fp_access_check(s)) {
121
return;
122
}
123
handle_2misc_widening(s, opcode, is_q, size, rn, rd);
124
return;
125
- case 0x7c: /* URSQRTE */
126
- if (size == 3) {
127
- unallocated_encoding(s);
128
- return;
129
- }
130
- break;
131
default:
132
case 0x16: /* FCVTN, FCVTN2 */
133
case 0x36: /* BFCVTN, BFCVTN2 */
134
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
135
case 0x6d: /* FCMLE (zero) */
136
case 0x3d: /* FRECPE */
137
case 0x7d: /* FRSQRTE */
138
+ case 0x3c: /* URECPE */
139
+ case 0x7c: /* URSQRTE */
140
unallocated_encoding(s);
141
return;
142
}
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
144
unallocated_encoding(s);
145
return;
146
}
147
-
148
- if (!fp_access_check(s)) {
149
- return;
150
- }
151
-
152
- if (need_fpstatus || rmode >= 0) {
153
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
154
- } else {
155
- tcg_fpstatus = NULL;
156
- }
157
- if (rmode >= 0) {
158
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
159
- } else {
160
- tcg_rmode = NULL;
161
- }
162
-
163
- {
164
- int pass;
165
-
166
- assert(size == 2);
167
- for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
168
- TCGv_i32 tcg_op = tcg_temp_new_i32();
169
- TCGv_i32 tcg_res = tcg_temp_new_i32();
170
-
171
- read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
172
-
173
- {
174
- /* Special cases for 32 bit elements */
175
- switch (opcode) {
176
- case 0x7c: /* URSQRTE */
177
- gen_helper_rsqrte_u32(tcg_res, tcg_op);
178
- break;
179
- default:
180
- case 0x7: /* SQABS, SQNEG */
181
- case 0x2f: /* FABS */
182
- case 0x6f: /* FNEG */
183
- case 0x7f: /* FSQRT */
184
- case 0x18: /* FRINTN */
185
- case 0x19: /* FRINTM */
186
- case 0x38: /* FRINTP */
187
- case 0x39: /* FRINTZ */
188
- case 0x58: /* FRINTA */
189
- case 0x79: /* FRINTI */
190
- case 0x59: /* FRINTX */
191
- case 0x1e: /* FRINT32Z */
192
- case 0x5e: /* FRINT32X */
193
- case 0x1f: /* FRINT64Z */
194
- case 0x5f: /* FRINT64X */
195
- case 0x1a: /* FCVTNS */
196
- case 0x1b: /* FCVTMS */
197
- case 0x1c: /* FCVTAS */
198
- case 0x3a: /* FCVTPS */
199
- case 0x3b: /* FCVTZS */
200
- case 0x5a: /* FCVTNU */
201
- case 0x5b: /* FCVTMU */
202
- case 0x5c: /* FCVTAU */
203
- case 0x7a: /* FCVTPU */
204
- case 0x7b: /* FCVTZU */
205
- g_assert_not_reached();
206
- }
207
- }
208
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
209
- }
210
- }
211
- clear_vec_high(s, is_q, rd);
212
-
213
- if (tcg_rmode) {
214
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
215
- }
216
+ g_assert_not_reached();
217
}
218
219
/* C3.6 Data processing - SIMD, inc Crypto
220
--
221
2.34.1
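As context for the TRANS() lines added above: each one is a thin decodetree hook. Roughly, as a sketch only (the real macro lives in target/arm/tcg/translate.h and the argument struct type name is generated by decodetree):

    static bool trans_URECPE_v(DisasContext *s, arg_qrr_e *a)
    {
        return do_gvec_fn2(s, a, gen_gvec_urecpe);
    }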
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
To have a better idea of how big the region is where the offset
3
Remove lookup_disas_fn, handle_2misc_widening,
4
belongs, display the value with the width of the region size
4
disas_simd_two_reg_misc, disas_data_proc_simd,
5
(i.e. a region of 0x1000 bytes uses 0x000 format).
5
disas_data_proc_simd_fp, disas_a64_legacy, as
6
this is the final insn to be converted.
6
7
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200812190206.31595-4-f4bug@amsat.org
10
Message-id: 20241211163036.2297116-70-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
include/hw/misc/unimp.h | 1 +
13
target/arm/tcg/a64.decode | 2 +
13
hw/misc/unimp.c | 10 ++++++----
14
target/arm/tcg/translate-a64.c | 202 +++------------------------------
14
2 files changed, 7 insertions(+), 4 deletions(-)
15
2 files changed, 18 insertions(+), 186 deletions(-)
15
16
16
diff --git a/include/hw/misc/unimp.h b/include/hw/misc/unimp.h
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/misc/unimp.h
19
--- a/target/arm/tcg/a64.decode
19
+++ b/include/hw/misc/unimp.h
20
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
21
typedef struct {
22
URECPE_v 0.00 1110 101 00001 11001 0 ..... ..... @qrr_s
22
SysBusDevice parent_obj;
23
URSQRTE_v 0.10 1110 101 00001 11001 0 ..... ..... @qrr_s
23
MemoryRegion iomem;
24
24
+ unsigned offset_fmt_width;
25
+FCVTL_v 0.00 1110 0.1 00001 01111 0 ..... ..... @qrr_sd
25
char *name;
26
+
26
uint64_t size;
27
&fcvt_q rd rn esz q shift
27
} UnimplementedDeviceState;
28
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
28
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
29
&fcvt_q esz=1 shift=%fcvt_f_sh_h
30
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/misc/unimp.c
32
--- a/target/arm/tcg/translate-a64.c
31
+++ b/hw/misc/unimp.c
33
+++ b/target/arm/tcg/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
34
@@ -XXX,XX +XXX,XX @@ static inline void gen_check_sp_alignment(DisasContext *s)
33
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
35
*/
34
35
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device read "
36
- "(size %d, offset 0x%" HWADDR_PRIx ")\n",
37
- s->name, size, offset);
38
+ "(size %d, offset 0x%0*" HWADDR_PRIx ")\n",
39
+ s->name, size, s->offset_fmt_width, offset);
40
return 0;
41
}
36
}
42
37
43
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
38
-/*
44
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
39
- * This provides a simple table based table lookup decoder. It is
45
40
- * intended to be used when the relevant bits for decode are too
46
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
41
- * awkwardly placed and switch/if based logic would be confusing and
47
- "(size %d, offset 0x%" HWADDR_PRIx
42
- * deeply nested. Since it's a linear search through the table, tables
48
+ "(size %d, offset 0x%0*" HWADDR_PRIx
43
- * should be kept small.
49
", value 0x%0*" PRIx64 ")\n",
44
- *
50
- s->name, size, offset, size << 1, value);
45
- * It returns the first handler where insn & mask == pattern, or
51
+ s->name, size, s->offset_fmt_width, offset, size << 1, value);
46
- * NULL if there is no match.
47
- * The table is terminated by an empty mask (i.e. 0)
48
- */
49
-static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
50
- uint32_t insn)
51
-{
52
- const AArch64DecodeTable *tptr = table;
53
-
54
- while (tptr->mask) {
55
- if ((insn & tptr->mask) == tptr->pattern) {
56
- return tptr->disas_fn;
57
- }
58
- tptr++;
59
- }
60
- return NULL;
61
-}
62
-
63
/*
64
* The instruction disassembly implemented here matches
65
* the instruction encoding classifications in chapter C4
66
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
67
};
68
TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
69
70
-static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
71
- int size, int rn, int rd)
72
+static bool trans_FCVTL_v(DisasContext *s, arg_qrr_e *a)
73
{
74
/* Handle 2-reg-misc ops which are widening (so each size element
75
* in the source becomes a 2*size element in the destination.
76
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
77
*/
78
int pass;
79
80
- if (size == 3) {
81
+ if (!fp_access_check(s)) {
82
+ return true;
83
+ }
84
+
85
+ if (a->esz == MO_64) {
86
/* 32 -> 64 bit fp conversion */
87
TCGv_i64 tcg_res[2];
88
- int srcelt = is_q ? 2 : 0;
89
+ TCGv_i32 tcg_op = tcg_temp_new_i32();
90
+ int srcelt = a->q ? 2 : 0;
91
92
for (pass = 0; pass < 2; pass++) {
93
- TCGv_i32 tcg_op = tcg_temp_new_i32();
94
tcg_res[pass] = tcg_temp_new_i64();
95
-
96
- read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_32);
97
+ read_vec_element_i32(s, tcg_op, a->rn, srcelt + pass, MO_32);
98
gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, tcg_env);
99
}
100
for (pass = 0; pass < 2; pass++) {
101
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
102
+ write_vec_element(s, tcg_res[pass], a->rd, pass, MO_64);
103
}
104
} else {
105
/* 16 -> 32 bit fp conversion */
106
- int srcelt = is_q ? 4 : 0;
107
+ int srcelt = a->q ? 4 : 0;
108
TCGv_i32 tcg_res[4];
109
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
110
TCGv_i32 ahp = get_ahp_flag();
111
112
for (pass = 0; pass < 4; pass++) {
113
tcg_res[pass] = tcg_temp_new_i32();
114
-
115
- read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
116
+ read_vec_element_i32(s, tcg_res[pass], a->rn, srcelt + pass, MO_16);
117
gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
118
fpst, ahp);
119
}
120
for (pass = 0; pass < 4; pass++) {
121
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
122
+ write_vec_element_i32(s, tcg_res[pass], a->rd, pass, MO_32);
123
}
124
}
125
-}
126
-
127
-/* AdvSIMD two reg misc
128
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
129
- * +---+---+---+-----------+------+-----------+--------+-----+------+------+
130
- * | 0 | Q | U | 0 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
131
- * +---+---+---+-----------+------+-----------+--------+-----+------+------+
132
- */
133
-static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
134
-{
135
- int size = extract32(insn, 22, 2);
136
- int opcode = extract32(insn, 12, 5);
137
- bool u = extract32(insn, 29, 1);
138
- bool is_q = extract32(insn, 30, 1);
139
- int rn = extract32(insn, 5, 5);
140
- int rd = extract32(insn, 0, 5);
141
-
142
- switch (opcode) {
143
- case 0xc ... 0xf:
144
- case 0x16 ... 0x1f:
145
- {
146
- /* Floating point: U, size[1] and opcode indicate operation;
147
- * size[0] indicates single or double precision.
148
- */
149
- int is_double = extract32(size, 0, 1);
150
- opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
151
- size = is_double ? 3 : 2;
152
- switch (opcode) {
153
- case 0x17: /* FCVTL, FCVTL2 */
154
- if (!fp_access_check(s)) {
155
- return;
156
- }
157
- handle_2misc_widening(s, opcode, is_q, size, rn, rd);
158
- return;
159
- default:
160
- case 0x16: /* FCVTN, FCVTN2 */
161
- case 0x36: /* BFCVTN, BFCVTN2 */
162
- case 0x56: /* FCVTXN, FCVTXN2 */
163
- case 0x2f: /* FABS */
164
- case 0x6f: /* FNEG */
165
- case 0x7f: /* FSQRT */
166
- case 0x18: /* FRINTN */
167
- case 0x19: /* FRINTM */
168
- case 0x38: /* FRINTP */
169
- case 0x39: /* FRINTZ */
170
- case 0x59: /* FRINTX */
171
- case 0x79: /* FRINTI */
172
- case 0x58: /* FRINTA */
173
- case 0x1e: /* FRINT32Z */
174
- case 0x1f: /* FRINT64Z */
175
- case 0x5e: /* FRINT32X */
176
- case 0x5f: /* FRINT64X */
177
- case 0x1d: /* SCVTF */
178
- case 0x5d: /* UCVTF */
179
- case 0x1a: /* FCVTNS */
180
- case 0x1b: /* FCVTMS */
181
- case 0x3a: /* FCVTPS */
182
- case 0x3b: /* FCVTZS */
183
- case 0x5a: /* FCVTNU */
184
- case 0x5b: /* FCVTMU */
185
- case 0x7a: /* FCVTPU */
186
- case 0x7b: /* FCVTZU */
187
- case 0x5c: /* FCVTAU */
188
- case 0x1c: /* FCVTAS */
189
- case 0x2c: /* FCMGT (zero) */
190
- case 0x2d: /* FCMEQ (zero) */
191
- case 0x2e: /* FCMLT (zero) */
192
- case 0x6c: /* FCMGE (zero) */
193
- case 0x6d: /* FCMLE (zero) */
194
- case 0x3d: /* FRECPE */
195
- case 0x7d: /* FRSQRTE */
196
- case 0x3c: /* URECPE */
197
- case 0x7c: /* URSQRTE */
198
- unallocated_encoding(s);
199
- return;
200
- }
201
- break;
202
- }
203
- default:
204
- case 0x0: /* REV64, REV32 */
205
- case 0x1: /* REV16 */
206
- case 0x2: /* SADDLP, UADDLP */
207
- case 0x3: /* SUQADD, USQADD */
208
- case 0x4: /* CLS, CLZ */
209
- case 0x5: /* CNT, NOT, RBIT */
210
- case 0x6: /* SADALP, UADALP */
211
- case 0x7: /* SQABS, SQNEG */
212
- case 0x8: /* CMGT, CMGE */
213
- case 0x9: /* CMEQ, CMLE */
214
- case 0xa: /* CMLT */
215
- case 0xb: /* ABS, NEG */
216
- case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
217
- case 0x13: /* SHLL, SHLL2 */
218
- case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
219
- unallocated_encoding(s);
220
- return;
221
- }
222
- g_assert_not_reached();
223
-}
224
-
225
-/* C3.6 Data processing - SIMD, inc Crypto
226
- *
227
- * As the decode gets a little complex we are using a table based
228
- * approach for this part of the decode.
229
- */
230
-static const AArch64DecodeTable data_proc_simd[] = {
231
- /* pattern , mask , fn */
232
- { 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
233
- { 0x00000000, 0x00000000, NULL }
234
-};
235
-
236
-static void disas_data_proc_simd(DisasContext *s, uint32_t insn)
237
-{
238
- /* Note that this is called with all non-FP cases from
239
- * table C3-6 so it must UNDEF for entries not specifically
240
- * allocated to instructions in that table.
241
- */
242
- AArch64DecodeFn *fn = lookup_disas_fn(&data_proc_simd[0], insn);
243
- if (fn) {
244
- fn(s, insn);
245
- } else {
246
- unallocated_encoding(s);
247
- }
248
-}
249
-
250
-/* C3.6 Data processing - SIMD and floating point */
251
-static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
252
-{
253
- if (extract32(insn, 28, 1) == 1 && extract32(insn, 30, 1) == 0) {
254
- unallocated_encoding(s); /* in decodetree */
255
- } else {
256
- /* SIMD, including crypto */
257
- disas_data_proc_simd(s, insn);
258
- }
259
+ clear_vec_high(s, true, a->rd);
260
+ return true;
52
}
261
}
53
262
54
static const MemoryRegionOps unimp_ops = {
263
static bool trans_OK(DisasContext *s, arg_OK *a)
55
@@ -XXX,XX +XXX,XX @@ static void unimp_realize(DeviceState *dev, Error **errp)
264
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
56
return;
265
return false;
266
}
267
268
-/* C3.1 A64 instruction index by encoding */
269
-static void disas_a64_legacy(DisasContext *s, uint32_t insn)
270
-{
271
- switch (extract32(insn, 25, 4)) {
272
- case 0x7:
273
- case 0xf: /* Data processing - SIMD and floating point */
274
- disas_data_proc_simd_fp(s, insn);
275
- break;
276
- default:
277
- unallocated_encoding(s);
278
- break;
279
- }
280
-}
281
-
282
static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
283
CPUState *cpu)
284
{
285
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
286
if (!disas_a64(s, insn) &&
287
!disas_sme(s, insn) &&
288
!disas_sve(s, insn)) {
289
- disas_a64_legacy(s, insn);
290
+ unallocated_encoding(s);
57
}
291
}
58
292
59
+ s->offset_fmt_width = DIV_ROUND_UP(64 - clz64(s->size - 1), 4);
293
/*
60
+
61
memory_region_init_io(&s->iomem, OBJECT(s), &unimp_ops, s,
62
s->name, s->size);
63
sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem);
64
--
294
--
65
2.20.1
295
2.34.1
66
67
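The offset_fmt_width calculation in the hw/misc/unimp hunk above can be checked in isolation. A standalone sketch, using __builtin_clzll in place of QEMU's clz64 and with an extra guard for a 1-byte region:

    #include <stdint.h>
    #include <stdio.h>

    /* Hex digits needed to print any offset inside a region of this size. */
    static unsigned offset_fmt_width(uint64_t region_size)
    {
        if (region_size <= 1) {
            return 1;                                   /* avoid clz(0) */
        }
        unsigned bits = 64 - __builtin_clzll(region_size - 1);
        return (bits + 3) / 4;                          /* DIV_ROUND_UP(bits, 4) */
    }

    int main(void)
    {
        /* A 0x1000-byte region needs 3 digits: offsets log as 0x000..0xfff */
        printf("%u\n", offset_fmt_width(0x1000));       /* prints 3 */
        return 0;
    }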
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
According to AArch64.TagCheckFault, none of the other ISS values are
3
Softfloat has native support for round-to-odd. Use it.
4
provided, so we do not need to go so far as merge_syn_data_abort.
5
But we were missing the WnR bit.
6
4
7
Tested-by: Andrey Konovalov <andreyknvl@google.com>
8
Reported-by: Andrey Konovalov <andreyknvl@google.com>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20200813200816.3037186-3-richard.henderson@linaro.org
6
Message-id: 20241206031428.78634-1-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
9
---
14
target/arm/mte_helper.c | 9 +++++----
10
target/arm/tcg/helper-a64.c | 18 ++++--------------
15
1 file changed, 5 insertions(+), 4 deletions(-)
11
1 file changed, 4 insertions(+), 14 deletions(-)
16
12
17
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
13
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/mte_helper.c
15
--- a/target/arm/tcg/helper-a64.c
20
+++ b/target/arm/mte_helper.c
16
+++ b/target/arm/tcg/helper-a64.c
21
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
17
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
18
19
float32 HELPER(fcvtx_f64_to_f32)(float64 a, CPUARMState *env)
22
{
20
{
23
int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
21
- /* Von Neumann rounding is implemented by using round-to-zero
24
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
22
- * and then setting the LSB of the result if Inexact was raised.
25
- int el, reg_el, tcf, select;
23
- */
26
+ int el, reg_el, tcf, select, is_write, syn;
24
float32 r;
27
uint64_t sctlr;
25
float_status *fpst = &env->vfp.fp_status;
28
26
- float_status tstat = *fpst;
29
reg_el = regime_el(env, arm_mmu_idx);
27
- int exflags;
30
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
28
+ int old = get_float_rounding_mode(fpst);
31
*/
29
32
cpu_restore_state(env_cpu(env), ra, true);
30
- set_float_rounding_mode(float_round_to_zero, &tstat);
33
env->exception.vaddress = dirty_ptr;
31
- set_float_exception_flags(0, &tstat);
34
- raise_exception(env, EXCP_DATA_ABORT,
32
- r = float64_to_float32(a, &tstat);
35
- syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, 0, 0x11),
33
- exflags = get_float_exception_flags(&tstat);
36
- exception_target_el(env));
34
- if (exflags & float_flag_inexact) {
37
+
35
- r = make_float32(float32_val(r) | 1);
38
+ is_write = FIELD_EX32(desc, MTEDESC, WRITE);
36
- }
39
+ syn = syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, is_write, 0x11);
37
- exflags |= get_float_exception_flags(fpst);
40
+ raise_exception(env, EXCP_DATA_ABORT, syn, exception_target_el(env));
38
- set_float_exception_flags(exflags, fpst);
41
/* noreturn, but fall through to the assert anyway */
39
+ set_float_rounding_mode(float_round_to_odd, fpst);
42
40
+ r = float64_to_float32(a, fpst);
43
case 0:
41
+ set_float_rounding_mode(old, fpst);
42
return r;
43
}
44
44
--
45
--
45
2.20.1
46
2.34.1
46
47
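A minimal sketch of the round-to-odd pattern the helper-a64.c hunk above switches to, using QEMU's softfloat API; exception-flag handling and the surrounding helper boilerplate are omitted:

    #include "fpu/softfloat.h"

    /* float64 -> float32 with von Neumann (round-to-odd) rounding,
     * saving and restoring the caller's rounding mode around the call.
     */
    static float32 fcvtx_f64_to_f32_sketch(float64 a, float_status *fpst)
    {
        int old = get_float_rounding_mode(fpst);
        float32 r;

        set_float_rounding_mode(float_round_to_odd, fpst);
        r = float64_to_float32(a, fpst);
        set_float_rounding_mode(old, fpst);
        return r;
    }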
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
2
2
3
To quickly notice the access size, display the value with the
3
www.orangepi.org does not support https; it's expected to stick to http.
4
width of the access (i.e. 16-bit access is displayed 0x0000,
5
while 8-bit access 0x00).
6
4
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
9
Message-id: 20200812190206.31595-3-f4bug@amsat.org
7
Message-id: 20241206192254.3889131-2-pierrick.bouvier@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
hw/misc/unimp.c | 4 ++--
10
docs/system/arm/orangepi.rst | 4 ++--
13
1 file changed, 2 insertions(+), 2 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
14
12
15
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
13
diff --git a/docs/system/arm/orangepi.rst b/docs/system/arm/orangepi.rst
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/unimp.c
15
--- a/docs/system/arm/orangepi.rst
18
+++ b/hw/misc/unimp.c
16
+++ b/docs/system/arm/orangepi.rst
19
@@ -XXX,XX +XXX,XX @@ static void unimp_write(void *opaque, hwaddr offset,
17
@@ -XXX,XX +XXX,XX @@ Orange Pi PC images
20
18
Note that the mainline kernel does not have a root filesystem. You may provide it
21
qemu_log_mask(LOG_UNIMP, "%s: unimplemented device write "
19
with an official Orange Pi PC image from the official website:
22
"(size %d, offset 0x%" HWADDR_PRIx
20
23
- ", value 0x%" PRIx64 ")\n",
21
- http://www.orangepi.org/downloadresources/
24
- s->name, size, offset, value);
22
+ http://www.orangepi.org/html/serviceAndSupport/index.html
25
+ ", value 0x%0*" PRIx64 ")\n",
23
26
+ s->name, size, offset, size << 1, value);
24
Another possibility is to run an Armbian image for Orange Pi PC which
27
}
25
can be downloaded from:
28
26
@@ -XXX,XX +XXX,XX @@ including the Orange Pi PC. NetBSD 9.0 is known to work best for the Orange Pi P
29
static const MemoryRegionOps unimp_ops = {
27
board and provides a fully working system with serial console, networking and storage.
28
For the Orange Pi PC machine, get the 'evbarm-earmv7hf' based image from:
29
30
- https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.0/evbarm-earmv7hf/binary/gzimg/armv7.img.gz
31
+ https://archive.netbsd.org/pub/NetBSD-archive/NetBSD-9.0/evbarm-earmv7hf/binary/gzimg/armv7.img.gz
32
33
The image requires manually installing U-Boot in the image. Build U-Boot with
34
the orangepi_pc_defconfig configuration as described in the previous section.
30
--
35
--
31
2.20.1
36
2.34.1
32
33
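The '%0*' idiom from the unimp_write hunk above, demonstrated standalone: the '*' consumes the field width from the argument list, so passing 'size << 1' prints two hex digits per byte accessed:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t value = 0xab;
        int size = 2;       /* a 16-bit access */

        /* width argument is size << 1 == 4, so this prints "value 0x00ab" */
        printf("value 0x%0*" PRIx64 "\n", size << 1, value);
        return 0;
    }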
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
2
2
3
The clock's canonical name is set in device_set_realized (see the block
3
Reviewed-by: Cédric Le Goater <clg@redhat.com>
4
added to hw/core/qdev.c in commit 0e6934f264).
4
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
5
If we connect a clock after the device is realized, this code is
5
Message-id: 20241206192254.3889131-3-pierrick.bouvier@linaro.org
6
not executed. This is currently not a problem as this name is only
7
used for trace events; however, this disrupts tracing.
8
9
Fix by calling qdev_connect_clock_in() before realizing.
10
11
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
13
Message-id: 20200803105647.22223-3-f4bug@amsat.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
7
---
16
hw/arm/xilinx_zynq.c | 18 +++++++++---------
8
docs/system/arm/fby35.rst | 5 +++++
17
1 file changed, 9 insertions(+), 9 deletions(-)
9
1 file changed, 5 insertions(+)
18
10
19
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
11
diff --git a/docs/system/arm/fby35.rst b/docs/system/arm/fby35.rst
20
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/xilinx_zynq.c
13
--- a/docs/system/arm/fby35.rst
22
+++ b/hw/arm/xilinx_zynq.c
14
+++ b/docs/system/arm/fby35.rst
23
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
15
@@ -XXX,XX +XXX,XX @@ process starts.
24
1, 0x0066, 0x0022, 0x0000, 0x0000, 0x0555, 0x2aa,
16
$ screen /dev/tty0 # In a separate TMUX pane, terminal window, etc.
25
0);
17
$ screen /dev/tty1
26
18
$ (qemu) c         # Start the boot process once screen is setup.
27
- /* Create slcr, keep a pointer to connect clocks */
28
- slcr = qdev_new("xilinx,zynq_slcr");
29
- sysbus_realize_and_unref(SYS_BUS_DEVICE(slcr), &error_fatal);
30
- sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
31
-
32
/* Create the main clock source, and feed slcr with it */
33
zynq_machine->ps_clk = CLOCK(object_new(TYPE_CLOCK));
34
object_property_add_child(OBJECT(zynq_machine), "ps_clk",
35
OBJECT(zynq_machine->ps_clk));
36
object_unref(OBJECT(zynq_machine->ps_clk));
37
clock_set_hz(zynq_machine->ps_clk, PS_CLK_FREQUENCY);
38
+
19
+
39
+ /* Create slcr, keep a pointer to connect clocks */
20
+This machine model supports emulation of the boot from the CE0 flash device by
40
+ slcr = qdev_new("xilinx,zynq_slcr");
21
+setting option ``execute-in-place``. When using this option, the CPU fetches
41
qdev_connect_clock_in(slcr, "ps_clk", zynq_machine->ps_clk);
22
+instructions to execute by reading CE0 and not from a preloaded ROM
42
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(slcr), &error_fatal);
23
+initialized at machine init time. As a result, execution will be slower.
43
+ sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
44
45
dev = qdev_new(TYPE_A9MPCORE_PRIV);
46
qdev_prop_set_uint32(dev, "num-cpu", 1);
47
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
48
dev = qdev_new(TYPE_CADENCE_UART);
49
busdev = SYS_BUS_DEVICE(dev);
50
qdev_prop_set_chr(dev, "chardev", serial_hd(0));
51
+ qdev_connect_clock_in(dev, "refclk",
52
+ qdev_get_clock_out(slcr, "uart0_ref_clk"));
53
sysbus_realize_and_unref(busdev, &error_fatal);
54
sysbus_mmio_map(busdev, 0, 0xE0000000);
55
sysbus_connect_irq(busdev, 0, pic[59 - IRQ_OFFSET]);
56
- qdev_connect_clock_in(dev, "refclk",
57
- qdev_get_clock_out(slcr, "uart0_ref_clk"));
58
dev = qdev_new(TYPE_CADENCE_UART);
59
busdev = SYS_BUS_DEVICE(dev);
60
qdev_prop_set_chr(dev, "chardev", serial_hd(1));
61
+ qdev_connect_clock_in(dev, "refclk",
62
+ qdev_get_clock_out(slcr, "uart1_ref_clk"));
63
sysbus_realize_and_unref(busdev, &error_fatal);
64
sysbus_mmio_map(busdev, 0, 0xE0001000);
65
sysbus_connect_irq(busdev, 0, pic[82 - IRQ_OFFSET]);
66
- qdev_connect_clock_in(dev, "refclk",
67
- qdev_get_clock_out(slcr, "uart1_ref_clk"));
68
69
sysbus_create_varargs("cadence_ttc", 0xF8001000,
70
pic[42-IRQ_OFFSET], pic[43-IRQ_OFFSET], pic[44-IRQ_OFFSET], NULL);
71
--
24
--
72
2.20.1
25
2.34.1
73
26
74
27
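The ordering rule the xilinx_zynq hunk above enforces, as a generic sketch (the device type and clock name here are placeholders, not from the patch):

    #include "qemu/osdep.h"
    #include "hw/qdev-clock.h"
    #include "hw/sysbus.h"
    #include "qapi/error.h"

    /* Wire up input clocks before realizing the device, so the clock's
     * canonical name is set by device_set_realized() and trace events
     * show a useful name.
     */
    static DeviceState *create_with_clock(Clock *ref)
    {
        DeviceState *dev = qdev_new("some-device");         /* placeholder */

        qdev_connect_clock_in(dev, "refclk", ref);          /* before realize */
        sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
        return dev;
    }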
New patch
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>

Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-id: 20241206192254.3889131-4-pierrick.bouvier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/xlnx-versal-virt.rst | 3 +++
1 file changed, 3 insertions(+)

diff --git a/docs/system/arm/xlnx-versal-virt.rst b/docs/system/arm/xlnx-versal-virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/xlnx-versal-virt.rst
+++ b/docs/system/arm/xlnx-versal-virt.rst
@@ -XXX,XX +XXX,XX @@ Run the following at the U-Boot prompt:
fdt set /chosen/dom0 reg <0x00000000 0x40000000 0x0 0x03100000>
booti 30000000 - 20000000

+It's possible to change the OSPI flash model emulated by using the machine model
+option ``ospi-flash``.
+
BBRAM File Backend
""""""""""""""""""
BBRAM can have an optional file backend, which must be a seekable
--
2.34.1
New patch
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>

Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-id: 20241206192254.3889131-5-pierrick.bouvier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/virt.rst | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ iommu
``smmuv3``
Create an SMMUv3

+default-bus-bypass-iommu
+ Set ``on``/``off`` to enable/disable `bypass_iommu
+ <https://gitlab.com/qemu-project/qemu/-/blob/master/docs/bypass-iommu.txt>`_
+ for default root bus.
+
ras
Set ``on``/``off`` to enable/disable reporting host memory errors to a guest
using ACPI and guest external abort exceptions. The default is off.

+acpi
+ Set ``on``/``off``/``auto`` to enable/disable ACPI.
+
dtb-randomness
Set ``on``/``off`` to pass random seeds via the guest DTB
rng-seed and kaslr-seed nodes (in both "/chosen" and
@@ -XXX,XX +XXX,XX @@ dtb-randomness
dtb-kaslr-seed
A deprecated synonym for dtb-randomness.

+x-oem-id
+ Set string (up to 6 bytes) to override the default value of field OEMID in ACPI
+ table header.
+
+x-oem-table-id
+ Set string (up to 8 bytes) to override the default value of field OEM Table ID
+ in ACPI table header.
+
Linux guest kernel configuration
""""""""""""""""""""""""""""""""

--
2.34.1
1
From: Eduardo Habkost <ehabkost@redhat.com>
1
From: Brian Cain <brian.cain@oss.qualcomm.com>
2
2
3
TYPE_ARM_SSE is a TYPE_SYS_BUS_DEVICE subclass, but
3
Mea culpa, I don't know how I got this wrong in 2dfe93699c. Still
4
ARMSSEClass::parent_class is declared as DeviceClass.
4
getting used to the new address, I suppose. Somehow I got it right in the
5
mailmap, though.
5
6
6
It never caused any problems by pure luck:
7
Signed-off-by: Brian Cain <brian.cain@oss.qualcomm.com>
7
8
Message-id: 20241209181242.1434231-1-brian.cain@oss.qualcomm.com
8
We were not setting class_size for TYPE_ARM_SSE, so class_size of
9
TYPE_SYS_BUS_DEVICE was being used (sizeof(SysBusDeviceClass)).
10
This made the system allocate enough memory for TYPE_ARM_SSE
11
devices even though ARMSSEClass was too small for a sysbus
12
device.
13
14
Additionally, the ARMSSEClass::info field ended up at the same
15
offset as SysBusDeviceClass::explicit_ofw_unit_address. This
16
would make sysbus_get_fw_dev_path() crash for the device.
17
Luckily, sysbus_get_fw_dev_path() never gets called for
18
TYPE_ARM_SSE devices, because qdev_get_fw_dev_path() is only used
19
by the boot device code, and TYPE_ARM_SSE devices don't appear at
20
the fw_boot_order list.
21
22
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
23
Message-id: 20200826181006.4097163-1-ehabkost@redhat.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
11
---
27
include/hw/arm/armsse.h | 2 +-
12
MAINTAINERS | 2 +-
28
hw/arm/armsse.c | 1 +
13
1 file changed, 1 insertion(+), 1 deletion(-)
29
2 files changed, 2 insertions(+), 1 deletion(-)
30
14
31
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
15
diff --git a/MAINTAINERS b/MAINTAINERS
32
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
33
--- a/include/hw/arm/armsse.h
17
--- a/MAINTAINERS
34
+++ b/include/hw/arm/armsse.h
18
+++ b/MAINTAINERS
35
@@ -XXX,XX +XXX,XX @@ typedef struct ARMSSE {
19
@@ -XXX,XX +XXX,XX @@ F: target/avr/
36
typedef struct ARMSSEInfo ARMSSEInfo;
20
F: tests/functional/test_avr_mega2560.py
37
21
38
typedef struct ARMSSEClass {
22
Hexagon TCG CPUs
39
- DeviceClass parent_class;
23
-M: Brian Cain <bcain@oss.qualcomm.com>
40
+ SysBusDeviceClass parent_class;
24
+M: Brian Cain <brian.cain@oss.qualcomm.com>
41
const ARMSSEInfo *info;
25
S: Supported
42
} ARMSSEClass;
26
F: target/hexagon/
43
27
X: target/hexagon/idef-parser/
44
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/hw/arm/armsse.c
47
+++ b/hw/arm/armsse.c
48
@@ -XXX,XX +XXX,XX @@ static const TypeInfo armsse_info = {
49
.name = TYPE_ARMSSE,
50
.parent = TYPE_SYS_BUS_DEVICE,
51
.instance_size = sizeof(ARMSSE),
52
+ .class_size = sizeof(ARMSSEClass),
53
.instance_init = armsse_init,
54
.abstract = true,
55
.interfaces = (InterfaceInfo[]) {
56
--
28
--
57
2.20.1
29
2.34.1
58
59
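For readers less familiar with QOM, here is a minimal sketch of the rule the armsse class_size fix above relies on (names are taken from the patch; other fields such as .interfaces are omitted): the class struct must embed its real parent's class struct as its first member, and the TypeInfo must set .class_size, otherwise the parent type's class size is used when the class is allocated.

typedef struct ARMSSEClass {
    SysBusDeviceClass parent_class;  /* the parent type is TYPE_SYS_BUS_DEVICE */
    const ARMSSEInfo *info;          /* subclass data follows the parent class */
} ARMSSEClass;

static const TypeInfo armsse_info = {
    .name          = TYPE_ARMSSE,
    .parent        = TYPE_SYS_BUS_DEVICE,
    .instance_size = sizeof(ARMSSE),
    .class_size    = sizeof(ARMSSEClass), /* the fix: allocate the full subclass */
    .instance_init = armsse_init,
    .abstract      = true,
};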
New patch
target/arm/helper.c is very large and unwieldy. One subset of code
that we can pull out into its own file is the cpreg arrays and
corresponding functions for the TLBI instructions.

Because these are instructions, they are only relevant for TCG and we
can make the new file only be built for CONFIG_TCG.

In this commit we move the AArch32 instructions from:
 not_v7_cp_reginfo[]
 v7_cp_reginfo[]
 v7mp_cp_reginfo[]
 v8_cp_reginfo[]
into a new file target/arm/tcg/tlb-insns.c.

A few small functions are used both by functions we haven't yet moved
across and by functions we have already moved. We temporarily make
these global with a prototype in cpregs.h; when the move of all TLBI
insns is complete these will return to being file-local.

For CONFIG_TCG, this is just moving code around. For a KVM-only
build, these cpregs will no longer be added to the cpregs hashtable
for the CPU. However this should not be a behaviour change, because:
 * we never try to migration-sync or otherwise include
   ARM_CP_NO_RAW cpregs
 * for migration we treat the kernel's list of system registers
   as the authoritative one, so these TLBI insns were never
   in it anyway
The no-tcg stub of define_tlb_insn_regs() therefore does nothing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-2-peter.maydell@linaro.org
---
target/arm/cpregs.h | 14 +++
target/arm/internals.h | 3 +
target/arm/helper.c | 231 ++--------------------------------
target/arm/tcg-stubs.c | 5 +
target/arm/tcg/tlb-insns.c | 246 +++++++++++++++++++++++++++++++++++++
target/arm/tcg/meson.build | 1 +
6 files changed, 280 insertions(+), 220 deletions(-)
create mode 100644 target/arm/tcg/tlb-insns.c

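Since the diff below is long, here is a condensed sketch of the shape of the new file and how it hooks in (all names are taken from the patch itself; most handlers and reginfo entries are elided):

/* Condensed sketch of target/arm/tcg/tlb-insns.c; details are in the diff below. */
#include "qemu/osdep.h"
#include "exec/exec-all.h"
#include "cpu.h"
#include "internals.h"
#include "cpregs.h"

/* Each TLBI insn is a write-only, ARM_CP_NO_RAW cpreg whose writefn
 * performs the TLB flush. */
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
                          uint64_t value)
{
    CPUState *cs = env_cpu(env);

    if (tlb_force_broadcast(env)) {
        tlb_flush_all_cpus_synced(cs);
    } else {
        tlb_flush(cs);
    }
}

static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
    { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
      .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write,
      .type = ARM_CP_NO_RAW },
    /* ... */
};

/* Called from register_cp_regs_for_features() in helper.c; tcg-stubs.c
 * provides an empty stub for KVM-only builds. */
void define_tlb_insn_regs(ARMCPU *cpu)
{
    CPUARMState *env = &cpu->env;

    if (!arm_feature(env, ARM_FEATURE_V7)) {
        define_arm_cp_regs(cpu, tlbi_not_v7_cp_reginfo);
    }
    /* ... the v7, v7MP and v8 arrays are registered the same way ... */
}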
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/cpregs.h
46
+++ b/target/arm/cpregs.h
47
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
48
return ri->opc1 == 4 || ri->opc1 == 5;
49
}
50
51
+/*
52
+ * Temporary declarations of functions until the move to tlb_insn_helper.c
53
+ * is complete and we can make the functions static again
54
+ */
55
+CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
56
+ bool isread);
57
+CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
58
+ bool isread);
59
+bool tlb_force_broadcast(CPUARMState *env);
60
+void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
61
+ uint64_t value);
62
+void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
63
+ uint64_t value);
64
+
65
#endif /* TARGET_ARM_CPREGS_H */
66
diff --git a/target/arm/internals.h b/target/arm/internals.h
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/internals.h
69
+++ b/target/arm/internals.h
70
@@ -XXX,XX +XXX,XX @@ static inline uint64_t pauth_ptr_mask(ARMVAParameters param)
71
/* Add the cpreg definitions for debug related system registers */
72
void define_debug_regs(ARMCPU *cpu);
73
74
+/* Add the cpreg definitions for TLBI instructions */
75
+void define_tlb_insn_regs(ARMCPU *cpu);
76
+
77
/* Effective value of MDCR_EL2 */
78
static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
79
{
80
diff --git a/target/arm/helper.c b/target/arm/helper.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/helper.c
83
+++ b/target/arm/helper.c
84
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
85
}
86
87
/* Check for traps from EL1 due to HCR_EL2.TTLB. */
88
-static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
89
- bool isread)
90
+CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
91
+ bool isread)
92
{
93
if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
94
return CP_ACCESS_TRAP_EL2;
95
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
96
}
97
98
/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
99
-static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
100
- bool isread)
101
+CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
102
+ bool isread)
103
{
104
if (arm_current_el(env) == 1 &&
105
(arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
106
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
107
ARMMMUIdxBit_Stage2_S);
108
}
109
110
-
111
-/* IS variants of TLB operations must affect all cores */
112
-static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
113
- uint64_t value)
114
-{
115
- CPUState *cs = env_cpu(env);
116
-
117
- tlb_flush_all_cpus_synced(cs);
118
-}
119
-
120
-static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
121
- uint64_t value)
122
-{
123
- CPUState *cs = env_cpu(env);
124
-
125
- tlb_flush_all_cpus_synced(cs);
126
-}
127
-
128
-static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
129
- uint64_t value)
130
-{
131
- CPUState *cs = env_cpu(env);
132
-
133
- tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
134
-}
135
-
136
-static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
- uint64_t value)
138
-{
139
- CPUState *cs = env_cpu(env);
140
-
141
- tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
142
-}
143
-
144
/*
145
* Non-IS variants of TLB operations are upgraded to
146
* IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
147
* force broadcast of these operations.
148
*/
149
-static bool tlb_force_broadcast(CPUARMState *env)
150
+bool tlb_force_broadcast(CPUARMState *env)
151
{
152
return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
153
}
154
155
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
156
- uint64_t value)
157
-{
158
- /* Invalidate all (TLBIALL) */
159
- CPUState *cs = env_cpu(env);
160
-
161
- if (tlb_force_broadcast(env)) {
162
- tlb_flush_all_cpus_synced(cs);
163
- } else {
164
- tlb_flush(cs);
165
- }
166
-}
167
-
168
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
169
- uint64_t value)
170
-{
171
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
172
- CPUState *cs = env_cpu(env);
173
-
174
- value &= TARGET_PAGE_MASK;
175
- if (tlb_force_broadcast(env)) {
176
- tlb_flush_page_all_cpus_synced(cs, value);
177
- } else {
178
- tlb_flush_page(cs, value);
179
- }
180
-}
181
-
182
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
183
- uint64_t value)
184
-{
185
- /* Invalidate by ASID (TLBIASID) */
186
- CPUState *cs = env_cpu(env);
187
-
188
- if (tlb_force_broadcast(env)) {
189
- tlb_flush_all_cpus_synced(cs);
190
- } else {
191
- tlb_flush(cs);
192
- }
193
-}
194
-
195
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
- uint64_t value)
197
-{
198
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
199
- CPUState *cs = env_cpu(env);
200
-
201
- value &= TARGET_PAGE_MASK;
202
- if (tlb_force_broadcast(env)) {
203
- tlb_flush_page_all_cpus_synced(cs, value);
204
- } else {
205
- tlb_flush_page(cs, value);
206
- }
207
-}
208
-
209
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
210
uint64_t value)
211
{
212
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
214
}
215
216
-static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
217
- uint64_t value)
218
+void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
+ uint64_t value)
220
{
221
CPUState *cs = env_cpu(env);
222
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
223
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
224
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
225
}
226
227
-static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
228
- uint64_t value)
229
+void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
230
+ uint64_t value)
231
{
232
CPUState *cs = env_cpu(env);
233
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
234
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
235
ARMMMUIdxBit_E2);
236
}
237
238
-static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
239
- uint64_t value)
240
-{
241
- CPUState *cs = env_cpu(env);
242
- uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
243
-
244
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
245
-}
246
-
247
-static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
248
- uint64_t value)
249
-{
250
- CPUState *cs = env_cpu(env);
251
- uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
252
-
253
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
254
-}
255
-
256
static const ARMCPRegInfo cp_reginfo[] = {
257
/*
258
* Define the secure and non-secure FCSE identifier CP registers
259
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
260
*/
261
{ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
262
.access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 },
263
- /*
264
- * MMU TLB control. Note that the wildcarding means we cover not just
265
- * the unified TLB ops but also the dside/iside/inner-shareable variants.
266
- */
267
- { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
268
- .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write,
269
- .type = ARM_CP_NO_RAW },
270
- { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY,
271
- .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write,
272
- .type = ARM_CP_NO_RAW },
273
- { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY,
274
- .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write,
275
- .type = ARM_CP_NO_RAW },
276
- { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY,
277
- .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write,
278
- .type = ARM_CP_NO_RAW },
279
{ .name = "PRRR", .cp = 15, .crn = 10, .crm = 2,
280
.opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
281
{ .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
282
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
283
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0,
284
.fgt = FGT_ISR_EL1,
285
.type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read },
286
- /* 32 bit ITLB invalidates */
287
- { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
288
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
289
- .writefn = tlbiall_write },
290
- { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
291
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
292
- .writefn = tlbimva_write },
293
- { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2,
294
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
295
- .writefn = tlbiasid_write },
296
- /* 32 bit DTLB invalidates */
297
- { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0,
298
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
299
- .writefn = tlbiall_write },
300
- { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
301
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
302
- .writefn = tlbimva_write },
303
- { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2,
304
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
305
- .writefn = tlbiasid_write },
306
- /* 32 bit TLB invalidates */
307
- { .name = "TLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
308
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
309
- .writefn = tlbiall_write },
310
- { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
311
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
312
- .writefn = tlbimva_write },
313
- { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
314
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
315
- .writefn = tlbiasid_write },
316
- { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
317
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
318
- .writefn = tlbimvaa_write },
319
-};
320
-
321
-static const ARMCPRegInfo v7mp_cp_reginfo[] = {
322
- /* 32 bit TLB invalidates, Inner Shareable */
323
- { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
324
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
325
- .writefn = tlbiall_is_write },
326
- { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
327
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
328
- .writefn = tlbimva_is_write },
329
- { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
330
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
331
- .writefn = tlbiasid_is_write },
332
- { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
333
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
334
- .writefn = tlbimvaa_is_write },
335
};
336
337
static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
338
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
339
.fieldoffset = offsetof(CPUARMState, cp15.par_el[1]),
340
.writefn = par_write },
341
#endif
342
- /* TLB invalidate last level of translation table walk */
343
- { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
344
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
345
- .writefn = tlbimva_is_write },
346
- { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
347
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
348
- .writefn = tlbimvaa_is_write },
349
- { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
350
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
351
- .writefn = tlbimva_write },
352
- { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
353
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
354
- .writefn = tlbimvaa_write },
355
- { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
356
- .type = ARM_CP_NO_RAW, .access = PL2_W,
357
- .writefn = tlbimva_hyp_write },
358
- { .name = "TLBIMVALHIS",
359
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
360
- .type = ARM_CP_NO_RAW, .access = PL2_W,
361
- .writefn = tlbimva_hyp_is_write },
362
- { .name = "TLBIIPAS2",
363
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
364
- .type = ARM_CP_NO_RAW, .access = PL2_W,
365
- .writefn = tlbiipas2_hyp_write },
366
- { .name = "TLBIIPAS2IS",
367
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
368
- .type = ARM_CP_NO_RAW, .access = PL2_W,
369
- .writefn = tlbiipas2is_hyp_write },
370
- { .name = "TLBIIPAS2L",
371
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
372
- .type = ARM_CP_NO_RAW, .access = PL2_W,
373
- .writefn = tlbiipas2_hyp_write },
374
- { .name = "TLBIIPAS2LIS",
375
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
376
- .type = ARM_CP_NO_RAW, .access = PL2_W,
377
- .writefn = tlbiipas2is_hyp_write },
378
/* 32 bit cache operations */
379
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
380
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_ticab },
381
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
382
define_arm_cp_regs(cpu, not_v8_cp_reginfo);
383
}
384
385
+ define_tlb_insn_regs(cpu);
386
+
387
if (arm_feature(env, ARM_FEATURE_V6)) {
388
/* The ID registers all have impdef reset values */
389
ARMCPRegInfo v6_idregs[] = {
390
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
391
if (arm_feature(env, ARM_FEATURE_V6K)) {
392
define_arm_cp_regs(cpu, v6k_cp_reginfo);
393
}
394
- if (arm_feature(env, ARM_FEATURE_V7MP) &&
395
- !arm_feature(env, ARM_FEATURE_PMSA)) {
396
- define_arm_cp_regs(cpu, v7mp_cp_reginfo);
397
- }
398
if (arm_feature(env, ARM_FEATURE_V7VE)) {
399
define_arm_cp_regs(cpu, pmovsset_cp_reginfo);
400
}
401
diff --git a/target/arm/tcg-stubs.c b/target/arm/tcg-stubs.c
402
index XXXXXXX..XXXXXXX 100644
403
--- a/target/arm/tcg-stubs.c
404
+++ b/target/arm/tcg-stubs.c
405
@@ -XXX,XX +XXX,XX @@ void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
406
void assert_hflags_rebuild_correctly(CPUARMState *env)
407
{
408
}
409
+
410
+/* TLBI insns are only used by TCG, so we don't need to do anything for KVM */
411
+void define_tlb_insn_regs(ARMCPU *cpu)
412
+{
413
+}
414
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
415
new file mode 100644
416
index XXXXXXX..XXXXXXX
417
--- /dev/null
418
+++ b/target/arm/tcg/tlb-insns.c
419
@@ -XXX,XX +XXX,XX @@
420
+/*
421
+ * Helpers for TLBI insns
422
+ *
423
+ * This code is licensed under the GNU GPL v2 or later.
424
+ *
425
+ * SPDX-License-Identifier: GPL-2.0-or-later
426
+ */
427
+#include "qemu/osdep.h"
428
+#include "exec/exec-all.h"
429
+#include "cpu.h"
430
+#include "internals.h"
431
+#include "cpu-features.h"
432
+#include "cpregs.h"
433
+
434
+/* IS variants of TLB operations must affect all cores */
435
+static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
436
+ uint64_t value)
437
+{
438
+ CPUState *cs = env_cpu(env);
439
+
440
+ tlb_flush_all_cpus_synced(cs);
441
+}
442
+
443
+static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
444
+ uint64_t value)
445
+{
446
+ CPUState *cs = env_cpu(env);
447
+
448
+ tlb_flush_all_cpus_synced(cs);
449
+}
450
+
451
+static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
452
+ uint64_t value)
453
+{
454
+ CPUState *cs = env_cpu(env);
455
+
456
+ tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
457
+}
458
+
459
+static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
460
+ uint64_t value)
461
+{
462
+ CPUState *cs = env_cpu(env);
463
+
464
+ tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
465
+}
466
+
467
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
468
+ uint64_t value)
469
+{
470
+ /* Invalidate all (TLBIALL) */
471
+ CPUState *cs = env_cpu(env);
472
+
473
+ if (tlb_force_broadcast(env)) {
474
+ tlb_flush_all_cpus_synced(cs);
475
+ } else {
476
+ tlb_flush(cs);
477
+ }
478
+}
479
+
480
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
481
+ uint64_t value)
482
+{
483
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
484
+ CPUState *cs = env_cpu(env);
485
+
486
+ value &= TARGET_PAGE_MASK;
487
+ if (tlb_force_broadcast(env)) {
488
+ tlb_flush_page_all_cpus_synced(cs, value);
489
+ } else {
490
+ tlb_flush_page(cs, value);
491
+ }
492
+}
493
+
494
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
495
+ uint64_t value)
496
+{
497
+ /* Invalidate by ASID (TLBIASID) */
498
+ CPUState *cs = env_cpu(env);
499
+
500
+ if (tlb_force_broadcast(env)) {
501
+ tlb_flush_all_cpus_synced(cs);
502
+ } else {
503
+ tlb_flush(cs);
504
+ }
505
+}
506
+
507
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
508
+ uint64_t value)
509
+{
510
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
511
+ CPUState *cs = env_cpu(env);
512
+
513
+ value &= TARGET_PAGE_MASK;
514
+ if (tlb_force_broadcast(env)) {
515
+ tlb_flush_page_all_cpus_synced(cs, value);
516
+ } else {
517
+ tlb_flush_page(cs, value);
518
+ }
519
+}
520
+
521
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
522
+ uint64_t value)
523
+{
524
+ CPUState *cs = env_cpu(env);
525
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
526
+
527
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
528
+}
529
+
530
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
531
+ uint64_t value)
532
+{
533
+ CPUState *cs = env_cpu(env);
534
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
535
+
536
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
537
+}
538
+
539
+static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
540
+ /*
541
+ * MMU TLB control. Note that the wildcarding means we cover not just
542
+ * the unified TLB ops but also the dside/iside/inner-shareable variants.
543
+ */
544
+ { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
545
+ .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write,
546
+ .type = ARM_CP_NO_RAW },
547
+ { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY,
548
+ .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write,
549
+ .type = ARM_CP_NO_RAW },
550
+ { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY,
551
+ .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write,
552
+ .type = ARM_CP_NO_RAW },
553
+ { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY,
554
+ .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write,
555
+ .type = ARM_CP_NO_RAW },
556
+};
557
+
558
+static const ARMCPRegInfo tlbi_v7_cp_reginfo[] = {
559
+ /* 32 bit ITLB invalidates */
560
+ { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
561
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
562
+ .writefn = tlbiall_write },
563
+ { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
564
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
565
+ .writefn = tlbimva_write },
566
+ { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2,
567
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
568
+ .writefn = tlbiasid_write },
569
+ /* 32 bit DTLB invalidates */
570
+ { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0,
571
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
572
+ .writefn = tlbiall_write },
573
+ { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
574
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
575
+ .writefn = tlbimva_write },
576
+ { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2,
577
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
578
+ .writefn = tlbiasid_write },
579
+ /* 32 bit TLB invalidates */
580
+ { .name = "TLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
581
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
582
+ .writefn = tlbiall_write },
583
+ { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
584
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
585
+ .writefn = tlbimva_write },
586
+ { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
587
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
588
+ .writefn = tlbiasid_write },
589
+ { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
590
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
591
+ .writefn = tlbimvaa_write },
592
+};
593
+
594
+static const ARMCPRegInfo tlbi_v7mp_cp_reginfo[] = {
595
+ /* 32 bit TLB invalidates, Inner Shareable */
596
+ { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
597
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
598
+ .writefn = tlbiall_is_write },
599
+ { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
600
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
601
+ .writefn = tlbimva_is_write },
602
+ { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
603
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
604
+ .writefn = tlbiasid_is_write },
605
+ { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
606
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
607
+ .writefn = tlbimvaa_is_write },
608
+};
609
+
610
+static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
611
+ /* AArch32 TLB invalidate last level of translation table walk */
612
+ { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
613
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
614
+ .writefn = tlbimva_is_write },
615
+ { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
616
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
617
+ .writefn = tlbimvaa_is_write },
618
+ { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
619
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
620
+ .writefn = tlbimva_write },
621
+ { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
622
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
623
+ .writefn = tlbimvaa_write },
624
+ { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
625
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
626
+ .writefn = tlbimva_hyp_write },
627
+ { .name = "TLBIMVALHIS",
628
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
629
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
630
+ .writefn = tlbimva_hyp_is_write },
631
+ { .name = "TLBIIPAS2",
632
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
633
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
634
+ .writefn = tlbiipas2_hyp_write },
635
+ { .name = "TLBIIPAS2IS",
636
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
637
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
638
+ .writefn = tlbiipas2is_hyp_write },
639
+ { .name = "TLBIIPAS2L",
640
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
641
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
642
+ .writefn = tlbiipas2_hyp_write },
643
+ { .name = "TLBIIPAS2LIS",
644
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
645
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
646
+ .writefn = tlbiipas2is_hyp_write },
647
+};
648
+
649
+void define_tlb_insn_regs(ARMCPU *cpu)
650
+{
651
+ CPUARMState *env = &cpu->env;
652
+
653
+ if (!arm_feature(env, ARM_FEATURE_V7)) {
654
+ define_arm_cp_regs(cpu, tlbi_not_v7_cp_reginfo);
655
+ } else {
656
+ define_arm_cp_regs(cpu, tlbi_v7_cp_reginfo);
657
+ }
658
+ if (arm_feature(env, ARM_FEATURE_V7MP) &&
659
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
660
+ define_arm_cp_regs(cpu, tlbi_v7mp_cp_reginfo);
661
+ }
662
+ if (arm_feature(env, ARM_FEATURE_V8)) {
663
+ define_arm_cp_regs(cpu, tlbi_v8_cp_reginfo);
664
+ }
665
+}
666
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
667
index XXXXXXX..XXXXXXX 100644
668
--- a/target/arm/tcg/meson.build
669
+++ b/target/arm/tcg/meson.build
670
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
671
'op_helper.c',
672
'tlb_helper.c',
673
'vec_helper.c',
674
+ 'tlb-insns.c',
675
))
676
677
arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
678
--
679
2.34.1
New patch
Move the AArch32 TLBI insns for AArch32 EL2 to tlbi_insn_helper.c.
To keep this as an obviously pure code-movement, we retain the
same condition for registering tlbi_el2_cp_reginfo that we use for
el2_cp_reginfo. We'll be able to simplify this condition later,
since the need to define the reginfo for EL3-without-EL2 doesn't
apply for the TLBI ops specifically.

This move brings all the uses of tlbimva_hyp_write() and
tlbimva_hyp_is_write() back into a single file, so we can move those
also, and make them file-local again.

The helper alle1_tlbmask() is an exception to the pattern that we
only need to make these functions global temporarily, because once
this refactoring is complete it will be called both by code in
helper.c (vttbr_write()) and by code in tlb-insns.c. We therefore
put its prototype in a permanent home in internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-3-peter.maydell@linaro.org
---
target/arm/cpregs.h | 4 --
target/arm/internals.h | 6 +++
target/arm/helper.c | 74 +--------------------------------
target/arm/tcg/tlb-insns.c | 85 ++++++++++++++++++++++++++++++++++++++
4 files changed, 92 insertions(+), 77 deletions(-)

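A minimal sketch of why alle1_tlbmask() gets its prototype in internals.h rather than a temporary one in cpregs.h: after this patch it has callers in two translation units. The declaration and the tlb-insns.c caller below are taken from the diff; the helper.c caller is vttbr_write(), per the commit message.

/* target/arm/internals.h */
/*
 * Return mask of ARMMMUIdxBit values corresponding to an "invalidate
 * all EL1" scope; this covers stage 1 and stage 2.
 */
int alle1_tlbmask(CPUARMState *env);

/* target/arm/tcg/tlb-insns.c: the AArch32 TLBIALLNSNH handler */
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
                               uint64_t value)
{
    CPUState *cs = env_cpu(env);

    tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
}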
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpregs.h
31
+++ b/target/arm/cpregs.h
32
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
33
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
34
bool isread);
35
bool tlb_force_broadcast(CPUARMState *env);
36
-void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
37
- uint64_t value);
38
-void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
39
- uint64_t value);
40
41
#endif /* TARGET_ARM_CPREGS_H */
42
diff --git a/target/arm/internals.h b/target/arm/internals.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/internals.h
45
+++ b/target/arm/internals.h
46
@@ -XXX,XX +XXX,XX @@ uint64_t gt_get_countervalue(CPUARMState *env);
47
* and CNTVCT_EL0 (this will be either 0 or the value of CNTVOFF_EL2).
48
*/
49
uint64_t gt_virt_cnt_offset(CPUARMState *env);
50
+
51
+/*
52
+ * Return mask of ARMMMUIdxBit values corresponding to an "invalidate
53
+ * all EL1" scope; this covers stage 1 and stage 2.
54
+ */
55
+int alle1_tlbmask(CPUARMState *env);
56
#endif
57
diff --git a/target/arm/helper.c b/target/arm/helper.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/helper.c
60
+++ b/target/arm/helper.c
61
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
62
raw_write(env, ri, value);
63
}
64
65
-static int alle1_tlbmask(CPUARMState *env)
66
+int alle1_tlbmask(CPUARMState *env)
67
{
68
/*
69
* Note that the 'ALL' scope must invalidate both stage 1 and
70
@@ -XXX,XX +XXX,XX @@ bool tlb_force_broadcast(CPUARMState *env)
71
return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
72
}
73
74
-static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
75
- uint64_t value)
76
-{
77
- CPUState *cs = env_cpu(env);
78
-
79
- tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
80
-}
81
-
82
-static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
- uint64_t value)
84
-{
85
- CPUState *cs = env_cpu(env);
86
-
87
- tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
88
-}
89
-
90
-
91
-static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
- uint64_t value)
93
-{
94
- CPUState *cs = env_cpu(env);
95
-
96
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
97
-}
98
-
99
-static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
100
- uint64_t value)
101
-{
102
- CPUState *cs = env_cpu(env);
103
-
104
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
105
-}
106
-
107
-void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
108
- uint64_t value)
109
-{
110
- CPUState *cs = env_cpu(env);
111
- uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
112
-
113
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
114
-}
115
-
116
-void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
117
- uint64_t value)
118
-{
119
- CPUState *cs = env_cpu(env);
120
- uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
121
-
122
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
123
- ARMMMUIdxBit_E2);
124
-}
125
-
126
static const ARMCPRegInfo cp_reginfo[] = {
127
/*
128
* Define the secure and non-secure FCSE identifier CP registers
129
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
130
{ .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
131
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
132
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
133
- { .name = "TLBIALLNSNH",
134
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
135
- .type = ARM_CP_NO_RAW, .access = PL2_W,
136
- .writefn = tlbiall_nsnh_write },
137
- { .name = "TLBIALLNSNHIS",
138
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
139
- .type = ARM_CP_NO_RAW, .access = PL2_W,
140
- .writefn = tlbiall_nsnh_is_write },
141
- { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
142
- .type = ARM_CP_NO_RAW, .access = PL2_W,
143
- .writefn = tlbiall_hyp_write },
144
- { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
145
- .type = ARM_CP_NO_RAW, .access = PL2_W,
146
- .writefn = tlbiall_hyp_is_write },
147
- { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
148
- .type = ARM_CP_NO_RAW, .access = PL2_W,
149
- .writefn = tlbimva_hyp_write },
150
- { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
151
- .type = ARM_CP_NO_RAW, .access = PL2_W,
152
- .writefn = tlbimva_hyp_is_write },
153
{ .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
154
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
155
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
156
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/tcg/tlb-insns.c
159
+++ b/target/arm/tcg/tlb-insns.c
160
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
}
162
}
163
164
+static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
165
+ uint64_t value)
166
+{
167
+ CPUState *cs = env_cpu(env);
168
+ uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
169
+
170
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
171
+}
172
+
173
+static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
174
+ uint64_t value)
175
+{
176
+ CPUState *cs = env_cpu(env);
177
+ uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
178
+
179
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
180
+ ARMMMUIdxBit_E2);
181
+}
182
+
183
static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
184
uint64_t value)
185
{
186
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
187
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
188
}
189
190
+static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
191
+ uint64_t value)
192
+{
193
+ CPUState *cs = env_cpu(env);
194
+
195
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
196
+}
197
+
198
+static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
199
+ uint64_t value)
200
+{
201
+ CPUState *cs = env_cpu(env);
202
+
203
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
204
+}
205
+
206
+
207
+static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
208
+ uint64_t value)
209
+{
210
+ CPUState *cs = env_cpu(env);
211
+
212
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
213
+}
214
+
215
+static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
216
+ uint64_t value)
217
+{
218
+ CPUState *cs = env_cpu(env);
219
+
220
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
221
+}
222
+
223
static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
224
/*
225
* MMU TLB control. Note that the wildcarding means we cover not just
226
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
227
.writefn = tlbiipas2is_hyp_write },
228
};
229
230
+static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
231
+ { .name = "TLBIALLNSNH",
232
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
233
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
234
+ .writefn = tlbiall_nsnh_write },
235
+ { .name = "TLBIALLNSNHIS",
236
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
237
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
238
+ .writefn = tlbiall_nsnh_is_write },
239
+ { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
240
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
241
+ .writefn = tlbiall_hyp_write },
242
+ { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
243
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
244
+ .writefn = tlbiall_hyp_is_write },
245
+ { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
246
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
247
+ .writefn = tlbimva_hyp_write },
248
+ { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
249
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
250
+ .writefn = tlbimva_hyp_is_write },
251
+};
252
+
253
void define_tlb_insn_regs(ARMCPU *cpu)
254
{
255
CPUARMState *env = &cpu->env;
256
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
257
if (arm_feature(env, ARM_FEATURE_V8)) {
258
define_arm_cp_regs(cpu, tlbi_v8_cp_reginfo);
259
}
260
+ /*
261
+ * We retain the existing logic for when to register these TLBI
262
+ * ops (i.e. matching the condition for el2_cp_reginfo[] in
263
+ * helper.c), but we will be able to simplify this later.
264
+ */
265
+ if (arm_feature(env, ARM_FEATURE_EL2)
266
+ || (arm_feature(env, ARM_FEATURE_EL3)
267
+ && arm_feature(env, ARM_FEATURE_V8))) {
268
+ define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
269
+ }
270
}
271
--
272
2.34.1
New patch
Move the AArch64 TLBI insns that are declared in v8_cp_reginfo[]
into tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-4-peter.maydell@linaro.org
---
target/arm/cpregs.h | 11 +++
target/arm/helper.c | 182 +++----------------------------
target/arm/tcg/tlb-insns.c | 160 ++++++++++++++++++++++++++++++++
3 files changed, 182 insertions(+), 171 deletions(-)

14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
18
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
19
bool isread);
20
bool tlb_force_broadcast(CPUARMState *env);
21
+int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
22
+ uint64_t addr);
23
+int vae1_tlbbits(CPUARMState *env, uint64_t addr);
24
+int vae1_tlbmask(CPUARMState *env);
25
+int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
26
+void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
+ uint64_t value);
28
+void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
+ uint64_t value);
30
+void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
+ uint64_t value);
32
33
#endif /* TARGET_ARM_CPREGS_H */
34
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/helper.c
37
+++ b/target/arm/helper.c
38
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
39
* Page D4-1736 (DDI0487A.b)
40
*/
41
42
-static int vae1_tlbmask(CPUARMState *env)
43
+int vae1_tlbmask(CPUARMState *env)
44
{
45
uint64_t hcr = arm_hcr_el2_eff(env);
46
uint16_t mask;
47
@@ -XXX,XX +XXX,XX @@ static int vae2_tlbmask(CPUARMState *env)
48
}
49
50
/* Return 56 if TBI is enabled, 64 otherwise. */
51
-static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
52
- uint64_t addr)
53
+int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
54
+ uint64_t addr)
55
{
56
uint64_t tcr = regime_tcr(env, mmu_idx);
57
int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
58
@@ -XXX,XX +XXX,XX @@ static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
59
return (tbi >> select) & 1 ? 56 : 64;
60
}
61
62
-static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
63
+int vae1_tlbbits(CPUARMState *env, uint64_t addr)
64
{
65
uint64_t hcr = arm_hcr_el2_eff(env);
66
ARMMMUIdx mmu_idx;
67
@@ -XXX,XX +XXX,XX @@ static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
68
return tlbbits_for_regime(env, mmu_idx, addr);
69
}
70
71
-static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
72
- uint64_t value)
73
+void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
74
+ uint64_t value)
75
{
76
CPUState *cs = env_cpu(env);
77
int mask = vae1_tlbmask(env);
78
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
79
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
80
}
81
82
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
- uint64_t value)
84
-{
85
- CPUState *cs = env_cpu(env);
86
- int mask = vae1_tlbmask(env);
87
-
88
- if (tlb_force_broadcast(env)) {
89
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
90
- } else {
91
- tlb_flush_by_mmuidx(cs, mask);
92
- }
93
-}
94
-
95
static int e2_tlbmask(CPUARMState *env)
96
{
97
return (ARMMMUIdxBit_E20_0 |
98
@@ -XXX,XX +XXX,XX @@ static int e2_tlbmask(CPUARMState *env)
99
ARMMMUIdxBit_E2);
100
}
101
102
-static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
- uint64_t value)
104
-{
105
- CPUState *cs = env_cpu(env);
106
- int mask = alle1_tlbmask(env);
107
-
108
- tlb_flush_by_mmuidx(cs, mask);
109
-}
110
-
111
static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
uint64_t value)
113
{
114
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
115
tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
116
}
117
118
-static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
119
- uint64_t value)
120
+void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
121
+ uint64_t value)
122
{
123
CPUState *cs = env_cpu(env);
124
int mask = alle1_tlbmask(env);
125
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
126
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
127
}
128
129
-static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
130
- uint64_t value)
131
+void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
132
+ uint64_t value)
133
{
134
CPUState *cs = env_cpu(env);
135
int mask = vae1_tlbmask(env);
136
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
138
}
139
140
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
141
- uint64_t value)
142
-{
143
- /*
144
- * Invalidate by VA, EL1&0 (AArch64 version).
145
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
146
- * since we don't support flush-for-specific-ASID-only or
147
- * flush-last-level-only.
148
- */
149
- CPUState *cs = env_cpu(env);
150
- int mask = vae1_tlbmask(env);
151
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
152
- int bits = vae1_tlbbits(env, pageaddr);
153
-
154
- if (tlb_force_broadcast(env)) {
155
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
156
- } else {
157
- tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
158
- }
159
-}
160
-
161
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
162
uint64_t value)
163
{
164
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
165
ARMMMUIdxBit_E3, bits);
166
}
167
168
-static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
169
+int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
170
{
171
/*
172
* The MSB of value is the NS field, which only applies if SEL2
173
@@ -XXX,XX +XXX,XX @@ static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
174
: ARMMMUIdxBit_Stage2);
175
}
176
177
-static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
178
- uint64_t value)
179
-{
180
- CPUState *cs = env_cpu(env);
181
- int mask = ipas2e1_tlbmask(env, value);
182
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
183
-
184
- if (tlb_force_broadcast(env)) {
185
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
186
- } else {
187
- tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
188
- }
189
-}
190
-
191
-static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
192
- uint64_t value)
193
-{
194
- CPUState *cs = env_cpu(env);
195
- int mask = ipas2e1_tlbmask(env, value);
196
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
197
-
198
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
199
-}
200
-
201
#ifdef TARGET_AARCH64
202
typedef struct {
203
uint64_t base;
204
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
205
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
206
.fgt = FGT_DCCISW,
207
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
208
- /* TLBI operations */
209
- { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
210
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
211
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
212
- .fgt = FGT_TLBIVMALLE1IS,
213
- .writefn = tlbi_aa64_vmalle1is_write },
214
- { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
215
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
216
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
217
- .fgt = FGT_TLBIVAE1IS,
218
- .writefn = tlbi_aa64_vae1is_write },
219
- { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
220
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
221
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
222
- .fgt = FGT_TLBIASIDE1IS,
223
- .writefn = tlbi_aa64_vmalle1is_write },
224
- { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
225
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
226
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
227
- .fgt = FGT_TLBIVAAE1IS,
228
- .writefn = tlbi_aa64_vae1is_write },
229
- { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
230
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
231
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
232
- .fgt = FGT_TLBIVALE1IS,
233
- .writefn = tlbi_aa64_vae1is_write },
234
- { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
235
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
236
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
237
- .fgt = FGT_TLBIVAALE1IS,
238
- .writefn = tlbi_aa64_vae1is_write },
239
- { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
240
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
241
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
242
- .fgt = FGT_TLBIVMALLE1,
243
- .writefn = tlbi_aa64_vmalle1_write },
244
- { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
245
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
246
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
247
- .fgt = FGT_TLBIVAE1,
248
- .writefn = tlbi_aa64_vae1_write },
249
- { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
250
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
251
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
252
- .fgt = FGT_TLBIASIDE1,
253
- .writefn = tlbi_aa64_vmalle1_write },
254
- { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
255
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
256
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
257
- .fgt = FGT_TLBIVAAE1,
258
- .writefn = tlbi_aa64_vae1_write },
259
- { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
260
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
261
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
262
- .fgt = FGT_TLBIVALE1,
263
- .writefn = tlbi_aa64_vae1_write },
264
- { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
265
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
266
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
267
- .fgt = FGT_TLBIVAALE1,
268
- .writefn = tlbi_aa64_vae1_write },
269
- { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
270
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
271
- .access = PL2_W, .type = ARM_CP_NO_RAW,
272
- .writefn = tlbi_aa64_ipas2e1is_write },
273
- { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
274
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
275
- .access = PL2_W, .type = ARM_CP_NO_RAW,
276
- .writefn = tlbi_aa64_ipas2e1is_write },
277
- { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
278
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
279
- .access = PL2_W, .type = ARM_CP_NO_RAW,
280
- .writefn = tlbi_aa64_alle1is_write },
281
- { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64,
282
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6,
283
- .access = PL2_W, .type = ARM_CP_NO_RAW,
284
- .writefn = tlbi_aa64_alle1is_write },
285
- { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
286
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
287
- .access = PL2_W, .type = ARM_CP_NO_RAW,
288
- .writefn = tlbi_aa64_ipas2e1_write },
289
- { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
290
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
291
- .access = PL2_W, .type = ARM_CP_NO_RAW,
292
- .writefn = tlbi_aa64_ipas2e1_write },
293
- { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
294
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
295
- .access = PL2_W, .type = ARM_CP_NO_RAW,
296
- .writefn = tlbi_aa64_alle1_write },
297
- { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64,
298
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6,
299
- .access = PL2_W, .type = ARM_CP_NO_RAW,
300
- .writefn = tlbi_aa64_alle1is_write },
301
#ifndef CONFIG_USER_ONLY
302
/* 64 bit address translation operations */
303
{ .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
304
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
305
index XXXXXXX..XXXXXXX 100644
306
--- a/target/arm/tcg/tlb-insns.c
307
+++ b/target/arm/tcg/tlb-insns.c
308
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
309
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
310
}
311
312
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
313
+ uint64_t value)
314
+{
315
+ CPUState *cs = env_cpu(env);
316
+ int mask = vae1_tlbmask(env);
317
+
318
+ if (tlb_force_broadcast(env)) {
319
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
320
+ } else {
321
+ tlb_flush_by_mmuidx(cs, mask);
322
+ }
323
+}
324
+
325
+static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
326
+ uint64_t value)
327
+{
328
+ CPUState *cs = env_cpu(env);
329
+ int mask = alle1_tlbmask(env);
330
+
331
+ tlb_flush_by_mmuidx(cs, mask);
332
+}
333
+
334
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
335
+ uint64_t value)
336
+{
337
+ /*
338
+ * Invalidate by VA, EL1&0 (AArch64 version).
339
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
340
+ * since we don't support flush-for-specific-ASID-only or
341
+ * flush-last-level-only.
342
+ */
343
+ CPUState *cs = env_cpu(env);
344
+ int mask = vae1_tlbmask(env);
345
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
346
+ int bits = vae1_tlbbits(env, pageaddr);
347
+
348
+ if (tlb_force_broadcast(env)) {
349
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
350
+ } else {
351
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
352
+ }
353
+}
354
+
355
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
356
+ uint64_t value)
357
+{
358
+ CPUState *cs = env_cpu(env);
359
+ int mask = ipas2e1_tlbmask(env, value);
360
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
361
+
362
+ if (tlb_force_broadcast(env)) {
363
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
364
+ } else {
365
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
366
+ }
367
+}
368
+
369
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
370
+ uint64_t value)
371
+{
372
+ CPUState *cs = env_cpu(env);
373
+ int mask = ipas2e1_tlbmask(env, value);
374
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
375
+
376
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
377
+}
378
+
379
static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
380
/*
381
* MMU TLB control. Note that the wildcarding means we cover not just
382
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
383
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
384
.type = ARM_CP_NO_RAW, .access = PL2_W,
385
.writefn = tlbiipas2is_hyp_write },
386
+ /* AArch64 TLBI operations */
387
+ { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
388
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
389
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
390
+ .fgt = FGT_TLBIVMALLE1IS,
391
+ .writefn = tlbi_aa64_vmalle1is_write },
392
+ { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
393
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
394
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
395
+ .fgt = FGT_TLBIVAE1IS,
396
+ .writefn = tlbi_aa64_vae1is_write },
397
+ { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
398
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
399
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
400
+ .fgt = FGT_TLBIASIDE1IS,
401
+ .writefn = tlbi_aa64_vmalle1is_write },
402
+ { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
403
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
404
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
405
+ .fgt = FGT_TLBIVAAE1IS,
406
+ .writefn = tlbi_aa64_vae1is_write },
407
+ { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
408
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
409
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
410
+ .fgt = FGT_TLBIVALE1IS,
411
+ .writefn = tlbi_aa64_vae1is_write },
412
+ { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
413
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
414
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
415
+ .fgt = FGT_TLBIVAALE1IS,
416
+ .writefn = tlbi_aa64_vae1is_write },
417
+ { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
418
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
419
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
420
+ .fgt = FGT_TLBIVMALLE1,
421
+ .writefn = tlbi_aa64_vmalle1_write },
422
+ { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
423
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
424
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
425
+ .fgt = FGT_TLBIVAE1,
426
+ .writefn = tlbi_aa64_vae1_write },
427
+ { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
428
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
429
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
430
+ .fgt = FGT_TLBIASIDE1,
431
+ .writefn = tlbi_aa64_vmalle1_write },
432
+ { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
433
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
434
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
435
+ .fgt = FGT_TLBIVAAE1,
436
+ .writefn = tlbi_aa64_vae1_write },
437
+ { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
438
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
439
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
440
+ .fgt = FGT_TLBIVALE1,
441
+ .writefn = tlbi_aa64_vae1_write },
442
+ { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
443
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
444
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
445
+ .fgt = FGT_TLBIVAALE1,
446
+ .writefn = tlbi_aa64_vae1_write },
447
+ { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
448
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
449
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
450
+ .writefn = tlbi_aa64_ipas2e1is_write },
451
+ { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
452
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
453
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
454
+ .writefn = tlbi_aa64_ipas2e1is_write },
455
+ { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
456
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
457
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
458
+ .writefn = tlbi_aa64_alle1is_write },
459
+ { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64,
460
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6,
461
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
462
+ .writefn = tlbi_aa64_alle1is_write },
463
+ { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
464
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
465
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
466
+ .writefn = tlbi_aa64_ipas2e1_write },
467
+ { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
468
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
469
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
470
+ .writefn = tlbi_aa64_ipas2e1_write },
471
+ { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
472
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
473
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
474
+ .writefn = tlbi_aa64_alle1_write },
475
+ { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64,
476
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6,
477
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
478
+ .writefn = tlbi_aa64_alle1is_write },
479
};
480
481
static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
482
--
2.34.1
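
A side note on the pattern this patch moves around: each AArch64 TLBI insn is modelled
as a write-only "cp reg" whose writefn performs the actual flush, choosing a broadcast
or local flush depending on tlb_force_broadcast(). The following standalone sketch is
not QEMU code; the struct fields merely mirror the ARMCPRegInfo fields used above, and
the printfs stand in for tlb_flush_by_mmuidx()/tlb_flush_by_mmuidx_all_cpus_synced():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct TlbiInsn {
    const char *name;
    int opc0, opc1, crn, crm, opc2;   /* system instruction encoding */
    void (*writefn)(uint64_t value, bool force_broadcast);
} TlbiInsn;

static void vmalle1_write(uint64_t value, bool force_broadcast)
{
    /* Stand-in for tlbi_aa64_vmalle1_write(): flush the EL1&0 regime */
    (void)value;
    if (force_broadcast) {
        printf("flush EL1&0 TLB entries on all CPUs (synced)\n");
    } else {
        printf("flush EL1&0 TLB entries on this CPU only\n");
    }
}

static const TlbiInsn tlbi_table[] = {
    /* Encoding matches the TLBI_VMALLE1 entry above: op0=1 op1=0 CRn=8 CRm=7 op2=0 */
    { "TLBI_VMALLE1", 1, 0, 8, 7, 0, vmalle1_write },
};

int main(void)
{
    /* e.g. the guest runs TLBI VMALLE1 while HCR_EL2.FB forces broadcast */
    tlbi_table[0].writefn(0, true);
    return 0;
}
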
New patch

Move the AArch64 EL2 TLBI insn definitions that were
in el2_cp_reginfo[] across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-5-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 7 +++++
 target/arm/helper.c | 61 ++++----------------------------------
 target/arm/tcg/tlb-insns.c | 49 ++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 55 deletions(-)
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ bool tlb_force_broadcast(CPUARMState *env);
18
int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
19
uint64_t addr);
20
int vae1_tlbbits(CPUARMState *env, uint64_t addr);
21
+int vae2_tlbbits(CPUARMState *env, uint64_t addr);
22
int vae1_tlbmask(CPUARMState *env);
23
+int vae2_tlbmask(CPUARMState *env);
24
int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
25
+int e2_tlbmask(CPUARMState *env);
26
void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
uint64_t value);
28
void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
uint64_t value);
30
void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
uint64_t value);
32
+void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
33
+ uint64_t value);
34
+void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
+ uint64_t value);
36
37
#endif /* TARGET_ARM_CPREGS_H */
38
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper.c
41
+++ b/target/arm/helper.c
42
@@ -XXX,XX +XXX,XX @@ int vae1_tlbmask(CPUARMState *env)
43
return mask;
44
}
45
46
-static int vae2_tlbmask(CPUARMState *env)
47
+int vae2_tlbmask(CPUARMState *env)
48
{
49
uint64_t hcr = arm_hcr_el2_eff(env);
50
uint16_t mask;
51
@@ -XXX,XX +XXX,XX @@ int vae1_tlbbits(CPUARMState *env, uint64_t addr)
52
return tlbbits_for_regime(env, mmu_idx, addr);
53
}
54
55
-static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
56
+int vae2_tlbbits(CPUARMState *env, uint64_t addr)
57
{
58
uint64_t hcr = arm_hcr_el2_eff(env);
59
ARMMMUIdx mmu_idx;
60
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
61
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
62
}
63
64
-static int e2_tlbmask(CPUARMState *env)
65
+int e2_tlbmask(CPUARMState *env)
66
{
67
return (ARMMMUIdxBit_E20_0 |
68
ARMMMUIdxBit_E20_2 |
69
@@ -XXX,XX +XXX,XX @@ static int e2_tlbmask(CPUARMState *env)
70
ARMMMUIdxBit_E2);
71
}
72
73
-static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
74
- uint64_t value)
75
-{
76
- CPUState *cs = env_cpu(env);
77
- int mask = e2_tlbmask(env);
78
-
79
- tlb_flush_by_mmuidx(cs, mask);
80
-}
81
-
82
static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
uint64_t value)
84
{
85
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
86
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
87
}
88
89
-static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
- uint64_t value)
91
+void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
+ uint64_t value)
93
{
94
CPUState *cs = env_cpu(env);
95
int mask = e2_tlbmask(env);
96
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
97
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
98
}
99
100
-static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
101
- uint64_t value)
102
-{
103
- /*
104
- * Invalidate by VA, EL2
105
- * Currently handles both VAE2 and VALE2, since we don't support
106
- * flush-last-level-only.
107
- */
108
- CPUState *cs = env_cpu(env);
109
- int mask = vae2_tlbmask(env);
110
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
111
- int bits = vae2_tlbbits(env, pageaddr);
112
-
113
- tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
114
-}
115
-
116
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
117
uint64_t value)
118
{
119
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
120
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
121
}
122
123
-static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
125
uint64_t value)
126
{
127
CPUState *cs = env_cpu(env);
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
129
{ .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
130
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
131
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
132
- { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
133
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
134
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
135
- .writefn = tlbi_aa64_alle2_write },
136
- { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
137
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
138
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
139
- .writefn = tlbi_aa64_vae2_write },
140
- { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
141
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
142
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
143
- .writefn = tlbi_aa64_vae2_write },
144
- { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
145
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
146
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
147
- .writefn = tlbi_aa64_alle2is_write },
148
- { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
149
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
150
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
151
- .writefn = tlbi_aa64_vae2is_write },
152
- { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
153
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
154
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
155
- .writefn = tlbi_aa64_vae2is_write },
156
#ifndef CONFIG_USER_ONLY
157
/*
158
* Unlike the other EL2-related AT operations, these must
159
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/target/arm/tcg/tlb-insns.c
162
+++ b/target/arm/tcg/tlb-insns.c
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
tlb_flush_by_mmuidx(cs, mask);
165
}
166
167
+static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
168
+ uint64_t value)
169
+{
170
+ CPUState *cs = env_cpu(env);
171
+ int mask = e2_tlbmask(env);
172
+
173
+ tlb_flush_by_mmuidx(cs, mask);
174
+}
175
+
176
+static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
177
+ uint64_t value)
178
+{
179
+ /*
180
+ * Invalidate by VA, EL2
181
+ * Currently handles both VAE2 and VALE2, since we don't support
182
+ * flush-last-level-only.
183
+ */
184
+ CPUState *cs = env_cpu(env);
185
+ int mask = vae2_tlbmask(env);
186
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
187
+ int bits = vae2_tlbbits(env, pageaddr);
188
+
189
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
190
+}
191
+
192
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
193
uint64_t value)
194
{
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
196
{ .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
197
.type = ARM_CP_NO_RAW, .access = PL2_W,
198
.writefn = tlbimva_hyp_is_write },
199
+ { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
200
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
201
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
202
+ .writefn = tlbi_aa64_alle2_write },
203
+ { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
204
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
205
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
206
+ .writefn = tlbi_aa64_vae2_write },
207
+ { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
208
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
209
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
210
+ .writefn = tlbi_aa64_vae2_write },
211
+ { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
212
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
213
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
214
+ .writefn = tlbi_aa64_alle2is_write },
215
+ { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
216
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
217
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
218
+ .writefn = tlbi_aa64_vae2is_write },
219
+ { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
220
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
221
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
222
+ .writefn = tlbi_aa64_vae2is_write },
223
};
224
225
void define_tlb_insn_regs(ARMCPU *cpu)
226
--
2.34.1
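
For reference, the VA-based writefns moved here all derive the page address the same
way: the TLBI operand carries VA[55:12] in its low bits, so the code shifts it left by
12 and sign-extends from bit 55, i.e. the sextract64(value << 12, 0, 56) seen in the
patch. A standalone sketch of just that arithmetic; sextract64() is reimplemented
locally so it compiles on its own, and the sample Xt value is made up:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Local copy of QEMU's sextract64(): extract a bitfield and sign-extend it */
static int64_t sextract64(uint64_t value, int start, int length)
{
    return ((int64_t)(value << (64 - length - start))) >> (64 - length);
}

int main(void)
{
    /* Xt[43:0] = VA[55:12] of the (made-up) address 0xffff800012345000 */
    uint64_t xt = 0x00000ff800012345ULL;
    uint64_t pageaddr = sextract64(xt << 12, 0, 56);

    /* prints 0xffff800012345000: bit 55 was set, so the top bits get sign-extended */
    printf("pageaddr = 0x%016" PRIx64 "\n", pageaddr);
    return 0;
}
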
New patch

Move the AArch64 EL3 TLBI insns from el3_cp_reginfo[] across
to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-6-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 4 +++
 target/arm/helper.c | 56 +++-----------------------------------
 target/arm/tcg/tlb-insns.c | 54 ++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 52 deletions(-)
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
uint64_t value);
19
void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
20
uint64_t value);
21
+void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
22
+ uint64_t value);
23
+void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
+ uint64_t value);
25
26
#endif /* TARGET_ARM_CPREGS_H */
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
30
+++ b/target/arm/helper.c
31
@@ -XXX,XX +XXX,XX @@ int e2_tlbmask(CPUARMState *env)
32
ARMMMUIdxBit_E2);
33
}
34
35
-static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
36
- uint64_t value)
37
-{
38
- ARMCPU *cpu = env_archcpu(env);
39
- CPUState *cs = CPU(cpu);
40
-
41
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
42
-}
43
-
44
void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
45
uint64_t value)
46
{
47
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
49
}
50
51
-static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
52
- uint64_t value)
53
+void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
+ uint64_t value)
55
{
56
CPUState *cs = env_cpu(env);
57
58
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
59
}
60
61
-static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
62
- uint64_t value)
63
-{
64
- /*
65
- * Invalidate by VA, EL3
66
- * Currently handles both VAE3 and VALE3, since we don't support
67
- * flush-last-level-only.
68
- */
69
- ARMCPU *cpu = env_archcpu(env);
70
- CPUState *cs = CPU(cpu);
71
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
72
-
73
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
74
-}
75
-
76
void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
77
uint64_t value)
78
{
79
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
80
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
81
}
82
83
-static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
84
- uint64_t value)
85
+void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
86
+ uint64_t value)
87
{
88
CPUState *cs = env_cpu(env);
89
uint64_t pageaddr = sextract64(value << 12, 0, 56);
90
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
91
.opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 1,
92
.access = PL3_RW, .type = ARM_CP_CONST,
93
.resetvalue = 0 },
94
- { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64,
95
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0,
96
- .access = PL3_W, .type = ARM_CP_NO_RAW,
97
- .writefn = tlbi_aa64_alle3is_write },
98
- { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64,
99
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1,
100
- .access = PL3_W, .type = ARM_CP_NO_RAW,
101
- .writefn = tlbi_aa64_vae3is_write },
102
- { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64,
103
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5,
104
- .access = PL3_W, .type = ARM_CP_NO_RAW,
105
- .writefn = tlbi_aa64_vae3is_write },
106
- { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64,
107
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0,
108
- .access = PL3_W, .type = ARM_CP_NO_RAW,
109
- .writefn = tlbi_aa64_alle3_write },
110
- { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64,
111
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1,
112
- .access = PL3_W, .type = ARM_CP_NO_RAW,
113
- .writefn = tlbi_aa64_vae3_write },
114
- { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64,
115
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
116
- .access = PL3_W, .type = ARM_CP_NO_RAW,
117
- .writefn = tlbi_aa64_vae3_write },
118
};
119
120
#ifndef CONFIG_USER_ONLY
121
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/tcg/tlb-insns.c
124
+++ b/target/arm/tcg/tlb-insns.c
125
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
126
tlb_flush_by_mmuidx(cs, mask);
127
}
128
129
+static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
130
+ uint64_t value)
131
+{
132
+ ARMCPU *cpu = env_archcpu(env);
133
+ CPUState *cs = CPU(cpu);
134
+
135
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
136
+}
137
+
138
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
139
uint64_t value)
140
{
141
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
142
tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
143
}
144
145
+static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
146
+ uint64_t value)
147
+{
148
+ /*
149
+ * Invalidate by VA, EL3
150
+ * Currently handles both VAE3 and VALE3, since we don't support
151
+ * flush-last-level-only.
152
+ */
153
+ ARMCPU *cpu = env_archcpu(env);
154
+ CPUState *cs = CPU(cpu);
155
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
156
+
157
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
158
+}
159
+
160
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
164
.writefn = tlbi_aa64_vae2is_write },
165
};
166
167
+static const ARMCPRegInfo tlbi_el3_cp_reginfo[] = {
168
+ { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64,
169
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0,
170
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
171
+ .writefn = tlbi_aa64_alle3is_write },
172
+ { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64,
173
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1,
174
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
175
+ .writefn = tlbi_aa64_vae3is_write },
176
+ { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64,
177
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5,
178
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
179
+ .writefn = tlbi_aa64_vae3is_write },
180
+ { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64,
181
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0,
182
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
183
+ .writefn = tlbi_aa64_alle3_write },
184
+ { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64,
185
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1,
186
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
187
+ .writefn = tlbi_aa64_vae3_write },
188
+ { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64,
189
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
190
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
191
+ .writefn = tlbi_aa64_vae3_write },
192
+};
193
+
194
void define_tlb_insn_regs(ARMCPU *cpu)
195
{
196
CPUARMState *env = &cpu->env;
197
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
198
&& arm_feature(env, ARM_FEATURE_V8))) {
199
define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
200
}
201
+ if (arm_feature(env, ARM_FEATURE_EL3)) {
202
+ define_arm_cp_regs(cpu, tlbi_el3_cp_reginfo);
203
+ }
204
}
205
--
2.34.1
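
The writefns above all funnel into tlb_flush_by_mmuidx() and friends, which take a
bitmask of MMU indexes so that e.g. TLBI ALLE3 only drops entries belonging to the EL3
regime. A standalone sketch of that bitmask idea follows; the index bits and the toy
TLB are illustrative only, not QEMU's real ARMMMUIdxBit_* values or data structures:

#include <stdint.h>
#include <stdio.h>

enum {
    IDXBIT_E10_0 = 1 << 0,   /* EL1&0, unprivileged */
    IDXBIT_E10_1 = 1 << 1,   /* EL1&0, privileged */
    IDXBIT_E2    = 1 << 2,   /* EL2 */
    IDXBIT_E3    = 1 << 3,   /* EL3 */
};

struct tlb_entry {
    uint64_t va;
    int mmu_idx_bit;         /* which translation regime this entry belongs to */
    int valid;
};

static struct tlb_entry tlb[] = {
    { 0x400000, IDXBIT_E10_1, 1 },
    { 0x500000, IDXBIT_E3,    1 },
    { 0x600000, IDXBIT_E2,    1 },
};

/* Drop every entry whose regime bit is in the mask, leave the rest alone */
static void tlb_flush_by_mask(int idxmap)
{
    for (unsigned i = 0; i < sizeof(tlb) / sizeof(tlb[0]); i++) {
        if (tlb[i].mmu_idx_bit & idxmap) {
            tlb[i].valid = 0;
        }
    }
}

int main(void)
{
    /* TLBI ALLE3 only touches the EL3 regime */
    tlb_flush_by_mask(IDXBIT_E3);
    for (unsigned i = 0; i < 3; i++) {
        printf("va 0x%llx valid=%d\n",
               (unsigned long long)tlb[i].va, tlb[i].valid);
    }
    return 0;
}
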
New patch

Move the TLBI invalidate-range insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-7-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 2 +
 target/arm/helper.c | 330 +------------------------------------
 target/arm/tcg/tlb-insns.c | 329 ++++++++++++++++++++++++++++++
 3 files changed, 333 insertions(+), 328 deletions(-)
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpregs.h
15
+++ b/target/arm/cpregs.h
16
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
17
bool isread);
18
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
19
bool isread);
20
+CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
21
+ bool isread);
22
bool tlb_force_broadcast(CPUARMState *env);
23
int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
24
uint64_t addr);
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
30
31
#ifdef TARGET_AARCH64
32
/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
33
-static CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
34
- bool isread)
35
+CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
36
+ bool isread)
37
{
38
if (arm_current_el(env) == 1 &&
39
(arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
40
@@ -XXX,XX +XXX,XX @@ int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
41
: ARMMMUIdxBit_Stage2);
42
}
43
44
-#ifdef TARGET_AARCH64
45
-typedef struct {
46
- uint64_t base;
47
- uint64_t length;
48
-} TLBIRange;
49
-
50
-static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
51
-{
52
- /*
53
- * Note that the TLBI range TG field encoding differs from both
54
- * TG0 and TG1 encodings.
55
- */
56
- switch (tg) {
57
- case 1:
58
- return Gran4K;
59
- case 2:
60
- return Gran16K;
61
- case 3:
62
- return Gran64K;
63
- default:
64
- return GranInvalid;
65
- }
66
-}
67
-
68
-static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
69
- uint64_t value)
70
-{
71
- unsigned int page_size_granule, page_shift, num, scale, exponent;
72
- /* Extract one bit to represent the va selector in use. */
73
- uint64_t select = sextract64(value, 36, 1);
74
- ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true, false);
75
- TLBIRange ret = { };
76
- ARMGranuleSize gran;
77
-
78
- page_size_granule = extract64(value, 46, 2);
79
- gran = tlbi_range_tg_to_gran_size(page_size_granule);
80
-
81
- /* The granule encoded in value must match the granule in use. */
82
- if (gran != param.gran) {
83
- qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
84
- page_size_granule);
85
- return ret;
86
- }
87
-
88
- page_shift = arm_granule_bits(gran);
89
- num = extract64(value, 39, 5);
90
- scale = extract64(value, 44, 2);
91
- exponent = (5 * scale) + 1;
92
-
93
- ret.length = (num + 1) << (exponent + page_shift);
94
-
95
- if (param.select) {
96
- ret.base = sextract64(value, 0, 37);
97
- } else {
98
- ret.base = extract64(value, 0, 37);
99
- }
100
- if (param.ds) {
101
- /*
102
- * With DS=1, BaseADDR is always shifted 16 so that it is able
103
- * to address all 52 va bits. The input address is perforce
104
- * aligned on a 64k boundary regardless of translation granule.
105
- */
106
- page_shift = 16;
107
- }
108
- ret.base <<= page_shift;
109
-
110
- return ret;
111
-}
112
-
113
-static void do_rvae_write(CPUARMState *env, uint64_t value,
114
- int idxmap, bool synced)
115
-{
116
- ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap);
117
- TLBIRange range;
118
- int bits;
119
-
120
- range = tlbi_aa64_get_range(env, one_idx, value);
121
- bits = tlbbits_for_regime(env, one_idx, range.base);
122
-
123
- if (synced) {
124
- tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env),
125
- range.base,
126
- range.length,
127
- idxmap,
128
- bits);
129
- } else {
130
- tlb_flush_range_by_mmuidx(env_cpu(env), range.base,
131
- range.length, idxmap, bits);
132
- }
133
-}
134
-
135
-static void tlbi_aa64_rvae1_write(CPUARMState *env,
136
- const ARMCPRegInfo *ri,
137
- uint64_t value)
138
-{
139
- /*
140
- * Invalidate by VA range, EL1&0.
141
- * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
142
- * since we don't support flush-for-specific-ASID-only or
143
- * flush-last-level-only.
144
- */
145
-
146
- do_rvae_write(env, value, vae1_tlbmask(env),
147
- tlb_force_broadcast(env));
148
-}
149
-
150
-static void tlbi_aa64_rvae1is_write(CPUARMState *env,
151
- const ARMCPRegInfo *ri,
152
- uint64_t value)
153
-{
154
- /*
155
- * Invalidate by VA range, Inner/Outer Shareable EL1&0.
156
- * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
157
- * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
158
- * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
159
- * shareable specific flushes.
160
- */
161
-
162
- do_rvae_write(env, value, vae1_tlbmask(env), true);
163
-}
164
-
165
-static void tlbi_aa64_rvae2_write(CPUARMState *env,
166
- const ARMCPRegInfo *ri,
167
- uint64_t value)
168
-{
169
- /*
170
- * Invalidate by VA range, EL2.
171
- * Currently handles all of RVAE2 and RVALE2,
172
- * since we don't support flush-for-specific-ASID-only or
173
- * flush-last-level-only.
174
- */
175
-
176
- do_rvae_write(env, value, vae2_tlbmask(env),
177
- tlb_force_broadcast(env));
178
-
179
-
180
-}
181
-
182
-static void tlbi_aa64_rvae2is_write(CPUARMState *env,
183
- const ARMCPRegInfo *ri,
184
- uint64_t value)
185
-{
186
- /*
187
- * Invalidate by VA range, Inner/Outer Shareable, EL2.
188
- * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS,
189
- * since we don't support flush-for-specific-ASID-only,
190
- * flush-last-level-only or inner/outer shareable specific flushes.
191
- */
192
-
193
- do_rvae_write(env, value, vae2_tlbmask(env), true);
194
-
195
-}
196
-
197
-static void tlbi_aa64_rvae3_write(CPUARMState *env,
198
- const ARMCPRegInfo *ri,
199
- uint64_t value)
200
-{
201
- /*
202
- * Invalidate by VA range, EL3.
203
- * Currently handles all of RVAE3 and RVALE3,
204
- * since we don't support flush-for-specific-ASID-only or
205
- * flush-last-level-only.
206
- */
207
-
208
- do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
209
-}
210
-
211
-static void tlbi_aa64_rvae3is_write(CPUARMState *env,
212
- const ARMCPRegInfo *ri,
213
- uint64_t value)
214
-{
215
- /*
216
- * Invalidate by VA range, EL3, Inner/Outer Shareable.
217
- * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS,
218
- * since we don't support flush-for-specific-ASID-only,
219
- * flush-last-level-only or inner/outer specific flushes.
220
- */
221
-
222
- do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
223
-}
224
-
225
-static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
226
- uint64_t value)
227
-{
228
- do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
229
- tlb_force_broadcast(env));
230
-}
231
-
232
-static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
233
- const ARMCPRegInfo *ri,
234
- uint64_t value)
235
-{
236
- do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
237
-}
238
-#endif
239
-
240
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
241
bool isread)
242
{
243
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
244
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
245
};
246
247
-static const ARMCPRegInfo tlbirange_reginfo[] = {
248
- { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
249
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
250
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
251
- .fgt = FGT_TLBIRVAE1IS,
252
- .writefn = tlbi_aa64_rvae1is_write },
253
- { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
254
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
255
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
256
- .fgt = FGT_TLBIRVAAE1IS,
257
- .writefn = tlbi_aa64_rvae1is_write },
258
- { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
259
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
260
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
261
- .fgt = FGT_TLBIRVALE1IS,
262
- .writefn = tlbi_aa64_rvae1is_write },
263
- { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
264
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
265
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
266
- .fgt = FGT_TLBIRVAALE1IS,
267
- .writefn = tlbi_aa64_rvae1is_write },
268
- { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
269
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
270
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
271
- .fgt = FGT_TLBIRVAE1OS,
272
- .writefn = tlbi_aa64_rvae1is_write },
273
- { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
274
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
275
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
276
- .fgt = FGT_TLBIRVAAE1OS,
277
- .writefn = tlbi_aa64_rvae1is_write },
278
- { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
279
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
280
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
281
- .fgt = FGT_TLBIRVALE1OS,
282
- .writefn = tlbi_aa64_rvae1is_write },
283
- { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
284
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
285
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
286
- .fgt = FGT_TLBIRVAALE1OS,
287
- .writefn = tlbi_aa64_rvae1is_write },
288
- { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
289
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
290
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
291
- .fgt = FGT_TLBIRVAE1,
292
- .writefn = tlbi_aa64_rvae1_write },
293
- { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
294
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
295
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
296
- .fgt = FGT_TLBIRVAAE1,
297
- .writefn = tlbi_aa64_rvae1_write },
298
- { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
299
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
300
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
301
- .fgt = FGT_TLBIRVALE1,
302
- .writefn = tlbi_aa64_rvae1_write },
303
- { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
304
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
305
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
306
- .fgt = FGT_TLBIRVAALE1,
307
- .writefn = tlbi_aa64_rvae1_write },
308
- { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
309
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
310
- .access = PL2_W, .type = ARM_CP_NO_RAW,
311
- .writefn = tlbi_aa64_ripas2e1is_write },
312
- { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
313
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
314
- .access = PL2_W, .type = ARM_CP_NO_RAW,
315
- .writefn = tlbi_aa64_ripas2e1is_write },
316
- { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
317
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
318
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
319
- .writefn = tlbi_aa64_rvae2is_write },
320
- { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
321
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
322
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
323
- .writefn = tlbi_aa64_rvae2is_write },
324
- { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
325
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
326
- .access = PL2_W, .type = ARM_CP_NO_RAW,
327
- .writefn = tlbi_aa64_ripas2e1_write },
328
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
329
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
330
- .access = PL2_W, .type = ARM_CP_NO_RAW,
331
- .writefn = tlbi_aa64_ripas2e1_write },
332
- { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
333
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
334
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
335
- .writefn = tlbi_aa64_rvae2is_write },
336
- { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
337
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
338
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
339
- .writefn = tlbi_aa64_rvae2is_write },
340
- { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
341
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
342
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
343
- .writefn = tlbi_aa64_rvae2_write },
344
- { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
345
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
346
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
347
- .writefn = tlbi_aa64_rvae2_write },
348
- { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
349
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
350
- .access = PL3_W, .type = ARM_CP_NO_RAW,
351
- .writefn = tlbi_aa64_rvae3is_write },
352
- { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
353
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
354
- .access = PL3_W, .type = ARM_CP_NO_RAW,
355
- .writefn = tlbi_aa64_rvae3is_write },
356
- { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
357
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
358
- .access = PL3_W, .type = ARM_CP_NO_RAW,
359
- .writefn = tlbi_aa64_rvae3is_write },
360
- { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
361
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
362
- .access = PL3_W, .type = ARM_CP_NO_RAW,
363
- .writefn = tlbi_aa64_rvae3is_write },
364
- { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
365
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
366
- .access = PL3_W, .type = ARM_CP_NO_RAW,
367
- .writefn = tlbi_aa64_rvae3_write },
368
- { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
369
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
370
- .access = PL3_W, .type = ARM_CP_NO_RAW,
371
- .writefn = tlbi_aa64_rvae3_write },
372
-};
373
-
374
static const ARMCPRegInfo tlbios_reginfo[] = {
375
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
376
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
377
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
378
if (cpu_isar_feature(aa64_rndr, cpu)) {
379
define_arm_cp_regs(cpu, rndr_reginfo);
380
}
381
- if (cpu_isar_feature(aa64_tlbirange, cpu)) {
382
- define_arm_cp_regs(cpu, tlbirange_reginfo);
383
- }
384
if (cpu_isar_feature(aa64_tlbios, cpu)) {
385
define_arm_cp_regs(cpu, tlbios_reginfo);
386
}
387
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
388
index XXXXXXX..XXXXXXX 100644
389
--- a/target/arm/tcg/tlb-insns.c
390
+++ b/target/arm/tcg/tlb-insns.c
391
@@ -XXX,XX +XXX,XX @@
392
* SPDX-License-Identifier: GPL-2.0-or-later
393
*/
394
#include "qemu/osdep.h"
395
+#include "qemu/log.h"
396
#include "exec/exec-all.h"
397
#include "cpu.h"
398
#include "internals.h"
399
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el3_cp_reginfo[] = {
400
.writefn = tlbi_aa64_vae3_write },
401
};
402
403
+#ifdef TARGET_AARCH64
404
+typedef struct {
405
+ uint64_t base;
406
+ uint64_t length;
407
+} TLBIRange;
408
+
409
+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
410
+{
411
+ /*
412
+ * Note that the TLBI range TG field encoding differs from both
413
+ * TG0 and TG1 encodings.
414
+ */
415
+ switch (tg) {
416
+ case 1:
417
+ return Gran4K;
418
+ case 2:
419
+ return Gran16K;
420
+ case 3:
421
+ return Gran64K;
422
+ default:
423
+ return GranInvalid;
424
+ }
425
+}
426
+
427
+static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
428
+ uint64_t value)
429
+{
430
+ unsigned int page_size_granule, page_shift, num, scale, exponent;
431
+ /* Extract one bit to represent the va selector in use. */
432
+ uint64_t select = sextract64(value, 36, 1);
433
+ ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true, false);
434
+ TLBIRange ret = { };
435
+ ARMGranuleSize gran;
436
+
437
+ page_size_granule = extract64(value, 46, 2);
438
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);
439
+
440
+ /* The granule encoded in value must match the granule in use. */
441
+ if (gran != param.gran) {
442
+ qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
443
+ page_size_granule);
444
+ return ret;
445
+ }
446
+
447
+ page_shift = arm_granule_bits(gran);
448
+ num = extract64(value, 39, 5);
449
+ scale = extract64(value, 44, 2);
450
+ exponent = (5 * scale) + 1;
451
+
452
+ ret.length = (num + 1) << (exponent + page_shift);
453
+
454
+ if (param.select) {
455
+ ret.base = sextract64(value, 0, 37);
456
+ } else {
457
+ ret.base = extract64(value, 0, 37);
458
+ }
459
+ if (param.ds) {
460
+ /*
461
+ * With DS=1, BaseADDR is always shifted 16 so that it is able
462
+ * to address all 52 va bits. The input address is perforce
463
+ * aligned on a 64k boundary regardless of translation granule.
464
+ */
465
+ page_shift = 16;
466
+ }
467
+ ret.base <<= page_shift;
468
+
469
+ return ret;
470
+}
471
+
472
+static void do_rvae_write(CPUARMState *env, uint64_t value,
473
+ int idxmap, bool synced)
474
+{
475
+ ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap);
476
+ TLBIRange range;
477
+ int bits;
478
+
479
+ range = tlbi_aa64_get_range(env, one_idx, value);
480
+ bits = tlbbits_for_regime(env, one_idx, range.base);
481
+
482
+ if (synced) {
483
+ tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env),
484
+ range.base,
485
+ range.length,
486
+ idxmap,
487
+ bits);
488
+ } else {
489
+ tlb_flush_range_by_mmuidx(env_cpu(env), range.base,
490
+ range.length, idxmap, bits);
491
+ }
492
+}
493
+
494
+static void tlbi_aa64_rvae1_write(CPUARMState *env,
495
+ const ARMCPRegInfo *ri,
496
+ uint64_t value)
497
+{
498
+ /*
499
+ * Invalidate by VA range, EL1&0.
500
+ * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
501
+ * since we don't support flush-for-specific-ASID-only or
502
+ * flush-last-level-only.
503
+ */
504
+
505
+ do_rvae_write(env, value, vae1_tlbmask(env),
506
+ tlb_force_broadcast(env));
507
+}
508
+
509
+static void tlbi_aa64_rvae1is_write(CPUARMState *env,
510
+ const ARMCPRegInfo *ri,
511
+ uint64_t value)
512
+{
513
+ /*
514
+ * Invalidate by VA range, Inner/Outer Shareable EL1&0.
515
+ * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
516
+ * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
517
+ * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
518
+ * shareable specific flushes.
519
+ */
520
+
521
+ do_rvae_write(env, value, vae1_tlbmask(env), true);
522
+}
523
+
524
+static void tlbi_aa64_rvae2_write(CPUARMState *env,
525
+ const ARMCPRegInfo *ri,
526
+ uint64_t value)
527
+{
528
+ /*
529
+ * Invalidate by VA range, EL2.
530
+ * Currently handles all of RVAE2 and RVALE2,
531
+ * since we don't support flush-for-specific-ASID-only or
532
+ * flush-last-level-only.
533
+ */
534
+
535
+ do_rvae_write(env, value, vae2_tlbmask(env),
536
+ tlb_force_broadcast(env));
537
+
538
+
539
+}
540
+
541
+static void tlbi_aa64_rvae2is_write(CPUARMState *env,
542
+ const ARMCPRegInfo *ri,
543
+ uint64_t value)
544
+{
545
+ /*
546
+ * Invalidate by VA range, Inner/Outer Shareable, EL2.
547
+ * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS,
548
+ * since we don't support flush-for-specific-ASID-only,
549
+ * flush-last-level-only or inner/outer shareable specific flushes.
550
+ */
551
+
552
+ do_rvae_write(env, value, vae2_tlbmask(env), true);
553
+
554
+}
555
+
556
+static void tlbi_aa64_rvae3_write(CPUARMState *env,
557
+ const ARMCPRegInfo *ri,
558
+ uint64_t value)
559
+{
560
+ /*
561
+ * Invalidate by VA range, EL3.
562
+ * Currently handles all of RVAE3 and RVALE3,
563
+ * since we don't support flush-for-specific-ASID-only or
564
+ * flush-last-level-only.
565
+ */
566
+
567
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
568
+}
569
+
570
+static void tlbi_aa64_rvae3is_write(CPUARMState *env,
571
+ const ARMCPRegInfo *ri,
572
+ uint64_t value)
573
+{
574
+ /*
575
+ * Invalidate by VA range, EL3, Inner/Outer Shareable.
576
+ * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS,
577
+ * since we don't support flush-for-specific-ASID-only,
578
+ * flush-last-level-only or inner/outer specific flushes.
579
+ */
580
+
581
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
582
+}
583
+
584
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
585
+ uint64_t value)
586
+{
587
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
588
+ tlb_force_broadcast(env));
589
+}
590
+
591
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
592
+ const ARMCPRegInfo *ri,
593
+ uint64_t value)
594
+{
595
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
596
+}
597
+
598
+static const ARMCPRegInfo tlbirange_reginfo[] = {
599
+ { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
600
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
601
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
602
+ .fgt = FGT_TLBIRVAE1IS,
603
+ .writefn = tlbi_aa64_rvae1is_write },
604
+ { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
605
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
606
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
607
+ .fgt = FGT_TLBIRVAAE1IS,
608
+ .writefn = tlbi_aa64_rvae1is_write },
609
+ { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
610
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
611
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
612
+ .fgt = FGT_TLBIRVALE1IS,
613
+ .writefn = tlbi_aa64_rvae1is_write },
614
+ { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
615
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
616
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
617
+ .fgt = FGT_TLBIRVAALE1IS,
618
+ .writefn = tlbi_aa64_rvae1is_write },
619
+ { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
620
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
621
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
622
+ .fgt = FGT_TLBIRVAE1OS,
623
+ .writefn = tlbi_aa64_rvae1is_write },
624
+ { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
625
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
626
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
627
+ .fgt = FGT_TLBIRVAAE1OS,
628
+ .writefn = tlbi_aa64_rvae1is_write },
629
+ { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
630
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
631
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
632
+ .fgt = FGT_TLBIRVALE1OS,
633
+ .writefn = tlbi_aa64_rvae1is_write },
634
+ { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
635
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
636
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
637
+ .fgt = FGT_TLBIRVAALE1OS,
638
+ .writefn = tlbi_aa64_rvae1is_write },
639
+ { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
640
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
641
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
642
+ .fgt = FGT_TLBIRVAE1,
643
+ .writefn = tlbi_aa64_rvae1_write },
644
+ { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
645
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
646
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
647
+ .fgt = FGT_TLBIRVAAE1,
648
+ .writefn = tlbi_aa64_rvae1_write },
649
+ { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
650
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
651
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
652
+ .fgt = FGT_TLBIRVALE1,
653
+ .writefn = tlbi_aa64_rvae1_write },
654
+ { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
655
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
656
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
657
+ .fgt = FGT_TLBIRVAALE1,
658
+ .writefn = tlbi_aa64_rvae1_write },
659
+ { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
660
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
661
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
662
+ .writefn = tlbi_aa64_ripas2e1is_write },
663
+ { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
664
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
665
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
666
+ .writefn = tlbi_aa64_ripas2e1is_write },
667
+ { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
668
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
669
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
670
+ .writefn = tlbi_aa64_rvae2is_write },
671
+ { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
672
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
673
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
674
+ .writefn = tlbi_aa64_rvae2is_write },
675
+ { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
676
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
677
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
678
+ .writefn = tlbi_aa64_ripas2e1_write },
679
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
680
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
681
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
682
+ .writefn = tlbi_aa64_ripas2e1_write },
683
+ { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
684
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
685
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
686
+ .writefn = tlbi_aa64_rvae2is_write },
687
+ { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
688
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
689
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
690
+ .writefn = tlbi_aa64_rvae2is_write },
691
+ { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
692
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
693
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
694
+ .writefn = tlbi_aa64_rvae2_write },
695
+ { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
696
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
697
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
698
+ .writefn = tlbi_aa64_rvae2_write },
699
+ { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
700
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
701
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
702
+ .writefn = tlbi_aa64_rvae3is_write },
703
+ { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
704
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
705
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
706
+ .writefn = tlbi_aa64_rvae3is_write },
707
+ { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
708
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
709
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
710
+ .writefn = tlbi_aa64_rvae3is_write },
711
+ { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
712
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
713
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
714
+ .writefn = tlbi_aa64_rvae3is_write },
715
+ { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
716
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
717
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
718
+ .writefn = tlbi_aa64_rvae3_write },
719
+ { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
720
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
721
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
722
+ .writefn = tlbi_aa64_rvae3_write },
723
+};
724
+#endif
725
+
726
void define_tlb_insn_regs(ARMCPU *cpu)
727
{
728
CPUARMState *env = &cpu->env;
729
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
730
if (arm_feature(env, ARM_FEATURE_EL3)) {
731
define_arm_cp_regs(cpu, tlbi_el3_cp_reginfo);
732
}
733
+#ifdef TARGET_AARCH64
734
+ if (cpu_isar_feature(aa64_tlbirange, cpu)) {
735
+ define_arm_cp_regs(cpu, tlbirange_reginfo);
736
+ }
737
+#endif
738
}
739
--
2.34.1
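
The range insns moved here decode NUM, SCALE and TG out of the operand and compute the
invalidation length as (NUM + 1) << (5 * SCALE + 1 + page_shift), as tlbi_aa64_get_range()
does above. A standalone sketch of just that arithmetic, with a made-up operand value;
extract64() is reimplemented locally so the example compiles on its own:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Local copy of QEMU's extract64(): return 'length' bits starting at 'start' */
static uint64_t extract64(uint64_t value, int start, int length)
{
    return (value >> start) & (~0ULL >> (64 - length));
}

int main(void)
{
    /* Example TLBI RVAE1 operand: TG=1 (4K granule), SCALE=0, NUM=3 */
    uint64_t value = (1ULL << 46) | (0ULL << 44) | (3ULL << 39);
    unsigned num = extract64(value, 39, 5);
    unsigned scale = extract64(value, 44, 2);
    unsigned exponent = 5 * scale + 1;
    unsigned page_shift = 12;                 /* Gran4K */
    uint64_t length = (uint64_t)(num + 1) << (exponent + page_shift);

    /* (3 + 1) << 1 = 8 pages of 4K, i.e. 32768 bytes invalidated */
    printf("range length = %" PRIu64 " bytes\n", length);
    return 0;
}
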
New patch

Move the TLBI OS insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-8-peter.maydell@linaro.org
---
 target/arm/helper.c | 80 --------------------------------------
 target/arm/tcg/tlb-insns.c | 80 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+), 80 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/helper.c
14
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
16
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
17
};
18
19
-static const ARMCPRegInfo tlbios_reginfo[] = {
20
- { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
21
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
22
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
23
- .fgt = FGT_TLBIVMALLE1OS,
24
- .writefn = tlbi_aa64_vmalle1is_write },
25
- { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
26
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
27
- .fgt = FGT_TLBIVAE1OS,
28
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
29
- .writefn = tlbi_aa64_vae1is_write },
30
- { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
31
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
32
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
33
- .fgt = FGT_TLBIASIDE1OS,
34
- .writefn = tlbi_aa64_vmalle1is_write },
35
- { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
36
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
37
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
38
- .fgt = FGT_TLBIVAAE1OS,
39
- .writefn = tlbi_aa64_vae1is_write },
40
- { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
41
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
42
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
43
- .fgt = FGT_TLBIVALE1OS,
44
- .writefn = tlbi_aa64_vae1is_write },
45
- { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
46
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
47
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
48
- .fgt = FGT_TLBIVAALE1OS,
49
- .writefn = tlbi_aa64_vae1is_write },
50
- { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
51
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
52
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
53
- .writefn = tlbi_aa64_alle2is_write },
54
- { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
55
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
56
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
57
- .writefn = tlbi_aa64_vae2is_write },
58
- { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
59
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
60
- .access = PL2_W, .type = ARM_CP_NO_RAW,
61
- .writefn = tlbi_aa64_alle1is_write },
62
- { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
63
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
64
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
65
- .writefn = tlbi_aa64_vae2is_write },
66
- { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
67
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
68
- .access = PL2_W, .type = ARM_CP_NO_RAW,
69
- .writefn = tlbi_aa64_alle1is_write },
70
- { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
71
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
72
- .access = PL2_W, .type = ARM_CP_NOP },
73
- { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
74
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
75
- .access = PL2_W, .type = ARM_CP_NOP },
76
- { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
77
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
78
- .access = PL2_W, .type = ARM_CP_NOP },
79
- { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
80
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
81
- .access = PL2_W, .type = ARM_CP_NOP },
82
- { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
83
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
84
- .access = PL3_W, .type = ARM_CP_NO_RAW,
85
- .writefn = tlbi_aa64_alle3is_write },
86
- { .name = "TLBI_VAE3OS", .state = ARM_CP_STATE_AA64,
87
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 1,
88
- .access = PL3_W, .type = ARM_CP_NO_RAW,
89
- .writefn = tlbi_aa64_vae3is_write },
90
- { .name = "TLBI_VALE3OS", .state = ARM_CP_STATE_AA64,
91
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
92
- .access = PL3_W, .type = ARM_CP_NO_RAW,
93
- .writefn = tlbi_aa64_vae3is_write },
94
-};
95
-
96
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
97
{
98
Error *err = NULL;
99
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
100
if (cpu_isar_feature(aa64_rndr, cpu)) {
101
define_arm_cp_regs(cpu, rndr_reginfo);
102
}
103
- if (cpu_isar_feature(aa64_tlbios, cpu)) {
104
- define_arm_cp_regs(cpu, tlbios_reginfo);
105
- }
106
/* Data Cache clean instructions up to PoP */
107
if (cpu_isar_feature(aa64_dcpop, cpu)) {
108
define_one_arm_cp_reg(cpu, dcpop_reg);
109
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/tcg/tlb-insns.c
112
+++ b/target/arm/tcg/tlb-insns.c
113
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
114
.access = PL3_W, .type = ARM_CP_NO_RAW,
115
.writefn = tlbi_aa64_rvae3_write },
116
};
117
+
118
+static const ARMCPRegInfo tlbios_reginfo[] = {
119
+ { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
120
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
121
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
122
+ .fgt = FGT_TLBIVMALLE1OS,
123
+ .writefn = tlbi_aa64_vmalle1is_write },
124
+ { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
125
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
126
+ .fgt = FGT_TLBIVAE1OS,
127
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
128
+ .writefn = tlbi_aa64_vae1is_write },
129
+ { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
130
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
131
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
132
+ .fgt = FGT_TLBIASIDE1OS,
133
+ .writefn = tlbi_aa64_vmalle1is_write },
134
+ { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
135
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
136
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
137
+ .fgt = FGT_TLBIVAAE1OS,
138
+ .writefn = tlbi_aa64_vae1is_write },
139
+ { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
140
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
141
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
142
+ .fgt = FGT_TLBIVALE1OS,
143
+ .writefn = tlbi_aa64_vae1is_write },
144
+ { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
145
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
146
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
147
+ .fgt = FGT_TLBIVAALE1OS,
148
+ .writefn = tlbi_aa64_vae1is_write },
149
+ { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
150
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
151
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
152
+ .writefn = tlbi_aa64_alle2is_write },
153
+ { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
154
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
155
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
156
+ .writefn = tlbi_aa64_vae2is_write },
157
+ { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
158
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
159
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
160
+ .writefn = tlbi_aa64_alle1is_write },
161
+ { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
162
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
163
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
164
+ .writefn = tlbi_aa64_vae2is_write },
165
+ { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
166
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
167
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
168
+ .writefn = tlbi_aa64_alle1is_write },
169
+ { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
170
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
171
+ .access = PL2_W, .type = ARM_CP_NOP },
172
+ { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
173
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
174
+ .access = PL2_W, .type = ARM_CP_NOP },
175
+ { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
176
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
177
+ .access = PL2_W, .type = ARM_CP_NOP },
178
+ { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
179
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
180
+ .access = PL2_W, .type = ARM_CP_NOP },
181
+ { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
182
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
183
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
184
+ .writefn = tlbi_aa64_alle3is_write },
185
+ { .name = "TLBI_VAE3OS", .state = ARM_CP_STATE_AA64,
186
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 1,
187
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
188
+ .writefn = tlbi_aa64_vae3is_write },
189
+ { .name = "TLBI_VALE3OS", .state = ARM_CP_STATE_AA64,
190
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
191
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
192
+ .writefn = tlbi_aa64_vae3is_write },
193
+};
194
#endif
195
196
void define_tlb_insn_regs(ARMCPU *cpu)
197
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
198
if (cpu_isar_feature(aa64_tlbirange, cpu)) {
199
define_arm_cp_regs(cpu, tlbirange_reginfo);
200
}
201
+ if (cpu_isar_feature(aa64_tlbios, cpu)) {
202
+ define_arm_cp_regs(cpu, tlbios_reginfo);
203
+ }
204
#endif
205
}
206
--
207
2.34.1
New patch
1
The remaining functions that we temporarily made global are now
2
used only from call sites in tlb-insns.c; move them across and
3
make them file-local again.
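
(Aside: the pattern being finished here is the usual C linkage dance when helpers migrate between translation units. A minimal standalone sketch, with invented names rather than the real QEMU declarations:)

    /*
     * Hypothetical example: while callers live in two files, the helper
     * needs external linkage and a declaration in a shared header:
     *
     *     int helper_mask(int flags);     -- in a shared header
     *
     * Once the last caller moves into the same file, the declaration is
     * dropped from the header and the definition becomes file-local again:
     */
    static int helper_mask(int flags)
    {
        return flags & 0xff;   /* trivial stand-in for the real computation */
    }

    int main(void)
    {
        return helper_mask(0x1ff) == 0xff ? 0 : 1;
    }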
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241210160452.2427965-9-peter.maydell@linaro.org
8
---
9
target/arm/cpregs.h | 34 ------
10
target/arm/helper.c | 220 -------------------------------------
11
target/arm/tcg/tlb-insns.c | 220 +++++++++++++++++++++++++++++++++++++
12
3 files changed, 220 insertions(+), 254 deletions(-)
13
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
19
return ri->opc1 == 4 || ri->opc1 == 5;
20
}
21
22
-/*
23
- * Temporary declarations of functions until the move to tlb_insn_helper.c
24
- * is complete and we can make the functions static again
25
- */
26
-CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
27
- bool isread);
28
-CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
29
- bool isread);
30
-CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
31
- bool isread);
32
-bool tlb_force_broadcast(CPUARMState *env);
33
-int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
34
- uint64_t addr);
35
-int vae1_tlbbits(CPUARMState *env, uint64_t addr);
36
-int vae2_tlbbits(CPUARMState *env, uint64_t addr);
37
-int vae1_tlbmask(CPUARMState *env);
38
-int vae2_tlbmask(CPUARMState *env);
39
-int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
40
-int e2_tlbmask(CPUARMState *env);
41
-void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
42
- uint64_t value);
43
-void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
44
- uint64_t value);
45
-void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
- uint64_t value);
47
-void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
- uint64_t value);
49
-void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
50
- uint64_t value);
51
-void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
52
- uint64_t value);
53
-void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
- uint64_t value);
55
-
56
#endif /* TARGET_ARM_CPREGS_H */
57
diff --git a/target/arm/helper.c b/target/arm/helper.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/helper.c
60
+++ b/target/arm/helper.c
61
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
62
return CP_ACCESS_OK;
63
}
64
65
-/* Check for traps from EL1 due to HCR_EL2.TTLB. */
66
-CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
67
- bool isread)
68
-{
69
- if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
70
- return CP_ACCESS_TRAP_EL2;
71
- }
72
- return CP_ACCESS_OK;
73
-}
74
-
75
-/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
76
-CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
77
- bool isread)
78
-{
79
- if (arm_current_el(env) == 1 &&
80
- (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
81
- return CP_ACCESS_TRAP_EL2;
82
- }
83
- return CP_ACCESS_OK;
84
-}
85
-
86
-#ifdef TARGET_AARCH64
87
-/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
88
-CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
89
- bool isread)
90
-{
91
- if (arm_current_el(env) == 1 &&
92
- (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
93
- return CP_ACCESS_TRAP_EL2;
94
- }
95
- return CP_ACCESS_OK;
96
-}
97
-#endif
98
-
99
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
100
{
101
ARMCPU *cpu = env_archcpu(env);
102
@@ -XXX,XX +XXX,XX @@ int alle1_tlbmask(CPUARMState *env)
103
ARMMMUIdxBit_Stage2_S);
104
}
105
106
-/*
107
- * Non-IS variants of TLB operations are upgraded to
108
- * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
109
- * force broadcast of these operations.
110
- */
111
-bool tlb_force_broadcast(CPUARMState *env)
112
-{
113
- return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
114
-}
115
-
116
static const ARMCPRegInfo cp_reginfo[] = {
117
/*
118
* Define the secure and non-secure FCSE identifier CP registers
119
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
120
return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
121
}
122
123
-/*
124
- * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
125
- * Page D4-1736 (DDI0487A.b)
126
- */
127
-
128
-int vae1_tlbmask(CPUARMState *env)
129
-{
130
- uint64_t hcr = arm_hcr_el2_eff(env);
131
- uint16_t mask;
132
-
133
- assert(arm_feature(env, ARM_FEATURE_AARCH64));
134
-
135
- if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
136
- mask = ARMMMUIdxBit_E20_2 |
137
- ARMMMUIdxBit_E20_2_PAN |
138
- ARMMMUIdxBit_E20_0;
139
- } else {
140
- /* This is AArch64 only, so we don't need to touch the EL30_x TLBs */
141
- mask = ARMMMUIdxBit_E10_1 |
142
- ARMMMUIdxBit_E10_1_PAN |
143
- ARMMMUIdxBit_E10_0;
144
- }
145
- return mask;
146
-}
147
-
148
-int vae2_tlbmask(CPUARMState *env)
149
-{
150
- uint64_t hcr = arm_hcr_el2_eff(env);
151
- uint16_t mask;
152
-
153
- if (hcr & HCR_E2H) {
154
- mask = ARMMMUIdxBit_E20_2 |
155
- ARMMMUIdxBit_E20_2_PAN |
156
- ARMMMUIdxBit_E20_0;
157
- } else {
158
- mask = ARMMMUIdxBit_E2;
159
- }
160
- return mask;
161
-}
162
-
163
-/* Return 56 if TBI is enabled, 64 otherwise. */
164
-int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
165
- uint64_t addr)
166
-{
167
- uint64_t tcr = regime_tcr(env, mmu_idx);
168
- int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
169
- int select = extract64(addr, 55, 1);
170
-
171
- return (tbi >> select) & 1 ? 56 : 64;
172
-}
173
-
174
-int vae1_tlbbits(CPUARMState *env, uint64_t addr)
175
-{
176
- uint64_t hcr = arm_hcr_el2_eff(env);
177
- ARMMMUIdx mmu_idx;
178
-
179
- assert(arm_feature(env, ARM_FEATURE_AARCH64));
180
-
181
- /* Only the regime of the mmu_idx below is significant. */
182
- if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
183
- mmu_idx = ARMMMUIdx_E20_0;
184
- } else {
185
- mmu_idx = ARMMMUIdx_E10_0;
186
- }
187
-
188
- return tlbbits_for_regime(env, mmu_idx, addr);
189
-}
190
-
191
-int vae2_tlbbits(CPUARMState *env, uint64_t addr)
192
-{
193
- uint64_t hcr = arm_hcr_el2_eff(env);
194
- ARMMMUIdx mmu_idx;
195
-
196
- /*
197
- * Only the regime of the mmu_idx below is significant.
198
- * Regime EL2&0 has two ranges with separate TBI configuration, while EL2
199
- * only has one.
200
- */
201
- if (hcr & HCR_E2H) {
202
- mmu_idx = ARMMMUIdx_E20_2;
203
- } else {
204
- mmu_idx = ARMMMUIdx_E2;
205
- }
206
-
207
- return tlbbits_for_regime(env, mmu_idx, addr);
208
-}
209
-
210
-void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
211
- uint64_t value)
212
-{
213
- CPUState *cs = env_cpu(env);
214
- int mask = vae1_tlbmask(env);
215
-
216
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
217
-}
218
-
219
-int e2_tlbmask(CPUARMState *env)
220
-{
221
- return (ARMMMUIdxBit_E20_0 |
222
- ARMMMUIdxBit_E20_2 |
223
- ARMMMUIdxBit_E20_2_PAN |
224
- ARMMMUIdxBit_E2);
225
-}
226
-
227
-void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
228
- uint64_t value)
229
-{
230
- CPUState *cs = env_cpu(env);
231
- int mask = alle1_tlbmask(env);
232
-
233
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
234
-}
235
-
236
-void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
237
- uint64_t value)
238
-{
239
- CPUState *cs = env_cpu(env);
240
- int mask = e2_tlbmask(env);
241
-
242
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
243
-}
244
-
245
-void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
246
- uint64_t value)
247
-{
248
- CPUState *cs = env_cpu(env);
249
-
250
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
251
-}
252
-
253
-void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
- uint64_t value)
255
-{
256
- CPUState *cs = env_cpu(env);
257
- int mask = vae1_tlbmask(env);
258
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
259
- int bits = vae1_tlbbits(env, pageaddr);
260
-
261
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
262
-}
263
-
264
-void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
265
- uint64_t value)
266
-{
267
- CPUState *cs = env_cpu(env);
268
- int mask = vae2_tlbmask(env);
269
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
270
- int bits = vae2_tlbbits(env, pageaddr);
271
-
272
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
273
-}
274
-
275
-void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
276
- uint64_t value)
277
-{
278
- CPUState *cs = env_cpu(env);
279
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
280
- int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
281
-
282
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
283
- ARMMMUIdxBit_E3, bits);
284
-}
285
-
286
-int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
287
-{
288
- /*
289
- * The MSB of value is the NS field, which only applies if SEL2
290
- * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
291
- */
292
- return (value >= 0
293
- && cpu_isar_feature(aa64_sel2, env_archcpu(env))
294
- && arm_is_secure_below_el3(env)
295
- ? ARMMMUIdxBit_Stage2_S
296
- : ARMMMUIdxBit_Stage2);
297
-}
298
-
299
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
300
bool isread)
301
{
302
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
303
index XXXXXXX..XXXXXXX 100644
304
--- a/target/arm/tcg/tlb-insns.c
305
+++ b/target/arm/tcg/tlb-insns.c
306
@@ -XXX,XX +XXX,XX @@
307
#include "cpu-features.h"
308
#include "cpregs.h"
309
310
+/* Check for traps from EL1 due to HCR_EL2.TTLB. */
311
+static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
312
+ bool isread)
313
+{
314
+ if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
315
+ return CP_ACCESS_TRAP_EL2;
316
+ }
317
+ return CP_ACCESS_OK;
318
+}
319
+
320
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
321
+static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
322
+ bool isread)
323
+{
324
+ if (arm_current_el(env) == 1 &&
325
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
326
+ return CP_ACCESS_TRAP_EL2;
327
+ }
328
+ return CP_ACCESS_OK;
329
+}
330
+
331
+#ifdef TARGET_AARCH64
332
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
333
+static CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
334
+ bool isread)
335
+{
336
+ if (arm_current_el(env) == 1 &&
337
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
338
+ return CP_ACCESS_TRAP_EL2;
339
+ }
340
+ return CP_ACCESS_OK;
341
+}
342
+#endif
343
+
344
/* IS variants of TLB operations must affect all cores */
345
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
346
uint64_t value)
347
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
348
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
349
}
350
351
+/*
352
+ * Non-IS variants of TLB operations are upgraded to
353
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
354
+ * force broadcast of these operations.
355
+ */
356
+static bool tlb_force_broadcast(CPUARMState *env)
357
+{
358
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
359
+}
360
+
361
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
362
uint64_t value)
363
{
364
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
365
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
366
}
367
368
+/*
369
+ * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
370
+ * Page D4-1736 (DDI0487A.b)
371
+ */
372
+
373
+static int vae1_tlbmask(CPUARMState *env)
374
+{
375
+ uint64_t hcr = arm_hcr_el2_eff(env);
376
+ uint16_t mask;
377
+
378
+ assert(arm_feature(env, ARM_FEATURE_AARCH64));
379
+
380
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
381
+ mask = ARMMMUIdxBit_E20_2 |
382
+ ARMMMUIdxBit_E20_2_PAN |
383
+ ARMMMUIdxBit_E20_0;
384
+ } else {
385
+ /* This is AArch64 only, so we don't need to touch the EL30_x TLBs */
386
+ mask = ARMMMUIdxBit_E10_1 |
387
+ ARMMMUIdxBit_E10_1_PAN |
388
+ ARMMMUIdxBit_E10_0;
389
+ }
390
+ return mask;
391
+}
392
+
393
+static int vae2_tlbmask(CPUARMState *env)
394
+{
395
+ uint64_t hcr = arm_hcr_el2_eff(env);
396
+ uint16_t mask;
397
+
398
+ if (hcr & HCR_E2H) {
399
+ mask = ARMMMUIdxBit_E20_2 |
400
+ ARMMMUIdxBit_E20_2_PAN |
401
+ ARMMMUIdxBit_E20_0;
402
+ } else {
403
+ mask = ARMMMUIdxBit_E2;
404
+ }
405
+ return mask;
406
+}
407
+
408
+/* Return 56 if TBI is enabled, 64 otherwise. */
409
+static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
410
+ uint64_t addr)
411
+{
412
+ uint64_t tcr = regime_tcr(env, mmu_idx);
413
+ int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
414
+ int select = extract64(addr, 55, 1);
415
+
416
+ return (tbi >> select) & 1 ? 56 : 64;
417
+}
418
+
419
+static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
420
+{
421
+ uint64_t hcr = arm_hcr_el2_eff(env);
422
+ ARMMMUIdx mmu_idx;
423
+
424
+ assert(arm_feature(env, ARM_FEATURE_AARCH64));
425
+
426
+ /* Only the regime of the mmu_idx below is significant. */
427
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
428
+ mmu_idx = ARMMMUIdx_E20_0;
429
+ } else {
430
+ mmu_idx = ARMMMUIdx_E10_0;
431
+ }
432
+
433
+ return tlbbits_for_regime(env, mmu_idx, addr);
434
+}
435
+
436
+static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
437
+{
438
+ uint64_t hcr = arm_hcr_el2_eff(env);
439
+ ARMMMUIdx mmu_idx;
440
+
441
+ /*
442
+ * Only the regime of the mmu_idx below is significant.
443
+ * Regime EL2&0 has two ranges with separate TBI configuration, while EL2
444
+ * only has one.
445
+ */
446
+ if (hcr & HCR_E2H) {
447
+ mmu_idx = ARMMMUIdx_E20_2;
448
+ } else {
449
+ mmu_idx = ARMMMUIdx_E2;
450
+ }
451
+
452
+ return tlbbits_for_regime(env, mmu_idx, addr);
453
+}
454
+
455
+static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
456
+ uint64_t value)
457
+{
458
+ CPUState *cs = env_cpu(env);
459
+ int mask = vae1_tlbmask(env);
460
+
461
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
462
+}
463
+
464
static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
465
uint64_t value)
466
{
467
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
468
}
469
}
470
471
+static int e2_tlbmask(CPUARMState *env)
472
+{
473
+ return (ARMMMUIdxBit_E20_0 |
474
+ ARMMMUIdxBit_E20_2 |
475
+ ARMMMUIdxBit_E20_2_PAN |
476
+ ARMMMUIdxBit_E2);
477
+}
478
+
479
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
480
uint64_t value)
481
{
482
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
483
tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
484
}
485
486
+static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
487
+ uint64_t value)
488
+{
489
+ CPUState *cs = env_cpu(env);
490
+ int mask = alle1_tlbmask(env);
491
+
492
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
493
+}
494
+
495
+static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
496
+ uint64_t value)
497
+{
498
+ CPUState *cs = env_cpu(env);
499
+ int mask = e2_tlbmask(env);
500
+
501
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
502
+}
503
+
504
+static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
505
+ uint64_t value)
506
+{
507
+ CPUState *cs = env_cpu(env);
508
+
509
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
510
+}
511
+
512
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
513
uint64_t value)
514
{
515
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
516
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
517
}
518
519
+static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
520
+ uint64_t value)
521
+{
522
+ CPUState *cs = env_cpu(env);
523
+ int mask = vae1_tlbmask(env);
524
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
525
+ int bits = vae1_tlbbits(env, pageaddr);
526
+
527
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
528
+}
529
+
530
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
531
uint64_t value)
532
{
533
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
534
}
535
}
536
537
+static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
538
+ uint64_t value)
539
+{
540
+ CPUState *cs = env_cpu(env);
541
+ int mask = vae2_tlbmask(env);
542
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
543
+ int bits = vae2_tlbbits(env, pageaddr);
544
+
545
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
546
+}
547
+
548
+static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
549
+ uint64_t value)
550
+{
551
+ CPUState *cs = env_cpu(env);
552
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
553
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
554
+
555
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
556
+ ARMMMUIdxBit_E3, bits);
557
+}
558
+
559
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
560
+{
561
+ /*
562
+ * The MSB of value is the NS field, which only applies if SEL2
563
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
564
+ */
565
+ return (value >= 0
566
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
567
+ && arm_is_secure_below_el3(env)
568
+ ? ARMMMUIdxBit_Stage2_S
569
+ : ARMMMUIdxBit_Stage2);
570
+}
571
+
572
static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
573
uint64_t value)
574
{
575
--
576
2.34.1
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
Move the FEAT_RME specific TLB insns across to tlb-insns.c.
2
2
3
In commit ce4afed839 ("target/arm: Implement AArch32 HCR and HCR2")
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
the HCR_EL2 register has been changed from type NO_RAW (no underlying
5
state and does not support raw access for state saving/loading) to
6
type CONST (TCG can assume the value to be constant), removing the
7
read/write accessors.
8
We forgot to remove the previous type ARM_CP_NO_RAW. This is not
9
really a problem since the field is overwritten. However it makes
10
code review confusing, so remove it.
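
(For what it's worth, the reason the stale field was harmless is a plain C rule: when a designated initializer names the same member twice, the last value wins; compilers can warn about this with -Woverride-init / -Winitializer-overrides. A minimal sketch with invented stand-ins, not the real ARMCPRegInfo definitions:)

    #include <stdio.h>

    enum { CP_NO_RAW = 1, CP_CONST = 2 };   /* invented stand-in flags */

    struct reginfo {
        const char *name;
        int type;
    };

    static const struct reginfo hcr = {
        .name = "HCR_EL2",
        .type = CP_NO_RAW,    /* stale initializer, silently overridden */
        .type = CP_CONST,     /* the value the field actually ends up with */
    };

    int main(void)
    {
        printf("type = %d\n", hcr.type);    /* prints 2, i.e. CP_CONST */
        return 0;
    }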
11
12
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20200812111223.7787-1-f4bug@amsat.org
5
Message-id: 20241210160452.2427965-10-peter.maydell@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
6
---
18
target/arm/helper.c | 1 -
7
target/arm/helper.c | 38 --------------------------------
19
1 file changed, 1 deletion(-)
8
target/arm/tcg/tlb-insns.c | 45 ++++++++++++++++++++++++++++++++++++++
9
2 files changed, 45 insertions(+), 38 deletions(-)
20
10
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
15
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
26
.access = PL2_RW,
27
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
28
{ .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
29
- .type = ARM_CP_NO_RAW,
30
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
31
.access = PL2_RW,
32
.type = ARM_CP_CONST, .resetvalue = 0 },
16
.type = ARM_CP_CONST, .resetvalue = 0 },
17
};
18
19
-static void tlbi_aa64_paall_write(CPUARMState *env, const ARMCPRegInfo *ri,
20
- uint64_t value)
21
-{
22
- CPUState *cs = env_cpu(env);
23
-
24
- tlb_flush(cs);
25
-}
26
-
27
static void gpccr_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
uint64_t value)
29
{
30
@@ -XXX,XX +XXX,XX @@ static void gpccr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
31
env_archcpu(env)->reset_l0gptsz);
32
}
33
34
-static void tlbi_aa64_paallos_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
- uint64_t value)
36
-{
37
- CPUState *cs = env_cpu(env);
38
-
39
- tlb_flush_all_cpus_synced(cs);
40
-}
41
-
42
static const ARMCPRegInfo rme_reginfo[] = {
43
{ .name = "GPCCR_EL3", .state = ARM_CP_STATE_AA64,
44
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 1, .opc2 = 6,
45
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rme_reginfo[] = {
46
{ .name = "MFAR_EL3", .state = ARM_CP_STATE_AA64,
47
.opc0 = 3, .opc1 = 6, .crn = 6, .crm = 0, .opc2 = 5,
48
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mfar_el3) },
49
- { .name = "TLBI_PAALL", .state = ARM_CP_STATE_AA64,
50
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 4,
51
- .access = PL3_W, .type = ARM_CP_NO_RAW,
52
- .writefn = tlbi_aa64_paall_write },
53
- { .name = "TLBI_PAALLOS", .state = ARM_CP_STATE_AA64,
54
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 4,
55
- .access = PL3_W, .type = ARM_CP_NO_RAW,
56
- .writefn = tlbi_aa64_paallos_write },
57
- /*
58
- * QEMU does not have a way to invalidate by physical address, thus
59
- * invalidating a range of physical addresses is accomplished by
60
- * flushing all tlb entries in the outer shareable domain,
61
- * just like PAALLOS.
62
- */
63
- { .name = "TLBI_RPALOS", .state = ARM_CP_STATE_AA64,
64
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 7,
65
- .access = PL3_W, .type = ARM_CP_NO_RAW,
66
- .writefn = tlbi_aa64_paallos_write },
67
- { .name = "TLBI_RPAOS", .state = ARM_CP_STATE_AA64,
68
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 3,
69
- .access = PL3_W, .type = ARM_CP_NO_RAW,
70
- .writefn = tlbi_aa64_paallos_write },
71
{ .name = "DC_CIPAPA", .state = ARM_CP_STATE_AA64,
72
.opc0 = 1, .opc1 = 6, .crn = 7, .crm = 14, .opc2 = 1,
73
.access = PL3_W, .type = ARM_CP_NOP },
74
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/tcg/tlb-insns.c
77
+++ b/target/arm/tcg/tlb-insns.c
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
79
.access = PL3_W, .type = ARM_CP_NO_RAW,
80
.writefn = tlbi_aa64_vae3is_write },
81
};
82
+
83
+static void tlbi_aa64_paall_write(CPUARMState *env, const ARMCPRegInfo *ri,
84
+ uint64_t value)
85
+{
86
+ CPUState *cs = env_cpu(env);
87
+
88
+ tlb_flush(cs);
89
+}
90
+
91
+static void tlbi_aa64_paallos_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
+ uint64_t value)
93
+{
94
+ CPUState *cs = env_cpu(env);
95
+
96
+ tlb_flush_all_cpus_synced(cs);
97
+}
98
+
99
+static const ARMCPRegInfo tlbi_rme_reginfo[] = {
100
+ { .name = "TLBI_PAALL", .state = ARM_CP_STATE_AA64,
101
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 4,
102
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
103
+ .writefn = tlbi_aa64_paall_write },
104
+ { .name = "TLBI_PAALLOS", .state = ARM_CP_STATE_AA64,
105
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 4,
106
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
107
+ .writefn = tlbi_aa64_paallos_write },
108
+ /*
109
+ * QEMU does not have a way to invalidate by physical address, thus
110
+ * invalidating a range of physical addresses is accomplished by
111
+ * flushing all tlb entries in the outer shareable domain,
112
+ * just like PAALLOS.
113
+ */
114
+ { .name = "TLBI_RPALOS", .state = ARM_CP_STATE_AA64,
115
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 7,
116
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
117
+ .writefn = tlbi_aa64_paallos_write },
118
+ { .name = "TLBI_RPAOS", .state = ARM_CP_STATE_AA64,
119
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 3,
120
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
121
+ .writefn = tlbi_aa64_paallos_write },
122
+};
123
+
124
#endif
125
126
void define_tlb_insn_regs(ARMCPU *cpu)
127
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
128
if (cpu_isar_feature(aa64_tlbios, cpu)) {
129
define_arm_cp_regs(cpu, tlbios_reginfo);
130
}
131
+ if (cpu_isar_feature(aa64_rme, cpu)) {
132
+ define_arm_cp_regs(cpu, tlbi_rme_reginfo);
133
+ }
134
#endif
135
}
33
--
136
--
34
2.20.1
137
2.34.1
35
36
New patch
1
We currently register the tlbi_el2_cp_reginfo[] TLBI insns if EL2 is
2
implemented, or if EL3 and v8 are implemented. This is a copy of the
3
logic used for el2_cp_reginfo[], but for the specific case of the
4
TLBI insns we can simplify it. This is because we do not need the
5
"if EL2 does not exist but EL3 does then EL2 registers should exist
6
and be RAZ/WI" handling here: all our cpregs are for instructions,
7
which UNDEF when EL3 exists and EL2 does not.
1
8
9
Simplify the condition down to just "if EL2 exists".
10
This is not a behaviour change (see the sketch after this list) because:
11
* for AArch64 insns we marked them with ARM_CP_EL3_NO_EL2_UNDEF,
12
which meant that define_arm_cp_regs() would ignore them if
13
EL2 wasn't present
14
* for AArch32 insns, the .access = PL2_W meant that if EL2
15
was not present the only way to get at them was from AArch32
16
EL3; but we have no CPUs which have ARM_FEATURE_V8 but
17
start in AArch32
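
(To make the "no behaviour change" argument concrete, here is a toy model -- invented names, not the real define_arm_cp_regs() logic -- checking that the old guard combined with the EL3_NO_EL2_UNDEF filtering agrees with the new guard for every feature combination:)

    #include <assert.h>
    #include <stdbool.h>

    /* Old guard, followed by the filtering that EL3_NO_EL2_UNDEF implied. */
    static bool defined_old(bool el2, bool el3, bool v8)
    {
        bool called = el2 || (el3 && v8);
        bool dropped = !el2;            /* insn regs are dropped without EL2 */
        return called && !dropped;
    }

    /* New guard: simply "EL2 exists". */
    static bool defined_new(bool el2, bool el3, bool v8)
    {
        (void)el3; (void)v8;
        return el2;
    }

    int main(void)
    {
        for (int i = 0; i < 8; i++) {
            bool el2 = i & 1, el3 = i & 2, v8 = i & 4;
            assert(defined_old(el2, el3, v8) == defined_new(el2, el3, v8));
        }
        return 0;
    }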
18
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20241210160452.2427965-11-peter.maydell@linaro.org
22
---
23
target/arm/tcg/tlb-insns.c | 4 +---
24
1 file changed, 1 insertion(+), 3 deletions(-)
25
26
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/tlb-insns.c
29
+++ b/target/arm/tcg/tlb-insns.c
30
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
31
* ops (i.e. matching the condition for el2_cp_reginfo[] in
32
* helper.c), but we will be able to simplify this later.
33
*/
34
- if (arm_feature(env, ARM_FEATURE_EL2)
35
- || (arm_feature(env, ARM_FEATURE_EL3)
36
- && arm_feature(env, ARM_FEATURE_V8))) {
37
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
38
define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
39
}
40
if (arm_feature(env, ARM_FEATURE_EL3)) {
41
--
42
2.34.1