Another very large pullreq (this one mostly because it has
RTH's decodetree conversion series in it), but this should be
the last of the really large things in my to-review queue...

thanks
-- PMM

The following changes since commit 83aaec1d5a49f158abaa31797a0f976b3c07e5ca:

  Merge tag 'pull-tcg-20241212' of https://gitlab.com/rth7680/qemu into staging (2024-12-12 18:45:39 -0500)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241213

for you to fetch changes up to 48e652c4bd9570f6f24def25355cb3009a7300f8:

  target/arm: Simplify condition for tlbi_el2_cp_reginfo[] (2024-12-13 15:41:09 +0000)

----------------------------------------------------------------
target-arm queue:
 * Finish conversion of A64 decoder to decodetree
 * Use float_round_to_odd in helper_fcvtx_f64_to_f32
 * Move TLBI insn emulation code out to its own source file
 * docs/system/arm: fix broken links, document undocumented properties
 * MAINTAINERS: correct an email address

----------------------------------------------------------------
Brian Cain (1):
      MAINTAINERS: correct my email address

Peter Maydell (10):
      target/arm: Move some TLBI insns to their own source file
      target/arm: Move TLBI insns for AArch32 EL2 to tlbi_insn_helper.c
      target/arm: Move AArch64 TLBI insns from v8_cp_reginfo[]
      target/arm: Move the AArch64 EL2 TLBI insns
      target/arm: Move AArch64 EL3 TLBI insns
      target/arm: Move TLBI range insns
      target/arm: Move the TLBI OS insns to tlb-insns.c.
      target/arm: Move small helper functions to tlb-insns.c
      target/arm: Move RME TLB insns to tlb-insns.c
      target/arm: Simplify condition for tlbi_el2_cp_reginfo[]

Pierrick Bouvier (4):
      docs/system/arm/orangepi: update links
      docs/system/arm/fby35: document execute-in-place property
      docs/system/arm/xlnx-versal-virt: document ospi-flash property
      docs/system/arm/virt: document missing properties

Richard Henderson (70):
      target/arm: Add section labels for "Data Processing (register)"
      target/arm: Convert UDIV, SDIV to decodetree
      target/arm: Convert LSLV, LSRV, ASRV, RORV to decodetree
      target/arm: Convert CRC32, CRC32C to decodetree
      target/arm: Convert SUBP, IRG, GMI to decodetree
      target/arm: Convert PACGA to decodetree
      target/arm: Convert RBIT, REV16, REV32, REV64 to decodetree
      target/arm: Convert CLZ, CLS to decodetree
      target/arm: Convert PAC[ID]*, AUT[ID]* to decodetree
      target/arm: Convert XPAC[ID] to decodetree
      target/arm: Convert disas_logic_reg to decodetree
      target/arm: Convert disas_add_sub_ext_reg to decodetree
      target/arm: Convert disas_add_sub_reg to decodetree
      target/arm: Convert disas_data_proc_3src to decodetree
      target/arm: Convert disas_adc_sbc to decodetree
      target/arm: Convert RMIF to decodetree
      target/arm: Convert SETF8, SETF16 to decodetree
      target/arm: Convert CCMP, CCMN to decodetree
      target/arm: Convert disas_cond_select to decodetree
      target/arm: Introduce fp_access_check_scalar_hsd
      target/arm: Introduce fp_access_check_vector_hsd
      target/arm: Convert FCMP, FCMPE, FCCMP, FCCMPE to decodetree
      target/arm: Fix decode of fp16 vector fabs, fneg, fsqrt
      target/arm: Convert FMOV, FABS, FNEG (scalar) to decodetree
      target/arm: Pass fpstatus to vfp_sqrt*
      target/arm: Remove helper_sqrt_f16
      target/arm: Convert FSQRT (scalar) to decodetree
      target/arm: Convert FRINT[NPMSAXI] (scalar) to decodetree
      target/arm: Convert BFCVT to decodetree
      target/arm: Convert FRINT{32, 64}[ZX] (scalar) to decodetree
      target/arm: Convert FCVT (scalar) to decodetree
      target/arm: Convert handle_fpfpcvt to decodetree
      target/arm: Convert FJCVTZS to decodetree
      target/arm: Convert handle_fmov to decodetree
      target/arm: Convert SQABS, SQNEG to decodetree
      target/arm: Convert ABS, NEG to decodetree
      target/arm: Introduce gen_gvec_cls, gen_gvec_clz
      target/arm: Convert CLS, CLZ (vector) to decodetree
      target/arm: Introduce gen_gvec_cnt, gen_gvec_rbit
      target/arm: Convert CNT, NOT, RBIT (vector) to decodetree
      target/arm: Convert CMGT, CMGE, GMLT, GMLE, CMEQ (zero) to decodetree
      target/arm: Introduce gen_gvec_rev{16,32,64}
      target/arm: Convert handle_rev to decodetree
      target/arm: Move helper_neon_addlp_{s8, s16} to neon_helper.c
      target/arm: Introduce gen_gvec_{s,u}{add,ada}lp
      target/arm: Convert handle_2misc_pairwise to decodetree
      target/arm: Remove helper_neon_{add,sub}l_u{16,32}
      target/arm: Introduce clear_vec
      target/arm: Convert XTN, SQXTUN, SQXTN, UQXTN to decodetree
      target/arm: Convert FCVTN, BFCVTN to decodetree
      target/arm: Convert FCVTXN to decodetree
      target/arm: Convert SHLL to decodetree
      target/arm: Implement gen_gvec_fabs, gen_gvec_fneg
      target/arm: Convert FABS, FNEG (vector) to decodetree
      target/arm: Convert FSQRT (vector) to decodetree
      target/arm: Convert FRINT* (vector) to decodetree
      target/arm: Convert FCVT* (vector, integer) scalar to decodetree
      target/arm: Convert FCVT* (vector, fixed-point) scalar to decodetree
      target/arm: Convert [US]CVTF (vector, integer) scalar to decodetree
      target/arm: Convert [US]CVTF (vector, fixed-point) scalar to decodetree
      target/arm: Rename helper_gvec_vcvt_[hf][su] with _rz
      target/arm: Convert [US]CVTF (vector) to decodetree
      target/arm: Convert FCVTZ[SU] (vector, fixed-point) to decodetree
      target/arm: Convert FCVT* (vector, integer) to decodetree
      target/arm: Convert handle_2misc_fcmp_zero to decodetree
      target/arm: Convert FRECPE, FRECPX, FRSQRTE to decodetree
      target/arm: Introduce gen_gvec_urecpe, gen_gvec_ursqrte
      target/arm: Convert URECPE and URSQRTE to decodetree
      target/arm: Convert FCVTL to decodetree
      target/arm: Use float_round_to_odd in helper_fcvtx_f64_to_f32

 MAINTAINERS                          |    2 +-
 docs/system/arm/fby35.rst            |    5 +
 docs/system/arm/orangepi.rst         |    4 +-
 docs/system/arm/virt.rst             |   16 +
 docs/system/arm/xlnx-versal-virt.rst |    3 +
 target/arm/helper.h                  |   43 +-
 target/arm/internals.h               |    9 +
 target/arm/tcg/helper-a64.h          |    7 -
 target/arm/tcg/translate.h           |   35 +
 target/arm/tcg/a64.decode            |  502 ++-
 target/arm/helper.c                  | 1208 +-------
 target/arm/tcg-stubs.c               |    5 +
 target/arm/tcg/gengvec.c             |  369 +++
 target/arm/tcg/helper-a64.c          |  122 +-
 target/arm/tcg/neon_helper.c         |  106 +-
 target/arm/tcg/tlb-insns.c           | 1266 ++++++++
 target/arm/tcg/translate-a64.c       | 5670 +++++++++++-----------------------
 target/arm/tcg/translate-neon.c      |  337 +-
 target/arm/tcg/translate-vfp.c       |    6 +-
 target/arm/tcg/vec_helper.c          |   65 +-
 target/arm/vfp_helper.c              |   16 +-
 target/arm/tcg/meson.build           |    1 +
 22 files changed, 4203 insertions(+), 5594 deletions(-)
 create mode 100644 target/arm/tcg/tlb-insns.c

From: Richard Henderson <richard.henderson@linaro.org>

At the same time, use ### to separate 3rd-level sections.
We already use ### for 4.1.92 Data Processing (immediate),
but not the following two third-level sections:
4.1.93 Branches, and 4.1.94 Loads and stores.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
 EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
 EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0

-# Branches
+### Branches

 %imm26 0:s26 !function=times_4
 @branch . ..... .......................... &i imm=%imm26
@@ -XXX,XX +XXX,XX @@ HLT 1101 0100 010 ................ 000 00 @i16
 # DCPS2 1101 0100 101 ................ 000 10 @i16
 # DCPS3 1101 0100 101 ................ 000 11 @i16

-# Loads and stores
+### Loads and stores

 &stxr rn rt rt2 rs sz lasr
 &stlr rn rt sz lasr
@@ -XXX,XX +XXX,XX @@ CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
 CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
 CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy

+### Data Processing (register)
+
+# Data Processing (2-source)
+# Data Processing (1-source)
+# Logical (shifted reg)
+# Add/subtract (shifted reg)
+# Add/subtract (extended reg)
+# Add/subtract (carry)
+# Rotate right into flags
+# Evaluate into flags
+# Conditional compare (regster)
+# Conditional compare (immediate)
+# Conditional select
+# Data Processing (3-source)
+
 ### Cryptographic AES

 AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  7 ++++
 target/arm/tcg/translate-a64.c | 64 +++++++++++++++++-----------------
 2 files changed, 39 insertions(+), 32 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
 &r rn
 &ri rd imm
 &rri_sf rd rn imm sf
+&rrr_sf rd rn rm sf
 &i imm
 &rr_e rd rn esz
 &rri_e rd rn imm esz
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
 ### Data Processing (register)

 # Data Processing (2-source)
+
+@rrr_sf sf:1 .......... rm:5 ...... rn:5 rd:5 &rrr_sf
+
+UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
+SDIV . 00 11010110 ..... 00001 1 ..... ..... @rrr_sf
+
 # Data Processing (1-source)
 # Logical (shifted reg)
 # Add/subtract (shifted reg)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS(UQRSHRN_si, do_scalar_shift_imm_narrow, a, uqrshrn_fns, 0, false)
 TRANS(SQSHRUN_si, do_scalar_shift_imm_narrow, a, sqshrun_fns, MO_SIGN, false)
 TRANS(SQRSHRUN_si, do_scalar_shift_imm_narrow, a, sqrshrun_fns, MO_SIGN, false)

+static bool do_div(DisasContext *s, arg_rrr_sf *a, bool is_signed)
+{
+    TCGv_i64 tcg_n, tcg_m, tcg_rd;
+    tcg_rd = cpu_reg(s, a->rd);
+
+    if (!a->sf && is_signed) {
+        tcg_n = tcg_temp_new_i64();
+        tcg_m = tcg_temp_new_i64();
+        tcg_gen_ext32s_i64(tcg_n, cpu_reg(s, a->rn));
+        tcg_gen_ext32s_i64(tcg_m, cpu_reg(s, a->rm));
+    } else {
+        tcg_n = read_cpu_reg(s, a->rn, a->sf);
+        tcg_m = read_cpu_reg(s, a->rm, a->sf);
+    }
+
+    if (is_signed) {
+        gen_helper_sdiv64(tcg_rd, tcg_n, tcg_m);
+    } else {
+        gen_helper_udiv64(tcg_rd, tcg_n, tcg_m);
+    }
+
+    if (!a->sf) { /* zero extend final result */
+        tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
+    }
+    return true;
+}
+
+TRANS(SDIV, do_div, a, true)
+TRANS(UDIV, do_div, a, false)
+
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
  * shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
 #undef MAP
 }

-static void handle_div(DisasContext *s, bool is_signed, unsigned int sf,
-                       unsigned int rm, unsigned int rn, unsigned int rd)
-{
-    TCGv_i64 tcg_n, tcg_m, tcg_rd;
-    tcg_rd = cpu_reg(s, rd);
-
-    if (!sf && is_signed) {
-        tcg_n = tcg_temp_new_i64();
-        tcg_m = tcg_temp_new_i64();
-        tcg_gen_ext32s_i64(tcg_n, cpu_reg(s, rn));
-        tcg_gen_ext32s_i64(tcg_m, cpu_reg(s, rm));
-    } else {
-        tcg_n = read_cpu_reg(s, rn, sf);
-        tcg_m = read_cpu_reg(s, rm, sf);
-    }
-
-    if (is_signed) {
-        gen_helper_sdiv64(tcg_rd, tcg_n, tcg_m);
-    } else {
-        gen_helper_udiv64(tcg_rd, tcg_n, tcg_m);
-    }
-
-    if (!sf) { /* zero extend final result */
-        tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
-    }
-}

 /* LSLV, LSRV, ASRV, RORV */
 static void handle_shift_reg(DisasContext *s,
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
             }
         }
         break;
-    case 2: /* UDIV */
-        handle_div(s, false, sf, rm, rn, rd);
-        break;
-    case 3: /* SDIV */
-        handle_div(s, true, sf, rm, rn, rd);
-        break;
     case 4: /* IRG */
         if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
             goto do_unallocated;
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
         }
     default:
     do_unallocated:
+    case 2: /* UDIV */
+    case 3: /* SDIV */
        unallocated_encoding(s);
        break;
     }
--
2.34.1
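
A note on the conversion pattern used here and throughout the series: each
pattern line added to a64.decode makes decodetree generate an argument
struct (arg_rrr_sf above) and a call to a trans_<INSN>() hook, and the
TRANS()/TRANS_FEAT() macros supply the one-line glue from that hook to a
shared helper such as do_div(). A simplified sketch of those macros (the
real definitions live in target/arm/tcg/translate.h; this is illustrative,
not the literal QEMU text):

/* Sketch only: TRANS(UDIV, do_div, a, false) expands to a trans_UDIV()
 * that forwards the decoded fields to do_div(s, a, false). */
#define TRANS(NAME, FUNC, ...) \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
    { return FUNC(s, __VA_ARGS__); }

/* TRANS_FEAT additionally gates the translation on an ISA feature test. */
#define TRANS_FEAT(NAME, FEAT, FUNC, ...) \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
    { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }
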

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  4 +++
 target/arm/tcg/translate-a64.c | 46 ++++++++++++++++------------------
 2 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
17
18
UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
19
SDIV . 00 11010110 ..... 00001 1 ..... ..... @rrr_sf
20
+LSLV . 00 11010110 ..... 00100 0 ..... ..... @rrr_sf
21
+LSRV . 00 11010110 ..... 00100 1 ..... ..... @rrr_sf
22
+ASRV . 00 11010110 ..... 00101 0 ..... ..... @rrr_sf
23
+RORV . 00 11010110 ..... 00101 1 ..... ..... @rrr_sf
24
25
# Data Processing (1-source)
26
# Logical (shifted reg)
27
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/tcg/translate-a64.c
30
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ static void shift_reg_imm(TCGv_i64 dst, TCGv_i64 src, int sf,
32
}
33
}
34
35
+static bool do_shift_reg(DisasContext *s, arg_rrr_sf *a,
36
+ enum a64_shift_type shift_type)
37
+{
38
+ TCGv_i64 tcg_shift = tcg_temp_new_i64();
39
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
40
+ TCGv_i64 tcg_rn = read_cpu_reg(s, a->rn, a->sf);
41
+
42
+ tcg_gen_andi_i64(tcg_shift, cpu_reg(s, a->rm), a->sf ? 63 : 31);
43
+ shift_reg(tcg_rd, tcg_rn, a->sf, shift_type, tcg_shift);
44
+ return true;
45
+}
46
+
47
+TRANS(LSLV, do_shift_reg, a, A64_SHIFT_TYPE_LSL)
48
+TRANS(LSRV, do_shift_reg, a, A64_SHIFT_TYPE_LSR)
49
+TRANS(ASRV, do_shift_reg, a, A64_SHIFT_TYPE_ASR)
50
+TRANS(RORV, do_shift_reg, a, A64_SHIFT_TYPE_ROR)
51
+
52
/* Logical (shifted register)
53
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
54
* +----+-----+-----------+-------+---+------+--------+------+------+
55
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
56
}
57
58
59
-/* LSLV, LSRV, ASRV, RORV */
60
-static void handle_shift_reg(DisasContext *s,
61
- enum a64_shift_type shift_type, unsigned int sf,
62
- unsigned int rm, unsigned int rn, unsigned int rd)
63
-{
64
- TCGv_i64 tcg_shift = tcg_temp_new_i64();
65
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
66
- TCGv_i64 tcg_rn = read_cpu_reg(s, rn, sf);
67
-
68
- tcg_gen_andi_i64(tcg_shift, cpu_reg(s, rm), sf ? 63 : 31);
69
- shift_reg(tcg_rd, tcg_rn, sf, shift_type, tcg_shift);
70
-}
71
-
72
/* CRC32[BHWX], CRC32C[BHWX] */
73
static void handle_crc32(DisasContext *s,
74
unsigned int sf, unsigned int sz, bool crc32c,
75
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
76
tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t);
77
}
78
break;
79
- case 8: /* LSLV */
80
- handle_shift_reg(s, A64_SHIFT_TYPE_LSL, sf, rm, rn, rd);
81
- break;
82
- case 9: /* LSRV */
83
- handle_shift_reg(s, A64_SHIFT_TYPE_LSR, sf, rm, rn, rd);
84
- break;
85
- case 10: /* ASRV */
86
- handle_shift_reg(s, A64_SHIFT_TYPE_ASR, sf, rm, rn, rd);
87
- break;
88
- case 11: /* RORV */
89
- handle_shift_reg(s, A64_SHIFT_TYPE_ROR, sf, rm, rn, rd);
90
- break;
91
case 12: /* PACGA */
92
if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
93
goto do_unallocated;
94
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
95
do_unallocated:
96
case 2: /* UDIV */
97
case 3: /* SDIV */
98
+ case 8: /* LSLV */
99
+ case 9: /* LSRV */
100
+ case 10: /* ASRV */
101
+ case 11: /* RORV */
102
unallocated_encoding(s);
103
break;
104
}
105
--
106
2.34.1
107
108

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  12 ++++
 target/arm/tcg/translate-a64.c | 101 +++++++++++++--------------------
 2 files changed, 53 insertions(+), 60 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
18
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
19
20
+@rrr_b ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=0
21
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
22
+@rrr_s ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=2
23
@rrr_d ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=3
24
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
25
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
26
@@ -XXX,XX +XXX,XX @@ LSRV . 00 11010110 ..... 00100 1 ..... ..... @rrr_sf
27
ASRV . 00 11010110 ..... 00101 0 ..... ..... @rrr_sf
28
RORV . 00 11010110 ..... 00101 1 ..... ..... @rrr_sf
29
30
+CRC32 0 00 11010110 ..... 0100 00 ..... ..... @rrr_b
31
+CRC32 0 00 11010110 ..... 0100 01 ..... ..... @rrr_h
32
+CRC32 0 00 11010110 ..... 0100 10 ..... ..... @rrr_s
33
+CRC32 1 00 11010110 ..... 0100 11 ..... ..... @rrr_d
34
+
35
+CRC32C 0 00 11010110 ..... 0101 00 ..... ..... @rrr_b
36
+CRC32C 0 00 11010110 ..... 0101 01 ..... ..... @rrr_h
37
+CRC32C 0 00 11010110 ..... 0101 10 ..... ..... @rrr_s
38
+CRC32C 1 00 11010110 ..... 0101 11 ..... ..... @rrr_d
39
+
40
# Data Processing (1-source)
41
# Logical (shifted reg)
42
# Add/subtract (shifted reg)
43
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/tcg/translate-a64.c
46
+++ b/target/arm/tcg/translate-a64.c
47
@@ -XXX,XX +XXX,XX @@ TRANS(LSRV, do_shift_reg, a, A64_SHIFT_TYPE_LSR)
48
TRANS(ASRV, do_shift_reg, a, A64_SHIFT_TYPE_ASR)
49
TRANS(RORV, do_shift_reg, a, A64_SHIFT_TYPE_ROR)
50
51
+static bool do_crc32(DisasContext *s, arg_rrr_e *a, bool crc32c)
52
+{
53
+ TCGv_i64 tcg_acc, tcg_val, tcg_rd;
54
+ TCGv_i32 tcg_bytes;
55
+
56
+ switch (a->esz) {
57
+ case MO_8:
58
+ case MO_16:
59
+ case MO_32:
60
+ tcg_val = tcg_temp_new_i64();
61
+ tcg_gen_extract_i64(tcg_val, cpu_reg(s, a->rm), 0, 8 << a->esz);
62
+ break;
63
+ case MO_64:
64
+ tcg_val = cpu_reg(s, a->rm);
65
+ break;
66
+ default:
67
+ g_assert_not_reached();
68
+ }
69
+ tcg_acc = cpu_reg(s, a->rn);
70
+ tcg_bytes = tcg_constant_i32(1 << a->esz);
71
+ tcg_rd = cpu_reg(s, a->rd);
72
+
73
+ if (crc32c) {
74
+ gen_helper_crc32c_64(tcg_rd, tcg_acc, tcg_val, tcg_bytes);
75
+ } else {
76
+ gen_helper_crc32_64(tcg_rd, tcg_acc, tcg_val, tcg_bytes);
77
+ }
78
+ return true;
79
+}
80
+
81
+TRANS_FEAT(CRC32, aa64_crc32, do_crc32, a, false)
82
+TRANS_FEAT(CRC32C, aa64_crc32, do_crc32, a, true)
83
+
84
/* Logical (shifted register)
85
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
86
* +----+-----+-----------+-------+---+------+--------+------+------+
87
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
88
}
89
90
91
-/* CRC32[BHWX], CRC32C[BHWX] */
92
-static void handle_crc32(DisasContext *s,
93
- unsigned int sf, unsigned int sz, bool crc32c,
94
- unsigned int rm, unsigned int rn, unsigned int rd)
95
-{
96
- TCGv_i64 tcg_acc, tcg_val;
97
- TCGv_i32 tcg_bytes;
98
-
99
- if (!dc_isar_feature(aa64_crc32, s)
100
- || (sf == 1 && sz != 3)
101
- || (sf == 0 && sz == 3)) {
102
- unallocated_encoding(s);
103
- return;
104
- }
105
-
106
- if (sz == 3) {
107
- tcg_val = cpu_reg(s, rm);
108
- } else {
109
- uint64_t mask;
110
- switch (sz) {
111
- case 0:
112
- mask = 0xFF;
113
- break;
114
- case 1:
115
- mask = 0xFFFF;
116
- break;
117
- case 2:
118
- mask = 0xFFFFFFFF;
119
- break;
120
- default:
121
- g_assert_not_reached();
122
- }
123
- tcg_val = tcg_temp_new_i64();
124
- tcg_gen_andi_i64(tcg_val, cpu_reg(s, rm), mask);
125
- }
126
-
127
- tcg_acc = cpu_reg(s, rn);
128
- tcg_bytes = tcg_constant_i32(1 << sz);
129
-
130
- if (crc32c) {
131
- gen_helper_crc32c_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
132
- } else {
133
- gen_helper_crc32_64(cpu_reg(s, rd), tcg_acc, tcg_val, tcg_bytes);
134
- }
135
-}
136
-
137
/* Data-processing (2 source)
138
* 31 30 29 28 21 20 16 15 10 9 5 4 0
139
* +----+---+---+-----------------+------+--------+------+------+
140
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
141
gen_helper_pacga(cpu_reg(s, rd), tcg_env,
142
cpu_reg(s, rn), cpu_reg_sp(s, rm));
143
break;
144
- case 16:
145
- case 17:
146
- case 18:
147
- case 19:
148
- case 20:
149
- case 21:
150
- case 22:
151
- case 23: /* CRC32 */
152
- {
153
- int sz = extract32(opcode, 0, 2);
154
- bool crc32c = extract32(opcode, 2, 1);
155
- handle_crc32(s, sf, sz, crc32c, rm, rn, rd);
156
- break;
157
- }
158
default:
159
do_unallocated:
160
case 2: /* UDIV */
161
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
162
case 9: /* LSRV */
163
case 10: /* ASRV */
164
case 11: /* RORV */
165
+ case 16:
166
+ case 17:
167
+ case 18:
168
+ case 19:
169
+ case 20:
170
+ case 21:
171
+ case 22:
172
+ case 23: /* CRC32 */
173
unallocated_encoding(s);
174
break;
175
}
176
--
177
2.34.1
178
179
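
A detail of the CRC32/CRC32C conversion above that is easy to miss: the old
handle_crc32() narrowed Rm with a per-size mask table, while the new
do_crc32() uses tcg_gen_extract_i64(..., 0, 8 << a->esz). The two are
equivalent; a small stand-alone C check of the equivalence (these helper
names are illustrative and not part of the patch):

#include <assert.h>
#include <stdint.h>

/* Old style: mask table indexed by element size (0=B, 1=H, 2=W). */
static uint64_t narrow_by_mask(uint64_t rm, int esz)
{
    static const uint64_t mask[] = { 0xFF, 0xFFFF, 0xFFFFFFFFu };
    return rm & mask[esz];
}

/* New style: keep the low (8 << esz) bits, as the extract op does. */
static uint64_t narrow_by_extract(uint64_t rm, int esz)
{
    return rm & ((UINT64_C(1) << (8 << esz)) - 1);
}

int main(void)
{
    for (int esz = 0; esz < 3; esz++) {
        assert(narrow_by_mask(0x123456789abcdef0ull, esz) ==
               narrow_by_extract(0x123456789abcdef0ull, esz));
    }
    return 0;
}
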

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  7 +++
 target/arm/tcg/translate-a64.c | 94 +++++++++++++++++++---------------
 2 files changed, 59 insertions(+), 42 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
%hlm 11:1 20:2
18
19
&r rn
20
+&rrr rd rn rm
21
&ri rd imm
22
&rri_sf rd rn imm sf
23
&rrr_sf rd rn rm sf
24
@@ -XXX,XX +XXX,XX @@ CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
25
26
# Data Processing (2-source)
27
28
+@rrr . .......... rm:5 ...... rn:5 rd:5 &rrr
29
@rrr_sf sf:1 .......... rm:5 ...... rn:5 rd:5 &rrr_sf
30
31
UDIV . 00 11010110 ..... 00001 0 ..... ..... @rrr_sf
32
@@ -XXX,XX +XXX,XX @@ CRC32C 0 00 11010110 ..... 0101 01 ..... ..... @rrr_h
33
CRC32C 0 00 11010110 ..... 0101 10 ..... ..... @rrr_s
34
CRC32C 1 00 11010110 ..... 0101 11 ..... ..... @rrr_d
35
36
+SUBP 1 00 11010110 ..... 000000 ..... ..... @rrr
37
+SUBPS 1 01 11010110 ..... 000000 ..... ..... @rrr
38
+IRG 1 00 11010110 ..... 000100 ..... ..... @rrr
39
+GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
40
+
41
# Data Processing (1-source)
42
# Logical (shifted reg)
43
# Add/subtract (shifted reg)
44
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/translate-a64.c
47
+++ b/target/arm/tcg/translate-a64.c
48
@@ -XXX,XX +XXX,XX @@ static bool do_crc32(DisasContext *s, arg_rrr_e *a, bool crc32c)
49
TRANS_FEAT(CRC32, aa64_crc32, do_crc32, a, false)
50
TRANS_FEAT(CRC32C, aa64_crc32, do_crc32, a, true)
51
52
+static bool do_subp(DisasContext *s, arg_rrr *a, bool setflag)
53
+{
54
+ TCGv_i64 tcg_n = read_cpu_reg_sp(s, a->rn, true);
55
+ TCGv_i64 tcg_m = read_cpu_reg_sp(s, a->rm, true);
56
+ TCGv_i64 tcg_d = cpu_reg(s, a->rd);
57
+
58
+ tcg_gen_sextract_i64(tcg_n, tcg_n, 0, 56);
59
+ tcg_gen_sextract_i64(tcg_m, tcg_m, 0, 56);
60
+
61
+ if (setflag) {
62
+ gen_sub_CC(true, tcg_d, tcg_n, tcg_m);
63
+ } else {
64
+ tcg_gen_sub_i64(tcg_d, tcg_n, tcg_m);
65
+ }
66
+ return true;
67
+}
68
+
69
+TRANS_FEAT(SUBP, aa64_mte_insn_reg, do_subp, a, false)
70
+TRANS_FEAT(SUBPS, aa64_mte_insn_reg, do_subp, a, true)
71
+
72
+static bool trans_IRG(DisasContext *s, arg_rrr *a)
73
+{
74
+ if (dc_isar_feature(aa64_mte_insn_reg, s)) {
75
+ TCGv_i64 tcg_rd = cpu_reg_sp(s, a->rd);
76
+ TCGv_i64 tcg_rn = cpu_reg_sp(s, a->rn);
77
+
78
+ if (s->ata[0]) {
79
+ gen_helper_irg(tcg_rd, tcg_env, tcg_rn, cpu_reg(s, a->rm));
80
+ } else {
81
+ gen_address_with_allocation_tag0(tcg_rd, tcg_rn);
82
+ }
83
+ return true;
84
+ }
85
+ return false;
86
+}
87
+
88
+static bool trans_GMI(DisasContext *s, arg_rrr *a)
89
+{
90
+ if (dc_isar_feature(aa64_mte_insn_reg, s)) {
91
+ TCGv_i64 t = tcg_temp_new_i64();
92
+
93
+ tcg_gen_extract_i64(t, cpu_reg_sp(s, a->rn), 56, 4);
94
+ tcg_gen_shl_i64(t, tcg_constant_i64(1), t);
95
+ tcg_gen_or_i64(cpu_reg(s, a->rd), cpu_reg(s, a->rm), t);
96
+ return true;
97
+ }
98
+ return false;
99
+}
100
+
101
/* Logical (shifted register)
102
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
103
* +----+-----+-----------+-------+---+------+--------+------+------+
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
105
}
106
107
switch (opcode) {
108
- case 0: /* SUBP(S) */
109
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
110
- goto do_unallocated;
111
- } else {
112
- TCGv_i64 tcg_n, tcg_m, tcg_d;
113
-
114
- tcg_n = read_cpu_reg_sp(s, rn, true);
115
- tcg_m = read_cpu_reg_sp(s, rm, true);
116
- tcg_gen_sextract_i64(tcg_n, tcg_n, 0, 56);
117
- tcg_gen_sextract_i64(tcg_m, tcg_m, 0, 56);
118
- tcg_d = cpu_reg(s, rd);
119
-
120
- if (setflag) {
121
- gen_sub_CC(true, tcg_d, tcg_n, tcg_m);
122
- } else {
123
- tcg_gen_sub_i64(tcg_d, tcg_n, tcg_m);
124
- }
125
- }
126
- break;
127
- case 4: /* IRG */
128
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
129
- goto do_unallocated;
130
- }
131
- if (s->ata[0]) {
132
- gen_helper_irg(cpu_reg_sp(s, rd), tcg_env,
133
- cpu_reg_sp(s, rn), cpu_reg(s, rm));
134
- } else {
135
- gen_address_with_allocation_tag0(cpu_reg_sp(s, rd),
136
- cpu_reg_sp(s, rn));
137
- }
138
- break;
139
- case 5: /* GMI */
140
- if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
141
- goto do_unallocated;
142
- } else {
143
- TCGv_i64 t = tcg_temp_new_i64();
144
-
145
- tcg_gen_extract_i64(t, cpu_reg_sp(s, rn), 56, 4);
146
- tcg_gen_shl_i64(t, tcg_constant_i64(1), t);
147
- tcg_gen_or_i64(cpu_reg(s, rd), cpu_reg(s, rm), t);
148
- }
149
- break;
150
case 12: /* PACGA */
151
if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
152
goto do_unallocated;
153
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
154
break;
155
default:
156
do_unallocated:
157
+ case 0: /* SUBP(S) */
158
case 2: /* UDIV */
159
case 3: /* SDIV */
160
+ case 4: /* IRG */
161
+ case 5: /* GMI */
162
case 8: /* LSLV */
163
case 9: /* LSRV */
164
case 10: /* ASRV */
165
--
166
2.34.1
167
168

From: Richard Henderson <richard.henderson@linaro.org>

Remove disas_data_proc_2src, as this was the last insn
decoded by that function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  2 ++
 target/arm/tcg/translate-a64.c | 65 ++++++----------------------
 2 files changed, 13 insertions(+), 54 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ SUBPS 1 01 11010110 ..... 000000 ..... ..... @rrr
20
IRG 1 00 11010110 ..... 000100 ..... ..... @rrr
21
GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
22
23
+PACGA 1 00 11010110 ..... 001100 ..... ..... @rrr
24
+
25
# Data Processing (1-source)
26
# Logical (shifted reg)
27
# Add/subtract (shifted reg)
28
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/tcg/translate-a64.c
31
+++ b/target/arm/tcg/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static bool trans_GMI(DisasContext *s, arg_rrr *a)
33
return false;
34
}
35
36
+static bool trans_PACGA(DisasContext *s, arg_rrr *a)
37
+{
38
+ if (dc_isar_feature(aa64_pauth, s)) {
39
+ gen_helper_pacga(cpu_reg(s, a->rd), tcg_env,
40
+ cpu_reg(s, a->rn), cpu_reg_sp(s, a->rm));
41
+ return true;
42
+ }
43
+ return false;
44
+}
45
+
46
/* Logical (shifted register)
47
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
48
* +----+-----+-----------+-------+---+------+--------+------+------+
49
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
50
}
51
52
53
-/* Data-processing (2 source)
54
- * 31 30 29 28 21 20 16 15 10 9 5 4 0
55
- * +----+---+---+-----------------+------+--------+------+------+
56
- * | sf | 0 | S | 1 1 0 1 0 1 1 0 | Rm | opcode | Rn | Rd |
57
- * +----+---+---+-----------------+------+--------+------+------+
58
- */
59
-static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
60
-{
61
- unsigned int sf, rm, opcode, rn, rd, setflag;
62
- sf = extract32(insn, 31, 1);
63
- setflag = extract32(insn, 29, 1);
64
- rm = extract32(insn, 16, 5);
65
- opcode = extract32(insn, 10, 6);
66
- rn = extract32(insn, 5, 5);
67
- rd = extract32(insn, 0, 5);
68
-
69
- if (setflag && opcode != 0) {
70
- unallocated_encoding(s);
71
- return;
72
- }
73
-
74
- switch (opcode) {
75
- case 12: /* PACGA */
76
- if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
77
- goto do_unallocated;
78
- }
79
- gen_helper_pacga(cpu_reg(s, rd), tcg_env,
80
- cpu_reg(s, rn), cpu_reg_sp(s, rm));
81
- break;
82
- default:
83
- do_unallocated:
84
- case 0: /* SUBP(S) */
85
- case 2: /* UDIV */
86
- case 3: /* SDIV */
87
- case 4: /* IRG */
88
- case 5: /* GMI */
89
- case 8: /* LSLV */
90
- case 9: /* LSRV */
91
- case 10: /* ASRV */
92
- case 11: /* RORV */
93
- case 16:
94
- case 17:
95
- case 18:
96
- case 19:
97
- case 20:
98
- case 21:
99
- case 22:
100
- case 23: /* CRC32 */
101
- unallocated_encoding(s);
102
- break;
103
- }
104
-}
105
-
106
/*
107
* Data processing - register
108
* 31 30 29 28 25 21 20 16 10 0
109
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
110
if (op0) { /* (1 source) */
111
disas_data_proc_1src(s, insn);
112
} else { /* (2 source) */
113
- disas_data_proc_2src(s, insn);
114
+ goto do_unallocated;
115
}
116
break;
117
case 0x8 ... 0xf: /* (3 source) */
118
--
119
2.34.1
120
121

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  11 +++
 target/arm/tcg/translate-a64.c | 137 +++++++++++++++------------------
 2 files changed, 72 insertions(+), 76 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
&r rn
18
&rrr rd rn rm
19
&ri rd imm
20
+&rr rd rn
21
+&rr_sf rd rn sf
22
&rri_sf rd rn imm sf
23
&rrr_sf rd rn rm sf
24
&i imm
25
@@ -XXX,XX +XXX,XX @@ GMI 1 00 11010110 ..... 000101 ..... ..... @rrr
26
PACGA 1 00 11010110 ..... 001100 ..... ..... @rrr
27
28
# Data Processing (1-source)
29
+
30
+@rr . .......... ..... ...... rn:5 rd:5 &rr
31
+@rr_sf sf:1 .......... ..... ...... rn:5 rd:5 &rr_sf
32
+
33
+RBIT . 10 11010110 00000 000000 ..... ..... @rr_sf
34
+REV16 . 10 11010110 00000 000001 ..... ..... @rr_sf
35
+REV32 . 10 11010110 00000 000010 ..... ..... @rr_sf
36
+REV64 1 10 11010110 00000 000011 ..... ..... @rr
37
+
38
# Logical (shifted reg)
39
# Add/subtract (shifted reg)
40
# Add/subtract (extended reg)
41
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/tcg/translate-a64.c
44
+++ b/target/arm/tcg/translate-a64.c
45
@@ -XXX,XX +XXX,XX @@ static bool trans_PACGA(DisasContext *s, arg_rrr *a)
46
return false;
47
}
48
49
+typedef void ArithOneOp(TCGv_i64, TCGv_i64);
50
+
51
+static bool gen_rr(DisasContext *s, int rd, int rn, ArithOneOp fn)
52
+{
53
+ fn(cpu_reg(s, rd), cpu_reg(s, rn));
54
+ return true;
55
+}
56
+
57
+static void gen_rbit32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
58
+{
59
+ TCGv_i32 t32 = tcg_temp_new_i32();
60
+
61
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
62
+ gen_helper_rbit(t32, t32);
63
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
64
+}
65
+
66
+static void gen_rev16_xx(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 mask)
67
+{
68
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
69
+
70
+ tcg_gen_shri_i64(tcg_tmp, tcg_rn, 8);
71
+ tcg_gen_and_i64(tcg_rd, tcg_rn, mask);
72
+ tcg_gen_and_i64(tcg_tmp, tcg_tmp, mask);
73
+ tcg_gen_shli_i64(tcg_rd, tcg_rd, 8);
74
+ tcg_gen_or_i64(tcg_rd, tcg_rd, tcg_tmp);
75
+}
76
+
77
+static void gen_rev16_32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
78
+{
79
+ gen_rev16_xx(tcg_rd, tcg_rn, tcg_constant_i64(0x00ff00ff));
80
+}
81
+
82
+static void gen_rev16_64(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
83
+{
84
+ gen_rev16_xx(tcg_rd, tcg_rn, tcg_constant_i64(0x00ff00ff00ff00ffull));
85
+}
86
+
87
+static void gen_rev_32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
88
+{
89
+ tcg_gen_bswap32_i64(tcg_rd, tcg_rn, TCG_BSWAP_OZ);
90
+}
91
+
92
+static void gen_rev32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
93
+{
94
+ tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
95
+ tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);
96
+}
97
+
98
+TRANS(RBIT, gen_rr, a->rd, a->rn, a->sf ? gen_helper_rbit64 : gen_rbit32)
99
+TRANS(REV16, gen_rr, a->rd, a->rn, a->sf ? gen_rev16_64 : gen_rev16_32)
100
+TRANS(REV32, gen_rr, a->rd, a->rn, a->sf ? gen_rev32 : gen_rev_32)
101
+TRANS(REV64, gen_rr, a->rd, a->rn, tcg_gen_bswap64_i64)
102
+
103
/* Logical (shifted register)
104
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
105
* +----+-----+-----------+-------+---+------+--------+------+------+
106
@@ -XXX,XX +XXX,XX @@ static void handle_cls(DisasContext *s, unsigned int sf,
107
}
108
}
109
110
-static void handle_rbit(DisasContext *s, unsigned int sf,
111
- unsigned int rn, unsigned int rd)
112
-{
113
- TCGv_i64 tcg_rd, tcg_rn;
114
- tcg_rd = cpu_reg(s, rd);
115
- tcg_rn = cpu_reg(s, rn);
116
-
117
- if (sf) {
118
- gen_helper_rbit64(tcg_rd, tcg_rn);
119
- } else {
120
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
121
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
122
- gen_helper_rbit(tcg_tmp32, tcg_tmp32);
123
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
124
- }
125
-}
126
-
127
-/* REV with sf==1, opcode==3 ("REV64") */
128
-static void handle_rev64(DisasContext *s, unsigned int sf,
129
- unsigned int rn, unsigned int rd)
130
-{
131
- if (!sf) {
132
- unallocated_encoding(s);
133
- return;
134
- }
135
- tcg_gen_bswap64_i64(cpu_reg(s, rd), cpu_reg(s, rn));
136
-}
137
-
138
-/* REV with sf==0, opcode==2
139
- * REV32 (sf==1, opcode==2)
140
- */
141
-static void handle_rev32(DisasContext *s, unsigned int sf,
142
- unsigned int rn, unsigned int rd)
143
-{
144
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
145
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
146
-
147
- if (sf) {
148
- tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
149
- tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);
150
- } else {
151
- tcg_gen_bswap32_i64(tcg_rd, tcg_rn, TCG_BSWAP_OZ);
152
- }
153
-}
154
-
155
-/* REV16 (opcode==1) */
156
-static void handle_rev16(DisasContext *s, unsigned int sf,
157
- unsigned int rn, unsigned int rd)
158
-{
159
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
160
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
161
- TCGv_i64 tcg_rn = read_cpu_reg(s, rn, sf);
162
- TCGv_i64 mask = tcg_constant_i64(sf ? 0x00ff00ff00ff00ffull : 0x00ff00ff);
163
-
164
- tcg_gen_shri_i64(tcg_tmp, tcg_rn, 8);
165
- tcg_gen_and_i64(tcg_rd, tcg_rn, mask);
166
- tcg_gen_and_i64(tcg_tmp, tcg_tmp, mask);
167
- tcg_gen_shli_i64(tcg_rd, tcg_rd, 8);
168
- tcg_gen_or_i64(tcg_rd, tcg_rd, tcg_tmp);
169
-}
170
-
171
/* Data-processing (1 source)
172
* 31 30 29 28 21 20 16 15 10 9 5 4 0
173
* +----+---+---+-----------------+---------+--------+------+------+
174
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
175
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
176
177
switch (MAP(sf, opcode2, opcode)) {
178
- case MAP(0, 0x00, 0x00): /* RBIT */
179
- case MAP(1, 0x00, 0x00):
180
- handle_rbit(s, sf, rn, rd);
181
- break;
182
- case MAP(0, 0x00, 0x01): /* REV16 */
183
- case MAP(1, 0x00, 0x01):
184
- handle_rev16(s, sf, rn, rd);
185
- break;
186
- case MAP(0, 0x00, 0x02): /* REV/REV32 */
187
- case MAP(1, 0x00, 0x02):
188
- handle_rev32(s, sf, rn, rd);
189
- break;
190
- case MAP(1, 0x00, 0x03): /* REV64 */
191
- handle_rev64(s, sf, rn, rd);
192
- break;
193
case MAP(0, 0x00, 0x04): /* CLZ */
194
case MAP(1, 0x00, 0x04):
195
handle_clz(s, sf, rn, rd);
196
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
197
break;
198
default:
199
do_unallocated:
200
+ case MAP(0, 0x00, 0x00): /* RBIT */
201
+ case MAP(1, 0x00, 0x00):
202
+ case MAP(0, 0x00, 0x01): /* REV16 */
203
+ case MAP(1, 0x00, 0x01):
204
+ case MAP(0, 0x00, 0x02): /* REV/REV32 */
205
+ case MAP(1, 0x00, 0x02):
206
+ case MAP(1, 0x00, 0x03): /* REV64 */
207
unallocated_encoding(s);
208
break;
209
}
210
--
211
2.34.1
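
For reference, the mask-and-shift sequence emitted by gen_rev16_xx() in the
REV16 conversion above is the classic byte-swap-within-halfwords idiom; in
plain C it amounts to the following (illustrative only, not part of the
patch):

#include <stdint.h>

/* REV16: swap the two bytes inside every 16-bit lane of x.  For the 32-bit
 * form the input is already zero-extended and the 32-bit mask keeps the
 * upper half clear, matching what the TCG version generates. */
static uint64_t rev16_sketch(uint64_t x, int sf)
{
    uint64_t mask = sf ? 0x00ff00ff00ff00ffull : 0x00ff00ffull;
    return ((x & mask) << 8) | ((x >> 8) & mask);
}
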

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  3 ++
 target/arm/tcg/translate-a64.c | 72 ++++++++++++++--------------------
 2 files changed, 33 insertions(+), 42 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ REV16 . 10 11010110 00000 000001 ..... ..... @rr_sf
17
REV32 . 10 11010110 00000 000010 ..... ..... @rr_sf
18
REV64 1 10 11010110 00000 000011 ..... ..... @rr
19
20
+CLZ . 10 11010110 00000 000100 ..... ..... @rr_sf
21
+CLS . 10 11010110 00000 000101 ..... ..... @rr_sf
22
+
23
# Logical (shifted reg)
24
# Add/subtract (shifted reg)
25
# Add/subtract (extended reg)
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ TRANS(REV16, gen_rr, a->rd, a->rn, a->sf ? gen_rev16_64 : gen_rev16_32)
31
TRANS(REV32, gen_rr, a->rd, a->rn, a->sf ? gen_rev32 : gen_rev_32)
32
TRANS(REV64, gen_rr, a->rd, a->rn, tcg_gen_bswap64_i64)
33
34
+static void gen_clz32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
35
+{
36
+ TCGv_i32 t32 = tcg_temp_new_i32();
37
+
38
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
39
+ tcg_gen_clzi_i32(t32, t32, 32);
40
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
41
+}
42
+
43
+static void gen_clz64(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
44
+{
45
+ tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
46
+}
47
+
48
+static void gen_cls32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
49
+{
50
+ TCGv_i32 t32 = tcg_temp_new_i32();
51
+
52
+ tcg_gen_extrl_i64_i32(t32, tcg_rn);
53
+ tcg_gen_clrsb_i32(t32, t32);
54
+ tcg_gen_extu_i32_i64(tcg_rd, t32);
55
+}
56
+
57
+TRANS(CLZ, gen_rr, a->rd, a->rn, a->sf ? gen_clz64 : gen_clz32)
58
+TRANS(CLS, gen_rr, a->rd, a->rn, a->sf ? tcg_gen_clrsb_i64 : gen_cls32)
59
+
60
/* Logical (shifted register)
61
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
62
* +----+-----+-----------+-------+---+------+--------+------+------+
63
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
64
}
65
}
66
67
-static void handle_clz(DisasContext *s, unsigned int sf,
68
- unsigned int rn, unsigned int rd)
69
-{
70
- TCGv_i64 tcg_rd, tcg_rn;
71
- tcg_rd = cpu_reg(s, rd);
72
- tcg_rn = cpu_reg(s, rn);
73
-
74
- if (sf) {
75
- tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
76
- } else {
77
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
78
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
79
- tcg_gen_clzi_i32(tcg_tmp32, tcg_tmp32, 32);
80
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
81
- }
82
-}
83
-
84
-static void handle_cls(DisasContext *s, unsigned int sf,
85
- unsigned int rn, unsigned int rd)
86
-{
87
- TCGv_i64 tcg_rd, tcg_rn;
88
- tcg_rd = cpu_reg(s, rd);
89
- tcg_rn = cpu_reg(s, rn);
90
-
91
- if (sf) {
92
- tcg_gen_clrsb_i64(tcg_rd, tcg_rn);
93
- } else {
94
- TCGv_i32 tcg_tmp32 = tcg_temp_new_i32();
95
- tcg_gen_extrl_i64_i32(tcg_tmp32, tcg_rn);
96
- tcg_gen_clrsb_i32(tcg_tmp32, tcg_tmp32);
97
- tcg_gen_extu_i32_i64(tcg_rd, tcg_tmp32);
98
- }
99
-}
100
-
101
/* Data-processing (1 source)
102
* 31 30 29 28 21 20 16 15 10 9 5 4 0
103
* +----+---+---+-----------------+---------+--------+------+------+
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
105
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
106
107
switch (MAP(sf, opcode2, opcode)) {
108
- case MAP(0, 0x00, 0x04): /* CLZ */
109
- case MAP(1, 0x00, 0x04):
110
- handle_clz(s, sf, rn, rd);
111
- break;
112
- case MAP(0, 0x00, 0x05): /* CLS */
113
- case MAP(1, 0x00, 0x05):
114
- handle_cls(s, sf, rn, rd);
115
- break;
116
case MAP(1, 0x01, 0x00): /* PACIA */
117
if (s->pauth_active) {
118
tcg_rd = cpu_reg(s, rd);
119
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
120
case MAP(0, 0x00, 0x02): /* REV/REV32 */
121
case MAP(1, 0x00, 0x02):
122
case MAP(1, 0x00, 0x03): /* REV64 */
123
+ case MAP(0, 0x00, 0x04): /* CLZ */
124
+ case MAP(1, 0x00, 0x04):
125
+ case MAP(0, 0x00, 0x05): /* CLS */
126
+ case MAP(1, 0x00, 0x05):
127
unallocated_encoding(s);
128
break;
129
}
130
--
131
2.34.1
132
133

From: Richard Henderson <richard.henderson@linaro.org>

This includes PACIA, PACIZA, PACIB, PACIZB, PACDA, PACDZA, PACDB,
PACDZB, AUTIA, AUTIZA, AUTIB, AUTIZB, AUTDA, AUTDZA, AUTDB, AUTDZB.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  13 +++
 target/arm/tcg/translate-a64.c | 173 +++++++++------------------
 2 files changed, 58 insertions(+), 128 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ REV64 1 10 11010110 00000 000011 ..... ..... @rr
20
CLZ . 10 11010110 00000 000100 ..... ..... @rr_sf
21
CLS . 10 11010110 00000 000101 ..... ..... @rr_sf
22
23
+&pacaut rd rn z
24
+@pacaut . .. ........ ..... .. z:1 ... rn:5 rd:5 &pacaut
25
+
26
+PACIA 1 10 11010110 00001 00.000 ..... ..... @pacaut
27
+PACIB 1 10 11010110 00001 00.001 ..... ..... @pacaut
28
+PACDA 1 10 11010110 00001 00.010 ..... ..... @pacaut
29
+PACDB 1 10 11010110 00001 00.011 ..... ..... @pacaut
30
+
31
+AUTIA 1 10 11010110 00001 00.100 ..... ..... @pacaut
32
+AUTIB 1 10 11010110 00001 00.101 ..... ..... @pacaut
33
+AUTDA 1 10 11010110 00001 00.110 ..... ..... @pacaut
34
+AUTDB 1 10 11010110 00001 00.111 ..... ..... @pacaut
35
+
36
# Logical (shifted reg)
37
# Add/subtract (shifted reg)
38
# Add/subtract (extended reg)
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ static void gen_cls32(TCGv_i64 tcg_rd, TCGv_i64 tcg_rn)
44
TRANS(CLZ, gen_rr, a->rd, a->rn, a->sf ? gen_clz64 : gen_clz32)
45
TRANS(CLS, gen_rr, a->rd, a->rn, a->sf ? tcg_gen_clrsb_i64 : gen_cls32)
46
47
+static bool gen_pacaut(DisasContext *s, arg_pacaut *a, NeonGenTwo64OpEnvFn fn)
48
+{
49
+ TCGv_i64 tcg_rd, tcg_rn;
50
+
51
+ if (a->z) {
52
+ if (a->rn != 31) {
53
+ return false;
54
+ }
55
+ tcg_rn = tcg_constant_i64(0);
56
+ } else {
57
+ tcg_rn = cpu_reg_sp(s, a->rn);
58
+ }
59
+ if (s->pauth_active) {
60
+ tcg_rd = cpu_reg(s, a->rd);
61
+ fn(tcg_rd, tcg_env, tcg_rd, tcg_rn);
62
+ }
63
+ return true;
64
+}
65
+
66
+TRANS_FEAT(PACIA, aa64_pauth, gen_pacaut, a, gen_helper_pacia)
67
+TRANS_FEAT(PACIB, aa64_pauth, gen_pacaut, a, gen_helper_pacib)
68
+TRANS_FEAT(PACDA, aa64_pauth, gen_pacaut, a, gen_helper_pacda)
69
+TRANS_FEAT(PACDB, aa64_pauth, gen_pacaut, a, gen_helper_pacdb)
70
+
71
+TRANS_FEAT(AUTIA, aa64_pauth, gen_pacaut, a, gen_helper_autia)
72
+TRANS_FEAT(AUTIB, aa64_pauth, gen_pacaut, a, gen_helper_autib)
73
+TRANS_FEAT(AUTDA, aa64_pauth, gen_pacaut, a, gen_helper_autda)
74
+TRANS_FEAT(AUTDB, aa64_pauth, gen_pacaut, a, gen_helper_autdb)
75
+
76
/* Logical (shifted register)
77
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
78
* +----+-----+-----------+-------+---+------+--------+------+------+
79
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
80
#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
81
82
switch (MAP(sf, opcode2, opcode)) {
83
- case MAP(1, 0x01, 0x00): /* PACIA */
84
- if (s->pauth_active) {
85
- tcg_rd = cpu_reg(s, rd);
86
- gen_helper_pacia(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
87
- } else if (!dc_isar_feature(aa64_pauth, s)) {
88
- goto do_unallocated;
89
- }
90
- break;
91
- case MAP(1, 0x01, 0x01): /* PACIB */
92
- if (s->pauth_active) {
93
- tcg_rd = cpu_reg(s, rd);
94
- gen_helper_pacib(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
95
- } else if (!dc_isar_feature(aa64_pauth, s)) {
96
- goto do_unallocated;
97
- }
98
- break;
99
- case MAP(1, 0x01, 0x02): /* PACDA */
100
- if (s->pauth_active) {
101
- tcg_rd = cpu_reg(s, rd);
102
- gen_helper_pacda(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
103
- } else if (!dc_isar_feature(aa64_pauth, s)) {
104
- goto do_unallocated;
105
- }
106
- break;
107
- case MAP(1, 0x01, 0x03): /* PACDB */
108
- if (s->pauth_active) {
109
- tcg_rd = cpu_reg(s, rd);
110
- gen_helper_pacdb(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
111
- } else if (!dc_isar_feature(aa64_pauth, s)) {
112
- goto do_unallocated;
113
- }
114
- break;
115
- case MAP(1, 0x01, 0x04): /* AUTIA */
116
- if (s->pauth_active) {
117
- tcg_rd = cpu_reg(s, rd);
118
- gen_helper_autia(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
119
- } else if (!dc_isar_feature(aa64_pauth, s)) {
120
- goto do_unallocated;
121
- }
122
- break;
123
- case MAP(1, 0x01, 0x05): /* AUTIB */
124
- if (s->pauth_active) {
125
- tcg_rd = cpu_reg(s, rd);
126
- gen_helper_autib(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
127
- } else if (!dc_isar_feature(aa64_pauth, s)) {
128
- goto do_unallocated;
129
- }
130
- break;
131
- case MAP(1, 0x01, 0x06): /* AUTDA */
132
- if (s->pauth_active) {
133
- tcg_rd = cpu_reg(s, rd);
134
- gen_helper_autda(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
135
- } else if (!dc_isar_feature(aa64_pauth, s)) {
136
- goto do_unallocated;
137
- }
138
- break;
139
- case MAP(1, 0x01, 0x07): /* AUTDB */
140
- if (s->pauth_active) {
141
- tcg_rd = cpu_reg(s, rd);
142
- gen_helper_autdb(tcg_rd, tcg_env, tcg_rd, cpu_reg_sp(s, rn));
143
- } else if (!dc_isar_feature(aa64_pauth, s)) {
144
- goto do_unallocated;
145
- }
146
- break;
147
- case MAP(1, 0x01, 0x08): /* PACIZA */
148
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
149
- goto do_unallocated;
150
- } else if (s->pauth_active) {
151
- tcg_rd = cpu_reg(s, rd);
152
- gen_helper_pacia(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
153
- }
154
- break;
155
- case MAP(1, 0x01, 0x09): /* PACIZB */
156
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
157
- goto do_unallocated;
158
- } else if (s->pauth_active) {
159
- tcg_rd = cpu_reg(s, rd);
160
- gen_helper_pacib(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
161
- }
162
- break;
163
- case MAP(1, 0x01, 0x0a): /* PACDZA */
164
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
165
- goto do_unallocated;
166
- } else if (s->pauth_active) {
167
- tcg_rd = cpu_reg(s, rd);
168
- gen_helper_pacda(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
169
- }
170
- break;
171
- case MAP(1, 0x01, 0x0b): /* PACDZB */
172
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
173
- goto do_unallocated;
174
- } else if (s->pauth_active) {
175
- tcg_rd = cpu_reg(s, rd);
176
- gen_helper_pacdb(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
177
- }
178
- break;
179
- case MAP(1, 0x01, 0x0c): /* AUTIZA */
180
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
181
- goto do_unallocated;
182
- } else if (s->pauth_active) {
183
- tcg_rd = cpu_reg(s, rd);
184
- gen_helper_autia(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
185
- }
186
- break;
187
- case MAP(1, 0x01, 0x0d): /* AUTIZB */
188
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
189
- goto do_unallocated;
190
- } else if (s->pauth_active) {
191
- tcg_rd = cpu_reg(s, rd);
192
- gen_helper_autib(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
193
- }
194
- break;
195
- case MAP(1, 0x01, 0x0e): /* AUTDZA */
196
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
197
- goto do_unallocated;
198
- } else if (s->pauth_active) {
199
- tcg_rd = cpu_reg(s, rd);
200
- gen_helper_autda(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
201
- }
202
- break;
203
- case MAP(1, 0x01, 0x0f): /* AUTDZB */
204
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
205
- goto do_unallocated;
206
- } else if (s->pauth_active) {
207
- tcg_rd = cpu_reg(s, rd);
208
- gen_helper_autdb(tcg_rd, tcg_env, tcg_rd, tcg_constant_i64(0));
209
- }
210
- break;
211
case MAP(1, 0x01, 0x10): /* XPACI */
212
if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
213
goto do_unallocated;
214
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
215
case MAP(1, 0x00, 0x04):
216
case MAP(0, 0x00, 0x05): /* CLS */
217
case MAP(1, 0x00, 0x05):
218
+ case MAP(1, 0x01, 0x00): /* PACIA */
219
+ case MAP(1, 0x01, 0x01): /* PACIB */
220
+ case MAP(1, 0x01, 0x02): /* PACDA */
221
+ case MAP(1, 0x01, 0x03): /* PACDB */
222
+ case MAP(1, 0x01, 0x04): /* AUTIA */
223
+ case MAP(1, 0x01, 0x05): /* AUTIB */
224
+ case MAP(1, 0x01, 0x06): /* AUTDA */
225
+ case MAP(1, 0x01, 0x07): /* AUTDB */
226
+ case MAP(1, 0x01, 0x08): /* PACIZA */
227
+ case MAP(1, 0x01, 0x09): /* PACIZB */
228
+ case MAP(1, 0x01, 0x0a): /* PACDZA */
229
+ case MAP(1, 0x01, 0x0b): /* PACDZB */
230
+ case MAP(1, 0x01, 0x0c): /* AUTIZA */
231
+ case MAP(1, 0x01, 0x0d): /* AUTIZB */
232
+ case MAP(1, 0x01, 0x0e): /* AUTDZA */
233
+ case MAP(1, 0x01, 0x0f): /* AUTDZB */
234
unallocated_encoding(s);
235
break;
236
}
237
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove disas_data_proc_1src, as these were the last insns
decoded by that function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 3 ++
target/arm/tcg/translate-a64.c | 99 +++++-----------------------------
2 files changed, 16 insertions(+), 86 deletions(-)

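A note on TRANS_FEAT for readers following the conversion: the explicit
dc_isar_feature(aa64_pauth, s) checks vanish from the C code because the
macro folds the feature test into the generated trans_ function. Its
definition in target/arm/tcg/translate.h is approximately the sketch
below (quoted from memory, so treat it as illustrative rather than
authoritative):

    #define TRANS_FEAT(NAME, FEAT, FUNC, ...) \
        static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
        { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }

So TRANS_FEAT(XPACI, aa64_pauth, do_xpac, a->rd, gen_helper_xpaci)
produces a trans_XPACI() that reports the encoding as unallocated when
FEAT_PAuth is absent and otherwise calls do_xpac().
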
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ AUTIB 1 10 11010110 00001 00.101 ..... ..... @pacaut
20
AUTDA 1 10 11010110 00001 00.110 ..... ..... @pacaut
21
AUTDB 1 10 11010110 00001 00.111 ..... ..... @pacaut
22
23
+XPACI 1 10 11010110 00001 010000 11111 rd:5
24
+XPACD 1 10 11010110 00001 010001 11111 rd:5
25
+
26
# Logical (shifted reg)
27
# Add/subtract (shifted reg)
28
# Add/subtract (extended reg)
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(AUTIB, aa64_pauth, gen_pacaut, a, gen_helper_autib)
34
TRANS_FEAT(AUTDA, aa64_pauth, gen_pacaut, a, gen_helper_autda)
35
TRANS_FEAT(AUTDB, aa64_pauth, gen_pacaut, a, gen_helper_autdb)
36
37
+static bool do_xpac(DisasContext *s, int rd, NeonGenOne64OpEnvFn *fn)
38
+{
39
+ if (s->pauth_active) {
40
+ TCGv_i64 tcg_rd = cpu_reg(s, rd);
41
+ fn(tcg_rd, tcg_env, tcg_rd);
42
+ }
43
+ return true;
44
+}
45
+
46
+TRANS_FEAT(XPACI, aa64_pauth, do_xpac, a->rd, gen_helper_xpaci)
47
+TRANS_FEAT(XPACD, aa64_pauth, do_xpac, a->rd, gen_helper_xpacd)
48
+
49
/* Logical (shifted register)
50
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
51
* +----+-----+-----------+-------+---+------+--------+------+------+
52
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
53
}
54
}
55
56
-/* Data-processing (1 source)
57
- * 31 30 29 28 21 20 16 15 10 9 5 4 0
58
- * +----+---+---+-----------------+---------+--------+------+------+
59
- * | sf | 1 | S | 1 1 0 1 0 1 1 0 | opcode2 | opcode | Rn | Rd |
60
- * +----+---+---+-----------------+---------+--------+------+------+
61
- */
62
-static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
63
-{
64
- unsigned int sf, opcode, opcode2, rn, rd;
65
- TCGv_i64 tcg_rd;
66
-
67
- if (extract32(insn, 29, 1)) {
68
- unallocated_encoding(s);
69
- return;
70
- }
71
-
72
- sf = extract32(insn, 31, 1);
73
- opcode = extract32(insn, 10, 6);
74
- opcode2 = extract32(insn, 16, 5);
75
- rn = extract32(insn, 5, 5);
76
- rd = extract32(insn, 0, 5);
77
-
78
-#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
79
-
80
- switch (MAP(sf, opcode2, opcode)) {
81
- case MAP(1, 0x01, 0x10): /* XPACI */
82
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
83
- goto do_unallocated;
84
- } else if (s->pauth_active) {
85
- tcg_rd = cpu_reg(s, rd);
86
- gen_helper_xpaci(tcg_rd, tcg_env, tcg_rd);
87
- }
88
- break;
89
- case MAP(1, 0x01, 0x11): /* XPACD */
90
- if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
91
- goto do_unallocated;
92
- } else if (s->pauth_active) {
93
- tcg_rd = cpu_reg(s, rd);
94
- gen_helper_xpacd(tcg_rd, tcg_env, tcg_rd);
95
- }
96
- break;
97
- default:
98
- do_unallocated:
99
- case MAP(0, 0x00, 0x00): /* RBIT */
100
- case MAP(1, 0x00, 0x00):
101
- case MAP(0, 0x00, 0x01): /* REV16 */
102
- case MAP(1, 0x00, 0x01):
103
- case MAP(0, 0x00, 0x02): /* REV/REV32 */
104
- case MAP(1, 0x00, 0x02):
105
- case MAP(1, 0x00, 0x03): /* REV64 */
106
- case MAP(0, 0x00, 0x04): /* CLZ */
107
- case MAP(1, 0x00, 0x04):
108
- case MAP(0, 0x00, 0x05): /* CLS */
109
- case MAP(1, 0x00, 0x05):
110
- case MAP(1, 0x01, 0x00): /* PACIA */
111
- case MAP(1, 0x01, 0x01): /* PACIB */
112
- case MAP(1, 0x01, 0x02): /* PACDA */
113
- case MAP(1, 0x01, 0x03): /* PACDB */
114
- case MAP(1, 0x01, 0x04): /* AUTIA */
115
- case MAP(1, 0x01, 0x05): /* AUTIB */
116
- case MAP(1, 0x01, 0x06): /* AUTDA */
117
- case MAP(1, 0x01, 0x07): /* AUTDB */
118
- case MAP(1, 0x01, 0x08): /* PACIZA */
119
- case MAP(1, 0x01, 0x09): /* PACIZB */
120
- case MAP(1, 0x01, 0x0a): /* PACDZA */
121
- case MAP(1, 0x01, 0x0b): /* PACDZB */
122
- case MAP(1, 0x01, 0x0c): /* AUTIZA */
123
- case MAP(1, 0x01, 0x0d): /* AUTIZB */
124
- case MAP(1, 0x01, 0x0e): /* AUTDZA */
125
- case MAP(1, 0x01, 0x0f): /* AUTDZB */
126
- unallocated_encoding(s);
127
- break;
128
- }
129
-
130
-#undef MAP
131
-}
132
-
133
-
134
/*
135
* Data processing - register
136
* 31 30 29 28 25 21 20 16 10 0
137
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
138
*/
139
static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
140
{
141
- int op0 = extract32(insn, 30, 1);
142
int op1 = extract32(insn, 28, 1);
143
int op2 = extract32(insn, 21, 4);
144
int op3 = extract32(insn, 10, 6);
145
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
146
disas_cond_select(s, insn);
147
break;
148
149
- case 0x6: /* Data-processing */
150
- if (op0) { /* (1 source) */
151
- disas_data_proc_1src(s, insn);
152
- } else { /* (2 source) */
153
- goto do_unallocated;
154
- }
155
- break;
156
case 0x8 ... 0xf: /* (3 source) */
157
disas_data_proc_3src(s, insn);
158
break;
159
160
default:
161
do_unallocated:
162
+ case 0x6: /* Data-processing */
163
unallocated_encoding(s);
164
break;
165
}
166
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes AND, BIC, ORR, ORN, EOR, EON, ANDS, BICS (shifted reg).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 9 +++
target/arm/tcg/translate-a64.c | 117 ++++++++++++---------------------
2 files changed, 51 insertions(+), 75 deletions(-)

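A note for readers: the four AND_r/ORR_r/EOR_r/ANDS_r patterns added in
a64.decode below cover all eight opcodes because bit 21 (N) selects the
operand-inverting form. Each TRANS() line passes both a plain and an
inverting TCG generator, and do_logic_reg() picks one at translate time,
roughly:

    /* sketch of the dispatch inside do_logic_reg() */
    (a->n ? inv_fn : fn)(tcg_rd, tcg_rn, tcg_rm); /* e.g. and_i64 vs andc_i64 */

which is how AND/BIC, ORR/ORN, EOR/EON and ANDS/BICS each share a single
decode pattern.
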
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ XPACI 1 10 11010110 00001 010000 11111 rd:5
19
XPACD 1 10 11010110 00001 010001 11111 rd:5
20
21
# Logical (shifted reg)
22
+
23
+&logic_shift rd rn rm sf sa st n
24
+@logic_shift sf:1 .. ..... st:2 n:1 rm:5 sa:6 rn:5 rd:5 &logic_shift
25
+
26
+AND_r . 00 01010 .. . ..... ...... ..... ..... @logic_shift
27
+ORR_r . 01 01010 .. . ..... ...... ..... ..... @logic_shift
28
+EOR_r . 10 01010 .. . ..... ...... ..... ..... @logic_shift
29
+ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
30
+
31
# Add/subtract (shifted reg)
32
# Add/subtract (extended reg)
33
# Add/subtract (carry)
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static bool do_xpac(DisasContext *s, int rd, NeonGenOne64OpEnvFn *fn)
39
TRANS_FEAT(XPACI, aa64_pauth, do_xpac, a->rd, gen_helper_xpaci)
40
TRANS_FEAT(XPACD, aa64_pauth, do_xpac, a->rd, gen_helper_xpacd)
41
42
-/* Logical (shifted register)
43
- * 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
44
- * +----+-----+-----------+-------+---+------+--------+------+------+
45
- * | sf | opc | 0 1 0 1 0 | shift | N | Rm | imm6 | Rn | Rd |
46
- * +----+-----+-----------+-------+---+------+--------+------+------+
47
- */
48
-static void disas_logic_reg(DisasContext *s, uint32_t insn)
49
+static bool do_logic_reg(DisasContext *s, arg_logic_shift *a,
50
+ ArithTwoOp *fn, ArithTwoOp *inv_fn, bool setflags)
51
{
52
TCGv_i64 tcg_rd, tcg_rn, tcg_rm;
53
- unsigned int sf, opc, shift_type, invert, rm, shift_amount, rn, rd;
54
55
- sf = extract32(insn, 31, 1);
56
- opc = extract32(insn, 29, 2);
57
- shift_type = extract32(insn, 22, 2);
58
- invert = extract32(insn, 21, 1);
59
- rm = extract32(insn, 16, 5);
60
- shift_amount = extract32(insn, 10, 6);
61
- rn = extract32(insn, 5, 5);
62
- rd = extract32(insn, 0, 5);
63
-
64
- if (!sf && (shift_amount & (1 << 5))) {
65
- unallocated_encoding(s);
66
- return;
67
+ if (!a->sf && (a->sa & (1 << 5))) {
68
+ return false;
69
}
70
71
- tcg_rd = cpu_reg(s, rd);
72
+ tcg_rd = cpu_reg(s, a->rd);
73
+ tcg_rn = cpu_reg(s, a->rn);
74
75
- if (opc == 1 && shift_amount == 0 && shift_type == 0 && rn == 31) {
76
- /* Unshifted ORR and ORN with WZR/XZR is the standard encoding for
77
- * register-register MOV and MVN, so it is worth special casing.
78
- */
79
- tcg_rm = cpu_reg(s, rm);
80
- if (invert) {
81
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
82
+ if (a->sa) {
83
+ shift_reg_imm(tcg_rm, tcg_rm, a->sf, a->st, a->sa);
84
+ }
85
+
86
+ (a->n ? inv_fn : fn)(tcg_rd, tcg_rn, tcg_rm);
87
+ if (!a->sf) {
88
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
89
+ }
90
+ if (setflags) {
91
+ gen_logic_CC(a->sf, tcg_rd);
92
+ }
93
+ return true;
94
+}
95
+
96
+static bool trans_ORR_r(DisasContext *s, arg_logic_shift *a)
97
+{
98
+ /*
99
+ * Unshifted ORR and ORN with WZR/XZR is the standard encoding for
100
+ * register-register MOV and MVN, so it is worth special casing.
101
+ */
102
+ if (a->sa == 0 && a->st == 0 && a->rn == 31) {
103
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
104
+ TCGv_i64 tcg_rm = cpu_reg(s, a->rm);
105
+
106
+ if (a->n) {
107
tcg_gen_not_i64(tcg_rd, tcg_rm);
108
- if (!sf) {
109
+ if (!a->sf) {
110
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
111
}
112
} else {
113
- if (sf) {
114
+ if (a->sf) {
115
tcg_gen_mov_i64(tcg_rd, tcg_rm);
116
} else {
117
tcg_gen_ext32u_i64(tcg_rd, tcg_rm);
118
}
119
}
120
- return;
121
+ return true;
122
}
123
124
- tcg_rm = read_cpu_reg(s, rm, sf);
125
-
126
- if (shift_amount) {
127
- shift_reg_imm(tcg_rm, tcg_rm, sf, shift_type, shift_amount);
128
- }
129
-
130
- tcg_rn = cpu_reg(s, rn);
131
-
132
- switch (opc | (invert << 2)) {
133
- case 0: /* AND */
134
- case 3: /* ANDS */
135
- tcg_gen_and_i64(tcg_rd, tcg_rn, tcg_rm);
136
- break;
137
- case 1: /* ORR */
138
- tcg_gen_or_i64(tcg_rd, tcg_rn, tcg_rm);
139
- break;
140
- case 2: /* EOR */
141
- tcg_gen_xor_i64(tcg_rd, tcg_rn, tcg_rm);
142
- break;
143
- case 4: /* BIC */
144
- case 7: /* BICS */
145
- tcg_gen_andc_i64(tcg_rd, tcg_rn, tcg_rm);
146
- break;
147
- case 5: /* ORN */
148
- tcg_gen_orc_i64(tcg_rd, tcg_rn, tcg_rm);
149
- break;
150
- case 6: /* EON */
151
- tcg_gen_eqv_i64(tcg_rd, tcg_rn, tcg_rm);
152
- break;
153
- default:
154
- assert(FALSE);
155
- break;
156
- }
157
-
158
- if (!sf) {
159
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
160
- }
161
-
162
- if (opc == 3) {
163
- gen_logic_CC(sf, tcg_rd);
164
- }
165
+ return do_logic_reg(s, a, tcg_gen_or_i64, tcg_gen_orc_i64, false);
166
}
167
168
+TRANS(AND_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, false)
169
+TRANS(ANDS_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, true)
170
+TRANS(EOR_r, do_logic_reg, a, tcg_gen_xor_i64, tcg_gen_eqv_i64, false)
171
+
172
/*
173
* Add/subtract (extended register)
174
*
175
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
176
/* Add/sub (shifted register) */
177
disas_add_sub_reg(s, insn);
178
}
179
- } else {
180
- /* Logical (shifted register) */
181
- disas_logic_reg(s, insn);
182
+ return;
183
}
184
- return;
185
+ goto do_unallocated;
186
}
187
188
switch (op2) {
189
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes ADD, SUB, ADDS, SUBS (extended register).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 9 +++++
target/arm/tcg/translate-a64.c | 65 +++++++++++-----------------------
2 files changed, 29 insertions(+), 45 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
19
20
# Add/subtract (shifted reg)
21
# Add/subtract (extended reg)
22
+
23
+&addsub_ext rd rn rm sf sa st
24
+@addsub_ext sf:1 .. ........ rm:5 st:3 sa:3 rn:5 rd:5 &addsub_ext
25
+
26
+ADD_ext . 00 01011001 ..... ... ... ..... ..... @addsub_ext
27
+SUB_ext . 10 01011001 ..... ... ... ..... ..... @addsub_ext
28
+ADDS_ext . 01 01011001 ..... ... ... ..... ..... @addsub_ext
29
+SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
30
+
31
# Add/subtract (carry)
32
# Rotate right into flags
33
# Evaluate into flags
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(AND_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, false)
39
TRANS(ANDS_r, do_logic_reg, a, tcg_gen_and_i64, tcg_gen_andc_i64, true)
40
TRANS(EOR_r, do_logic_reg, a, tcg_gen_xor_i64, tcg_gen_eqv_i64, false)
41
42
-/*
43
- * Add/subtract (extended register)
44
- *
45
- * 31|30|29|28 24|23 22|21|20 16|15 13|12 10|9 5|4 0|
46
- * +--+--+--+-----------+-----+--+-------+------+------+----+----+
47
- * |sf|op| S| 0 1 0 1 1 | opt | 1| Rm |option| imm3 | Rn | Rd |
48
- * +--+--+--+-----------+-----+--+-------+------+------+----+----+
49
- *
50
- * sf: 0 -> 32bit, 1 -> 64bit
51
- * op: 0 -> add , 1 -> sub
52
- * S: 1 -> set flags
53
- * opt: 00
54
- * option: extension type (see DecodeRegExtend)
55
- * imm3: optional shift to Rm
56
- *
57
- * Rd = Rn + LSL(extend(Rm), amount)
58
- */
59
-static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
60
+static bool do_addsub_ext(DisasContext *s, arg_addsub_ext *a,
61
+ bool sub_op, bool setflags)
62
{
63
- int rd = extract32(insn, 0, 5);
64
- int rn = extract32(insn, 5, 5);
65
- int imm3 = extract32(insn, 10, 3);
66
- int option = extract32(insn, 13, 3);
67
- int rm = extract32(insn, 16, 5);
68
- int opt = extract32(insn, 22, 2);
69
- bool setflags = extract32(insn, 29, 1);
70
- bool sub_op = extract32(insn, 30, 1);
71
- bool sf = extract32(insn, 31, 1);
72
+ TCGv_i64 tcg_rm, tcg_rn, tcg_rd, tcg_result;
73
74
- TCGv_i64 tcg_rm, tcg_rn; /* temps */
75
- TCGv_i64 tcg_rd;
76
- TCGv_i64 tcg_result;
77
-
78
- if (imm3 > 4 || opt != 0) {
79
- unallocated_encoding(s);
80
- return;
81
+ if (a->sa > 4) {
82
+ return false;
83
}
84
85
/* non-flag setting ops may use SP */
86
if (!setflags) {
87
- tcg_rd = cpu_reg_sp(s, rd);
88
+ tcg_rd = cpu_reg_sp(s, a->rd);
89
} else {
90
- tcg_rd = cpu_reg(s, rd);
91
+ tcg_rd = cpu_reg(s, a->rd);
92
}
93
- tcg_rn = read_cpu_reg_sp(s, rn, sf);
94
+ tcg_rn = read_cpu_reg_sp(s, a->rn, a->sf);
95
96
- tcg_rm = read_cpu_reg(s, rm, sf);
97
- ext_and_shift_reg(tcg_rm, tcg_rm, option, imm3);
98
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
99
+ ext_and_shift_reg(tcg_rm, tcg_rm, a->st, a->sa);
100
101
tcg_result = tcg_temp_new_i64();
102
-
103
if (!setflags) {
104
if (sub_op) {
105
tcg_gen_sub_i64(tcg_result, tcg_rn, tcg_rm);
106
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
107
}
108
} else {
109
if (sub_op) {
110
- gen_sub_CC(sf, tcg_result, tcg_rn, tcg_rm);
111
+ gen_sub_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
112
} else {
113
- gen_add_CC(sf, tcg_result, tcg_rn, tcg_rm);
114
+ gen_add_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
115
}
116
}
117
118
- if (sf) {
119
+ if (a->sf) {
120
tcg_gen_mov_i64(tcg_rd, tcg_result);
121
} else {
122
tcg_gen_ext32u_i64(tcg_rd, tcg_result);
123
}
124
+ return true;
125
}
126
127
+TRANS(ADD_ext, do_addsub_ext, a, false, false)
128
+TRANS(SUB_ext, do_addsub_ext, a, true, false)
129
+TRANS(ADDS_ext, do_addsub_ext, a, false, true)
130
+TRANS(SUBS_ext, do_addsub_ext, a, true, true)
131
+
132
/*
133
* Add/subtract (shifted register)
134
*
135
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
136
if (!op1) {
137
if (op2 & 8) {
138
if (op2 & 1) {
139
- /* Add/sub (extended register) */
140
- disas_add_sub_ext_reg(s, insn);
141
+ goto do_unallocated;
142
} else {
143
/* Add/sub (shifted register) */
144
disas_add_sub_reg(s, insn);
145
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes ADD, SUB, ADDS, SUBS (shifted register).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 9 +++++
target/arm/tcg/translate-a64.c | 64 ++++++++++------------------------
2 files changed, 27 insertions(+), 46 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ EOR_r . 10 01010 .. . ..... ...... ..... ..... @logic_shift
19
ANDS_r . 11 01010 .. . ..... ...... ..... ..... @logic_shift
20
21
# Add/subtract (shifted reg)
22
+
23
+&addsub_shift rd rn rm sf sa st
24
+@addsub_shift sf:1 .. ..... st:2 . rm:5 sa:6 rn:5 rd:5 &addsub_shift
25
+
26
+ADD_r . 00 01011 .. 0 ..... ...... ..... ..... @addsub_shift
27
+SUB_r . 10 01011 .. 0 ..... ...... ..... ..... @addsub_shift
28
+ADDS_r . 01 01011 .. 0 ..... ...... ..... ..... @addsub_shift
29
+SUBS_r . 11 01011 .. 0 ..... ...... ..... ..... @addsub_shift
30
+
31
# Add/subtract (extended reg)
32
33
&addsub_ext rd rn rm sf sa st
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(SUB_ext, do_addsub_ext, a, true, false)
39
TRANS(ADDS_ext, do_addsub_ext, a, false, true)
40
TRANS(SUBS_ext, do_addsub_ext, a, true, true)
41
42
-/*
43
- * Add/subtract (shifted register)
44
- *
45
- * 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
46
- * +--+--+--+-----------+-----+--+-------+---------+------+------+
47
- * |sf|op| S| 0 1 0 1 1 |shift| 0| Rm | imm6 | Rn | Rd |
48
- * +--+--+--+-----------+-----+--+-------+---------+------+------+
49
- *
50
- * sf: 0 -> 32bit, 1 -> 64bit
51
- * op: 0 -> add , 1 -> sub
52
- * S: 1 -> set flags
53
- * shift: 00 -> LSL, 01 -> LSR, 10 -> ASR, 11 -> RESERVED
54
- * imm6: Shift amount to apply to Rm before the add/sub
55
- */
56
-static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
57
+static bool do_addsub_reg(DisasContext *s, arg_addsub_shift *a,
58
+ bool sub_op, bool setflags)
59
{
60
- int rd = extract32(insn, 0, 5);
61
- int rn = extract32(insn, 5, 5);
62
- int imm6 = extract32(insn, 10, 6);
63
- int rm = extract32(insn, 16, 5);
64
- int shift_type = extract32(insn, 22, 2);
65
- bool setflags = extract32(insn, 29, 1);
66
- bool sub_op = extract32(insn, 30, 1);
67
- bool sf = extract32(insn, 31, 1);
68
+ TCGv_i64 tcg_rd, tcg_rn, tcg_rm, tcg_result;
69
70
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
71
- TCGv_i64 tcg_rn, tcg_rm;
72
- TCGv_i64 tcg_result;
73
-
74
- if ((shift_type == 3) || (!sf && (imm6 > 31))) {
75
- unallocated_encoding(s);
76
- return;
77
+ if (a->st == 3 || (!a->sf && (a->sa & 32))) {
78
+ return false;
79
}
80
81
- tcg_rn = read_cpu_reg(s, rn, sf);
82
- tcg_rm = read_cpu_reg(s, rm, sf);
83
+ tcg_rd = cpu_reg(s, a->rd);
84
+ tcg_rn = read_cpu_reg(s, a->rn, a->sf);
85
+ tcg_rm = read_cpu_reg(s, a->rm, a->sf);
86
87
- shift_reg_imm(tcg_rm, tcg_rm, sf, shift_type, imm6);
88
+ shift_reg_imm(tcg_rm, tcg_rm, a->sf, a->st, a->sa);
89
90
tcg_result = tcg_temp_new_i64();
91
-
92
if (!setflags) {
93
if (sub_op) {
94
tcg_gen_sub_i64(tcg_result, tcg_rn, tcg_rm);
95
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
96
}
97
} else {
98
if (sub_op) {
99
- gen_sub_CC(sf, tcg_result, tcg_rn, tcg_rm);
100
+ gen_sub_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
101
} else {
102
- gen_add_CC(sf, tcg_result, tcg_rn, tcg_rm);
103
+ gen_add_CC(a->sf, tcg_result, tcg_rn, tcg_rm);
104
}
105
}
106
107
- if (sf) {
108
+ if (a->sf) {
109
tcg_gen_mov_i64(tcg_rd, tcg_result);
110
} else {
111
tcg_gen_ext32u_i64(tcg_rd, tcg_result);
112
}
113
+ return true;
114
}
115
116
+TRANS(ADD_r, do_addsub_reg, a, false, false)
117
+TRANS(SUB_r, do_addsub_reg, a, true, false)
118
+TRANS(ADDS_r, do_addsub_reg, a, false, true)
119
+TRANS(SUBS_r, do_addsub_reg, a, true, true)
120
+
121
/* Data-processing (3 source)
122
*
123
* 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
124
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
125
int op3 = extract32(insn, 10, 6);
126
127
if (!op1) {
128
- if (op2 & 8) {
129
- if (op2 & 1) {
130
- goto do_unallocated;
131
- } else {
132
- /* Add/sub (shifted register) */
133
- disas_add_sub_reg(s, insn);
134
- }
135
- return;
136
- }
137
goto do_unallocated;
138
}
139
140
--
141
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes MADD, MSUB, SMADDL, SMSUBL, UMADDL, UMSUBL, SMULH, UMULH.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 16 +++++
target/arm/tcg/translate-a64.c | 119 ++++++++++++---------------------
2 files changed, 59 insertions(+), 76 deletions(-)

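A note for readers: do_muladd() below takes a MemOp describing how to
extend the source registers, which is what lets the widening multiplies
share the helper with MADD/MSUB. As a rough sketch of the semantics of
the extension call used in the patch:

    /* MO_SL sign-extends the low 32 bits, MO_UL zero-extends them,
     * MO_64 is a plain move. */
    tcg_gen_ext_i64(tcg_op1, cpu_reg(s, a->rn), MO_SL); /* ~ tcg_gen_ext32s_i64() */

so SMADDL/SMSUBL pass MO_SL, UMADDL/UMSUBL pass MO_UL, and the MADD/MSUB
forms pass MO_64 and differ only in whether the final result is narrowed
back to 32 bits.
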
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
19
# Conditional select
20
# Data Processing (3-source)
21
22
+&rrrr rd rn rm ra
23
+@rrrr . .. ........ rm:5 . ra:5 rn:5 rd:5 &rrrr
24
+
25
+MADD_w 0 00 11011000 ..... 0 ..... ..... ..... @rrrr
26
+MSUB_w 0 00 11011000 ..... 1 ..... ..... ..... @rrrr
27
+MADD_x 1 00 11011000 ..... 0 ..... ..... ..... @rrrr
28
+MSUB_x 1 00 11011000 ..... 1 ..... ..... ..... @rrrr
29
+
30
+SMADDL 1 00 11011001 ..... 0 ..... ..... ..... @rrrr
31
+SMSUBL 1 00 11011001 ..... 1 ..... ..... ..... @rrrr
32
+UMADDL 1 00 11011101 ..... 0 ..... ..... ..... @rrrr
33
+UMSUBL 1 00 11011101 ..... 1 ..... ..... ..... @rrrr
34
+
35
+SMULH 1 00 11011010 ..... 0 11111 ..... ..... @rrr
36
+UMULH 1 00 11011110 ..... 0 11111 ..... ..... @rrr
37
+
38
### Cryptographic AES
39
40
AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
41
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/tcg/translate-a64.c
44
+++ b/target/arm/tcg/translate-a64.c
45
@@ -XXX,XX +XXX,XX @@ TRANS(SUB_r, do_addsub_reg, a, true, false)
46
TRANS(ADDS_r, do_addsub_reg, a, false, true)
47
TRANS(SUBS_r, do_addsub_reg, a, true, true)
48
49
-/* Data-processing (3 source)
50
- *
51
- * 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
52
- * +--+------+-----------+------+------+----+------+------+------+
53
- * |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
54
- * +--+------+-----------+------+------+----+------+------+------+
55
- */
56
-static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
57
+static bool do_mulh(DisasContext *s, arg_rrr *a,
58
+ void (*fn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64))
59
{
60
- int rd = extract32(insn, 0, 5);
61
- int rn = extract32(insn, 5, 5);
62
- int ra = extract32(insn, 10, 5);
63
- int rm = extract32(insn, 16, 5);
64
- int op_id = (extract32(insn, 29, 3) << 4) |
65
- (extract32(insn, 21, 3) << 1) |
66
- extract32(insn, 15, 1);
67
- bool sf = extract32(insn, 31, 1);
68
- bool is_sub = extract32(op_id, 0, 1);
69
- bool is_high = extract32(op_id, 2, 1);
70
- bool is_signed = false;
71
- TCGv_i64 tcg_op1;
72
- TCGv_i64 tcg_op2;
73
- TCGv_i64 tcg_tmp;
74
+ TCGv_i64 discard = tcg_temp_new_i64();
75
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
76
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
77
+ TCGv_i64 tcg_rm = cpu_reg(s, a->rm);
78
79
- /* Note that op_id is sf:op54:op31:o0 so it includes the 32/64 size flag */
80
- switch (op_id) {
81
- case 0x42: /* SMADDL */
82
- case 0x43: /* SMSUBL */
83
- case 0x44: /* SMULH */
84
- is_signed = true;
85
- break;
86
- case 0x0: /* MADD (32bit) */
87
- case 0x1: /* MSUB (32bit) */
88
- case 0x40: /* MADD (64bit) */
89
- case 0x41: /* MSUB (64bit) */
90
- case 0x4a: /* UMADDL */
91
- case 0x4b: /* UMSUBL */
92
- case 0x4c: /* UMULH */
93
- break;
94
- default:
95
- unallocated_encoding(s);
96
- return;
97
- }
98
+ fn(discard, tcg_rd, tcg_rn, tcg_rm);
99
+ return true;
100
+}
101
102
- if (is_high) {
103
- TCGv_i64 low_bits = tcg_temp_new_i64(); /* low bits discarded */
104
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
105
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
106
- TCGv_i64 tcg_rm = cpu_reg(s, rm);
107
+TRANS(SMULH, do_mulh, a, tcg_gen_muls2_i64)
108
+TRANS(UMULH, do_mulh, a, tcg_gen_mulu2_i64)
109
110
- if (is_signed) {
111
- tcg_gen_muls2_i64(low_bits, tcg_rd, tcg_rn, tcg_rm);
112
- } else {
113
- tcg_gen_mulu2_i64(low_bits, tcg_rd, tcg_rn, tcg_rm);
114
- }
115
- return;
116
- }
117
+static bool do_muladd(DisasContext *s, arg_rrrr *a,
118
+ bool sf, bool is_sub, MemOp mop)
119
+{
120
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
121
+ TCGv_i64 tcg_op1, tcg_op2;
122
123
- tcg_op1 = tcg_temp_new_i64();
124
- tcg_op2 = tcg_temp_new_i64();
125
- tcg_tmp = tcg_temp_new_i64();
126
-
127
- if (op_id < 0x42) {
128
- tcg_gen_mov_i64(tcg_op1, cpu_reg(s, rn));
129
- tcg_gen_mov_i64(tcg_op2, cpu_reg(s, rm));
130
+ if (mop == MO_64) {
131
+ tcg_op1 = cpu_reg(s, a->rn);
132
+ tcg_op2 = cpu_reg(s, a->rm);
133
} else {
134
- if (is_signed) {
135
- tcg_gen_ext32s_i64(tcg_op1, cpu_reg(s, rn));
136
- tcg_gen_ext32s_i64(tcg_op2, cpu_reg(s, rm));
137
- } else {
138
- tcg_gen_ext32u_i64(tcg_op1, cpu_reg(s, rn));
139
- tcg_gen_ext32u_i64(tcg_op2, cpu_reg(s, rm));
140
- }
141
+ tcg_op1 = tcg_temp_new_i64();
142
+ tcg_op2 = tcg_temp_new_i64();
143
+ tcg_gen_ext_i64(tcg_op1, cpu_reg(s, a->rn), mop);
144
+ tcg_gen_ext_i64(tcg_op2, cpu_reg(s, a->rm), mop);
145
}
146
147
- if (ra == 31 && !is_sub) {
148
+ if (a->ra == 31 && !is_sub) {
149
/* Special-case MADD with rA == XZR; it is the standard MUL alias */
150
- tcg_gen_mul_i64(cpu_reg(s, rd), tcg_op1, tcg_op2);
151
+ tcg_gen_mul_i64(tcg_rd, tcg_op1, tcg_op2);
152
} else {
153
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
154
+ TCGv_i64 tcg_ra = cpu_reg(s, a->ra);
155
+
156
tcg_gen_mul_i64(tcg_tmp, tcg_op1, tcg_op2);
157
if (is_sub) {
158
- tcg_gen_sub_i64(cpu_reg(s, rd), cpu_reg(s, ra), tcg_tmp);
159
+ tcg_gen_sub_i64(tcg_rd, tcg_ra, tcg_tmp);
160
} else {
161
- tcg_gen_add_i64(cpu_reg(s, rd), cpu_reg(s, ra), tcg_tmp);
162
+ tcg_gen_add_i64(tcg_rd, tcg_ra, tcg_tmp);
163
}
164
}
165
166
if (!sf) {
167
- tcg_gen_ext32u_i64(cpu_reg(s, rd), cpu_reg(s, rd));
168
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
169
}
170
+ return true;
171
}
172
173
+TRANS(MADD_w, do_muladd, a, false, false, MO_64)
174
+TRANS(MSUB_w, do_muladd, a, false, true, MO_64)
175
+TRANS(MADD_x, do_muladd, a, true, false, MO_64)
176
+TRANS(MSUB_x, do_muladd, a, true, true, MO_64)
177
+
178
+TRANS(SMADDL, do_muladd, a, true, false, MO_SL)
179
+TRANS(SMSUBL, do_muladd, a, true, true, MO_SL)
180
+TRANS(UMADDL, do_muladd, a, true, false, MO_UL)
181
+TRANS(UMSUBL, do_muladd, a, true, true, MO_UL)
182
+
183
/* Add/subtract (with carry)
184
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
185
* +--+--+--+------------------------+------+-------------+------+-----+
186
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
187
disas_cond_select(s, insn);
188
break;
189
190
- case 0x8 ... 0xf: /* (3 source) */
191
- disas_data_proc_3src(s, insn);
192
- break;
193
-
194
default:
195
do_unallocated:
196
case 0x6: /* Data-processing */
197
+ case 0x8 ... 0xf: /* (3 source) */
198
unallocated_encoding(s);
199
break;
200
}
201
--
202
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes ADC, SBC, ADCS, SBCS.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 6 +++++
target/arm/tcg/translate-a64.c | 43 +++++++++++++---------------------
2 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ ADDS_ext . 01 01011001 ..... ... ... ..... ..... @addsub_ext
19
SUBS_ext . 11 01011001 ..... ... ... ..... ..... @addsub_ext
20
21
# Add/subtract (carry)
22
+
23
+ADC . 00 11010000 ..... 000000 ..... ..... @rrr_sf
24
+ADCS . 01 11010000 ..... 000000 ..... ..... @rrr_sf
25
+SBC . 10 11010000 ..... 000000 ..... ..... @rrr_sf
26
+SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
27
+
28
# Rotate right into flags
29
# Evaluate into flags
30
# Conditional compare (regster)
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ TRANS(SMSUBL, do_muladd, a, true, true, MO_SL)
36
TRANS(UMADDL, do_muladd, a, true, false, MO_UL)
37
TRANS(UMSUBL, do_muladd, a, true, true, MO_UL)
38
39
-/* Add/subtract (with carry)
40
- * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
41
- * +--+--+--+------------------------+------+-------------+------+-----+
42
- * |sf|op| S| 1 1 0 1 0 0 0 0 | rm | 0 0 0 0 0 0 | Rn | Rd |
43
- * +--+--+--+------------------------+------+-------------+------+-----+
44
- */
45
-
46
-static void disas_adc_sbc(DisasContext *s, uint32_t insn)
47
+static bool do_adc_sbc(DisasContext *s, arg_rrr_sf *a,
48
+ bool is_sub, bool setflags)
49
{
50
- unsigned int sf, op, setflags, rm, rn, rd;
51
TCGv_i64 tcg_y, tcg_rn, tcg_rd;
52
53
- sf = extract32(insn, 31, 1);
54
- op = extract32(insn, 30, 1);
55
- setflags = extract32(insn, 29, 1);
56
- rm = extract32(insn, 16, 5);
57
- rn = extract32(insn, 5, 5);
58
- rd = extract32(insn, 0, 5);
59
+ tcg_rd = cpu_reg(s, a->rd);
60
+ tcg_rn = cpu_reg(s, a->rn);
61
62
- tcg_rd = cpu_reg(s, rd);
63
- tcg_rn = cpu_reg(s, rn);
64
-
65
- if (op) {
66
+ if (is_sub) {
67
tcg_y = tcg_temp_new_i64();
68
- tcg_gen_not_i64(tcg_y, cpu_reg(s, rm));
69
+ tcg_gen_not_i64(tcg_y, cpu_reg(s, a->rm));
70
} else {
71
- tcg_y = cpu_reg(s, rm);
72
+ tcg_y = cpu_reg(s, a->rm);
73
}
74
75
if (setflags) {
76
- gen_adc_CC(sf, tcg_rd, tcg_rn, tcg_y);
77
+ gen_adc_CC(a->sf, tcg_rd, tcg_rn, tcg_y);
78
} else {
79
- gen_adc(sf, tcg_rd, tcg_rn, tcg_y);
80
+ gen_adc(a->sf, tcg_rd, tcg_rn, tcg_y);
81
}
82
+ return true;
83
}
84
85
+TRANS(ADC, do_adc_sbc, a, false, false)
86
+TRANS(SBC, do_adc_sbc, a, true, false)
87
+TRANS(ADCS, do_adc_sbc, a, false, true)
88
+TRANS(SBCS, do_adc_sbc, a, true, true)
89
+
90
/*
91
* Rotate right into flags
92
* 31 30 29 21 15 10 5 4 0
93
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
94
switch (op2) {
95
case 0x0:
96
switch (op3) {
97
- case 0x00: /* Add/subtract (with carry) */
98
- disas_adc_sbc(s, insn);
99
- break;
100
-
101
case 0x01: /* Rotate right into flags */
102
case 0x21:
103
disas_rotate_right_into_flags(s, insn);
104
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
105
break;
106
107
default:
108
+ case 0x00: /* Add/subtract (with carry) */
109
goto do_unallocated;
110
}
111
break;
112
--
113
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 3 +++
target/arm/tcg/translate-a64.c | 32 +++++++++-----------------------
2 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SBC . 10 11010000 ..... 000000 ..... ..... @rrr_sf
17
SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
18
19
# Rotate right into flags
20
+
21
+RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
22
+
23
# Evaluate into flags
24
# Conditional compare (regster)
25
# Conditional compare (immediate)
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ TRANS(SBC, do_adc_sbc, a, true, false)
31
TRANS(ADCS, do_adc_sbc, a, false, true)
32
TRANS(SBCS, do_adc_sbc, a, true, true)
33
34
-/*
35
- * Rotate right into flags
36
- * 31 30 29 21 15 10 5 4 0
37
- * +--+--+--+-----------------+--------+-----------+------+--+------+
38
- * |sf|op| S| 1 1 0 1 0 0 0 0 | imm6 | 0 0 0 0 1 | Rn |o2| mask |
39
- * +--+--+--+-----------------+--------+-----------+------+--+------+
40
- */
41
-static void disas_rotate_right_into_flags(DisasContext *s, uint32_t insn)
42
+static bool trans_RMIF(DisasContext *s, arg_RMIF *a)
43
{
44
- int mask = extract32(insn, 0, 4);
45
- int o2 = extract32(insn, 4, 1);
46
- int rn = extract32(insn, 5, 5);
47
- int imm6 = extract32(insn, 15, 6);
48
- int sf_op_s = extract32(insn, 29, 3);
49
+ int mask = a->mask;
50
TCGv_i64 tcg_rn;
51
TCGv_i32 nzcv;
52
53
- if (sf_op_s != 5 || o2 != 0 || !dc_isar_feature(aa64_condm_4, s)) {
54
- unallocated_encoding(s);
55
- return;
56
+ if (!dc_isar_feature(aa64_condm_4, s)) {
57
+ return false;
58
}
59
60
- tcg_rn = read_cpu_reg(s, rn, 1);
61
- tcg_gen_rotri_i64(tcg_rn, tcg_rn, imm6);
62
+ tcg_rn = read_cpu_reg(s, a->rn, 1);
63
+ tcg_gen_rotri_i64(tcg_rn, tcg_rn, a->imm);
64
65
nzcv = tcg_temp_new_i32();
66
tcg_gen_extrl_i64_i32(nzcv, tcg_rn);
67
@@ -XXX,XX +XXX,XX @@ static void disas_rotate_right_into_flags(DisasContext *s, uint32_t insn)
68
if (mask & 1) { /* V */
69
tcg_gen_shli_i32(cpu_VF, nzcv, 31 - 0);
70
}
71
+ return true;
72
}
73
74
/*
75
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
76
switch (op2) {
77
case 0x0:
78
switch (op3) {
79
- case 0x01: /* Rotate right into flags */
80
- case 0x21:
81
- disas_rotate_right_into_flags(s, insn);
82
- break;
83
-
84
case 0x02: /* Evaluate into flags */
85
case 0x12:
86
case 0x22:
87
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
88
89
default:
90
case 0x00: /* Add/subtract (with carry) */
91
+ case 0x01: /* Rotate right into flags */
92
+ case 0x21:
93
goto do_unallocated;
94
}
95
break;
96
--
97
2.34.1
98
99

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 4 +++
target/arm/tcg/translate-a64.c | 48 +++++-----------------------------
2 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SBCS . 11 11010000 ..... 000000 ..... ..... @rrr_sf
17
RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
18
19
# Evaluate into flags
20
+
21
+SETF8 0 01 11010000 00000 000010 rn:5 01101
22
+SETF16 0 01 11010000 00000 010010 rn:5 01101
23
+
24
# Conditional compare (regster)
25
# Conditional compare (immediate)
26
# Conditional select
27
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/tcg/translate-a64.c
30
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ static bool trans_RMIF(DisasContext *s, arg_RMIF *a)
32
return true;
33
}
34
35
-/*
36
- * Evaluate into flags
37
- * 31 30 29 21 15 14 10 5 4 0
38
- * +--+--+--+-----------------+---------+----+---------+------+--+------+
39
- * |sf|op| S| 1 1 0 1 0 0 0 0 | opcode2 | sz | 0 0 1 0 | Rn |o3| mask |
40
- * +--+--+--+-----------------+---------+----+---------+------+--+------+
41
- */
42
-static void disas_evaluate_into_flags(DisasContext *s, uint32_t insn)
43
+static bool do_setf(DisasContext *s, int rn, int shift)
44
{
45
- int o3_mask = extract32(insn, 0, 5);
46
- int rn = extract32(insn, 5, 5);
47
- int o2 = extract32(insn, 15, 6);
48
- int sz = extract32(insn, 14, 1);
49
- int sf_op_s = extract32(insn, 29, 3);
50
- TCGv_i32 tmp;
51
- int shift;
52
+ TCGv_i32 tmp = tcg_temp_new_i32();
53
54
- if (sf_op_s != 1 || o2 != 0 || o3_mask != 0xd ||
55
- !dc_isar_feature(aa64_condm_4, s)) {
56
- unallocated_encoding(s);
57
- return;
58
- }
59
- shift = sz ? 16 : 24; /* SETF16 or SETF8 */
60
-
61
- tmp = tcg_temp_new_i32();
62
tcg_gen_extrl_i64_i32(tmp, cpu_reg(s, rn));
63
tcg_gen_shli_i32(cpu_NF, tmp, shift);
64
tcg_gen_shli_i32(cpu_VF, tmp, shift - 1);
65
tcg_gen_mov_i32(cpu_ZF, cpu_NF);
66
tcg_gen_xor_i32(cpu_VF, cpu_VF, cpu_NF);
67
+ return true;
68
}
69
70
+TRANS_FEAT(SETF8, aa64_condm_4, do_setf, a->rn, 24)
71
+TRANS_FEAT(SETF16, aa64_condm_4, do_setf, a->rn, 16)
72
+
73
/* Conditional compare (immediate / register)
74
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
75
* +--+--+--+------------------------+--------+------+----+--+------+--+-----+
76
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
77
{
78
int op1 = extract32(insn, 28, 1);
79
int op2 = extract32(insn, 21, 4);
80
- int op3 = extract32(insn, 10, 6);
81
82
if (!op1) {
83
goto do_unallocated;
84
}
85
86
switch (op2) {
87
- case 0x0:
88
- switch (op3) {
89
- case 0x02: /* Evaluate into flags */
90
- case 0x12:
91
- case 0x22:
92
- case 0x32:
93
- disas_evaluate_into_flags(s, insn);
94
- break;
95
-
96
- default:
97
- case 0x00: /* Add/subtract (with carry) */
98
- case 0x01: /* Rotate right into flags */
99
- case 0x21:
100
- goto do_unallocated;
101
- }
102
- break;
103
-
104
case 0x2: /* Conditional compare */
105
disas_cc(s, insn); /* both imm and reg forms */
106
break;
107
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
108
109
default:
110
do_unallocated:
111
+ case 0x0:
112
case 0x6: /* Data-processing */
113
case 0x8 ... 0xf: /* (3 source) */
114
unallocated_encoding(s);
115
--
116
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 6 ++--
target/arm/tcg/translate-a64.c | 66 +++++++++++-----------------------
2 files changed, 25 insertions(+), 47 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ RMIF 1 01 11010000 imm:6 00001 rn:5 0 mask:4
17
SETF8 0 01 11010000 00000 000010 rn:5 01101
18
SETF16 0 01 11010000 00000 010010 rn:5 01101
19
20
-# Conditional compare (regster)
21
-# Conditional compare (immediate)
22
+# Conditional compare
23
+
24
+CCMP sf:1 op:1 1 11010010 y:5 cond:4 imm:1 0 rn:5 0 nzcv:4
25
+
26
# Conditional select
27
# Data Processing (3-source)
28
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool do_setf(DisasContext *s, int rn, int shift)
34
TRANS_FEAT(SETF8, aa64_condm_4, do_setf, a->rn, 24)
35
TRANS_FEAT(SETF16, aa64_condm_4, do_setf, a->rn, 16)
36
37
-/* Conditional compare (immediate / register)
38
- * 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
39
- * +--+--+--+------------------------+--------+------+----+--+------+--+-----+
40
- * |sf|op| S| 1 1 0 1 0 0 1 0 |imm5/rm | cond |i/r |o2| Rn |o3|nzcv |
41
- * +--+--+--+------------------------+--------+------+----+--+------+--+-----+
42
- * [1] y [0] [0]
43
- */
44
-static void disas_cc(DisasContext *s, uint32_t insn)
45
+/* CCMP, CCMN */
46
+static bool trans_CCMP(DisasContext *s, arg_CCMP *a)
47
{
48
- unsigned int sf, op, y, cond, rn, nzcv, is_imm;
49
- TCGv_i32 tcg_t0, tcg_t1, tcg_t2;
50
- TCGv_i64 tcg_tmp, tcg_y, tcg_rn;
51
+ TCGv_i32 tcg_t0 = tcg_temp_new_i32();
52
+ TCGv_i32 tcg_t1 = tcg_temp_new_i32();
53
+ TCGv_i32 tcg_t2 = tcg_temp_new_i32();
54
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
55
+ TCGv_i64 tcg_rn, tcg_y;
56
DisasCompare c;
57
-
58
- if (!extract32(insn, 29, 1)) {
59
- unallocated_encoding(s);
60
- return;
61
- }
62
- if (insn & (1 << 10 | 1 << 4)) {
63
- unallocated_encoding(s);
64
- return;
65
- }
66
- sf = extract32(insn, 31, 1);
67
- op = extract32(insn, 30, 1);
68
- is_imm = extract32(insn, 11, 1);
69
- y = extract32(insn, 16, 5); /* y = rm (reg) or imm5 (imm) */
70
- cond = extract32(insn, 12, 4);
71
- rn = extract32(insn, 5, 5);
72
- nzcv = extract32(insn, 0, 4);
73
+ unsigned nzcv;
74
75
/* Set T0 = !COND. */
76
- tcg_t0 = tcg_temp_new_i32();
77
- arm_test_cc(&c, cond);
78
+ arm_test_cc(&c, a->cond);
79
tcg_gen_setcondi_i32(tcg_invert_cond(c.cond), tcg_t0, c.value, 0);
80
81
/* Load the arguments for the new comparison. */
82
- if (is_imm) {
83
- tcg_y = tcg_temp_new_i64();
84
- tcg_gen_movi_i64(tcg_y, y);
85
+ if (a->imm) {
86
+ tcg_y = tcg_constant_i64(a->y);
87
} else {
88
- tcg_y = cpu_reg(s, y);
89
+ tcg_y = cpu_reg(s, a->y);
90
}
91
- tcg_rn = cpu_reg(s, rn);
92
+ tcg_rn = cpu_reg(s, a->rn);
93
94
/* Set the flags for the new comparison. */
95
- tcg_tmp = tcg_temp_new_i64();
96
- if (op) {
97
- gen_sub_CC(sf, tcg_tmp, tcg_rn, tcg_y);
98
+ if (a->op) {
99
+ gen_sub_CC(a->sf, tcg_tmp, tcg_rn, tcg_y);
100
} else {
101
- gen_add_CC(sf, tcg_tmp, tcg_rn, tcg_y);
102
+ gen_add_CC(a->sf, tcg_tmp, tcg_rn, tcg_y);
103
}
104
105
- /* If COND was false, force the flags to #nzcv. Compute two masks
106
+ /*
107
+ * If COND was false, force the flags to #nzcv. Compute two masks
108
* to help with this: T1 = (COND ? 0 : -1), T2 = (COND ? -1 : 0).
109
* For tcg hosts that support ANDC, we can make do with just T1.
110
* In either case, allow the tcg optimizer to delete any unused mask.
111
*/
112
- tcg_t1 = tcg_temp_new_i32();
113
- tcg_t2 = tcg_temp_new_i32();
114
tcg_gen_neg_i32(tcg_t1, tcg_t0);
115
tcg_gen_subi_i32(tcg_t2, tcg_t0, 1);
116
117
+ nzcv = a->nzcv;
118
if (nzcv & 8) { /* N */
119
tcg_gen_or_i32(cpu_NF, cpu_NF, tcg_t1);
120
} else {
121
@@ -XXX,XX +XXX,XX @@ static void disas_cc(DisasContext *s, uint32_t insn)
122
tcg_gen_and_i32(cpu_VF, cpu_VF, tcg_t2);
123
}
124
}
125
+ return true;
126
}
127
128
/* Conditional select
129
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
130
}
131
132
switch (op2) {
133
- case 0x2: /* Conditional compare */
134
- disas_cc(s, insn); /* both imm and reg forms */
135
- break;
136
-
137
case 0x4: /* Conditional select */
138
disas_cond_select(s, insn);
139
break;
140
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
141
default:
142
do_unallocated:
143
case 0x0:
144
+ case 0x2: /* Conditional compare */
145
case 0x6: /* Data-processing */
146
case 0x8 ... 0xf: /* (3 source) */
147
unallocated_encoding(s);
148
--
149
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes CSEL, CSINC, CSINV, CSNEG. Remove disas_data_proc_reg,
as these were the last insns decoded by that function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 3 ++
target/arm/tcg/translate-a64.c | 84 ++++++----------------------------
2 files changed, 17 insertions(+), 70 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ SETF16 0 01 11010000 00000 010010 rn:5 01101
20
CCMP sf:1 op:1 1 11010010 y:5 cond:4 imm:1 0 rn:5 0 nzcv:4
21
22
# Conditional select
23
+
24
+CSEL sf:1 else_inv:1 011010100 rm:5 cond:4 0 else_inc:1 rn:5 rd:5
25
+
26
# Data Processing (3-source)
27
28
&rrrr rd rn rm ra
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool trans_CCMP(DisasContext *s, arg_CCMP *a)
34
return true;
35
}
36
37
-/* Conditional select
38
- * 31 30 29 28 21 20 16 15 12 11 10 9 5 4 0
39
- * +----+----+---+-----------------+------+------+-----+------+------+
40
- * | sf | op | S | 1 1 0 1 0 1 0 0 | Rm | cond | op2 | Rn | Rd |
41
- * +----+----+---+-----------------+------+------+-----+------+------+
42
- */
43
-static void disas_cond_select(DisasContext *s, uint32_t insn)
44
+static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
45
{
46
- unsigned int sf, else_inv, rm, cond, else_inc, rn, rd;
47
- TCGv_i64 tcg_rd, zero;
48
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
49
+ TCGv_i64 zero = tcg_constant_i64(0);
50
DisasCompare64 c;
51
52
- if (extract32(insn, 29, 1) || extract32(insn, 11, 1)) {
53
- /* S == 1 or op2<1> == 1 */
54
- unallocated_encoding(s);
55
- return;
56
- }
57
- sf = extract32(insn, 31, 1);
58
- else_inv = extract32(insn, 30, 1);
59
- rm = extract32(insn, 16, 5);
60
- cond = extract32(insn, 12, 4);
61
- else_inc = extract32(insn, 10, 1);
62
- rn = extract32(insn, 5, 5);
63
- rd = extract32(insn, 0, 5);
64
+ a64_test_cc(&c, a->cond);
65
66
- tcg_rd = cpu_reg(s, rd);
67
-
68
- a64_test_cc(&c, cond);
69
- zero = tcg_constant_i64(0);
70
-
71
- if (rn == 31 && rm == 31 && (else_inc ^ else_inv)) {
72
+ if (a->rn == 31 && a->rm == 31 && (a->else_inc ^ a->else_inv)) {
73
/* CSET & CSETM. */
74
- if (else_inv) {
75
+ if (a->else_inv) {
76
tcg_gen_negsetcond_i64(tcg_invert_cond(c.cond),
77
tcg_rd, c.value, zero);
78
} else {
79
@@ -XXX,XX +XXX,XX @@ static void disas_cond_select(DisasContext *s, uint32_t insn)
80
tcg_rd, c.value, zero);
81
}
82
} else {
83
- TCGv_i64 t_true = cpu_reg(s, rn);
84
- TCGv_i64 t_false = read_cpu_reg(s, rm, 1);
85
- if (else_inv && else_inc) {
86
+ TCGv_i64 t_true = cpu_reg(s, a->rn);
87
+ TCGv_i64 t_false = read_cpu_reg(s, a->rm, 1);
88
+
89
+ if (a->else_inv && a->else_inc) {
90
tcg_gen_neg_i64(t_false, t_false);
91
- } else if (else_inv) {
92
+ } else if (a->else_inv) {
93
tcg_gen_not_i64(t_false, t_false);
94
- } else if (else_inc) {
95
+ } else if (a->else_inc) {
96
tcg_gen_addi_i64(t_false, t_false, 1);
97
}
98
tcg_gen_movcond_i64(c.cond, tcg_rd, c.value, zero, t_true, t_false);
99
}
100
101
- if (!sf) {
102
+ if (!a->sf) {
103
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
104
}
105
-}
106
-
107
-/*
108
- * Data processing - register
109
- * 31 30 29 28 25 21 20 16 10 0
110
- * +--+---+--+---+-------+-----+-------+-------+---------+
111
- * | |op0| |op1| 1 0 1 | op2 | | op3 | |
112
- * +--+---+--+---+-------+-----+-------+-------+---------+
113
- */
114
-static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
115
-{
116
- int op1 = extract32(insn, 28, 1);
117
- int op2 = extract32(insn, 21, 4);
118
-
119
- if (!op1) {
120
- goto do_unallocated;
121
- }
122
-
123
- switch (op2) {
124
- case 0x4: /* Conditional select */
125
- disas_cond_select(s, insn);
126
- break;
127
-
128
- default:
129
- do_unallocated:
130
- case 0x0:
131
- case 0x2: /* Conditional compare */
132
- case 0x6: /* Data-processing */
133
- case 0x8 ... 0xf: /* (3 source) */
134
- unallocated_encoding(s);
135
- break;
136
- }
137
+ return true;
138
}
139
140
static void handle_fp_compare(DisasContext *s, int size,
141
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
142
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
143
{
144
switch (extract32(insn, 25, 4)) {
145
- case 0x5:
146
- case 0xd: /* Data processing - register */
147
- disas_data_proc_reg(s, insn);
148
- break;
149
case 0x7:
150
case 0xf: /* Data processing - SIMD and floating point */
151
disas_data_proc_simd_fp(s, insn);
152
--
153
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Provide a simple way to check for float64, float32,
and float16 support, as well as the fpu enabled.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 62 ++++++++++++++++++----------------
1 file changed, 32 insertions(+), 30 deletions(-)

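A note for readers: the helper folds the FEAT_FP16 check and
fp_access_check() into one call with a three-way result, so the callers
converted here (trans_FCSEL, trans_FMOVI_s) collapse to the shape
sketched below; the sketch uses placeholder names and is not itself part
of the patch:

    int check = fp_access_check_scalar_hsd(s, a->esz);

    if (check <= 0) {
        /* < 0: unallocated encoding; 0: fp disabled, exception raised */
        return check == 0;
    }
    /* ... emit the TCG ops for the instruction ... */
    return true;
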
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
19
return true;
20
}
21
22
+/*
23
+ * Return <0 for non-supported element sizes, with MO_16 controlled by
24
+ * FEAT_FP16; return 0 for fp disabled; otherwise return >0 for success.
25
+ */
26
+static int fp_access_check_scalar_hsd(DisasContext *s, MemOp esz)
27
+{
28
+ switch (esz) {
29
+ case MO_64:
30
+ case MO_32:
31
+ break;
32
+ case MO_16:
33
+ if (!dc_isar_feature(aa64_fp16, s)) {
34
+ return -1;
35
+ }
36
+ break;
37
+ default:
38
+ return -1;
39
+ }
40
+ return fp_access_check(s);
41
+}
42
+
43
/*
44
* Check that SVE access is enabled. If it is, return true.
45
* If not, emit code to generate an appropriate exception and return false.
46
@@ -XXX,XX +XXX,XX @@ static bool trans_FCSEL(DisasContext *s, arg_FCSEL *a)
47
{
48
TCGv_i64 t_true, t_false;
49
DisasCompare64 c;
50
+ int check = fp_access_check_scalar_hsd(s, a->esz);
51
52
- switch (a->esz) {
53
- case MO_32:
54
- case MO_64:
55
- break;
56
- case MO_16:
57
- if (!dc_isar_feature(aa64_fp16, s)) {
58
- return false;
59
- }
60
- break;
61
- default:
62
- return false;
63
- }
64
-
65
- if (!fp_access_check(s)) {
66
- return true;
67
+ if (check <= 0) {
68
+ return check == 0;
69
}
70
71
/* Zero extend sreg & hreg inputs to 64 bits now. */
72
@@ -XXX,XX +XXX,XX @@ TRANS(FMINV_s, do_fp_reduction, a, gen_helper_vfp_mins)
73
74
static bool trans_FMOVI_s(DisasContext *s, arg_FMOVI_s *a)
75
{
76
- switch (a->esz) {
77
- case MO_32:
78
- case MO_64:
79
- break;
80
- case MO_16:
81
- if (!dc_isar_feature(aa64_fp16, s)) {
82
- return false;
83
- }
84
- break;
85
- default:
86
- return false;
87
- }
88
- if (fp_access_check(s)) {
89
- uint64_t imm = vfp_expand_imm(a->esz, a->imm);
90
- write_fp_dreg(s, a->rd, tcg_constant_i64(imm));
91
+ int check = fp_access_check_scalar_hsd(s, a->esz);
92
+ uint64_t imm;
93
+
94
+ if (check <= 0) {
95
+ return check == 0;
96
}
97
+
98
+ imm = vfp_expand_imm(a->esz, a->imm);
99
+ write_fp_dreg(s, a->rd, tcg_constant_i64(imm));
100
return true;
101
}
102
103
--
104
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Provide a simple way to check for float64, float32, and float16
support vs vector width, as well as the fpu enabled.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 135 +++++++++++++--------------------
1 file changed, 54 insertions(+), 81 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static int fp_access_check_scalar_hsd(DisasContext *s, MemOp esz)
19
return fp_access_check(s);
20
}
21
22
+/* Likewise, but vector MO_64 must have two elements. */
23
+static int fp_access_check_vector_hsd(DisasContext *s, bool is_q, MemOp esz)
24
+{
25
+ switch (esz) {
26
+ case MO_64:
27
+ if (!is_q) {
28
+ return -1;
29
+ }
30
+ break;
31
+ case MO_32:
32
+ break;
33
+ case MO_16:
34
+ if (!dc_isar_feature(aa64_fp16, s)) {
35
+ return -1;
36
+ }
37
+ break;
38
+ default:
39
+ return -1;
40
+ }
41
+ return fp_access_check(s);
42
+}
43
+
44
/*
45
* Check that SVE access is enabled. If it is, return true.
46
* If not, emit code to generate an appropriate exception and return false.
47
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a, int data,
48
gen_helper_gvec_3_ptr * const fns[3])
49
{
50
MemOp esz = a->esz;
51
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
52
53
- switch (esz) {
54
- case MO_64:
55
- if (!a->q) {
56
- return false;
57
- }
58
- break;
59
- case MO_32:
60
- break;
61
- case MO_16:
62
- if (!dc_isar_feature(aa64_fp16, s)) {
63
- return false;
64
- }
65
- break;
66
- default:
67
- return false;
68
- }
69
- if (fp_access_check(s)) {
70
- gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
71
- esz == MO_16, data, fns[esz - 1]);
72
+ if (check <= 0) {
73
+ return check == 0;
74
}
75
+
76
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
77
+ esz == MO_16, data, fns[esz - 1]);
78
return true;
79
}
80
81
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FCADD_270, aa64_fcma, do_fp3_vector, a, 1, f_vector_fcadd)
82
83
static bool trans_FCMLA_v(DisasContext *s, arg_FCMLA_v *a)
84
{
85
- gen_helper_gvec_4_ptr *fn;
86
+ static gen_helper_gvec_4_ptr * const fn[] = {
87
+ [MO_16] = gen_helper_gvec_fcmlah,
88
+ [MO_32] = gen_helper_gvec_fcmlas,
89
+ [MO_64] = gen_helper_gvec_fcmlad,
90
+ };
91
+ int check;
92
93
if (!dc_isar_feature(aa64_fcma, s)) {
94
return false;
95
}
96
- switch (a->esz) {
97
- case MO_64:
98
- if (!a->q) {
99
- return false;
100
- }
101
- fn = gen_helper_gvec_fcmlad;
102
- break;
103
- case MO_32:
104
- fn = gen_helper_gvec_fcmlas;
105
- break;
106
- case MO_16:
107
- if (!dc_isar_feature(aa64_fp16, s)) {
108
- return false;
109
- }
110
- fn = gen_helper_gvec_fcmlah;
111
- break;
112
- default:
113
- return false;
114
- }
115
- if (fp_access_check(s)) {
116
- gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
117
- a->esz == MO_16, a->rot, fn);
118
+
119
+ check = fp_access_check_vector_hsd(s, a->q, a->esz);
120
+ if (check <= 0) {
121
+ return check == 0;
122
}
123
+
124
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
125
+ a->esz == MO_16, a->rot, fn[a->esz]);
126
return true;
127
}
128
129
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
130
gen_helper_gvec_3_ptr * const fns[3])
131
{
132
MemOp esz = a->esz;
133
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
134
135
- switch (esz) {
136
- case MO_64:
137
- if (!a->q) {
138
- return false;
139
- }
140
- break;
141
- case MO_32:
142
- break;
143
- case MO_16:
144
- if (!dc_isar_feature(aa64_fp16, s)) {
145
- return false;
146
- }
147
- break;
148
- default:
149
- g_assert_not_reached();
150
- }
151
- if (fp_access_check(s)) {
152
- gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
153
- esz == MO_16, a->idx, fns[esz - 1]);
154
+ if (check <= 0) {
155
+ return check == 0;
156
}
157
+
158
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
159
+ esz == MO_16, a->idx, fns[esz - 1]);
160
return true;
161
}
162
163
@@ -XXX,XX +XXX,XX @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
164
gen_helper_gvec_fmla_idx_d,
165
};
166
MemOp esz = a->esz;
167
+ int check = fp_access_check_vector_hsd(s, a->q, esz);
168
169
- switch (esz) {
170
- case MO_64:
171
- if (!a->q) {
172
- return false;
173
- }
174
- break;
175
- case MO_32:
176
- break;
177
- case MO_16:
178
- if (!dc_isar_feature(aa64_fp16, s)) {
179
- return false;
180
- }
181
- break;
182
- default:
183
- g_assert_not_reached();
184
- }
185
- if (fp_access_check(s)) {
186
- gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
187
- esz == MO_16, (a->idx << 1) | neg,
188
- fns[esz - 1]);
189
+ if (check <= 0) {
190
+ return check == 0;
191
}
192
+
193
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
194
+ esz == MO_16, (a->idx << 1) | neg,
195
+ fns[esz - 1]);
196
return true;
197
}
198
199
--
200
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-23-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
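For reference, the behaviour trans_FCCMP implements, written as a plain-C model (illustrative only, not QEMU code, and ignoring FCCMPE signalling and element size): when the condition holds, FCCMP performs the comparison and sets NZCV from its result; when it does not, NZCV is loaded directly from the instruction's immediate nzcv field -- which is what the label_match/label_continue pair below generates.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool n, z, c, v; } Flags;

    static Flags fccmp_model(bool cond_holds, double rn, double rm,
                             unsigned nzcv)
    {
        Flags f;

        if (!cond_holds) {
            /* Condition fails: NZCV comes straight from the immediate. */
            f = (Flags){ nzcv & 8, nzcv & 4, nzcv & 2, nzcv & 1 };
        } else if (rn != rn || rm != rm) {
            f = (Flags){ 0, 0, 1, 1 };   /* unordered (NaN operand) */
        } else if (rn == rm) {
            f = (Flags){ 0, 1, 1, 0 };   /* equal */
        } else if (rn < rm) {
            f = (Flags){ 1, 0, 0, 0 };   /* less than */
        } else {
            f = (Flags){ 0, 0, 1, 0 };   /* greater than */
        }
        return f;
    }

    int main(void)
    {
        Flags f = fccmp_model(false, 1.0, 2.0, 0x3);
        printf("N=%d Z=%d C=%d V=%d\n", f.n, f.z, f.c, f.v);
        return 0;
    }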
8
target/arm/tcg/a64.decode | 8 +
9
target/arm/tcg/translate-a64.c | 283 ++++++++++++---------------------
10
2 files changed, 112 insertions(+), 179 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
17
18
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
19
20
+# Floating-point Compare
21
+
22
+FCMP 00011110 .. 1 rm:5 001000 rn:5 e:1 z:1 000 esz=%esz_hsd
23
+
24
+# Floating-point Conditional Compare
25
+
26
+FCCMP 00011110 .. 1 rm:5 cond:4 01 rn:5 e:1 nzcv:4 esz=%esz_hsd
27
+
28
# Advanced SIMD Modified Immediate / Shift by Immediate
29
30
%abcdefgh 16:3 5:5
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static bool trans_FMOVI_s(DisasContext *s, arg_FMOVI_s *a)
36
return true;
37
}
38
39
+/*
40
+ * Floating point compare, conditional compare
41
+ */
42
+
43
+static void handle_fp_compare(DisasContext *s, int size,
44
+ unsigned int rn, unsigned int rm,
45
+ bool cmp_with_zero, bool signal_all_nans)
46
+{
47
+ TCGv_i64 tcg_flags = tcg_temp_new_i64();
48
+ TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
49
+
50
+ if (size == MO_64) {
51
+ TCGv_i64 tcg_vn, tcg_vm;
52
+
53
+ tcg_vn = read_fp_dreg(s, rn);
54
+ if (cmp_with_zero) {
55
+ tcg_vm = tcg_constant_i64(0);
56
+ } else {
57
+ tcg_vm = read_fp_dreg(s, rm);
58
+ }
59
+ if (signal_all_nans) {
60
+ gen_helper_vfp_cmped_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
61
+ } else {
62
+ gen_helper_vfp_cmpd_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
63
+ }
64
+ } else {
65
+ TCGv_i32 tcg_vn = tcg_temp_new_i32();
66
+ TCGv_i32 tcg_vm = tcg_temp_new_i32();
67
+
68
+ read_vec_element_i32(s, tcg_vn, rn, 0, size);
69
+ if (cmp_with_zero) {
70
+ tcg_gen_movi_i32(tcg_vm, 0);
71
+ } else {
72
+ read_vec_element_i32(s, tcg_vm, rm, 0, size);
73
+ }
74
+
75
+ switch (size) {
76
+ case MO_32:
77
+ if (signal_all_nans) {
78
+ gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
79
+ } else {
80
+ gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
81
+ }
82
+ break;
83
+ case MO_16:
84
+ if (signal_all_nans) {
85
+ gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
86
+ } else {
87
+ gen_helper_vfp_cmph_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
88
+ }
89
+ break;
90
+ default:
91
+ g_assert_not_reached();
92
+ }
93
+ }
94
+
95
+ gen_set_nzcv(tcg_flags);
96
+}
97
+
98
+/* FCMP, FCMPE */
99
+static bool trans_FCMP(DisasContext *s, arg_FCMP *a)
100
+{
101
+ int check = fp_access_check_scalar_hsd(s, a->esz);
102
+
103
+ if (check <= 0) {
104
+ return check == 0;
105
+ }
106
+
107
+ handle_fp_compare(s, a->esz, a->rn, a->rm, a->z, a->e);
108
+ return true;
109
+}
110
+
111
+/* FCCMP, FCCMPE */
112
+static bool trans_FCCMP(DisasContext *s, arg_FCCMP *a)
113
+{
114
+ TCGLabel *label_continue = NULL;
115
+ int check = fp_access_check_scalar_hsd(s, a->esz);
116
+
117
+ if (check <= 0) {
118
+ return check == 0;
119
+ }
120
+
121
+ if (a->cond < 0x0e) { /* not always */
122
+ TCGLabel *label_match = gen_new_label();
123
+ label_continue = gen_new_label();
124
+ arm_gen_test_cc(a->cond, label_match);
125
+ /* nomatch: */
126
+ gen_set_nzcv(tcg_constant_i64(a->nzcv << 28));
127
+ tcg_gen_br(label_continue);
128
+ gen_set_label(label_match);
129
+ }
130
+
131
+ handle_fp_compare(s, a->esz, a->rn, a->rm, false, a->e);
132
+
133
+ if (label_continue) {
134
+ gen_set_label(label_continue);
135
+ }
136
+ return true;
137
+}
138
+
139
/*
140
* Advanced SIMD Modified Immediate
141
*/
142
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
143
return true;
144
}
145
146
-static void handle_fp_compare(DisasContext *s, int size,
147
- unsigned int rn, unsigned int rm,
148
- bool cmp_with_zero, bool signal_all_nans)
149
-{
150
- TCGv_i64 tcg_flags = tcg_temp_new_i64();
151
- TCGv_ptr fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
152
-
153
- if (size == MO_64) {
154
- TCGv_i64 tcg_vn, tcg_vm;
155
-
156
- tcg_vn = read_fp_dreg(s, rn);
157
- if (cmp_with_zero) {
158
- tcg_vm = tcg_constant_i64(0);
159
- } else {
160
- tcg_vm = read_fp_dreg(s, rm);
161
- }
162
- if (signal_all_nans) {
163
- gen_helper_vfp_cmped_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
164
- } else {
165
- gen_helper_vfp_cmpd_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
166
- }
167
- } else {
168
- TCGv_i32 tcg_vn = tcg_temp_new_i32();
169
- TCGv_i32 tcg_vm = tcg_temp_new_i32();
170
-
171
- read_vec_element_i32(s, tcg_vn, rn, 0, size);
172
- if (cmp_with_zero) {
173
- tcg_gen_movi_i32(tcg_vm, 0);
174
- } else {
175
- read_vec_element_i32(s, tcg_vm, rm, 0, size);
176
- }
177
-
178
- switch (size) {
179
- case MO_32:
180
- if (signal_all_nans) {
181
- gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
182
- } else {
183
- gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
184
- }
185
- break;
186
- case MO_16:
187
- if (signal_all_nans) {
188
- gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
189
- } else {
190
- gen_helper_vfp_cmph_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
191
- }
192
- break;
193
- default:
194
- g_assert_not_reached();
195
- }
196
- }
197
-
198
- gen_set_nzcv(tcg_flags);
199
-}
200
-
201
-/* Floating point compare
202
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 10 9 5 4 0
203
- * +---+---+---+-----------+------+---+------+-----+---------+------+-------+
204
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | op | 1 0 0 0 | Rn | op2 |
205
- * +---+---+---+-----------+------+---+------+-----+---------+------+-------+
206
- */
207
-static void disas_fp_compare(DisasContext *s, uint32_t insn)
208
-{
209
- unsigned int mos, type, rm, op, rn, opc, op2r;
210
- int size;
211
-
212
- mos = extract32(insn, 29, 3);
213
- type = extract32(insn, 22, 2);
214
- rm = extract32(insn, 16, 5);
215
- op = extract32(insn, 14, 2);
216
- rn = extract32(insn, 5, 5);
217
- opc = extract32(insn, 3, 2);
218
- op2r = extract32(insn, 0, 3);
219
-
220
- if (mos || op || op2r) {
221
- unallocated_encoding(s);
222
- return;
223
- }
224
-
225
- switch (type) {
226
- case 0:
227
- size = MO_32;
228
- break;
229
- case 1:
230
- size = MO_64;
231
- break;
232
- case 3:
233
- size = MO_16;
234
- if (dc_isar_feature(aa64_fp16, s)) {
235
- break;
236
- }
237
- /* fallthru */
238
- default:
239
- unallocated_encoding(s);
240
- return;
241
- }
242
-
243
- if (!fp_access_check(s)) {
244
- return;
245
- }
246
-
247
- handle_fp_compare(s, size, rn, rm, opc & 1, opc & 2);
248
-}
249
-
250
-/* Floating point conditional compare
251
- * 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
252
- * +---+---+---+-----------+------+---+------+------+-----+------+----+------+
253
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 0 1 | Rn | op | nzcv |
254
- * +---+---+---+-----------+------+---+------+------+-----+------+----+------+
255
- */
256
-static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
257
-{
258
- unsigned int mos, type, rm, cond, rn, op, nzcv;
259
- TCGLabel *label_continue = NULL;
260
- int size;
261
-
262
- mos = extract32(insn, 29, 3);
263
- type = extract32(insn, 22, 2);
264
- rm = extract32(insn, 16, 5);
265
- cond = extract32(insn, 12, 4);
266
- rn = extract32(insn, 5, 5);
267
- op = extract32(insn, 4, 1);
268
- nzcv = extract32(insn, 0, 4);
269
-
270
- if (mos) {
271
- unallocated_encoding(s);
272
- return;
273
- }
274
-
275
- switch (type) {
276
- case 0:
277
- size = MO_32;
278
- break;
279
- case 1:
280
- size = MO_64;
281
- break;
282
- case 3:
283
- size = MO_16;
284
- if (dc_isar_feature(aa64_fp16, s)) {
285
- break;
286
- }
287
- /* fallthru */
288
- default:
289
- unallocated_encoding(s);
290
- return;
291
- }
292
-
293
- if (!fp_access_check(s)) {
294
- return;
295
- }
296
-
297
- if (cond < 0x0e) { /* not always */
298
- TCGLabel *label_match = gen_new_label();
299
- label_continue = gen_new_label();
300
- arm_gen_test_cc(cond, label_match);
301
- /* nomatch: */
302
- gen_set_nzcv(tcg_constant_i64(nzcv << 28));
303
- tcg_gen_br(label_continue);
304
- gen_set_label(label_match);
305
- }
306
-
307
- handle_fp_compare(s, size, rn, rm, false, op);
308
-
309
- if (cond < 0x0e) {
310
- gen_set_label(label_continue);
311
- }
312
-}
313
-
314
/* Floating-point data-processing (1 source) - half precision */
315
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
316
{
317
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
318
disas_fp_fixed_conv(s, insn);
319
} else {
320
switch (extract32(insn, 10, 2)) {
321
- case 1:
322
- /* Floating point conditional compare */
323
- disas_fp_ccomp(s, insn);
324
- break;
325
- case 2:
326
- /* Floating point data-processing (2 source) */
327
- unallocated_encoding(s); /* in decodetree */
328
- break;
329
- case 3:
330
- /* Floating point conditional select */
331
+ case 1: /* Floating point conditional compare */
332
+ case 2: /* Floating point data-processing (2 source) */
333
+ case 3: /* Floating point conditional select */
334
unallocated_encoding(s); /* in decodetree */
335
break;
336
case 0:
337
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
338
break;
339
case 1: /* [15:12] == xx10 */
340
/* Floating point compare */
341
- disas_fp_compare(s, insn);
342
+ unallocated_encoding(s); /* in decodetree */
343
break;
344
case 2: /* [15:12] == x100 */
345
/* Floating point data-processing (1 source) */
346
--
347
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
These opcodes are only supported as vector operations,
4
not as AdvSIMD scalar. Set only_in_vector, and remove
5
the unreachable implementation of scalar fneg.
6
7
Reported-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20241211163036.2297116-24-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/tcg/translate-a64.c | 6 +++---
14
1 file changed, 3 insertions(+), 3 deletions(-)
15
16
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/translate-a64.c
19
+++ b/target/arm/tcg/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
21
break;
22
case 0x2f: /* FABS */
23
case 0x6f: /* FNEG */
24
+ only_in_vector = true;
25
need_fpst = false;
26
break;
27
case 0x7d: /* FRSQRTE */
28
+ break;
29
case 0x7f: /* FSQRT (vector) */
30
+ only_in_vector = true;
31
break;
32
default:
33
unallocated_encoding(s);
34
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
35
case 0x7b: /* FCVTZU */
36
gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
37
break;
38
- case 0x6f: /* FNEG */
39
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
40
- break;
41
case 0x7d: /* FRSQRTE */
42
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
43
break;
44
--
45
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-25-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
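Background on why the new FPScalar1Int helpers take no fpstatus pointer: FMOV, FABS and FNEG on IEEE formats are pure sign-bit manipulations, so they never consult the rounding mode or raise exceptions. A standalone illustration for the single-precision case (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t f32_bits(float f) { uint32_t u; memcpy(&u, &f, 4); return u; }
    static float bits_f32(uint32_t u) { float f; memcpy(&f, &u, 4); return f; }

    int main(void)
    {
        float x = -1.5f;
        float abs_x = bits_f32(f32_bits(x) & 0x7fffffffu); /* FABS: clear sign */
        float neg_x = bits_f32(f32_bits(x) ^ 0x80000000u); /* FNEG: flip sign */

        printf("%g %g %g\n", x, abs_x, neg_x);   /* -1.5 1.5 1.5 */
        return 0;
    }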
8
target/arm/tcg/a64.decode | 7 +++
9
target/arm/tcg/translate-a64.c | 105 +++++++++++++++++++++++----------
10
2 files changed, 81 insertions(+), 31 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
18
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
19
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
20
+@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
21
22
@rrr_b ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=0
23
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
24
@@ -XXX,XX +XXX,XX @@ FMAXV_s 0110 1110 00 11000 01111 10 ..... ..... @rr_q1e2
25
FMINV_h 0.00 1110 10 11000 01111 10 ..... ..... @qrr_h
26
FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
27
28
+# Floating-point data processing (1 source)
29
+
30
+FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
31
+FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
32
+FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
33
+
34
# Floating-point Immediate
35
36
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
42
return true;
43
}
44
45
+typedef struct FPScalar1Int {
46
+ void (*gen_h)(TCGv_i32, TCGv_i32);
47
+ void (*gen_s)(TCGv_i32, TCGv_i32);
48
+ void (*gen_d)(TCGv_i64, TCGv_i64);
49
+} FPScalar1Int;
50
+
51
+static bool do_fp1_scalar_int(DisasContext *s, arg_rr_e *a,
52
+ const FPScalar1Int *f)
53
+{
54
+ switch (a->esz) {
55
+ case MO_64:
56
+ if (fp_access_check(s)) {
57
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
58
+ f->gen_d(t, t);
59
+ write_fp_dreg(s, a->rd, t);
60
+ }
61
+ break;
62
+ case MO_32:
63
+ if (fp_access_check(s)) {
64
+ TCGv_i32 t = read_fp_sreg(s, a->rn);
65
+ f->gen_s(t, t);
66
+ write_fp_sreg(s, a->rd, t);
67
+ }
68
+ break;
69
+ case MO_16:
70
+ if (!dc_isar_feature(aa64_fp16, s)) {
71
+ return false;
72
+ }
73
+ if (fp_access_check(s)) {
74
+ TCGv_i32 t = read_fp_hreg(s, a->rn);
75
+ f->gen_h(t, t);
76
+ write_fp_sreg(s, a->rd, t);
77
+ }
78
+ break;
79
+ default:
80
+ return false;
81
+ }
82
+ return true;
83
+}
84
+
85
+static const FPScalar1Int f_scalar_fmov = {
86
+ tcg_gen_mov_i32,
87
+ tcg_gen_mov_i32,
88
+ tcg_gen_mov_i64,
89
+};
90
+TRANS(FMOV_s, do_fp1_scalar_int, a, &f_scalar_fmov)
91
+
92
+static const FPScalar1Int f_scalar_fabs = {
93
+ gen_vfp_absh,
94
+ gen_vfp_abss,
95
+ gen_vfp_absd,
96
+};
97
+TRANS(FABS_s, do_fp1_scalar_int, a, &f_scalar_fabs)
98
+
99
+static const FPScalar1Int f_scalar_fneg = {
100
+ gen_vfp_negh,
101
+ gen_vfp_negs,
102
+ gen_vfp_negd,
103
+};
104
+TRANS(FNEG_s, do_fp1_scalar_int, a, &f_scalar_fneg)
105
+
106
/* Floating-point data-processing (1 source) - half precision */
107
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
108
{
109
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
110
TCGv_i32 tcg_res = tcg_temp_new_i32();
111
112
switch (opcode) {
113
- case 0x0: /* FMOV */
114
- tcg_gen_mov_i32(tcg_res, tcg_op);
115
- break;
116
- case 0x1: /* FABS */
117
- gen_vfp_absh(tcg_res, tcg_op);
118
- break;
119
- case 0x2: /* FNEG */
120
- gen_vfp_negh(tcg_res, tcg_op);
121
- break;
122
case 0x3: /* FSQRT */
123
fpst = fpstatus_ptr(FPST_FPCR_F16);
124
gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
125
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
126
gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
127
break;
128
default:
129
+ case 0x0: /* FMOV */
130
+ case 0x1: /* FABS */
131
+ case 0x2: /* FNEG */
132
g_assert_not_reached();
133
}
134
135
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
136
tcg_res = tcg_temp_new_i32();
137
138
switch (opcode) {
139
- case 0x0: /* FMOV */
140
- tcg_gen_mov_i32(tcg_res, tcg_op);
141
- goto done;
142
- case 0x1: /* FABS */
143
- gen_vfp_abss(tcg_res, tcg_op);
144
- goto done;
145
- case 0x2: /* FNEG */
146
- gen_vfp_negs(tcg_res, tcg_op);
147
- goto done;
148
case 0x3: /* FSQRT */
149
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
150
goto done;
151
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
152
gen_fpst = gen_helper_frint64_s;
153
break;
154
default:
155
+ case 0x0: /* FMOV */
156
+ case 0x1: /* FABS */
157
+ case 0x2: /* FNEG */
158
g_assert_not_reached();
159
}
160
161
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
162
TCGv_ptr fpst;
163
int rmode = -1;
164
165
- switch (opcode) {
166
- case 0x0: /* FMOV */
167
- gen_gvec_fn2(s, false, rd, rn, tcg_gen_gvec_mov, 0);
168
- return;
169
- }
170
-
171
tcg_op = read_fp_dreg(s, rn);
172
tcg_res = tcg_temp_new_i64();
173
174
switch (opcode) {
175
- case 0x1: /* FABS */
176
- gen_vfp_absd(tcg_res, tcg_op);
177
- goto done;
178
- case 0x2: /* FNEG */
179
- gen_vfp_negd(tcg_res, tcg_op);
180
- goto done;
181
case 0x3: /* FSQRT */
182
gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
183
goto done;
184
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
185
gen_fpst = gen_helper_frint64_d;
186
break;
187
default:
188
+ case 0x0: /* FMOV */
189
+ case 0x1: /* FABS */
190
+ case 0x2: /* FNEG */
191
g_assert_not_reached();
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
195
goto do_unallocated;
196
}
197
/* fall through */
198
- case 0x0 ... 0x3:
199
+ case 0x3:
200
case 0x8 ... 0xc:
201
case 0xe ... 0xf:
202
/* 32-to-32 and 64-to-64 ops */
203
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
204
205
default:
206
do_unallocated:
207
+ case 0x0: /* FMOV */
208
+ case 0x1: /* FABS */
209
+ case 0x2: /* FNEG */
210
unallocated_encoding(s);
211
break;
212
}
213
--
214
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Pass fpstatus, not env, like most other fp helpers.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-26-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
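The point of the signature change, sketched with invented stand-in types (not the real QEMU declarations): with env the helper has to pick one float_status baked into its body, whereas taking a float_status pointer lets the translator choose FPST_FPCR or FPST_FPCR_F16 at the call site, matching the other fp helpers.

    #include <math.h>
    #include <stdio.h>

    /* Invented stand-ins for the real QEMU types, for illustration only. */
    typedef struct { int rounding_mode; } float_status;
    typedef struct { float_status fp_status, fp_status_f16; } FakeCPUState;

    /* Old style: the helper digs a fixed status out of the CPU state. */
    static double sqrt_via_env(double a, FakeCPUState *env)
    {
        float_status *fpst = &env->fp_status;    /* choice baked in here */
        (void)fpst;
        return sqrt(a);
    }

    /* New style: the caller passes whichever status applies. */
    static double sqrt_via_fpst(double a, float_status *fpst)
    {
        (void)fpst;                              /* chosen at the call site */
        return sqrt(a);
    }

    int main(void)
    {
        FakeCPUState cpu = { { 0 }, { 0 } };
        printf("%g %g\n", sqrt_via_env(2.0, &cpu),
               sqrt_via_fpst(2.0, &cpu.fp_status_f16));
        return 0;
    }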
10
target/arm/helper.h | 6 +++---
11
target/arm/tcg/translate-a64.c | 15 +++++++--------
12
target/arm/tcg/translate-vfp.c | 6 +++---
13
target/arm/vfp_helper.c | 12 ++++++------
14
4 files changed, 19 insertions(+), 20 deletions(-)
15
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.h
19
+++ b/target/arm/helper.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
21
DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
22
DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
23
DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
24
-DEF_HELPER_2(vfp_sqrth, f16, f16, env)
25
-DEF_HELPER_2(vfp_sqrts, f32, f32, env)
26
-DEF_HELPER_2(vfp_sqrtd, f64, f64, env)
27
+DEF_HELPER_2(vfp_sqrth, f16, f16, ptr)
28
+DEF_HELPER_2(vfp_sqrts, f32, f32, ptr)
29
+DEF_HELPER_2(vfp_sqrtd, f64, f64, ptr)
30
DEF_HELPER_3(vfp_cmph, void, f16, f16, env)
31
DEF_HELPER_3(vfp_cmps, void, f32, f32, env)
32
DEF_HELPER_3(vfp_cmpd, void, f64, f64, env)
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
38
39
switch (opcode) {
40
case 0x3: /* FSQRT */
41
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
42
- goto done;
43
+ gen_fpst = gen_helper_vfp_sqrts;
44
+ break;
45
case 0x6: /* BFCVT */
46
gen_fpst = gen_helper_bfcvt;
47
break;
48
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
49
gen_fpst(tcg_res, tcg_op, fpst);
50
}
51
52
- done:
53
write_fp_sreg(s, rd, tcg_res);
54
}
55
56
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
57
58
switch (opcode) {
59
case 0x3: /* FSQRT */
60
- gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
61
- goto done;
62
+ gen_fpst = gen_helper_vfp_sqrtd;
63
+ break;
64
case 0x8: /* FRINTN */
65
case 0x9: /* FRINTP */
66
case 0xa: /* FRINTM */
67
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
68
gen_fpst(tcg_res, tcg_op, fpst);
69
}
70
71
- done:
72
write_fp_dreg(s, rd, tcg_res);
73
}
74
75
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
76
gen_vfp_negd(tcg_rd, tcg_rn);
77
break;
78
case 0x7f: /* FSQRT */
79
- gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_env);
80
+ gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
81
break;
82
case 0x1a: /* FCVTNS */
83
case 0x1b: /* FCVTMS */
84
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
85
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
86
return;
87
case 0x7f: /* FSQRT */
88
+ need_fpstatus = true;
89
if (size == 3 && !is_q) {
90
unallocated_encoding(s);
91
return;
92
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
93
gen_vfp_negs(tcg_res, tcg_op);
94
break;
95
case 0x7f: /* FSQRT */
96
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
97
+ gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
98
break;
99
case 0x1a: /* FCVTNS */
100
case 0x1b: /* FCVTMS */
101
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
102
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/tcg/translate-vfp.c
104
+++ b/target/arm/tcg/translate-vfp.c
105
@@ -XXX,XX +XXX,XX @@ DO_VFP_2OP(VNEG, dp, gen_vfp_negd, aa32_fpdp_v2)
106
107
static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
108
{
109
- gen_helper_vfp_sqrth(vd, vm, tcg_env);
110
+ gen_helper_vfp_sqrth(vd, vm, fpstatus_ptr(FPST_FPCR_F16));
111
}
112
113
static void gen_VSQRT_sp(TCGv_i32 vd, TCGv_i32 vm)
114
{
115
- gen_helper_vfp_sqrts(vd, vm, tcg_env);
116
+ gen_helper_vfp_sqrts(vd, vm, fpstatus_ptr(FPST_FPCR));
117
}
118
119
static void gen_VSQRT_dp(TCGv_i64 vd, TCGv_i64 vm)
120
{
121
- gen_helper_vfp_sqrtd(vd, vm, tcg_env);
122
+ gen_helper_vfp_sqrtd(vd, vm, fpstatus_ptr(FPST_FPCR));
123
}
124
125
DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp, aa32_fp16_arith)
126
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/target/arm/vfp_helper.c
129
+++ b/target/arm/vfp_helper.c
130
@@ -XXX,XX +XXX,XX @@ VFP_BINOP(minnum)
131
VFP_BINOP(maxnum)
132
#undef VFP_BINOP
133
134
-dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, CPUARMState *env)
135
+dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, void *fpstp)
136
{
137
- return float16_sqrt(a, &env->vfp.fp_status_f16);
138
+ return float16_sqrt(a, fpstp);
139
}
140
141
-float32 VFP_HELPER(sqrt, s)(float32 a, CPUARMState *env)
142
+float32 VFP_HELPER(sqrt, s)(float32 a, void *fpstp)
143
{
144
- return float32_sqrt(a, &env->vfp.fp_status);
145
+ return float32_sqrt(a, fpstp);
146
}
147
148
-float64 VFP_HELPER(sqrt, d)(float64 a, CPUARMState *env)
149
+float64 VFP_HELPER(sqrt, d)(float64 a, void *fpstp)
150
{
151
- return float64_sqrt(a, &env->vfp.fp_status);
152
+ return float64_sqrt(a, fpstp);
153
}
154
155
static void softfloat_to_vfp_compare(CPUARMState *env, FloatRelation cmp)
156
--
157
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This function is identical to helper_vfp_sqrth.
4
Replace all uses.
5
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-27-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/helper-a64.h | 1 -
12
target/arm/tcg/helper-a64.c | 11 -----------
13
target/arm/tcg/translate-a64.c | 4 ++--
14
3 files changed, 2 insertions(+), 14 deletions(-)
15
16
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/helper-a64.h
19
+++ b/target/arm/tcg/helper-a64.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_rinth_exact, f16, f16, ptr)
21
DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
22
DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
23
DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
24
-DEF_HELPER_2(sqrt_f16, f16, f16, ptr)
25
26
DEF_HELPER_2(exception_return, void, env, i64)
27
DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
28
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/tcg/helper-a64.c
31
+++ b/target/arm/tcg/helper-a64.c
32
@@ -XXX,XX +XXX,XX @@ illegal_return:
33
"resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc);
34
}
35
36
-/*
37
- * Square Root and Reciprocal square root
38
- */
39
-
40
-uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
41
-{
42
- float_status *s = fpstp;
43
-
44
- return float16_sqrt(a, s);
45
-}
46
-
47
void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
48
{
49
uintptr_t ra = GETPC();
50
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/translate-a64.c
53
+++ b/target/arm/tcg/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
55
switch (opcode) {
56
case 0x3: /* FSQRT */
57
fpst = fpstatus_ptr(FPST_FPCR_F16);
58
- gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
59
+ gen_helper_vfp_sqrth(tcg_res, tcg_op, fpst);
60
break;
61
case 0x8: /* FRINTN */
62
case 0x9: /* FRINTP */
63
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
64
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
65
break;
66
case 0x7f: /* FSQRT */
67
- gen_helper_sqrt_f16(tcg_res, tcg_op, tcg_fpstatus);
68
+ gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
69
break;
70
default:
71
g_assert_not_reached();
72
--
73
2.34.1
74
75
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-28-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
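The rmode parameter of do_fp1_scalar follows the usual set/restore pattern: when a specific FPROUNDING_* value is requested, the generated code switches the rounding mode around the operation and then puts the previous mode back, hence the gen_set_rmode()/gen_restore_rmode() bracket. A host-C analogue of the same pattern using <fenv.h> (illustrative; QEMU applies this to the emulated FPCR, not the host FPU):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    static double rint_with_mode(double x, int mode)
    {
        int old = fegetround();     /* save the current rounding mode */
        fesetround(mode);
        double r = rint(x);         /* operation under the requested mode */
        fesetround(old);            /* restore */
        return r;
    }

    int main(void)
    {
        printf("%g %g\n",
               rint_with_mode(2.7, FE_TOWARDZERO),  /* 2, like FRINTZ */
               rint_with_mode(2.7, FE_UPWARD));     /* 3, like FRINTP */
        return 0;
    }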
8
target/arm/tcg/a64.decode | 1 +
9
target/arm/tcg/translate-a64.c | 72 ++++++++++++++++++++++++++++------
10
2 files changed, 62 insertions(+), 11 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
17
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
18
FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
19
FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
20
+FSQRT_s 00011110 .. 1 000011 10000 ..... ..... @rr_hsd
21
22
# Floating-point Immediate
23
24
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/tcg/translate-a64.c
27
+++ b/target/arm/tcg/translate-a64.c
28
@@ -XXX,XX +XXX,XX @@ static const FPScalar1Int f_scalar_fneg = {
29
};
30
TRANS(FNEG_s, do_fp1_scalar_int, a, &f_scalar_fneg)
31
32
+typedef struct FPScalar1 {
33
+ void (*gen_h)(TCGv_i32, TCGv_i32, TCGv_ptr);
34
+ void (*gen_s)(TCGv_i32, TCGv_i32, TCGv_ptr);
35
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_ptr);
36
+} FPScalar1;
37
+
38
+static bool do_fp1_scalar(DisasContext *s, arg_rr_e *a,
39
+ const FPScalar1 *f, int rmode)
40
+{
41
+ TCGv_i32 tcg_rmode = NULL;
42
+ TCGv_ptr fpst;
43
+ TCGv_i64 t64;
44
+ TCGv_i32 t32;
45
+ int check = fp_access_check_scalar_hsd(s, a->esz);
46
+
47
+ if (check <= 0) {
48
+ return check == 0;
49
+ }
50
+
51
+ fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
52
+ if (rmode >= 0) {
53
+ tcg_rmode = gen_set_rmode(rmode, fpst);
54
+ }
55
+
56
+ switch (a->esz) {
57
+ case MO_64:
58
+ t64 = read_fp_dreg(s, a->rn);
59
+ f->gen_d(t64, t64, fpst);
60
+ write_fp_dreg(s, a->rd, t64);
61
+ break;
62
+ case MO_32:
63
+ t32 = read_fp_sreg(s, a->rn);
64
+ f->gen_s(t32, t32, fpst);
65
+ write_fp_sreg(s, a->rd, t32);
66
+ break;
67
+ case MO_16:
68
+ t32 = read_fp_hreg(s, a->rn);
69
+ f->gen_h(t32, t32, fpst);
70
+ write_fp_sreg(s, a->rd, t32);
71
+ break;
72
+ default:
73
+ g_assert_not_reached();
74
+ }
75
+
76
+ if (rmode >= 0) {
77
+ gen_restore_rmode(tcg_rmode, fpst);
78
+ }
79
+ return true;
80
+}
81
+
82
+static const FPScalar1 f_scalar_fsqrt = {
83
+ gen_helper_vfp_sqrth,
84
+ gen_helper_vfp_sqrts,
85
+ gen_helper_vfp_sqrtd,
86
+};
87
+TRANS(FSQRT_s, do_fp1_scalar, a, &f_scalar_fsqrt, -1)
88
+
89
/* Floating-point data-processing (1 source) - half precision */
90
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
91
{
92
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
93
TCGv_i32 tcg_res = tcg_temp_new_i32();
94
95
switch (opcode) {
96
- case 0x3: /* FSQRT */
97
- fpst = fpstatus_ptr(FPST_FPCR_F16);
98
- gen_helper_vfp_sqrth(tcg_res, tcg_op, fpst);
99
- break;
100
case 0x8: /* FRINTN */
101
case 0x9: /* FRINTP */
102
case 0xa: /* FRINTM */
103
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
104
case 0x0: /* FMOV */
105
case 0x1: /* FABS */
106
case 0x2: /* FNEG */
107
+ case 0x3: /* FSQRT */
108
g_assert_not_reached();
109
}
110
111
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
112
tcg_res = tcg_temp_new_i32();
113
114
switch (opcode) {
115
- case 0x3: /* FSQRT */
116
- gen_fpst = gen_helper_vfp_sqrts;
117
- break;
118
case 0x6: /* BFCVT */
119
gen_fpst = gen_helper_bfcvt;
120
break;
121
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
122
case 0x0: /* FMOV */
123
case 0x1: /* FABS */
124
case 0x2: /* FNEG */
125
+ case 0x3: /* FSQRT */
126
g_assert_not_reached();
127
}
128
129
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
130
tcg_res = tcg_temp_new_i64();
131
132
switch (opcode) {
133
- case 0x3: /* FSQRT */
134
- gen_fpst = gen_helper_vfp_sqrtd;
135
- break;
136
case 0x8: /* FRINTN */
137
case 0x9: /* FRINTP */
138
case 0xa: /* FRINTM */
139
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
140
case 0x0: /* FMOV */
141
case 0x1: /* FABS */
142
case 0x2: /* FNEG */
143
+ case 0x3: /* FSQRT */
144
g_assert_not_reached();
145
}
146
147
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
148
goto do_unallocated;
149
}
150
/* fall through */
151
- case 0x3:
152
case 0x8 ... 0xc:
153
case 0xe ... 0xf:
154
/* 32-to-32 and 64-to-64 ops */
155
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
156
case 0x0: /* FMOV */
157
case 0x1: /* FABS */
158
case 0x2: /* FNEG */
159
+ case 0x3: /* FSQRT */
160
unallocated_encoding(s);
161
break;
162
}
163
--
164
2.34.1
1
The architecture requires that for faults on loads and stores which
1
From: Richard Henderson <richard.henderson@linaro.org>
2
do writeback, the syndrome information does not have the ISS
3
instruction syndrome information (i.e. ISV is 0). We got this wrong
4
for the load and store instructions covered by disas_ldst_reg_imm9().
5
Calculate iss_valid correctly so that if the insn is a writeback one
6
it is false.
7
2
8
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1057
3
Remove handle_fp_1src_half as these were the last insns
4
decoded by that function.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-29-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20220715123323.1550983-1-peter.maydell@linaro.org
12
---
10
---
13
target/arm/translate-a64.c | 4 +++-
11
target/arm/tcg/a64.decode | 8 +++
14
1 file changed, 3 insertions(+), 1 deletion(-)
12
target/arm/tcg/translate-a64.c | 117 +++++++++++----------------------
13
2 files changed, 46 insertions(+), 79 deletions(-)
15
14
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
17
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
19
@@ -XXX,XX +XXX,XX @@ FABS_s 00011110 .. 1 000001 10000 ..... ..... @rr_hsd
21
bool is_store = false;
20
FNEG_s 00011110 .. 1 000010 10000 ..... ..... @rr_hsd
22
bool is_extended = false;
21
FSQRT_s 00011110 .. 1 000011 10000 ..... ..... @rr_hsd
23
bool is_unpriv = (idx == 2);
22
24
- bool iss_valid = !is_vector;
23
+FRINTN_s 00011110 .. 1 001000 10000 ..... ..... @rr_hsd
25
+ bool iss_valid;
24
+FRINTP_s 00011110 .. 1 001001 10000 ..... ..... @rr_hsd
26
bool post_index;
25
+FRINTM_s 00011110 .. 1 001010 10000 ..... ..... @rr_hsd
27
bool writeback;
26
+FRINTZ_s 00011110 .. 1 001011 10000 ..... ..... @rr_hsd
28
int memidx;
27
+FRINTA_s 00011110 .. 1 001100 10000 ..... ..... @rr_hsd
29
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
28
+FRINTX_s 00011110 .. 1 001110 10000 ..... ..... @rr_hsd
29
+FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
30
+
31
# Floating-point Immediate
32
33
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_fsqrt = {
39
};
40
TRANS(FSQRT_s, do_fp1_scalar, a, &f_scalar_fsqrt, -1)
41
42
-/* Floating-point data-processing (1 source) - half precision */
43
-static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
44
-{
45
- TCGv_ptr fpst = NULL;
46
- TCGv_i32 tcg_op = read_fp_hreg(s, rn);
47
- TCGv_i32 tcg_res = tcg_temp_new_i32();
48
+static const FPScalar1 f_scalar_frint = {
49
+ gen_helper_advsimd_rinth,
50
+ gen_helper_rints,
51
+ gen_helper_rintd,
52
+};
53
+TRANS(FRINTN_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_TIEEVEN)
54
+TRANS(FRINTP_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_POSINF)
55
+TRANS(FRINTM_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_NEGINF)
56
+TRANS(FRINTZ_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_ZERO)
57
+TRANS(FRINTA_s, do_fp1_scalar, a, &f_scalar_frint, FPROUNDING_TIEAWAY)
58
+TRANS(FRINTI_s, do_fp1_scalar, a, &f_scalar_frint, -1)
59
60
- switch (opcode) {
61
- case 0x8: /* FRINTN */
62
- case 0x9: /* FRINTP */
63
- case 0xa: /* FRINTM */
64
- case 0xb: /* FRINTZ */
65
- case 0xc: /* FRINTA */
66
- {
67
- TCGv_i32 tcg_rmode;
68
-
69
- fpst = fpstatus_ptr(FPST_FPCR_F16);
70
- tcg_rmode = gen_set_rmode(opcode & 7, fpst);
71
- gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
72
- gen_restore_rmode(tcg_rmode, fpst);
73
- break;
74
- }
75
- case 0xe: /* FRINTX */
76
- fpst = fpstatus_ptr(FPST_FPCR_F16);
77
- gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, fpst);
78
- break;
79
- case 0xf: /* FRINTI */
80
- fpst = fpstatus_ptr(FPST_FPCR_F16);
81
- gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
82
- break;
83
- default:
84
- case 0x0: /* FMOV */
85
- case 0x1: /* FABS */
86
- case 0x2: /* FNEG */
87
- case 0x3: /* FSQRT */
88
- g_assert_not_reached();
89
- }
90
-
91
- write_fp_sreg(s, rd, tcg_res);
92
-}
93
+static const FPScalar1 f_scalar_frintx = {
94
+ gen_helper_advsimd_rinth_exact,
95
+ gen_helper_rints_exact,
96
+ gen_helper_rintd_exact,
97
+};
98
+TRANS(FRINTX_s, do_fp1_scalar, a, &f_scalar_frintx, -1)
99
100
/* Floating-point data-processing (1 source) - single precision */
101
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
102
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
103
case 0x6: /* BFCVT */
104
gen_fpst = gen_helper_bfcvt;
105
break;
106
- case 0x8: /* FRINTN */
107
- case 0x9: /* FRINTP */
108
- case 0xa: /* FRINTM */
109
- case 0xb: /* FRINTZ */
110
- case 0xc: /* FRINTA */
111
- rmode = opcode & 7;
112
- gen_fpst = gen_helper_rints;
113
- break;
114
- case 0xe: /* FRINTX */
115
- gen_fpst = gen_helper_rints_exact;
116
- break;
117
- case 0xf: /* FRINTI */
118
- gen_fpst = gen_helper_rints;
119
- break;
120
case 0x10: /* FRINT32Z */
121
rmode = FPROUNDING_ZERO;
122
gen_fpst = gen_helper_frint32_s;
123
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
124
case 0x1: /* FABS */
125
case 0x2: /* FNEG */
126
case 0x3: /* FSQRT */
127
+ case 0x8: /* FRINTN */
128
+ case 0x9: /* FRINTP */
129
+ case 0xa: /* FRINTM */
130
+ case 0xb: /* FRINTZ */
131
+ case 0xc: /* FRINTA */
132
+ case 0xe: /* FRINTX */
133
+ case 0xf: /* FRINTI */
30
g_assert_not_reached();
134
g_assert_not_reached();
31
}
135
}
32
136
33
+ iss_valid = !is_vector && !writeback;
137
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
34
+
138
tcg_res = tcg_temp_new_i64();
35
if (rn == 31) {
139
36
gen_check_sp_alignment(s);
140
switch (opcode) {
141
- case 0x8: /* FRINTN */
142
- case 0x9: /* FRINTP */
143
- case 0xa: /* FRINTM */
144
- case 0xb: /* FRINTZ */
145
- case 0xc: /* FRINTA */
146
- rmode = opcode & 7;
147
- gen_fpst = gen_helper_rintd;
148
- break;
149
- case 0xe: /* FRINTX */
150
- gen_fpst = gen_helper_rintd_exact;
151
- break;
152
- case 0xf: /* FRINTI */
153
- gen_fpst = gen_helper_rintd;
154
- break;
155
case 0x10: /* FRINT32Z */
156
rmode = FPROUNDING_ZERO;
157
gen_fpst = gen_helper_frint32_d;
158
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
159
case 0x1: /* FABS */
160
case 0x2: /* FNEG */
161
case 0x3: /* FSQRT */
162
+ case 0x8: /* FRINTN */
163
+ case 0x9: /* FRINTP */
164
+ case 0xa: /* FRINTM */
165
+ case 0xb: /* FRINTZ */
166
+ case 0xc: /* FRINTA */
167
+ case 0xe: /* FRINTX */
168
+ case 0xf: /* FRINTI */
169
g_assert_not_reached();
170
}
171
172
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
173
if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
174
goto do_unallocated;
175
}
176
- /* fall through */
177
- case 0x8 ... 0xc:
178
- case 0xe ... 0xf:
179
/* 32-to-32 and 64-to-64 ops */
180
switch (type) {
181
case 0:
182
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
183
handle_fp_1src_double(s, opcode, rd, rn);
184
break;
185
case 3:
186
- if (!dc_isar_feature(aa64_fp16, s)) {
187
- goto do_unallocated;
188
- }
189
-
190
- if (!fp_access_check(s)) {
191
- return;
192
- }
193
- handle_fp_1src_half(s, opcode, rd, rn);
194
- break;
195
default:
196
goto do_unallocated;
197
}
198
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
199
case 0x1: /* FABS */
200
case 0x2: /* FNEG */
201
case 0x3: /* FSQRT */
202
+ case 0x8: /* FRINTN */
203
+ case 0x9: /* FRINTP */
204
+ case 0xa: /* FRINTM */
205
+ case 0xb: /* FRINTZ */
206
+ case 0xc: /* FRINTA */
207
+ case 0xe: /* FRINTX */
208
+ case 0xf: /* FRINTI */
209
unallocated_encoding(s);
210
break;
37
}
211
}
38
--
212
--
39
2.25.1
213
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-30-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
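For context on BFCVT (not stated in the patch): it narrows an IEEE single to bfloat16, which keeps float32's 8-bit exponent and truncates the fraction to 7 bits, so the result is essentially the top 16 bits of the float32 pattern after rounding. A rough standalone sketch (round-to-nearest-even only, ignoring NaN handling and FPCR settings, which gen_helper_bfcvt deals with properly):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t f32_to_bf16_sketch(float f)
    {
        uint32_t u;

        memcpy(&u, &f, 4);
        u += 0x7fff + ((u >> 16) & 1);   /* round to nearest, ties to even */
        return (uint16_t)(u >> 16);
    }

    int main(void)
    {
        printf("0x%04x 0x%04x\n",
               f32_to_bf16_sketch(1.0f),      /* 0x3f80 */
               f32_to_bf16_sketch(3.14159f)); /* 0x4049 */
        return 0;
    }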
8
target/arm/tcg/a64.decode | 3 +++
9
target/arm/tcg/translate-a64.c | 26 +++++++-------------------
10
2 files changed, 10 insertions(+), 19 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
&qrrrr_e q rd rn rm ra esz
18
19
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
20
+@rr_s ........ ... ..... ...... rn:5 rd:5 &rr_e esz=2
21
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
22
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
23
@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
24
@@ -XXX,XX +XXX,XX @@ FRINTA_s 00011110 .. 1 001100 10000 ..... ..... @rr_hsd
25
FRINTX_s 00011110 .. 1 001110 10000 ..... ..... @rr_hsd
26
FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
27
28
+BFCVT_s 00011110 01 1 000110 10000 ..... ..... @rr_s
29
+
30
# Floating-point Immediate
31
32
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_frintx = {
38
};
39
TRANS(FRINTX_s, do_fp1_scalar, a, &f_scalar_frintx, -1)
40
41
+static const FPScalar1 f_scalar_bfcvt = {
42
+ .gen_s = gen_helper_bfcvt,
43
+};
44
+TRANS_FEAT(BFCVT_s, aa64_bf16, do_fp1_scalar, a, &f_scalar_bfcvt, -1)
45
+
46
/* Floating-point data-processing (1 source) - single precision */
47
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
48
{
49
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
50
tcg_res = tcg_temp_new_i32();
51
52
switch (opcode) {
53
- case 0x6: /* BFCVT */
54
- gen_fpst = gen_helper_bfcvt;
55
- break;
56
case 0x10: /* FRINT32Z */
57
rmode = FPROUNDING_ZERO;
58
gen_fpst = gen_helper_frint32_s;
59
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
60
case 0x1: /* FABS */
61
case 0x2: /* FNEG */
62
case 0x3: /* FSQRT */
63
+ case 0x6: /* BFCVT */
64
case 0x8: /* FRINTN */
65
case 0x9: /* FRINTP */
66
case 0xa: /* FRINTM */
67
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
68
}
69
break;
70
71
- case 0x6:
72
- switch (type) {
73
- case 1: /* BFCVT */
74
- if (!dc_isar_feature(aa64_bf16, s)) {
75
- goto do_unallocated;
76
- }
77
- if (!fp_access_check(s)) {
78
- return;
79
- }
80
- handle_fp_1src_single(s, opcode, rd, rn);
81
- break;
82
- default:
83
- goto do_unallocated;
84
- }
85
- break;
86
-
87
default:
88
do_unallocated:
89
case 0x0: /* FMOV */
90
case 0x1: /* FABS */
91
case 0x2: /* FNEG */
92
case 0x3: /* FSQRT */
93
+ case 0x6: /* BFCVT */
94
case 0x8: /* FRINTN */
95
case 0x9: /* FRINTP */
96
case 0xa: /* FRINTM */
97
--
98
2.34.1
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_fp_1src_single and handle_fp_1src_double as
4
these were the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-31-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 5 ++
12
target/arm/tcg/translate-a64.c | 146 ++++-----------------------------
13
2 files changed, 22 insertions(+), 129 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRINTI_s 00011110 .. 1 001111 10000 ..... ..... @rr_hsd
20
21
BFCVT_s 00011110 01 1 000110 10000 ..... ..... @rr_s
22
23
+FRINT32Z_s 00011110 0. 1 010000 10000 ..... ..... @rr_sd
24
+FRINT32X_s 00011110 0. 1 010001 10000 ..... ..... @rr_sd
25
+FRINT64Z_s 00011110 0. 1 010010 10000 ..... ..... @rr_sd
26
+FRINT64X_s 00011110 0. 1 010011 10000 ..... ..... @rr_sd
27
+
28
# Floating-point Immediate
29
30
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static const FPScalar1 f_scalar_bfcvt = {
36
};
37
TRANS_FEAT(BFCVT_s, aa64_bf16, do_fp1_scalar, a, &f_scalar_bfcvt, -1)
38
39
-/* Floating-point data-processing (1 source) - single precision */
40
-static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
41
-{
42
- void (*gen_fpst)(TCGv_i32, TCGv_i32, TCGv_ptr);
43
- TCGv_i32 tcg_op, tcg_res;
44
- TCGv_ptr fpst;
45
- int rmode = -1;
46
+static const FPScalar1 f_scalar_frint32 = {
47
+ NULL,
48
+ gen_helper_frint32_s,
49
+ gen_helper_frint32_d,
50
+};
51
+TRANS_FEAT(FRINT32Z_s, aa64_frint, do_fp1_scalar, a,
52
+ &f_scalar_frint32, FPROUNDING_ZERO)
53
+TRANS_FEAT(FRINT32X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint32, -1)
54
55
- tcg_op = read_fp_sreg(s, rn);
56
- tcg_res = tcg_temp_new_i32();
57
-
58
- switch (opcode) {
59
- case 0x10: /* FRINT32Z */
60
- rmode = FPROUNDING_ZERO;
61
- gen_fpst = gen_helper_frint32_s;
62
- break;
63
- case 0x11: /* FRINT32X */
64
- gen_fpst = gen_helper_frint32_s;
65
- break;
66
- case 0x12: /* FRINT64Z */
67
- rmode = FPROUNDING_ZERO;
68
- gen_fpst = gen_helper_frint64_s;
69
- break;
70
- case 0x13: /* FRINT64X */
71
- gen_fpst = gen_helper_frint64_s;
72
- break;
73
- default:
74
- case 0x0: /* FMOV */
75
- case 0x1: /* FABS */
76
- case 0x2: /* FNEG */
77
- case 0x3: /* FSQRT */
78
- case 0x6: /* BFCVT */
79
- case 0x8: /* FRINTN */
80
- case 0x9: /* FRINTP */
81
- case 0xa: /* FRINTM */
82
- case 0xb: /* FRINTZ */
83
- case 0xc: /* FRINTA */
84
- case 0xe: /* FRINTX */
85
- case 0xf: /* FRINTI */
86
- g_assert_not_reached();
87
- }
88
-
89
- fpst = fpstatus_ptr(FPST_FPCR);
90
- if (rmode >= 0) {
91
- TCGv_i32 tcg_rmode = gen_set_rmode(rmode, fpst);
92
- gen_fpst(tcg_res, tcg_op, fpst);
93
- gen_restore_rmode(tcg_rmode, fpst);
94
- } else {
95
- gen_fpst(tcg_res, tcg_op, fpst);
96
- }
97
-
98
- write_fp_sreg(s, rd, tcg_res);
99
-}
100
-
101
-/* Floating-point data-processing (1 source) - double precision */
102
-static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
103
-{
104
- void (*gen_fpst)(TCGv_i64, TCGv_i64, TCGv_ptr);
105
- TCGv_i64 tcg_op, tcg_res;
106
- TCGv_ptr fpst;
107
- int rmode = -1;
108
-
109
- tcg_op = read_fp_dreg(s, rn);
110
- tcg_res = tcg_temp_new_i64();
111
-
112
- switch (opcode) {
113
- case 0x10: /* FRINT32Z */
114
- rmode = FPROUNDING_ZERO;
115
- gen_fpst = gen_helper_frint32_d;
116
- break;
117
- case 0x11: /* FRINT32X */
118
- gen_fpst = gen_helper_frint32_d;
119
- break;
120
- case 0x12: /* FRINT64Z */
121
- rmode = FPROUNDING_ZERO;
122
- gen_fpst = gen_helper_frint64_d;
123
- break;
124
- case 0x13: /* FRINT64X */
125
- gen_fpst = gen_helper_frint64_d;
126
- break;
127
- default:
128
- case 0x0: /* FMOV */
129
- case 0x1: /* FABS */
130
- case 0x2: /* FNEG */
131
- case 0x3: /* FSQRT */
132
- case 0x8: /* FRINTN */
133
- case 0x9: /* FRINTP */
134
- case 0xa: /* FRINTM */
135
- case 0xb: /* FRINTZ */
136
- case 0xc: /* FRINTA */
137
- case 0xe: /* FRINTX */
138
- case 0xf: /* FRINTI */
139
- g_assert_not_reached();
140
- }
141
-
142
- fpst = fpstatus_ptr(FPST_FPCR);
143
- if (rmode >= 0) {
144
- TCGv_i32 tcg_rmode = gen_set_rmode(rmode, fpst);
145
- gen_fpst(tcg_res, tcg_op, fpst);
146
- gen_restore_rmode(tcg_rmode, fpst);
147
- } else {
148
- gen_fpst(tcg_res, tcg_op, fpst);
149
- }
150
-
151
- write_fp_dreg(s, rd, tcg_res);
152
-}
153
+static const FPScalar1 f_scalar_frint64 = {
154
+ NULL,
155
+ gen_helper_frint64_s,
156
+ gen_helper_frint64_d,
157
+};
158
+TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
159
+ &f_scalar_frint64, FPROUNDING_ZERO)
160
+TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
161
162
static void handle_fp_fcvt(DisasContext *s, int opcode,
163
int rd, int rn, int dtype, int ntype)
164
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
165
break;
166
}
167
168
- case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
169
- if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
170
- goto do_unallocated;
171
- }
172
- /* 32-to-32 and 64-to-64 ops */
173
- switch (type) {
174
- case 0:
175
- if (!fp_access_check(s)) {
176
- return;
177
- }
178
- handle_fp_1src_single(s, opcode, rd, rn);
179
- break;
180
- case 1:
181
- if (!fp_access_check(s)) {
182
- return;
183
- }
184
- handle_fp_1src_double(s, opcode, rd, rn);
185
- break;
186
- case 3:
187
- default:
188
- goto do_unallocated;
189
- }
190
- break;
191
-
192
default:
193
do_unallocated:
194
case 0x0: /* FMOV */
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
196
case 0xc: /* FRINTA */
197
case 0xe: /* FRINTX */
198
case 0xf: /* FRINTI */
199
+ case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
200
unallocated_encoding(s);
201
break;
202
}
203
--
204
2.34.1
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove handle_fp_fcvt and disas_fp_1src as these were
4
the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-32-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 7 ++
12
target/arm/tcg/translate-a64.c | 172 +++++++++++++--------------------
13
2 files changed, 74 insertions(+), 105 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRINT32X_s 00011110 0. 1 010001 10000 ..... ..... @rr_sd
20
FRINT64Z_s 00011110 0. 1 010010 10000 ..... ..... @rr_sd
21
FRINT64X_s 00011110 0. 1 010011 10000 ..... ..... @rr_sd
22
23
+FCVT_s_ds 00011110 00 1 000101 10000 ..... ..... @rr
24
+FCVT_s_hs 00011110 00 1 000111 10000 ..... ..... @rr
25
+FCVT_s_sd 00011110 01 1 000100 10000 ..... ..... @rr
26
+FCVT_s_hd 00011110 01 1 000111 10000 ..... ..... @rr
27
+FCVT_s_sh 00011110 11 1 000100 10000 ..... ..... @rr
28
+FCVT_s_dh 00011110 11 1 000101 10000 ..... ..... @rr
29
+
30
# Floating-point Immediate
31
32
FMOVI_s 0001 1110 .. 1 imm:8 100 00000 rd:5 esz=%esz_hsd
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
38
&f_scalar_frint64, FPROUNDING_ZERO)
39
TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
40
41
-static void handle_fp_fcvt(DisasContext *s, int opcode,
42
- int rd, int rn, int dtype, int ntype)
43
+static bool trans_FCVT_s_ds(DisasContext *s, arg_rr *a)
44
{
45
- switch (ntype) {
46
- case 0x0:
47
- {
48
- TCGv_i32 tcg_rn = read_fp_sreg(s, rn);
49
- if (dtype == 1) {
50
- /* Single to double */
51
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
52
- gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, tcg_env);
53
- write_fp_dreg(s, rd, tcg_rd);
54
- } else {
55
- /* Single to half */
56
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
57
- TCGv_i32 ahp = get_ahp_flag();
58
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
59
+ if (fp_access_check(s)) {
60
+ TCGv_i32 tcg_rn = read_fp_sreg(s, a->rn);
61
+ TCGv_i64 tcg_rd = tcg_temp_new_i64();
62
63
- gen_helper_vfp_fcvt_f32_to_f16(tcg_rd, tcg_rn, fpst, ahp);
64
- /* write_fp_sreg is OK here because top half of tcg_rd is zero */
65
- write_fp_sreg(s, rd, tcg_rd);
66
- }
67
- break;
68
- }
69
- case 0x1:
70
- {
71
- TCGv_i64 tcg_rn = read_fp_dreg(s, rn);
72
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
73
- if (dtype == 0) {
74
- /* Double to single */
75
- gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, tcg_env);
76
- } else {
77
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
78
- TCGv_i32 ahp = get_ahp_flag();
79
- /* Double to half */
80
- gen_helper_vfp_fcvt_f64_to_f16(tcg_rd, tcg_rn, fpst, ahp);
81
- /* write_fp_sreg is OK here because top half of tcg_rd is zero */
82
- }
83
- write_fp_sreg(s, rd, tcg_rd);
84
- break;
85
- }
86
- case 0x3:
87
- {
88
- TCGv_i32 tcg_rn = read_fp_sreg(s, rn);
89
- TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
90
- TCGv_i32 tcg_ahp = get_ahp_flag();
91
- tcg_gen_ext16u_i32(tcg_rn, tcg_rn);
92
- if (dtype == 0) {
93
- /* Half to single */
94
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
95
- gen_helper_vfp_fcvt_f16_to_f32(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
96
- write_fp_sreg(s, rd, tcg_rd);
97
- } else {
98
- /* Half to double */
99
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
100
- gen_helper_vfp_fcvt_f16_to_f64(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
101
- write_fp_dreg(s, rd, tcg_rd);
102
- }
103
- break;
104
- }
105
- default:
106
- g_assert_not_reached();
107
+ gen_helper_vfp_fcvtds(tcg_rd, tcg_rn, tcg_env);
108
+ write_fp_dreg(s, a->rd, tcg_rd);
109
}
110
+ return true;
111
}
112
113
-/* Floating point data-processing (1 source)
114
- * 31 30 29 28 24 23 22 21 20 15 14 10 9 5 4 0
115
- * +---+---+---+-----------+------+---+--------+-----------+------+------+
116
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | opcode | 1 0 0 0 0 | Rn | Rd |
117
- * +---+---+---+-----------+------+---+--------+-----------+------+------+
118
- */
119
-static void disas_fp_1src(DisasContext *s, uint32_t insn)
120
+static bool trans_FCVT_s_hs(DisasContext *s, arg_rr *a)
121
{
122
- int mos = extract32(insn, 29, 3);
123
- int type = extract32(insn, 22, 2);
124
- int opcode = extract32(insn, 15, 6);
125
- int rn = extract32(insn, 5, 5);
126
- int rd = extract32(insn, 0, 5);
127
+ if (fp_access_check(s)) {
128
+ TCGv_i32 tmp = read_fp_sreg(s, a->rn);
129
+ TCGv_i32 ahp = get_ahp_flag();
130
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
131
132
- if (mos) {
133
- goto do_unallocated;
134
+ gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
135
+ /* write_fp_sreg is OK here because top half of result is zero */
136
+ write_fp_sreg(s, a->rd, tmp);
137
}
138
+ return true;
139
+}
140
141
- switch (opcode) {
142
- case 0x4: case 0x5: case 0x7:
143
- {
144
- /* FCVT between half, single and double precision */
145
- int dtype = extract32(opcode, 0, 2);
146
- if (type == 2 || dtype == type) {
147
- goto do_unallocated;
148
- }
149
- if (!fp_access_check(s)) {
150
- return;
151
- }
152
+static bool trans_FCVT_s_sd(DisasContext *s, arg_rr *a)
153
+{
154
+ if (fp_access_check(s)) {
155
+ TCGv_i64 tcg_rn = read_fp_dreg(s, a->rn);
156
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
157
158
- handle_fp_fcvt(s, opcode, rd, rn, dtype, type);
159
- break;
160
+ gen_helper_vfp_fcvtsd(tcg_rd, tcg_rn, tcg_env);
161
+ write_fp_sreg(s, a->rd, tcg_rd);
162
}
163
+ return true;
164
+}
165
166
- default:
167
- do_unallocated:
168
- case 0x0: /* FMOV */
169
- case 0x1: /* FABS */
170
- case 0x2: /* FNEG */
171
- case 0x3: /* FSQRT */
172
- case 0x6: /* BFCVT */
173
- case 0x8: /* FRINTN */
174
- case 0x9: /* FRINTP */
175
- case 0xa: /* FRINTM */
176
- case 0xb: /* FRINTZ */
177
- case 0xc: /* FRINTA */
178
- case 0xe: /* FRINTX */
179
- case 0xf: /* FRINTI */
180
- case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
181
- unallocated_encoding(s);
182
- break;
183
+static bool trans_FCVT_s_hd(DisasContext *s, arg_rr *a)
184
+{
185
+ if (fp_access_check(s)) {
186
+ TCGv_i64 tcg_rn = read_fp_dreg(s, a->rn);
187
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
188
+ TCGv_i32 ahp = get_ahp_flag();
189
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
190
+
191
+ gen_helper_vfp_fcvt_f64_to_f16(tcg_rd, tcg_rn, fpst, ahp);
192
+ /* write_fp_sreg is OK here because top half of tcg_rd is zero */
193
+ write_fp_sreg(s, a->rd, tcg_rd);
194
}
195
+ return true;
196
+}
197
+
198
+static bool trans_FCVT_s_sh(DisasContext *s, arg_rr *a)
199
+{
200
+ if (fp_access_check(s)) {
201
+ TCGv_i32 tcg_rn = read_fp_hreg(s, a->rn);
202
+ TCGv_i32 tcg_rd = tcg_temp_new_i32();
203
+ TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
204
+ TCGv_i32 tcg_ahp = get_ahp_flag();
205
+
206
+ gen_helper_vfp_fcvt_f16_to_f32(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
207
+ write_fp_sreg(s, a->rd, tcg_rd);
208
+ }
209
+ return true;
210
+}
211
+
212
+static bool trans_FCVT_s_dh(DisasContext *s, arg_rr *a)
213
+{
214
+ if (fp_access_check(s)) {
215
+ TCGv_i32 tcg_rn = read_fp_hreg(s, a->rn);
216
+ TCGv_i64 tcg_rd = tcg_temp_new_i64();
217
+ TCGv_ptr tcg_fpst = fpstatus_ptr(FPST_FPCR);
218
+ TCGv_i32 tcg_ahp = get_ahp_flag();
219
+
220
+ gen_helper_vfp_fcvt_f16_to_f64(tcg_rd, tcg_rn, tcg_fpst, tcg_ahp);
221
+ write_fp_dreg(s, a->rd, tcg_rd);
222
+ }
223
+ return true;
224
}
225
226
/* Handle floating point <=> fixed point conversions. Note that we can
227
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
228
break;
229
case 2: /* [15:12] == x100 */
230
/* Floating point data-processing (1 source) */
231
- disas_fp_1src(s, insn);
232
+ unallocated_encoding(s); /* in decodetree */
233
break;
234
case 3: /* [15:12] == 1000 */
235
unallocated_encoding(s);
236
--
237
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes SCVTF, UCVTF, FCVT{N,P,M,Z,A}{S,U}.
4
Remove disas_fp_fixed_conv as those were the last insns
5
decoded by that function.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20241211163036.2297116-33-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/tcg/a64.decode | 40 ++++
13
target/arm/tcg/translate-a64.c | 391 ++++++++++++++-------------------
14
2 files changed, 209 insertions(+), 222 deletions(-)
15
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ FMAXV_s 0110 1110 00 11000 01111 10 ..... ..... @rr_q1e2
21
FMINV_h 0.00 1110 10 11000 01111 10 ..... ..... @qrr_h
22
FMINV_s 0110 1110 10 11000 01111 10 ..... ..... @rr_q1e2
23
24
+# Conversion between floating-point and fixed-point (general register)
25
+
26
+&fcvt rd rn esz sf shift
27
+%fcvt_shift32 10:5 !function=rsub_32
28
+%fcvt_shift64 10:6 !function=rsub_64
29
+
30
+@fcvt32 0 ....... .. ...... 1..... rn:5 rd:5 \
31
+ &fcvt sf=0 esz=%esz_hsd shift=%fcvt_shift32
32
+@fcvt64 1 ....... .. ...... ...... rn:5 rd:5 \
33
+ &fcvt sf=1 esz=%esz_hsd shift=%fcvt_shift64
34
+
35
+SCVTF_g . 0011110 .. 000010 ...... ..... ..... @fcvt32
36
+SCVTF_g . 0011110 .. 000010 ...... ..... ..... @fcvt64
37
+UCVTF_g . 0011110 .. 000011 ...... ..... ..... @fcvt32
38
+UCVTF_g . 0011110 .. 000011 ...... ..... ..... @fcvt64
39
+
40
+FCVTZS_g . 0011110 .. 011000 ...... ..... ..... @fcvt32
41
+FCVTZS_g . 0011110 .. 011000 ...... ..... ..... @fcvt64
42
+FCVTZU_g . 0011110 .. 011001 ...... ..... ..... @fcvt32
43
+FCVTZU_g . 0011110 .. 011001 ...... ..... ..... @fcvt64
44
+
45
+# Conversion between floating-point and integer (general register)
46
+
47
+@icvt sf:1 ....... .. ...... ...... rn:5 rd:5 \
48
+ &fcvt esz=%esz_hsd shift=0
49
+
50
+SCVTF_g . 0011110 .. 100010 000000 ..... ..... @icvt
51
+UCVTF_g . 0011110 .. 100011 000000 ..... ..... @icvt
52
+
53
+FCVTNS_g . 0011110 .. 100000 000000 ..... ..... @icvt
54
+FCVTNU_g . 0011110 .. 100001 000000 ..... ..... @icvt
55
+FCVTPS_g . 0011110 .. 101000 000000 ..... ..... @icvt
56
+FCVTPU_g . 0011110 .. 101001 000000 ..... ..... @icvt
57
+FCVTMS_g . 0011110 .. 110000 000000 ..... ..... @icvt
58
+FCVTMU_g . 0011110 .. 110001 000000 ..... ..... @icvt
59
+FCVTZS_g . 0011110 .. 111000 000000 ..... ..... @icvt
60
+FCVTZU_g . 0011110 .. 111001 000000 ..... ..... @icvt
61
+FCVTAS_g . 0011110 .. 100100 000000 ..... ..... @icvt
62
+FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
63
+
64
# Floating-point data processing (1 source)
65
66
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
67
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/arm/tcg/translate-a64.c
70
+++ b/target/arm/tcg/translate-a64.c
71
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_s_dh(DisasContext *s, arg_rr *a)
72
return true;
73
}
74
75
-/* Handle floating point <=> fixed point conversions. Note that we can
76
- * also deal with fp <=> integer conversions as a special case (scale == 64)
77
- * OPTME: consider handling that special case specially or at least skipping
78
- * the call to scalbn in the helpers for zero shifts.
79
- */
80
-static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
81
- bool itof, int rmode, int scale, int sf, int type)
82
+static bool do_cvtf_scalar(DisasContext *s, MemOp esz, int rd, int shift,
83
+ TCGv_i64 tcg_int, bool is_signed)
84
{
85
- bool is_signed = !(opcode & 1);
86
TCGv_ptr tcg_fpstatus;
87
TCGv_i32 tcg_shift, tcg_single;
88
TCGv_i64 tcg_double;
89
90
- tcg_fpstatus = fpstatus_ptr(type == 3 ? FPST_FPCR_F16 : FPST_FPCR);
91
+ tcg_fpstatus = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
92
+ tcg_shift = tcg_constant_i32(shift);
93
94
- tcg_shift = tcg_constant_i32(64 - scale);
95
-
96
- if (itof) {
97
- TCGv_i64 tcg_int = cpu_reg(s, rn);
98
- if (!sf) {
99
- TCGv_i64 tcg_extend = tcg_temp_new_i64();
100
-
101
- if (is_signed) {
102
- tcg_gen_ext32s_i64(tcg_extend, tcg_int);
103
- } else {
104
- tcg_gen_ext32u_i64(tcg_extend, tcg_int);
105
- }
106
-
107
- tcg_int = tcg_extend;
108
+ switch (esz) {
109
+ case MO_64:
110
+ tcg_double = tcg_temp_new_i64();
111
+ if (is_signed) {
112
+ gen_helper_vfp_sqtod(tcg_double, tcg_int, tcg_shift, tcg_fpstatus);
113
+ } else {
114
+ gen_helper_vfp_uqtod(tcg_double, tcg_int, tcg_shift, tcg_fpstatus);
115
}
116
+ write_fp_dreg(s, rd, tcg_double);
117
+ break;
118
119
- switch (type) {
120
- case 1: /* float64 */
121
- tcg_double = tcg_temp_new_i64();
122
- if (is_signed) {
123
- gen_helper_vfp_sqtod(tcg_double, tcg_int,
124
- tcg_shift, tcg_fpstatus);
125
- } else {
126
- gen_helper_vfp_uqtod(tcg_double, tcg_int,
127
- tcg_shift, tcg_fpstatus);
128
- }
129
- write_fp_dreg(s, rd, tcg_double);
130
- break;
131
-
132
- case 0: /* float32 */
133
- tcg_single = tcg_temp_new_i32();
134
- if (is_signed) {
135
- gen_helper_vfp_sqtos(tcg_single, tcg_int,
136
- tcg_shift, tcg_fpstatus);
137
- } else {
138
- gen_helper_vfp_uqtos(tcg_single, tcg_int,
139
- tcg_shift, tcg_fpstatus);
140
- }
141
- write_fp_sreg(s, rd, tcg_single);
142
- break;
143
-
144
- case 3: /* float16 */
145
- tcg_single = tcg_temp_new_i32();
146
- if (is_signed) {
147
- gen_helper_vfp_sqtoh(tcg_single, tcg_int,
148
- tcg_shift, tcg_fpstatus);
149
- } else {
150
- gen_helper_vfp_uqtoh(tcg_single, tcg_int,
151
- tcg_shift, tcg_fpstatus);
152
- }
153
- write_fp_sreg(s, rd, tcg_single);
154
- break;
155
-
156
- default:
157
- g_assert_not_reached();
158
+ case MO_32:
159
+ tcg_single = tcg_temp_new_i32();
160
+ if (is_signed) {
161
+ gen_helper_vfp_sqtos(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
162
+ } else {
163
+ gen_helper_vfp_uqtos(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
164
}
165
- } else {
166
- TCGv_i64 tcg_int = cpu_reg(s, rd);
167
- TCGv_i32 tcg_rmode;
168
+ write_fp_sreg(s, rd, tcg_single);
169
+ break;
170
171
- if (extract32(opcode, 2, 1)) {
172
- /* There are too many rounding modes to all fit into rmode,
173
- * so FCVTA[US] is a special case.
174
- */
175
- rmode = FPROUNDING_TIEAWAY;
176
+ case MO_16:
177
+ tcg_single = tcg_temp_new_i32();
178
+ if (is_signed) {
179
+ gen_helper_vfp_sqtoh(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
180
+ } else {
181
+ gen_helper_vfp_uqtoh(tcg_single, tcg_int, tcg_shift, tcg_fpstatus);
182
}
183
+ write_fp_sreg(s, rd, tcg_single);
184
+ break;
185
186
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
187
-
188
- switch (type) {
189
- case 1: /* float64 */
190
- tcg_double = read_fp_dreg(s, rn);
191
- if (is_signed) {
192
- if (!sf) {
193
- gen_helper_vfp_tosld(tcg_int, tcg_double,
194
- tcg_shift, tcg_fpstatus);
195
- } else {
196
- gen_helper_vfp_tosqd(tcg_int, tcg_double,
197
- tcg_shift, tcg_fpstatus);
198
- }
199
- } else {
200
- if (!sf) {
201
- gen_helper_vfp_tould(tcg_int, tcg_double,
202
- tcg_shift, tcg_fpstatus);
203
- } else {
204
- gen_helper_vfp_touqd(tcg_int, tcg_double,
205
- tcg_shift, tcg_fpstatus);
206
- }
207
- }
208
- if (!sf) {
209
- tcg_gen_ext32u_i64(tcg_int, tcg_int);
210
- }
211
- break;
212
-
213
- case 0: /* float32 */
214
- tcg_single = read_fp_sreg(s, rn);
215
- if (sf) {
216
- if (is_signed) {
217
- gen_helper_vfp_tosqs(tcg_int, tcg_single,
218
- tcg_shift, tcg_fpstatus);
219
- } else {
220
- gen_helper_vfp_touqs(tcg_int, tcg_single,
221
- tcg_shift, tcg_fpstatus);
222
- }
223
- } else {
224
- TCGv_i32 tcg_dest = tcg_temp_new_i32();
225
- if (is_signed) {
226
- gen_helper_vfp_tosls(tcg_dest, tcg_single,
227
- tcg_shift, tcg_fpstatus);
228
- } else {
229
- gen_helper_vfp_touls(tcg_dest, tcg_single,
230
- tcg_shift, tcg_fpstatus);
231
- }
232
- tcg_gen_extu_i32_i64(tcg_int, tcg_dest);
233
- }
234
- break;
235
-
236
- case 3: /* float16 */
237
- tcg_single = read_fp_sreg(s, rn);
238
- if (sf) {
239
- if (is_signed) {
240
- gen_helper_vfp_tosqh(tcg_int, tcg_single,
241
- tcg_shift, tcg_fpstatus);
242
- } else {
243
- gen_helper_vfp_touqh(tcg_int, tcg_single,
244
- tcg_shift, tcg_fpstatus);
245
- }
246
- } else {
247
- TCGv_i32 tcg_dest = tcg_temp_new_i32();
248
- if (is_signed) {
249
- gen_helper_vfp_toslh(tcg_dest, tcg_single,
250
- tcg_shift, tcg_fpstatus);
251
- } else {
252
- gen_helper_vfp_toulh(tcg_dest, tcg_single,
253
- tcg_shift, tcg_fpstatus);
254
- }
255
- tcg_gen_extu_i32_i64(tcg_int, tcg_dest);
256
- }
257
- break;
258
-
259
- default:
260
- g_assert_not_reached();
261
- }
262
-
263
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
264
+ default:
265
+ g_assert_not_reached();
266
}
267
+ return true;
268
}
269
270
-/* Floating point <-> fixed point conversions
271
- * 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
272
- * +----+---+---+-----------+------+---+-------+--------+-------+------+------+
273
- * | sf | 0 | S | 1 1 1 1 0 | type | 0 | rmode | opcode | scale | Rn | Rd |
274
- * +----+---+---+-----------+------+---+-------+--------+-------+------+------+
275
- */
276
-static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
277
+static bool do_cvtf_g(DisasContext *s, arg_fcvt *a, bool is_signed)
278
{
279
- int rd = extract32(insn, 0, 5);
280
- int rn = extract32(insn, 5, 5);
281
- int scale = extract32(insn, 10, 6);
282
- int opcode = extract32(insn, 16, 3);
283
- int rmode = extract32(insn, 19, 2);
284
- int type = extract32(insn, 22, 2);
285
- bool sbit = extract32(insn, 29, 1);
286
- bool sf = extract32(insn, 31, 1);
287
- bool itof;
288
+ TCGv_i64 tcg_int;
289
+ int check = fp_access_check_scalar_hsd(s, a->esz);
290
291
- if (sbit || (!sf && scale < 32)) {
292
- unallocated_encoding(s);
293
- return;
294
+ if (check <= 0) {
295
+ return check == 0;
296
}
297
298
- switch (type) {
299
- case 0: /* float32 */
300
- case 1: /* float64 */
301
- break;
302
- case 3: /* float16 */
303
- if (dc_isar_feature(aa64_fp16, s)) {
304
- break;
305
+ if (a->sf) {
306
+ tcg_int = cpu_reg(s, a->rn);
307
+ } else {
308
+ tcg_int = read_cpu_reg(s, a->rn, true);
309
+ if (is_signed) {
310
+ tcg_gen_ext32s_i64(tcg_int, tcg_int);
311
+ } else {
312
+ tcg_gen_ext32u_i64(tcg_int, tcg_int);
313
}
314
- /* fallthru */
315
- default:
316
- unallocated_encoding(s);
317
- return;
318
}
319
-
320
- switch ((rmode << 3) | opcode) {
321
- case 0x2: /* SCVTF */
322
- case 0x3: /* UCVTF */
323
- itof = true;
324
- break;
325
- case 0x18: /* FCVTZS */
326
- case 0x19: /* FCVTZU */
327
- itof = false;
328
- break;
329
- default:
330
- unallocated_encoding(s);
331
- return;
332
- }
333
-
334
- if (!fp_access_check(s)) {
335
- return;
336
- }
337
-
338
- handle_fpfpcvt(s, rd, rn, opcode, itof, FPROUNDING_ZERO, scale, sf, type);
339
+ return do_cvtf_scalar(s, a->esz, a->rd, a->shift, tcg_int, is_signed);
340
}
341
342
+TRANS(SCVTF_g, do_cvtf_g, a, true)
343
+TRANS(UCVTF_g, do_cvtf_g, a, false)
344
+
345
+static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
346
+ TCGv_i64 tcg_out, int shift, int rn,
347
+ ARMFPRounding rmode)
348
+{
349
+ TCGv_ptr tcg_fpstatus;
350
+ TCGv_i32 tcg_shift, tcg_rmode, tcg_single;
351
+
352
+ tcg_fpstatus = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
353
+ tcg_shift = tcg_constant_i32(shift);
354
+ tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
355
+
356
+ switch (esz) {
357
+ case MO_64:
358
+ read_vec_element(s, tcg_out, rn, 0, MO_64);
359
+ switch (out) {
360
+ case MO_64 | MO_SIGN:
361
+ gen_helper_vfp_tosqd(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
362
+ break;
363
+ case MO_64:
364
+ gen_helper_vfp_touqd(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
365
+ break;
366
+ case MO_32 | MO_SIGN:
367
+ gen_helper_vfp_tosld(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
368
+ break;
369
+ case MO_32:
370
+ gen_helper_vfp_tould(tcg_out, tcg_out, tcg_shift, tcg_fpstatus);
371
+ break;
372
+ default:
373
+ g_assert_not_reached();
374
+ }
375
+ break;
376
+
377
+ case MO_32:
378
+ tcg_single = read_fp_sreg(s, rn);
379
+ switch (out) {
380
+ case MO_64 | MO_SIGN:
381
+ gen_helper_vfp_tosqs(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
382
+ break;
383
+ case MO_64:
384
+ gen_helper_vfp_touqs(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
385
+ break;
386
+ case MO_32 | MO_SIGN:
387
+ gen_helper_vfp_tosls(tcg_single, tcg_single,
388
+ tcg_shift, tcg_fpstatus);
389
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
390
+ break;
391
+ case MO_32:
392
+ gen_helper_vfp_touls(tcg_single, tcg_single,
393
+ tcg_shift, tcg_fpstatus);
394
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
395
+ break;
396
+ default:
397
+ g_assert_not_reached();
398
+ }
399
+ break;
400
+
401
+ case MO_16:
402
+ tcg_single = read_fp_hreg(s, rn);
403
+ switch (out) {
404
+ case MO_64 | MO_SIGN:
405
+ gen_helper_vfp_tosqh(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
406
+ break;
407
+ case MO_64:
408
+ gen_helper_vfp_touqh(tcg_out, tcg_single, tcg_shift, tcg_fpstatus);
409
+ break;
410
+ case MO_32 | MO_SIGN:
411
+ gen_helper_vfp_toslh(tcg_single, tcg_single,
412
+ tcg_shift, tcg_fpstatus);
413
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
414
+ break;
415
+ case MO_32:
416
+ gen_helper_vfp_toulh(tcg_single, tcg_single,
417
+ tcg_shift, tcg_fpstatus);
418
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
419
+ break;
420
+ default:
421
+ g_assert_not_reached();
422
+ }
423
+ break;
424
+
425
+ default:
426
+ g_assert_not_reached();
427
+ }
428
+
429
+ gen_restore_rmode(tcg_rmode, tcg_fpstatus);
430
+}
431
+
432
+static bool do_fcvt_g(DisasContext *s, arg_fcvt *a,
433
+ ARMFPRounding rmode, bool is_signed)
434
+{
435
+ TCGv_i64 tcg_int;
436
+ int check = fp_access_check_scalar_hsd(s, a->esz);
437
+
438
+ if (check <= 0) {
439
+ return check == 0;
440
+ }
441
+
442
+ tcg_int = cpu_reg(s, a->rd);
443
+ do_fcvt_scalar(s, (a->sf ? MO_64 : MO_32) | (is_signed ? MO_SIGN : 0),
444
+ a->esz, tcg_int, a->shift, a->rn, rmode);
445
+
446
+ if (!a->sf) {
447
+ tcg_gen_ext32u_i64(tcg_int, tcg_int);
448
+ }
449
+ return true;
450
+}
451
+
452
+TRANS(FCVTNS_g, do_fcvt_g, a, FPROUNDING_TIEEVEN, true)
453
+TRANS(FCVTNU_g, do_fcvt_g, a, FPROUNDING_TIEEVEN, false)
454
+TRANS(FCVTPS_g, do_fcvt_g, a, FPROUNDING_POSINF, true)
455
+TRANS(FCVTPU_g, do_fcvt_g, a, FPROUNDING_POSINF, false)
456
+TRANS(FCVTMS_g, do_fcvt_g, a, FPROUNDING_NEGINF, true)
457
+TRANS(FCVTMU_g, do_fcvt_g, a, FPROUNDING_NEGINF, false)
458
+TRANS(FCVTZS_g, do_fcvt_g, a, FPROUNDING_ZERO, true)
459
+TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
460
+TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
461
+TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
462
+
463
static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
464
{
465
/* FMOV: gpr to or from float, double, or top half of quad fp reg,
466
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
467
switch (opcode) {
468
case 2: /* SCVTF */
469
case 3: /* UCVTF */
470
- itof = true;
471
- /* fallthru */
472
case 4: /* FCVTAS */
473
case 5: /* FCVTAU */
474
- if (rmode != 0) {
475
- goto do_unallocated;
476
- }
477
- /* fallthru */
478
case 0: /* FCVT[NPMZ]S */
479
case 1: /* FCVT[NPMZ]U */
480
- switch (type) {
481
- case 0: /* float32 */
482
- case 1: /* float64 */
483
- break;
484
- case 3: /* float16 */
485
- if (!dc_isar_feature(aa64_fp16, s)) {
486
- goto do_unallocated;
487
- }
488
- break;
489
- default:
490
- goto do_unallocated;
491
- }
492
- if (!fp_access_check(s)) {
493
- return;
494
- }
495
- handle_fpfpcvt(s, rd, rn, opcode, itof, rmode, 64, sf, type);
496
- break;
497
+ goto do_unallocated;
498
499
default:
500
switch (sf << 7 | type << 5 | rmode << 3 | opcode) {
501
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
502
unallocated_encoding(s); /* in decodetree */
503
} else if (extract32(insn, 21, 1) == 0) {
504
/* Floating point to fixed point conversions */
505
- disas_fp_fixed_conv(s, insn);
506
+ unallocated_encoding(s); /* in decodetree */
507
} else {
508
switch (extract32(insn, 10, 2)) {
509
case 1: /* Floating point conditional compare */
510
--
511
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-34-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 2 ++
9
target/arm/tcg/translate-a64.c | 41 +++++++++++++++++-----------------
10
2 files changed, 22 insertions(+), 21 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FCVTZU_g . 0011110 .. 111001 000000 ..... ..... @icvt
17
FCVTAS_g . 0011110 .. 100100 000000 ..... ..... @icvt
18
FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
19
20
+FJCVTZS 0 0011110 01 111110 000000 ..... ..... @rr
21
+
22
# Floating-point data processing (1 source)
23
24
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
25
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/tcg/translate-a64.c
28
+++ b/target/arm/tcg/translate-a64.c
29
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
30
TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
31
TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
32
33
+static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
34
+{
35
+ if (!dc_isar_feature(aa64_jscvt, s)) {
36
+ return false;
37
+ }
38
+ if (fp_access_check(s)) {
39
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
40
+ TCGv_ptr fpstatus = fpstatus_ptr(FPST_FPCR);
41
+
42
+ gen_helper_fjcvtzs(t, t, fpstatus);
43
+
44
+ tcg_gen_ext32u_i64(cpu_reg(s, a->rd), t);
45
+ tcg_gen_extrh_i64_i32(cpu_ZF, t);
46
+ tcg_gen_movi_i32(cpu_CF, 0);
47
+ tcg_gen_movi_i32(cpu_NF, 0);
48
+ tcg_gen_movi_i32(cpu_VF, 0);
49
+ }
50
+ return true;
51
+}
52
+
53
static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
54
{
55
/* FMOV: gpr to or from float, double, or top half of quad fp reg,
56
@@ -XXX,XX +XXX,XX @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
57
}
58
}
59
60
-static void handle_fjcvtzs(DisasContext *s, int rd, int rn)
61
-{
62
- TCGv_i64 t = read_fp_dreg(s, rn);
63
- TCGv_ptr fpstatus = fpstatus_ptr(FPST_FPCR);
64
-
65
- gen_helper_fjcvtzs(t, t, fpstatus);
66
-
67
- tcg_gen_ext32u_i64(cpu_reg(s, rd), t);
68
- tcg_gen_extrh_i64_i32(cpu_ZF, t);
69
- tcg_gen_movi_i32(cpu_CF, 0);
70
- tcg_gen_movi_i32(cpu_NF, 0);
71
- tcg_gen_movi_i32(cpu_VF, 0);
72
-}
73
-
74
/* Floating point <-> integer conversions
75
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
76
* +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
77
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
78
break;
79
80
case 0b00111110: /* FJCVTZS */
81
- if (!dc_isar_feature(aa64_jscvt, s)) {
82
- goto do_unallocated;
83
- } else if (fp_access_check(s)) {
84
- handle_fjcvtzs(s, rd, rn);
85
- }
86
- break;
87
-
88
default:
89
do_unallocated:
90
unallocated_encoding(s);
91
--
92
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove disas_fp_int_conv and disas_data_proc_fp as these
4
were the last insns decoded by those functions.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-35-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/a64.decode | 14 ++
12
target/arm/tcg/translate-a64.c | 232 ++++++++++-----------------------
13
2 files changed, 86 insertions(+), 160 deletions(-)
14
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FCVTAU_g . 0011110 .. 100101 000000 ..... ..... @icvt
20
21
FJCVTZS 0 0011110 01 111110 000000 ..... ..... @rr
22
23
+FMOV_ws 0 0011110 00 100110 000000 ..... ..... @rr
24
+FMOV_sw 0 0011110 00 100111 000000 ..... ..... @rr
25
+
26
+FMOV_xd 1 0011110 01 100110 000000 ..... ..... @rr
27
+FMOV_dx 1 0011110 01 100111 000000 ..... ..... @rr
28
+
29
+# Move to/from upper half of 128-bit
30
+FMOV_xu 1 0011110 10 101110 000000 ..... ..... @rr
31
+FMOV_ux 1 0011110 10 101111 000000 ..... ..... @rr
32
+
33
+# Half-precision allows both sf=0 and sf=1 with identical results
34
+FMOV_xh - 0011110 11 100110 000000 ..... ..... @rr
35
+FMOV_hx - 0011110 11 100111 000000 ..... ..... @rr
36
+
37
# Floating-point data processing (1 source)
38
39
FMOV_s 00011110 .. 1 000000 10000 ..... ..... @rr_hsd
40
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/tcg/translate-a64.c
43
+++ b/target/arm/tcg/translate-a64.c
44
@@ -XXX,XX +XXX,XX @@ static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
45
return true;
46
}
47
48
-static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
49
+static bool trans_FMOV_hx(DisasContext *s, arg_rr *a)
50
{
51
- /* FMOV: gpr to or from float, double, or top half of quad fp reg,
52
- * without conversion.
53
- */
54
-
55
- if (itof) {
56
- TCGv_i64 tcg_rn = cpu_reg(s, rn);
57
- TCGv_i64 tmp;
58
-
59
- switch (type) {
60
- case 0:
61
- /* 32 bit */
62
- tmp = tcg_temp_new_i64();
63
- tcg_gen_ext32u_i64(tmp, tcg_rn);
64
- write_fp_dreg(s, rd, tmp);
65
- break;
66
- case 1:
67
- /* 64 bit */
68
- write_fp_dreg(s, rd, tcg_rn);
69
- break;
70
- case 2:
71
- /* 64 bit to top half. */
72
- tcg_gen_st_i64(tcg_rn, tcg_env, fp_reg_hi_offset(s, rd));
73
- clear_vec_high(s, true, rd);
74
- break;
75
- case 3:
76
- /* 16 bit */
77
- tmp = tcg_temp_new_i64();
78
- tcg_gen_ext16u_i64(tmp, tcg_rn);
79
- write_fp_dreg(s, rd, tmp);
80
- break;
81
- default:
82
- g_assert_not_reached();
83
- }
84
- } else {
85
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
86
-
87
- switch (type) {
88
- case 0:
89
- /* 32 bit */
90
- tcg_gen_ld32u_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_32));
91
- break;
92
- case 1:
93
- /* 64 bit */
94
- tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_64));
95
- break;
96
- case 2:
97
- /* 64 bits from top half */
98
- tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_hi_offset(s, rn));
99
- break;
100
- case 3:
101
- /* 16 bit */
102
- tcg_gen_ld16u_i64(tcg_rd, tcg_env, fp_reg_offset(s, rn, MO_16));
103
- break;
104
- default:
105
- g_assert_not_reached();
106
- }
107
+ if (!dc_isar_feature(aa64_fp16, s)) {
108
+ return false;
109
}
110
+ if (fp_access_check(s)) {
111
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
112
+ TCGv_i64 tmp = tcg_temp_new_i64();
113
+ tcg_gen_ext16u_i64(tmp, tcg_rn);
114
+ write_fp_dreg(s, a->rd, tmp);
115
+ }
116
+ return true;
117
}
118
119
-/* Floating point <-> integer conversions
120
- * 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
121
- * +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
122
- * | sf | 0 | S | 1 1 1 1 0 | type | 1 | rmode | opc | 0 0 0 0 0 0 | Rn | Rd |
123
- * +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
124
- */
125
-static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
126
+static bool trans_FMOV_sw(DisasContext *s, arg_rr *a)
127
{
128
- int rd = extract32(insn, 0, 5);
129
- int rn = extract32(insn, 5, 5);
130
- int opcode = extract32(insn, 16, 3);
131
- int rmode = extract32(insn, 19, 2);
132
- int type = extract32(insn, 22, 2);
133
- bool sbit = extract32(insn, 29, 1);
134
- bool sf = extract32(insn, 31, 1);
135
- bool itof = false;
136
-
137
- if (sbit) {
138
- goto do_unallocated;
139
- }
140
-
141
- switch (opcode) {
142
- case 2: /* SCVTF */
143
- case 3: /* UCVTF */
144
- case 4: /* FCVTAS */
145
- case 5: /* FCVTAU */
146
- case 0: /* FCVT[NPMZ]S */
147
- case 1: /* FCVT[NPMZ]U */
148
- goto do_unallocated;
149
-
150
- default:
151
- switch (sf << 7 | type << 5 | rmode << 3 | opcode) {
152
- case 0b01100110: /* FMOV half <-> 32-bit int */
153
- case 0b01100111:
154
- case 0b11100110: /* FMOV half <-> 64-bit int */
155
- case 0b11100111:
156
- if (!dc_isar_feature(aa64_fp16, s)) {
157
- goto do_unallocated;
158
- }
159
- /* fallthru */
160
- case 0b00000110: /* FMOV 32-bit */
161
- case 0b00000111:
162
- case 0b10100110: /* FMOV 64-bit */
163
- case 0b10100111:
164
- case 0b11001110: /* FMOV top half of 128-bit */
165
- case 0b11001111:
166
- if (!fp_access_check(s)) {
167
- return;
168
- }
169
- itof = opcode & 1;
170
- handle_fmov(s, rd, rn, type, itof);
171
- break;
172
-
173
- case 0b00111110: /* FJCVTZS */
174
- default:
175
- do_unallocated:
176
- unallocated_encoding(s);
177
- return;
178
- }
179
- break;
180
+ if (fp_access_check(s)) {
181
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
182
+ TCGv_i64 tmp = tcg_temp_new_i64();
183
+ tcg_gen_ext32u_i64(tmp, tcg_rn);
184
+ write_fp_dreg(s, a->rd, tmp);
185
}
186
+ return true;
187
}
188
189
-/* FP-specific subcases of table C3-6 (SIMD and FP data processing)
190
- * 31 30 29 28 25 24 0
191
- * +---+---+---+---------+-----------------------------+
192
- * | | 0 | | 1 1 1 1 | |
193
- * +---+---+---+---------+-----------------------------+
194
- */
195
-static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
196
+static bool trans_FMOV_dx(DisasContext *s, arg_rr *a)
197
{
198
- if (extract32(insn, 24, 1)) {
199
- unallocated_encoding(s); /* in decodetree */
200
- } else if (extract32(insn, 21, 1) == 0) {
201
- /* Floating point to fixed point conversions */
202
- unallocated_encoding(s); /* in decodetree */
203
- } else {
204
- switch (extract32(insn, 10, 2)) {
205
- case 1: /* Floating point conditional compare */
206
- case 2: /* Floating point data-processing (2 source) */
207
- case 3: /* Floating point conditional select */
208
- unallocated_encoding(s); /* in decodetree */
209
- break;
210
- case 0:
211
- switch (ctz32(extract32(insn, 12, 4))) {
212
- case 0: /* [15:12] == xxx1 */
213
- /* Floating point immediate */
214
- unallocated_encoding(s); /* in decodetree */
215
- break;
216
- case 1: /* [15:12] == xx10 */
217
- /* Floating point compare */
218
- unallocated_encoding(s); /* in decodetree */
219
- break;
220
- case 2: /* [15:12] == x100 */
221
- /* Floating point data-processing (1 source) */
222
- unallocated_encoding(s); /* in decodetree */
223
- break;
224
- case 3: /* [15:12] == 1000 */
225
- unallocated_encoding(s);
226
- break;
227
- default: /* [15:12] == 0000 */
228
- /* Floating point <-> integer conversions */
229
- disas_fp_int_conv(s, insn);
230
- break;
231
- }
232
- break;
233
- }
234
+ if (fp_access_check(s)) {
235
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
236
+ write_fp_dreg(s, a->rd, tcg_rn);
237
}
238
+ return true;
239
+}
240
+
241
+static bool trans_FMOV_ux(DisasContext *s, arg_rr *a)
242
+{
243
+ if (fp_access_check(s)) {
244
+ TCGv_i64 tcg_rn = cpu_reg(s, a->rn);
245
+ tcg_gen_st_i64(tcg_rn, tcg_env, fp_reg_hi_offset(s, a->rd));
246
+ clear_vec_high(s, true, a->rd);
247
+ }
248
+ return true;
249
+}
250
+
251
+static bool trans_FMOV_xh(DisasContext *s, arg_rr *a)
252
+{
253
+ if (!dc_isar_feature(aa64_fp16, s)) {
254
+ return false;
255
+ }
256
+ if (fp_access_check(s)) {
257
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
258
+ tcg_gen_ld16u_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_16));
259
+ }
260
+ return true;
261
+}
262
+
263
+static bool trans_FMOV_ws(DisasContext *s, arg_rr *a)
264
+{
265
+ if (fp_access_check(s)) {
266
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
267
+ tcg_gen_ld32u_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_32));
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_FMOV_xd(DisasContext *s, arg_rr *a)
273
+{
274
+ if (fp_access_check(s)) {
275
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
276
+ tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_offset(s, a->rn, MO_64));
277
+ }
278
+ return true;
279
+}
280
+
281
+static bool trans_FMOV_xu(DisasContext *s, arg_rr *a)
282
+{
283
+ if (fp_access_check(s)) {
284
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
285
+ tcg_gen_ld_i64(tcg_rd, tcg_env, fp_reg_hi_offset(s, a->rn));
286
+ }
287
+ return true;
288
}
289
290
/* Common vector code for handling integer to FP conversion */
291
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd(DisasContext *s, uint32_t insn)
292
static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
293
{
294
if (extract32(insn, 28, 1) == 1 && extract32(insn, 30, 1) == 0) {
295
- disas_data_proc_fp(s, insn);
296
+ unallocated_encoding(s); /* in decodetree */
297
} else {
298
/* SIMD, including crypto */
299
disas_data_proc_simd(s, insn);
300
--
301
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-36-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 11 +++
9
target/arm/tcg/translate-a64.c | 123 +++++++++++++++++++++------------
10
2 files changed, 89 insertions(+), 45 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
18
@rr_s ........ ... ..... ...... rn:5 rd:5 &rr_e esz=2
19
@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
20
+@rr_e ........ esz:2 . ..... ...... rn:5 rd:5 &rr_e
21
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
22
@rr_hsd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_hsd
23
24
@@ -XXX,XX +XXX,XX @@ UQRSHRN_si 0111 11110 .... ... 10011 1 ..... ..... @shri_s
25
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_b
26
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_h
27
SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_s
28
+
29
+# Advanced SIMD scalar two-register miscellaneous
30
+
31
+SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
32
+SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
33
+
34
+# Advanced SIMD two-register miscellaneous
35
+
36
+SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
37
+SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static bool trans_FMOV_xu(DisasContext *s, arg_rr *a)
43
return true;
44
}
45
46
+typedef struct ENVScalar1 {
47
+ NeonGenOneOpEnvFn *gen_bhs[3];
48
+ NeonGenOne64OpEnvFn *gen_d;
49
+} ENVScalar1;
50
+
51
+static bool do_env_scalar1(DisasContext *s, arg_rr_e *a, const ENVScalar1 *f)
52
+{
53
+ if (!fp_access_check(s)) {
54
+ return true;
55
+ }
56
+ if (a->esz == MO_64) {
57
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
58
+ f->gen_d(t, tcg_env, t);
59
+ write_fp_dreg(s, a->rd, t);
60
+ } else {
61
+ TCGv_i32 t = tcg_temp_new_i32();
62
+
63
+ read_vec_element_i32(s, t, a->rn, 0, a->esz);
64
+ f->gen_bhs[a->esz](t, tcg_env, t);
65
+ write_fp_sreg(s, a->rd, t);
66
+ }
67
+ return true;
68
+}
69
+
70
+static bool do_env_vector1(DisasContext *s, arg_qrr_e *a, const ENVScalar1 *f)
71
+{
72
+ if (a->esz == MO_64 && !a->q) {
73
+ return false;
74
+ }
75
+ if (!fp_access_check(s)) {
76
+ return true;
77
+ }
78
+ if (a->esz == MO_64) {
79
+ TCGv_i64 t = tcg_temp_new_i64();
80
+
81
+ for (int i = 0; i < 2; ++i) {
82
+ read_vec_element(s, t, a->rn, i, MO_64);
83
+ f->gen_d(t, tcg_env, t);
84
+ write_vec_element(s, t, a->rd, i, MO_64);
85
+ }
86
+ } else {
87
+ TCGv_i32 t = tcg_temp_new_i32();
88
+ int n = (a->q ? 16 : 8) >> a->esz;
89
+
90
+ for (int i = 0; i < n; ++i) {
91
+ read_vec_element_i32(s, t, a->rn, i, a->esz);
92
+ f->gen_bhs[a->esz](t, tcg_env, t);
93
+ write_vec_element_i32(s, t, a->rd, i, a->esz);
94
+ }
95
+ }
96
+ clear_vec_high(s, a->q, a->rd);
97
+ return true;
98
+}
99
+
100
+static const ENVScalar1 f_scalar_sqabs = {
101
+ { gen_helper_neon_qabs_s8,
102
+ gen_helper_neon_qabs_s16,
103
+ gen_helper_neon_qabs_s32 },
104
+ gen_helper_neon_qabs_s64,
105
+};
106
+TRANS(SQABS_s, do_env_scalar1, a, &f_scalar_sqabs)
107
+TRANS(SQABS_v, do_env_vector1, a, &f_scalar_sqabs)
108
+
109
+static const ENVScalar1 f_scalar_sqneg = {
110
+ { gen_helper_neon_qneg_s8,
111
+ gen_helper_neon_qneg_s16,
112
+ gen_helper_neon_qneg_s32 },
113
+ gen_helper_neon_qneg_s64,
114
+};
115
+TRANS(SQNEG_s, do_env_scalar1, a, &f_scalar_sqneg)
116
+TRANS(SQNEG_v, do_env_vector1, a, &f_scalar_sqneg)
117
+
118
/* Common vector code for handling integer to FP conversion */
119
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
120
int elements, int is_signed,
121
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
122
*/
123
tcg_gen_not_i64(tcg_rd, tcg_rn);
124
break;
125
- case 0x7: /* SQABS, SQNEG */
126
- if (u) {
127
- gen_helper_neon_qneg_s64(tcg_rd, tcg_env, tcg_rn);
128
- } else {
129
- gen_helper_neon_qabs_s64(tcg_rd, tcg_env, tcg_rn);
130
- }
131
- break;
132
case 0xa: /* CMLT */
133
cond = TCG_COND_LT;
134
do_cmop:
135
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
136
gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
137
break;
138
default:
139
+ case 0x7: /* SQABS, SQNEG */
140
g_assert_not_reached();
141
}
142
}
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
144
TCGv_ptr tcg_fpstatus;
145
146
switch (opcode) {
147
- case 0x7: /* SQABS / SQNEG */
148
- break;
149
case 0xa: /* CMLT */
150
if (u) {
151
unallocated_encoding(s);
152
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
153
break;
154
default:
155
case 0x3: /* USQADD / SUQADD */
156
+ case 0x7: /* SQABS / SQNEG */
157
unallocated_encoding(s);
158
return;
159
}
160
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
161
read_vec_element_i32(s, tcg_rn, rn, 0, size);
162
163
switch (opcode) {
164
- case 0x7: /* SQABS, SQNEG */
165
- {
166
- NeonGenOneOpEnvFn *genfn;
167
- static NeonGenOneOpEnvFn * const fns[3][2] = {
168
- { gen_helper_neon_qabs_s8, gen_helper_neon_qneg_s8 },
169
- { gen_helper_neon_qabs_s16, gen_helper_neon_qneg_s16 },
170
- { gen_helper_neon_qabs_s32, gen_helper_neon_qneg_s32 },
171
- };
172
- genfn = fns[size][u];
173
- genfn(tcg_rd, tcg_env, tcg_rn);
174
- break;
175
- }
176
case 0x1a: /* FCVTNS */
177
case 0x1b: /* FCVTMS */
178
case 0x1c: /* FCVTAS */
179
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
180
tcg_fpstatus);
181
break;
182
default:
183
+ case 0x7: /* SQABS, SQNEG */
184
g_assert_not_reached();
185
}
186
187
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
188
return;
189
}
190
break;
191
- case 0x7: /* SQABS, SQNEG */
192
- if (size == 3 && !is_q) {
193
- unallocated_encoding(s);
194
- return;
195
- }
196
- break;
197
case 0xc ... 0xf:
198
case 0x16 ... 0x1f:
199
{
200
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
201
}
202
default:
203
case 0x3: /* SUQADD, USQADD */
204
+ case 0x7: /* SQABS, SQNEG */
205
unallocated_encoding(s);
206
return;
207
}
208
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
209
tcg_gen_clrsb_i32(tcg_res, tcg_op);
210
}
211
break;
212
- case 0x7: /* SQABS, SQNEG */
213
- if (u) {
214
- gen_helper_neon_qneg_s32(tcg_res, tcg_env, tcg_op);
215
- } else {
216
- gen_helper_neon_qabs_s32(tcg_res, tcg_env, tcg_op);
217
- }
218
- break;
219
case 0x2f: /* FABS */
220
gen_vfp_abss(tcg_res, tcg_op);
221
break;
222
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
223
gen_helper_frint64_s(tcg_res, tcg_op, tcg_fpstatus);
224
break;
225
default:
226
+ case 0x7: /* SQABS, SQNEG */
227
g_assert_not_reached();
228
}
229
} else {
230
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
231
gen_helper_neon_cnt_u8(tcg_res, tcg_op);
232
}
233
break;
234
- case 0x7: /* SQABS, SQNEG */
235
- {
236
- NeonGenOneOpEnvFn *genfn;
237
- static NeonGenOneOpEnvFn * const fns[2][2] = {
238
- { gen_helper_neon_qabs_s8, gen_helper_neon_qneg_s8 },
239
- { gen_helper_neon_qabs_s16, gen_helper_neon_qneg_s16 },
240
- };
241
- genfn = fns[size][u];
242
- genfn(tcg_res, tcg_env, tcg_op);
243
- break;
244
- }
245
case 0x4: /* CLS, CLZ */
246
if (u) {
247
if (size == 0) {
248
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
249
}
250
break;
251
default:
252
+ case 0x7: /* SQABS, SQNEG */
253
g_assert_not_reached();
254
}
255
}
256
--
257
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-37-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 4 +++
9
target/arm/tcg/translate-a64.c | 46 +++++++++++++++++++++++-----------
10
2 files changed, 35 insertions(+), 15 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQRSHRUN_si 0111 11110 .... ... 10001 1 ..... ..... @shri_s
17
18
SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
19
SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
20
+ABS_s 0101 1110 111 00000 10111 0 ..... ..... @rr
21
+NEG_s 0111 1110 111 00000 10111 0 ..... ..... @rr
22
23
# Advanced SIMD two-register miscellaneous
24
25
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
26
SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
27
+ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
28
+NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static const ENVScalar1 f_scalar_sqneg = {
34
TRANS(SQNEG_s, do_env_scalar1, a, &f_scalar_sqneg)
35
TRANS(SQNEG_v, do_env_vector1, a, &f_scalar_sqneg)
36
37
+static bool do_scalar1_d(DisasContext *s, arg_rr *a, ArithOneOp *f)
38
+{
39
+ if (fp_access_check(s)) {
40
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
41
+ f(t, t);
42
+ write_fp_dreg(s, a->rd, t);
43
+ }
44
+ return true;
45
+}
46
+
47
+TRANS(ABS_s, do_scalar1_d, a, tcg_gen_abs_i64)
48
+TRANS(NEG_s, do_scalar1_d, a, tcg_gen_neg_i64)
49
+
50
+static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
51
+{
52
+ if (!a->q && a->esz == MO_64) {
53
+ return false;
54
+ }
55
+ if (fp_access_check(s)) {
56
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
57
+ }
58
+ return true;
59
+}
60
+
61
+TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
62
+TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
63
+
64
/* Common vector code for handling integer to FP conversion */
65
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
66
int elements, int is_signed,
67
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
68
case 0x9: /* CMEQ, CMLE */
69
cond = u ? TCG_COND_LE : TCG_COND_EQ;
70
goto do_cmop;
71
- case 0xb: /* ABS, NEG */
72
- if (u) {
73
- tcg_gen_neg_i64(tcg_rd, tcg_rn);
74
- } else {
75
- tcg_gen_abs_i64(tcg_rd, tcg_rn);
76
- }
77
- break;
78
case 0x2f: /* FABS */
79
gen_vfp_absd(tcg_rd, tcg_rn);
80
break;
81
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
82
break;
83
default:
84
case 0x7: /* SQABS, SQNEG */
85
+ case 0xb: /* ABS, NEG */
86
g_assert_not_reached();
87
}
88
}
89
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
90
/* fall through */
91
case 0x8: /* CMGT, CMGE */
92
case 0x9: /* CMEQ, CMLE */
93
- case 0xb: /* ABS, NEG */
94
if (size != 3) {
95
unallocated_encoding(s);
96
return;
97
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
98
default:
99
case 0x3: /* USQADD / SUQADD */
100
case 0x7: /* SQABS / SQNEG */
101
+ case 0xb: /* ABS, NEG */
102
unallocated_encoding(s);
103
return;
104
}
105
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
106
/* fall through */
107
case 0x8: /* CMGT, CMGE */
108
case 0x9: /* CMEQ, CMLE */
109
- case 0xb: /* ABS, NEG */
110
if (size == 3 && !is_q) {
111
unallocated_encoding(s);
112
return;
113
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
114
default:
115
case 0x3: /* SUQADD, USQADD */
116
case 0x7: /* SQABS, SQNEG */
117
+ case 0xb: /* ABS, NEG */
118
unallocated_encoding(s);
119
return;
120
}
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
122
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
123
return;
124
case 0xb:
125
- if (u) { /* ABS, NEG */
126
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_neg, size);
127
- } else {
128
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_abs, size);
129
- }
130
- return;
131
+ g_assert_not_reached();
132
}
133
134
if (size == 3) {
135
--
136
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Add gvec interfaces for CLS and CLZ operations.
4
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-38-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/translate.h | 5 +++++
11
target/arm/tcg/gengvec.c | 35 +++++++++++++++++++++++++++++++++
12
target/arm/tcg/translate-a64.c | 29 +++++++--------------------
13
target/arm/tcg/translate-neon.c | 29 ++-------------------------
14
4 files changed, 49 insertions(+), 49 deletions(-)
15
16
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/translate.h
19
+++ b/target/arm/tcg/translate.h
20
@@ -XXX,XX +XXX,XX @@ void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
21
void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
22
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
23
24
+void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
25
+ uint32_t opr_sz, uint32_t max_sz);
26
+void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
27
+ uint32_t opr_sz, uint32_t max_sz);
28
+
29
/*
30
* Forward to the isar_feature_* tests given a DisasContext pointer.
31
*/
32
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/tcg/gengvec.c
35
+++ b/target/arm/tcg/gengvec.c
36
@@ -XXX,XX +XXX,XX @@ void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
37
assert(vece <= MO_32);
38
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
39
}
40
+
41
+void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
42
+ uint32_t opr_sz, uint32_t max_sz)
43
+{
44
+ static const GVecGen2 g[] = {
45
+ { .fni4 = gen_helper_neon_cls_s8,
46
+ .vece = MO_8 },
47
+ { .fni4 = gen_helper_neon_cls_s16,
48
+ .vece = MO_16 },
49
+ { .fni4 = tcg_gen_clrsb_i32,
50
+ .vece = MO_32 },
51
+ };
52
+ assert(vece <= MO_32);
53
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
54
+}
55
+
56
+static void gen_clz32_i32(TCGv_i32 d, TCGv_i32 n)
57
+{
58
+ tcg_gen_clzi_i32(d, n, 32);
59
+}
60
+
61
+void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
62
+ uint32_t opr_sz, uint32_t max_sz)
63
+{
64
+ static const GVecGen2 g[] = {
65
+ { .fni4 = gen_helper_neon_clz_u8,
66
+ .vece = MO_8 },
67
+ { .fni4 = gen_helper_neon_clz_u16,
68
+ .vece = MO_16 },
69
+ { .fni4 = gen_clz32_i32,
70
+ .vece = MO_32 },
71
+ };
72
+ assert(vece <= MO_32);
73
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
74
+}
75
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/tcg/translate-a64.c
78
+++ b/target/arm/tcg/translate-a64.c
79
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
80
}
81
82
switch (opcode) {
83
+ case 0x4: /* CLZ, CLS */
84
+ if (u) {
85
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clz, size);
86
+ } else {
87
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cls, size);
88
+ }
89
+ return;
90
case 0x5:
91
if (u && size == 0) { /* NOT */
92
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
93
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
94
if (size == 2) {
95
/* Special cases for 32 bit elements */
96
switch (opcode) {
97
- case 0x4: /* CLS */
98
- if (u) {
99
- tcg_gen_clzi_i32(tcg_res, tcg_op, 32);
100
- } else {
101
- tcg_gen_clrsb_i32(tcg_res, tcg_op);
102
- }
103
- break;
104
case 0x2f: /* FABS */
105
gen_vfp_abss(tcg_res, tcg_op);
106
break;
107
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
108
gen_helper_neon_cnt_u8(tcg_res, tcg_op);
109
}
110
break;
111
- case 0x4: /* CLS, CLZ */
112
- if (u) {
113
- if (size == 0) {
114
- gen_helper_neon_clz_u8(tcg_res, tcg_op);
115
- } else {
116
- gen_helper_neon_clz_u16(tcg_res, tcg_op);
117
- }
118
- } else {
119
- if (size == 0) {
120
- gen_helper_neon_cls_s8(tcg_res, tcg_op);
121
- } else {
122
- gen_helper_neon_cls_s16(tcg_res, tcg_op);
123
- }
124
- }
125
- break;
126
default:
127
case 0x7: /* SQABS, SQNEG */
128
g_assert_not_reached();
129
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/target/arm/tcg/translate-neon.c
132
+++ b/target/arm/tcg/translate-neon.c
133
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCGT0, gen_gvec_cgt0)
134
DO_2MISC_VEC(VCLE0, gen_gvec_cle0)
135
DO_2MISC_VEC(VCGE0, gen_gvec_cge0)
136
DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
137
+DO_2MISC_VEC(VCLS, gen_gvec_cls)
138
+DO_2MISC_VEC(VCLZ, gen_gvec_clz)
139
140
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
141
{
142
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV16(DisasContext *s, arg_2misc *a)
143
return do_2misc(s, a, gen_rev16);
144
}
145
146
-static bool trans_VCLS(DisasContext *s, arg_2misc *a)
147
-{
148
- static NeonGenOneOpFn * const fn[] = {
149
- gen_helper_neon_cls_s8,
150
- gen_helper_neon_cls_s16,
151
- gen_helper_neon_cls_s32,
152
- NULL,
153
- };
154
- return do_2misc(s, a, fn[a->size]);
155
-}
156
-
157
-static void do_VCLZ_32(TCGv_i32 rd, TCGv_i32 rm)
158
-{
159
- tcg_gen_clzi_i32(rd, rm, 32);
160
-}
161
-
162
-static bool trans_VCLZ(DisasContext *s, arg_2misc *a)
163
-{
164
- static NeonGenOneOpFn * const fn[] = {
165
- gen_helper_neon_clz_u8,
166
- gen_helper_neon_clz_u16,
167
- do_VCLZ_32,
168
- NULL,
169
- };
170
- return do_2misc(s, a, fn[a->size]);
171
-}
172
-
173
static bool trans_VCNT(DisasContext *s, arg_2misc *a)
174
{
175
if (a->size != 0) {
176
--
177
2.34.1
178
179
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-39-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 2 ++
9
target/arm/tcg/translate-a64.c | 37 ++++++++++++++++------------------
10
2 files changed, 19 insertions(+), 20 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
17
SQNEG_v 0.10 1110 ..1 00000 01111 0 ..... ..... @qrr_e
18
ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
19
NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
20
+CLS_v 0.00 1110 ..1 00000 01001 0 ..... ..... @qrr_e
21
+CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/tcg/translate-a64.c
25
+++ b/target/arm/tcg/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
27
TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
28
TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
29
30
+static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
31
+{
32
+ if (a->esz == MO_64) {
33
+ return false;
34
+ }
35
+ if (fp_access_check(s)) {
36
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
37
+ }
38
+ return true;
39
+}
40
+
41
+TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
42
+TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
43
+
44
/* Common vector code for handling integer to FP conversion */
45
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
46
int elements, int is_signed,
47
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
48
TCGCond cond;
49
50
switch (opcode) {
51
- case 0x4: /* CLS, CLZ */
52
- if (u) {
53
- tcg_gen_clzi_i64(tcg_rd, tcg_rn, 64);
54
- } else {
55
- tcg_gen_clrsb_i64(tcg_rd, tcg_rn);
56
- }
57
- break;
58
case 0x5: /* NOT */
59
/* This opcode is shared with CNT and RBIT but we have earlier
60
* enforced that size == 3 if and only if this is the NOT insn.
61
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
62
gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
63
break;
64
default:
65
+ case 0x4: /* CLS, CLZ */
66
case 0x7: /* SQABS, SQNEG */
67
case 0xb: /* ABS, NEG */
68
g_assert_not_reached();
69
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
70
71
handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
72
return;
73
- case 0x4: /* CLS, CLZ */
74
- if (size == 3) {
75
- unallocated_encoding(s);
76
- return;
77
- }
78
- break;
79
case 0x2: /* SADDLP, UADDLP */
80
case 0x6: /* SADALP, UADALP */
81
if (size == 3) {
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
83
}
84
default:
85
case 0x3: /* SUQADD, USQADD */
86
+ case 0x4: /* CLS, CLZ */
87
case 0x7: /* SQABS, SQNEG */
88
case 0xb: /* ABS, NEG */
89
unallocated_encoding(s);
90
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
91
}
92
93
switch (opcode) {
94
- case 0x4: /* CLZ, CLS */
95
- if (u) {
96
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clz, size);
97
- } else {
98
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cls, size);
99
- }
100
- return;
101
case 0x5:
102
if (u && size == 0) { /* NOT */
103
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
105
case 0xa: /* CMLT */
106
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
107
return;
108
+ case 0x4: /* CLZ, CLS */
109
case 0xb:
110
g_assert_not_reached();
111
}
112
--
113
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Add gvec interfaces for CNT and RBIT operations.
4
Use ctpop8 for CNT and revbit+bswap for RBIT.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-40-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 4 ++--
12
target/arm/tcg/translate.h | 4 ++++
13
target/arm/tcg/gengvec.c | 16 ++++++++++++++++
14
target/arm/tcg/neon_helper.c | 21 ---------------------
15
target/arm/tcg/translate-a64.c | 32 +++++++++-----------------------
16
target/arm/tcg/translate-neon.c | 16 ++++++++--------
17
target/arm/tcg/vec_helper.c | 24 ++++++++++++++++++++++++
18
7 files changed, 63 insertions(+), 54 deletions(-)
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_clz_u16, i32, i32)
25
DEF_HELPER_1(neon_cls_s8, i32, i32)
26
DEF_HELPER_1(neon_cls_s16, i32, i32)
27
DEF_HELPER_1(neon_cls_s32, i32, i32)
28
-DEF_HELPER_1(neon_cnt_u8, i32, i32)
29
-DEF_HELPER_FLAGS_1(neon_rbit_u8, TCG_CALL_NO_RWG_SE, i32, i32)
30
+DEF_HELPER_FLAGS_3(gvec_cnt_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(gvec_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
33
DEF_HELPER_3(neon_qdmulh_s16, i32, env, i32, i32)
34
DEF_HELPER_3(neon_qrdmulh_s16, i32, env, i32, i32)
35
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/tcg/translate.h
38
+++ b/target/arm/tcg/translate.h
39
@@ -XXX,XX +XXX,XX @@ void gen_gvec_cls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
uint32_t opr_sz, uint32_t max_sz);
41
void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
42
uint32_t opr_sz, uint32_t max_sz);
43
+void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
44
+ uint32_t opr_sz, uint32_t max_sz);
45
+void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
46
+ uint32_t opr_sz, uint32_t max_sz);
47
48
/*
49
* Forward to the isar_feature_* tests given a DisasContext pointer.
50
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/gengvec.c
53
+++ b/target/arm/tcg/gengvec.c
54
@@ -XXX,XX +XXX,XX @@ void gen_gvec_clz(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
assert(vece <= MO_32);
56
tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
57
}
58
+
59
+void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
60
+ uint32_t opr_sz, uint32_t max_sz)
61
+{
62
+ assert(vece == MO_8);
63
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
64
+ gen_helper_gvec_cnt_b);
65
+}
66
+
67
+void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
68
+ uint32_t opr_sz, uint32_t max_sz)
69
+{
70
+ assert(vece == MO_8);
71
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
72
+ gen_helper_gvec_rbit_b);
73
+}
74
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/tcg/neon_helper.c
77
+++ b/target/arm/tcg/neon_helper.c
78
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_cls_s32)(uint32_t x)
79
return count - 1;
80
}
81
82
-/* Bit count. */
83
-uint32_t HELPER(neon_cnt_u8)(uint32_t x)
84
-{
85
- x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
86
- x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
87
- x = (x & 0x0f0f0f0f) + ((x >> 4) & 0x0f0f0f0f);
88
- return x;
89
-}
90
-
91
-/* Reverse bits in each 8 bit word */
92
-uint32_t HELPER(neon_rbit_u8)(uint32_t x)
93
-{
94
- x = ((x & 0xf0f0f0f0) >> 4)
95
- | ((x & 0x0f0f0f0f) << 4);
96
- x = ((x & 0x88888888) >> 3)
97
- | ((x & 0x44444444) >> 1)
98
- | ((x & 0x22222222) << 1)
99
- | ((x & 0x11111111) << 3);
100
- return x;
101
-}
102
-
103
#define NEON_QDMULH16(dest, src1, src2, round) do { \
104
uint32_t tmp = (int32_t)(int16_t) src1 * (int16_t) src2; \
105
if ((tmp ^ (tmp << 1)) & SIGNBIT) { \
106
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/tcg/translate-a64.c
109
+++ b/target/arm/tcg/translate-a64.c
110
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
111
}
112
113
switch (opcode) {
114
- case 0x5:
115
- if (u && size == 0) { /* NOT */
116
+ case 0x5: /* CNT, NOT, RBIT */
117
+ if (!u) {
118
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cnt, 0);
119
+ } else if (size) {
120
+ gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_rbit, 0);
121
+ } else {
122
gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
123
- return;
124
}
125
- break;
126
+ return;
127
case 0x8: /* CMGT, CMGE */
128
if (u) {
129
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
130
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
131
} else {
132
int pass;
133
134
+ assert(size == 2);
135
for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
136
TCGv_i32 tcg_op = tcg_temp_new_i32();
137
TCGv_i32 tcg_res = tcg_temp_new_i32();
138
139
read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
140
141
- if (size == 2) {
142
+ {
143
/* Special cases for 32 bit elements */
144
switch (opcode) {
145
case 0x2f: /* FABS */
146
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
147
case 0x7: /* SQABS, SQNEG */
148
g_assert_not_reached();
149
}
150
- } else {
151
- /* Use helpers for 8 and 16 bit elements */
152
- switch (opcode) {
153
- case 0x5: /* CNT, RBIT */
154
- /* For these two insns size is part of the opcode specifier
155
- * (handled earlier); they always operate on byte elements.
156
- */
157
- if (u) {
158
- gen_helper_neon_rbit_u8(tcg_res, tcg_op);
159
- } else {
160
- gen_helper_neon_cnt_u8(tcg_res, tcg_op);
161
- }
162
- break;
163
- default:
164
- case 0x7: /* SQABS, SQNEG */
165
- g_assert_not_reached();
166
- }
167
}
168
-
169
write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
170
}
171
}
172
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/target/arm/tcg/translate-neon.c
175
+++ b/target/arm/tcg/translate-neon.c
176
@@ -XXX,XX +XXX,XX @@ static bool trans_VMVN(DisasContext *s, arg_2misc *a)
177
return do_2misc_vec(s, a, tcg_gen_gvec_not);
178
}
179
180
+static bool trans_VCNT(DisasContext *s, arg_2misc *a)
181
+{
182
+ if (a->size != 0) {
183
+ return false;
184
+ }
185
+ return do_2misc_vec(s, a, gen_gvec_cnt);
186
+}
187
+
188
#define WRAP_2M_3_OOL_FN(WRAPNAME, FUNC, DATA) \
189
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \
190
uint32_t rm_ofs, uint32_t oprsz, \
191
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV16(DisasContext *s, arg_2misc *a)
192
return do_2misc(s, a, gen_rev16);
193
}
194
195
-static bool trans_VCNT(DisasContext *s, arg_2misc *a)
196
-{
197
- if (a->size != 0) {
198
- return false;
199
- }
200
- return do_2misc(s, a, gen_helper_neon_cnt_u8);
201
-}
202
-
203
static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
204
uint32_t oprsz, uint32_t maxsz)
205
{
206
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/tcg/vec_helper.c
209
+++ b/target/arm/tcg/vec_helper.c
210
@@ -XXX,XX +XXX,XX @@ DO_CLAMP(gvec_uclamp_b, uint8_t)
211
DO_CLAMP(gvec_uclamp_h, uint16_t)
212
DO_CLAMP(gvec_uclamp_s, uint32_t)
213
DO_CLAMP(gvec_uclamp_d, uint64_t)
214
+
215
+/* Bit count in each 8-bit word. */
216
+void HELPER(gvec_cnt_b)(void *vd, void *vn, uint32_t desc)
217
+{
218
+ intptr_t i, opr_sz = simd_oprsz(desc);
219
+ uint8_t *d = vd, *n = vn;
220
+
221
+ for (i = 0; i < opr_sz; ++i) {
222
+ d[i] = ctpop8(n[i]);
223
+ }
224
+ clear_tail(d, opr_sz, simd_maxsz(desc));
225
+}
226
+
227
+/* Reverse bits in each 8 bit word */
228
+void HELPER(gvec_rbit_b)(void *vd, void *vn, uint32_t desc)
229
+{
230
+ intptr_t i, opr_sz = simd_oprsz(desc);
231
+ uint64_t *d = vd, *n = vn;
232
+
233
+ for (i = 0; i < opr_sz / 8; ++i) {
234
+ d[i] = revbit64(bswap64(n[i]));
235
+ }
236
+ clear_tail(d, opr_sz, simd_maxsz(desc));
237
+}
238
--
239
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-41-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 4 ++++
9
target/arm/tcg/translate-a64.c | 34 ++++++----------------------------
10
2 files changed, 10 insertions(+), 28 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
18
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
19
20
+@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
21
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
22
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
23
24
@@ -XXX,XX +XXX,XX @@ ABS_v 0.00 1110 ..1 00000 10111 0 ..... ..... @qrr_e
25
NEG_v 0.10 1110 ..1 00000 10111 0 ..... ..... @qrr_e
26
CLS_v 0.00 1110 ..1 00000 01001 0 ..... ..... @qrr_e
27
CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
28
+CNT_v 0.00 1110 001 00000 01011 0 ..... ..... @qrr_b
29
+NOT_v 0.10 1110 001 00000 01011 0 ..... ..... @qrr_b
30
+RBIT_v 0.10 1110 011 00000 01011 0 ..... ..... @qrr_b
31
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/translate-a64.c
34
+++ b/target/arm/tcg/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
36
37
TRANS(ABS_v, do_gvec_fn2, a, tcg_gen_gvec_abs)
38
TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
39
+TRANS(NOT_v, do_gvec_fn2, a, tcg_gen_gvec_not)
40
+TRANS(CNT_v, do_gvec_fn2, a, gen_gvec_cnt)
41
+TRANS(RBIT_v, do_gvec_fn2, a, gen_gvec_rbit)
42
43
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
44
{
45
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
46
TCGCond cond;
47
48
switch (opcode) {
49
- case 0x5: /* NOT */
50
- /* This opcode is shared with CNT and RBIT but we have earlier
51
- * enforced that size == 3 if and only if this is the NOT insn.
52
- */
53
- tcg_gen_not_i64(tcg_rd, tcg_rn);
54
- break;
55
case 0xa: /* CMLT */
56
cond = TCG_COND_LT;
57
do_cmop:
58
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
59
break;
60
default:
61
case 0x4: /* CLS, CLZ */
62
+ case 0x5: /* NOT */
63
case 0x7: /* SQABS, SQNEG */
64
case 0xb: /* ABS, NEG */
65
g_assert_not_reached();
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
67
case 0x1: /* REV16 */
68
handle_rev(s, opcode, u, is_q, size, rn, rd);
69
return;
70
- case 0x5: /* CNT, NOT, RBIT */
71
- if (u && size == 0) {
72
- /* NOT */
73
- break;
74
- } else if (u && size == 1) {
75
- /* RBIT */
76
- break;
77
- } else if (!u && size == 0) {
78
- /* CNT */
79
- break;
80
- }
81
- unallocated_encoding(s);
82
- return;
83
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
84
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
85
if (size == 3) {
86
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
87
default:
88
case 0x3: /* SUQADD, USQADD */
89
case 0x4: /* CLS, CLZ */
90
+ case 0x5: /* CNT, NOT, RBIT */
91
case 0x7: /* SQABS, SQNEG */
92
case 0xb: /* ABS, NEG */
93
unallocated_encoding(s);
94
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
95
}
96
97
switch (opcode) {
98
- case 0x5: /* CNT, NOT, RBIT */
99
- if (!u) {
100
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cnt, 0);
101
- } else if (size) {
102
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_rbit, 0);
103
- } else {
104
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_not, 0);
105
- }
106
- return;
107
case 0x8: /* CMGT, CMGE */
108
if (u) {
109
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
110
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
111
gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
112
return;
113
case 0x4: /* CLZ, CLS */
114
+ case 0x5: /* CNT, NOT, RBIT */
115
case 0xb:
116
g_assert_not_reached();
117
}
118
--
119
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-42-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/a64.decode | 10 ++++
9
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
10
2 files changed, 40 insertions(+), 64 deletions(-)
11
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ SQABS_s 0101 1110 ..1 00000 01111 0 ..... ..... @rr_e
17
SQNEG_s 0111 1110 ..1 00000 01111 0 ..... ..... @rr_e
18
ABS_s 0101 1110 111 00000 10111 0 ..... ..... @rr
19
NEG_s 0111 1110 111 00000 10111 0 ..... ..... @rr
20
+CMGT0_s 0101 1110 111 00000 10001 0 ..... ..... @rr
21
+CMGE0_s 0111 1110 111 00000 10001 0 ..... ..... @rr
22
+CMEQ0_s 0101 1110 111 00000 10011 0 ..... ..... @rr
23
+CMLE0_s 0111 1110 111 00000 10011 0 ..... ..... @rr
24
+CMLT0_s 0101 1110 111 00000 10101 0 ..... ..... @rr
25
26
# Advanced SIMD two-register miscellaneous
27
28
@@ -XXX,XX +XXX,XX @@ CLZ_v 0.10 1110 ..1 00000 01001 0 ..... ..... @qrr_e
29
CNT_v 0.00 1110 001 00000 01011 0 ..... ..... @qrr_b
30
NOT_v 0.10 1110 001 00000 01011 0 ..... ..... @qrr_b
31
RBIT_v 0.10 1110 011 00000 01011 0 ..... ..... @qrr_b
32
+CMGT0_v 0.00 1110 ..1 00000 10001 0 ..... ..... @qrr_e
33
+CMGE0_v 0.10 1110 ..1 00000 10001 0 ..... ..... @qrr_e
34
+CMEQ0_v 0.00 1110 ..1 00000 10011 0 ..... ..... @qrr_e
35
+CMLE0_v 0.10 1110 ..1 00000 10011 0 ..... ..... @qrr_e
36
+CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool do_scalar1_d(DisasContext *s, arg_rr *a, ArithOneOp *f)
42
TRANS(ABS_s, do_scalar1_d, a, tcg_gen_abs_i64)
43
TRANS(NEG_s, do_scalar1_d, a, tcg_gen_neg_i64)
44
45
+static bool do_cmop0_d(DisasContext *s, arg_rr *a, TCGCond cond)
46
+{
47
+ if (fp_access_check(s)) {
48
+ TCGv_i64 t = read_fp_dreg(s, a->rn);
49
+ tcg_gen_negsetcond_i64(cond, t, t, tcg_constant_i64(0));
50
+ write_fp_dreg(s, a->rd, t);
51
+ }
52
+ return true;
53
+}
54
+
55
+TRANS(CMGT0_s, do_cmop0_d, a, TCG_COND_GT)
56
+TRANS(CMGE0_s, do_cmop0_d, a, TCG_COND_GE)
57
+TRANS(CMLE0_s, do_cmop0_d, a, TCG_COND_LE)
58
+TRANS(CMLT0_s, do_cmop0_d, a, TCG_COND_LT)
59
+TRANS(CMEQ0_s, do_cmop0_d, a, TCG_COND_EQ)
60
+
61
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
62
{
63
if (!a->q && a->esz == MO_64) {
64
@@ -XXX,XX +XXX,XX @@ TRANS(NEG_v, do_gvec_fn2, a, tcg_gen_gvec_neg)
65
TRANS(NOT_v, do_gvec_fn2, a, tcg_gen_gvec_not)
66
TRANS(CNT_v, do_gvec_fn2, a, gen_gvec_cnt)
67
TRANS(RBIT_v, do_gvec_fn2, a, gen_gvec_rbit)
68
+TRANS(CMGT0_v, do_gvec_fn2, a, gen_gvec_cgt0)
69
+TRANS(CMGE0_v, do_gvec_fn2, a, gen_gvec_cge0)
70
+TRANS(CMLT0_v, do_gvec_fn2, a, gen_gvec_clt0)
71
+TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
72
+TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
73
74
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
75
{
76
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
77
* The caller only need provide tcg_rmode and tcg_fpstatus if the op
78
* requires them.
79
*/
80
- TCGCond cond;
81
-
82
switch (opcode) {
83
- case 0xa: /* CMLT */
84
- cond = TCG_COND_LT;
85
- do_cmop:
86
- /* 64 bit integer comparison against zero, result is test ? -1 : 0. */
87
- tcg_gen_negsetcond_i64(cond, tcg_rd, tcg_rn, tcg_constant_i64(0));
88
- break;
89
- case 0x8: /* CMGT, CMGE */
90
- cond = u ? TCG_COND_GE : TCG_COND_GT;
91
- goto do_cmop;
92
- case 0x9: /* CMEQ, CMLE */
93
- cond = u ? TCG_COND_LE : TCG_COND_EQ;
94
- goto do_cmop;
95
case 0x2f: /* FABS */
96
gen_vfp_absd(tcg_rd, tcg_rn);
97
break;
98
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
99
case 0x4: /* CLS, CLZ */
100
case 0x5: /* NOT */
101
case 0x7: /* SQABS, SQNEG */
102
+ case 0x8: /* CMGT, CMGE */
103
+ case 0x9: /* CMEQ, CMLE */
104
+ case 0xa: /* CMLT */
105
case 0xb: /* ABS, NEG */
106
g_assert_not_reached();
107
}
108
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
109
TCGv_ptr tcg_fpstatus;
110
111
switch (opcode) {
112
- case 0xa: /* CMLT */
113
- if (u) {
114
- unallocated_encoding(s);
115
- return;
116
- }
117
- /* fall through */
118
- case 0x8: /* CMGT, CMGE */
119
- case 0x9: /* CMEQ, CMLE */
120
- if (size != 3) {
121
- unallocated_encoding(s);
122
- return;
123
- }
124
- break;
125
case 0x12: /* SQXTUN */
126
if (!u) {
127
unallocated_encoding(s);
128
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
129
default:
130
case 0x3: /* USQADD / SUQADD */
131
case 0x7: /* SQABS / SQNEG */
132
+ case 0x8: /* CMGT, CMGE */
133
+ case 0x9: /* CMEQ, CMLE */
134
+ case 0xa: /* CMLT */
135
case 0xb: /* ABS, NEG */
136
unallocated_encoding(s);
137
return;
138
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
139
}
140
handle_shll(s, is_q, size, rn, rd);
141
return;
142
- case 0xa: /* CMLT */
143
- if (u == 1) {
144
- unallocated_encoding(s);
145
- return;
146
- }
147
- /* fall through */
148
- case 0x8: /* CMGT, CMGE */
149
- case 0x9: /* CMEQ, CMLE */
150
- if (size == 3 && !is_q) {
151
- unallocated_encoding(s);
152
- return;
153
- }
154
- break;
155
case 0xc ... 0xf:
156
case 0x16 ... 0x1f:
157
{
158
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
159
case 0x4: /* CLS, CLZ */
160
case 0x5: /* CNT, NOT, RBIT */
161
case 0x7: /* SQABS, SQNEG */
162
+ case 0x8: /* CMGT, CMGE */
163
+ case 0x9: /* CMEQ, CMLE */
164
+ case 0xa: /* CMLT */
165
case 0xb: /* ABS, NEG */
166
unallocated_encoding(s);
167
return;
168
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
169
tcg_rmode = NULL;
170
}
171
172
- switch (opcode) {
173
- case 0x8: /* CMGT, CMGE */
174
- if (u) {
175
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cge0, size);
176
- } else {
177
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cgt0, size);
178
- }
179
- return;
180
- case 0x9: /* CMEQ, CMLE */
181
- if (u) {
182
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_cle0, size);
183
- } else {
184
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_ceq0, size);
185
- }
186
- return;
187
- case 0xa: /* CMLT */
188
- gen_gvec_fn2(s, is_q, rd, rn, gen_gvec_clt0, size);
189
- return;
190
- case 0x4: /* CLZ, CLS */
191
- case 0x5: /* CNT, NOT, RBIT */
192
- case 0xb:
193
- g_assert_not_reached();
194
- }
195
-
196
if (size == 3) {
197
/* All 64-bit element operations can be shared with scalar 2misc */
198
int pass;
199
--
200
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-43-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/tcg/translate.h | 6 +++
9
target/arm/tcg/gengvec.c | 58 ++++++++++++++++++++++
10
target/arm/tcg/translate-neon.c | 88 +++++++--------------------------
11
3 files changed, 81 insertions(+), 71 deletions(-)
12
13
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/tcg/translate.h
16
+++ b/target/arm/tcg/translate.h
17
@@ -XXX,XX +XXX,XX @@ void gen_gvec_cnt(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
18
uint32_t opr_sz, uint32_t max_sz);
19
void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
20
uint32_t opr_sz, uint32_t max_sz);
21
+void gen_gvec_rev16(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
22
+ uint32_t opr_sz, uint32_t max_sz);
23
+void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
24
+ uint32_t opr_sz, uint32_t max_sz);
25
+void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
26
+ uint32_t opr_sz, uint32_t max_sz);
27
28
/*
29
* Forward to the isar_feature_* tests given a DisasContext pointer.
30
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/gengvec.c
33
+++ b/target/arm/tcg/gengvec.c
34
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rbit(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
35
tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
36
gen_helper_gvec_rbit_b);
37
}
38
+
39
+void gen_gvec_rev16(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
+ uint32_t opr_sz, uint32_t max_sz)
41
+{
42
+ assert(vece == MO_8);
43
+ tcg_gen_gvec_rotli(MO_16, rd_ofs, rn_ofs, 8, opr_sz, max_sz);
44
+}
45
+
46
+static void gen_bswap32_i64(TCGv_i64 d, TCGv_i64 n)
47
+{
48
+ tcg_gen_bswap64_i64(d, n);
49
+ tcg_gen_rotli_i64(d, d, 32);
50
+}
51
+
52
+void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
53
+ uint32_t opr_sz, uint32_t max_sz)
54
+{
55
+ static const GVecGen2 g = {
56
+ .fni8 = gen_bswap32_i64,
57
+ .fni4 = tcg_gen_bswap32_i32,
58
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
59
+ .vece = MO_32
60
+ };
61
+
62
+ switch (vece) {
63
+ case MO_16:
64
+ tcg_gen_gvec_rotli(MO_32, rd_ofs, rn_ofs, 16, opr_sz, max_sz);
65
+ break;
66
+ case MO_8:
67
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g);
68
+ break;
69
+ default:
70
+ g_assert_not_reached();
71
+ }
72
+}
73
+
74
+void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
75
+ uint32_t opr_sz, uint32_t max_sz)
76
+{
77
+ static const GVecGen2 g[] = {
78
+ { .fni8 = tcg_gen_bswap64_i64,
79
+ .vece = MO_64 },
80
+ { .fni8 = tcg_gen_hswap_i64,
81
+ .vece = MO_64 },
82
+ };
83
+
84
+ switch (vece) {
85
+ case MO_32:
86
+ tcg_gen_gvec_rotli(MO_64, rd_ofs, rn_ofs, 32, opr_sz, max_sz);
87
+ break;
88
+ case MO_8:
89
+ case MO_16:
90
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
91
+ break;
92
+ default:
93
+ g_assert_not_reached();
94
+ }
95
+}
96
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/tcg/translate-neon.c
99
+++ b/target/arm/tcg/translate-neon.c
100
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
101
return true;
102
}
103
104
-static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
105
-{
106
- int pass, half;
107
- TCGv_i32 tmp[2];
108
-
109
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
110
- return false;
111
- }
112
-
113
- /* UNDEF accesses to D16-D31 if they don't exist. */
114
- if (!dc_isar_feature(aa32_simd_r32, s) &&
115
- ((a->vd | a->vm) & 0x10)) {
116
- return false;
117
- }
118
-
119
- if ((a->vd | a->vm) & a->q) {
120
- return false;
121
- }
122
-
123
- if (a->size == 3) {
124
- return false;
125
- }
126
-
127
- if (!vfp_access_check(s)) {
128
- return true;
129
- }
130
-
131
- tmp[0] = tcg_temp_new_i32();
132
- tmp[1] = tcg_temp_new_i32();
133
-
134
- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
135
- for (half = 0; half < 2; half++) {
136
- read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
137
- switch (a->size) {
138
- case 0:
139
- tcg_gen_bswap32_i32(tmp[half], tmp[half]);
140
- break;
141
- case 1:
142
- gen_swap_half(tmp[half], tmp[half]);
143
- break;
144
- case 2:
145
- break;
146
- default:
147
- g_assert_not_reached();
148
- }
149
- }
150
- write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
151
- write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
152
- }
153
- return true;
154
-}
155
-
156
static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
157
NeonGenWidenFn *widenfn,
158
NeonGenTwo64OpFn *opfn,
159
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCGE0, gen_gvec_cge0)
160
DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
161
DO_2MISC_VEC(VCLS, gen_gvec_cls)
162
DO_2MISC_VEC(VCLZ, gen_gvec_clz)
163
+DO_2MISC_VEC(VREV64, gen_gvec_rev64)
164
165
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
166
{
167
@@ -XXX,XX +XXX,XX @@ static bool trans_VCNT(DisasContext *s, arg_2misc *a)
168
return do_2misc_vec(s, a, gen_gvec_cnt);
169
}
170
171
+static bool trans_VREV16(DisasContext *s, arg_2misc *a)
172
+{
173
+ if (a->size != 0) {
174
+ return false;
175
+ }
176
+ return do_2misc_vec(s, a, gen_gvec_rev16);
177
+}
178
+
179
+static bool trans_VREV32(DisasContext *s, arg_2misc *a)
180
+{
181
+ if (a->size != 0 && a->size != 1) {
182
+ return false;
183
+ }
184
+ return do_2misc_vec(s, a, gen_gvec_rev32);
185
+}
186
+
187
#define WRAP_2M_3_OOL_FN(WRAPNAME, FUNC, DATA) \
188
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \
189
uint32_t rm_ofs, uint32_t oprsz, \
190
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
191
return true;
192
}
193
194
-static bool trans_VREV32(DisasContext *s, arg_2misc *a)
195
-{
196
- static NeonGenOneOpFn * const fn[] = {
197
- tcg_gen_bswap32_i32,
198
- gen_swap_half,
199
- NULL,
200
- NULL,
201
- };
202
- return do_2misc(s, a, fn[a->size]);
203
-}
204
-
205
-static bool trans_VREV16(DisasContext *s, arg_2misc *a)
206
-{
207
- if (a->size != 0) {
208
- return false;
209
- }
210
- return do_2misc(s, a, gen_rev16);
211
-}
212
-
213
static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
214
uint32_t oprsz, uint32_t maxsz)
215
{
216
--
217
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes REV16, REV32, REV64.
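
As an aside, a scalar model of what the REV<n> family computes (illustration
only, not code from this patch): the element order is reversed within each
n-bit container, which is what the gen_gvec_rev* implementations map onto
via lane rotates and byte swaps.

/* Reverse esz-bit elements within each container-bit group of x.
 * Model only; assumes esz < container and both divide 64. */
static uint64_t rev_elements(uint64_t x, int container, int esz)
{
    uint64_t mask = (1ULL << esz) - 1;
    uint64_t out = 0;

    for (int c = 0; c < 64; c += container) {
        for (int e = 0; e < container; e += esz) {
            uint64_t elt = (x >> (c + e)) & mask;
            out |= elt << (c + container - esz - e);
        }
    }
    return out;
}

/* e.g. REV16 on bytes is rev_elements(x, 16, 8),
 *      REV32 on halfwords is rev_elements(x, 32, 16),
 *      REV64 on words is rev_elements(x, 64, 32). */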
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-44-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/a64.decode | 5 +++
11
target/arm/tcg/translate-a64.c | 79 +++-------------------------------
12
2 files changed, 10 insertions(+), 74 deletions(-)
13
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@
19
20
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
21
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
22
+@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
23
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
24
25
@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
26
@@ -XXX,XX +XXX,XX @@ CMGE0_v 0.10 1110 ..1 00000 10001 0 ..... ..... @qrr_e
27
CMEQ0_v 0.00 1110 ..1 00000 10011 0 ..... ..... @qrr_e
28
CMLE0_v 0.10 1110 ..1 00000 10011 0 ..... ..... @qrr_e
29
CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
30
+
31
+REV16_v 0.00 1110 001 00000 00011 0 ..... ..... @qrr_b
32
+REV32_v 0.10 1110 0.1 00000 00001 0 ..... ..... @qrr_bh
33
+REV64_v 0.00 1110 ..1 00000 00001 0 ..... ..... @qrr_e
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ TRANS(CMGE0_v, do_gvec_fn2, a, gen_gvec_cge0)
39
TRANS(CMLT0_v, do_gvec_fn2, a, gen_gvec_clt0)
40
TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
41
TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
42
+TRANS(REV16_v, do_gvec_fn2, a, gen_gvec_rev16)
43
+TRANS(REV32_v, do_gvec_fn2, a, gen_gvec_rev32)
44
45
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
46
{
47
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
48
49
TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
50
TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
51
+TRANS(REV64_v, do_gvec_fn2_bhs, a, gen_gvec_rev64)
52
53
/* Common vector code for handling integer to FP conversion */
54
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
55
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
56
}
57
}
58
59
-static void handle_rev(DisasContext *s, int opcode, bool u,
60
- bool is_q, int size, int rn, int rd)
61
-{
62
- int op = (opcode << 1) | u;
63
- int opsz = op + size;
64
- int grp_size = 3 - opsz;
65
- int dsize = is_q ? 128 : 64;
66
- int i;
67
-
68
- if (opsz >= 3) {
69
- unallocated_encoding(s);
70
- return;
71
- }
72
-
73
- if (!fp_access_check(s)) {
74
- return;
75
- }
76
-
77
- if (size == 0) {
78
- /* Special case bytes, use bswap op on each group of elements */
79
- int groups = dsize / (8 << grp_size);
80
-
81
- for (i = 0; i < groups; i++) {
82
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
83
-
84
- read_vec_element(s, tcg_tmp, rn, i, grp_size);
85
- switch (grp_size) {
86
- case MO_16:
87
- tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp, TCG_BSWAP_IZ);
88
- break;
89
- case MO_32:
90
- tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp, TCG_BSWAP_IZ);
91
- break;
92
- case MO_64:
93
- tcg_gen_bswap64_i64(tcg_tmp, tcg_tmp);
94
- break;
95
- default:
96
- g_assert_not_reached();
97
- }
98
- write_vec_element(s, tcg_tmp, rd, i, grp_size);
99
- }
100
- clear_vec_high(s, is_q, rd);
101
- } else {
102
- int revmask = (1 << grp_size) - 1;
103
- int esize = 8 << size;
104
- int elements = dsize / esize;
105
- TCGv_i64 tcg_rn = tcg_temp_new_i64();
106
- TCGv_i64 tcg_rd[2];
107
-
108
- for (i = 0; i < 2; i++) {
109
- tcg_rd[i] = tcg_temp_new_i64();
110
- tcg_gen_movi_i64(tcg_rd[i], 0);
111
- }
112
-
113
- for (i = 0; i < elements; i++) {
114
- int e_rev = (i & 0xf) ^ revmask;
115
- int w = (e_rev * esize) / 64;
116
- int o = (e_rev * esize) % 64;
117
-
118
- read_vec_element(s, tcg_rn, rn, i, size);
119
- tcg_gen_deposit_i64(tcg_rd[w], tcg_rd[w], tcg_rn, o, esize);
120
- }
121
-
122
- for (i = 0; i < 2; i++) {
123
- write_vec_element(s, tcg_rd[i], rd, i, MO_64);
124
- }
125
- clear_vec_high(s, true, rd);
126
- }
127
-}
128
-
129
static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
130
bool is_q, int size, int rn, int rd)
131
{
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
133
TCGv_ptr tcg_fpstatus;
134
135
switch (opcode) {
136
- case 0x0: /* REV64, REV32 */
137
- case 0x1: /* REV16 */
138
- handle_rev(s, opcode, u, is_q, size, rn, rd);
139
- return;
140
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
141
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
142
if (size == 3) {
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
144
break;
145
}
146
default:
147
+ case 0x0: /* REV64, REV32 */
148
+ case 0x1: /* REV16 */
149
case 0x3: /* SUQADD, USQADD */
150
case 0x4: /* CLS, CLZ */
151
case 0x5: /* CNT, NOT, RBIT */
152
--
153
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Move from helper-a64.c to neon_helper.c so that these
4
functions are available for arm32 code as well.
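
For anyone reading the moved code, a simple reference model of what
neon_addlp_s8 computes is below (illustration only, not part of this patch):
each pair of adjacent signed bytes is summed into a signed 16-bit result
lane; the in-tree version produces the same result branch-free via the
sign-bit masking trick described in its comment.

static uint64_t addlp_s8_model(uint64_t a)
{
    uint64_t r = 0;

    for (int i = 0; i < 4; i++) {
        int8_t lo = a >> (16 * i);          /* element 2*i   */
        int8_t hi = a >> (16 * i + 8);      /* element 2*i+1 */
        r |= (uint64_t)(uint16_t)(lo + hi) << (16 * i);
    }
    return r;
}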
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241211163036.2297116-45-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper.h | 2 ++
12
target/arm/tcg/helper-a64.h | 2 --
13
target/arm/tcg/helper-a64.c | 43 ------------------------------------
14
target/arm/tcg/neon_helper.c | 43 ++++++++++++++++++++++++++++++++++++
15
4 files changed, 45 insertions(+), 45 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
22
DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
23
DEF_HELPER_2(neon_paddl_u16, i64, i64, i64)
24
DEF_HELPER_2(neon_paddl_u32, i64, i64, i64)
25
+DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
26
+DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
27
DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
28
DEF_HELPER_2(neon_subl_u32, i64, i64, i64)
29
DEF_HELPER_3(neon_addl_saturate_s32, i64, env, i64, i64)
30
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/helper-a64.h
33
+++ b/target/arm/tcg/helper-a64.h
34
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
35
DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
36
DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
37
DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
38
-DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
39
DEF_HELPER_FLAGS_1(neon_addlp_u8, TCG_CALL_NO_RWG_SE, i64, i64)
40
-DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
41
DEF_HELPER_FLAGS_1(neon_addlp_u16, TCG_CALL_NO_RWG_SE, i64, i64)
42
DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
43
DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
44
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/helper-a64.c
47
+++ b/target/arm/tcg/helper-a64.c
48
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
49
return float64_muladd(a, b, float64_three, float_muladd_halve_result, fpst);
50
}
51
52
-/* Pairwise long add: add pairs of adjacent elements into
53
- * double-width elements in the result (eg _s8 is an 8x8->16 op)
54
- */
55
-uint64_t HELPER(neon_addlp_s8)(uint64_t a)
56
-{
57
- uint64_t nsignmask = 0x0080008000800080ULL;
58
- uint64_t wsignmask = 0x8000800080008000ULL;
59
- uint64_t elementmask = 0x00ff00ff00ff00ffULL;
60
- uint64_t tmp1, tmp2;
61
- uint64_t res, signres;
62
-
63
- /* Extract odd elements, sign extend each to a 16 bit field */
64
- tmp1 = a & elementmask;
65
- tmp1 ^= nsignmask;
66
- tmp1 |= wsignmask;
67
- tmp1 = (tmp1 - nsignmask) ^ wsignmask;
68
- /* Ditto for the even elements */
69
- tmp2 = (a >> 8) & elementmask;
70
- tmp2 ^= nsignmask;
71
- tmp2 |= wsignmask;
72
- tmp2 = (tmp2 - nsignmask) ^ wsignmask;
73
-
74
- /* calculate the result by summing bits 0..14, 16..22, etc,
75
- * and then adjusting the sign bits 15, 23, etc manually.
76
- * This ensures the addition can't overflow the 16 bit field.
77
- */
78
- signres = (tmp1 ^ tmp2) & wsignmask;
79
- res = (tmp1 & ~wsignmask) + (tmp2 & ~wsignmask);
80
- res ^= signres;
81
-
82
- return res;
83
-}
84
-
85
uint64_t HELPER(neon_addlp_u8)(uint64_t a)
86
{
87
uint64_t tmp;
88
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u8)(uint64_t a)
89
return tmp;
90
}
91
92
-uint64_t HELPER(neon_addlp_s16)(uint64_t a)
93
-{
94
- int32_t reslo, reshi;
95
-
96
- reslo = (int32_t)(int16_t)a + (int32_t)(int16_t)(a >> 16);
97
- reshi = (int32_t)(int16_t)(a >> 32) + (int32_t)(int16_t)(a >> 48);
98
-
99
- return (uint32_t)reslo | (((uint64_t)reshi) << 32);
100
-}
101
-
102
uint64_t HELPER(neon_addlp_u16)(uint64_t a)
103
{
104
uint64_t tmp;
105
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/tcg/neon_helper.c
108
+++ b/target/arm/tcg/neon_helper.c
109
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_paddl_u32)(uint64_t a, uint64_t b)
110
return low + ((uint64_t)high << 32);
111
}
112
113
+/* Pairwise long add: add pairs of adjacent elements into
114
+ * double-width elements in the result (eg _s8 is an 8x8->16 op)
115
+ */
116
+uint64_t HELPER(neon_addlp_s8)(uint64_t a)
117
+{
118
+ uint64_t nsignmask = 0x0080008000800080ULL;
119
+ uint64_t wsignmask = 0x8000800080008000ULL;
120
+ uint64_t elementmask = 0x00ff00ff00ff00ffULL;
121
+ uint64_t tmp1, tmp2;
122
+ uint64_t res, signres;
123
+
124
+ /* Extract odd elements, sign extend each to a 16 bit field */
125
+ tmp1 = a & elementmask;
126
+ tmp1 ^= nsignmask;
127
+ tmp1 |= wsignmask;
128
+ tmp1 = (tmp1 - nsignmask) ^ wsignmask;
129
+ /* Ditto for the even elements */
130
+ tmp2 = (a >> 8) & elementmask;
131
+ tmp2 ^= nsignmask;
132
+ tmp2 |= wsignmask;
133
+ tmp2 = (tmp2 - nsignmask) ^ wsignmask;
134
+
135
+ /* calculate the result by summing bits 0..14, 16..22, etc,
136
+ * and then adjusting the sign bits 15, 23, etc manually.
137
+ * This ensures the addition can't overflow the 16 bit field.
138
+ */
139
+ signres = (tmp1 ^ tmp2) & wsignmask;
140
+ res = (tmp1 & ~wsignmask) + (tmp2 & ~wsignmask);
141
+ res ^= signres;
142
+
143
+ return res;
144
+}
145
+
146
+uint64_t HELPER(neon_addlp_s16)(uint64_t a)
147
+{
148
+ int32_t reslo, reshi;
149
+
150
+ reslo = (int32_t)(int16_t)a + (int32_t)(int16_t)(a >> 16);
151
+ reshi = (int32_t)(int16_t)(a >> 32) + (int32_t)(int16_t)(a >> 48);
152
+
153
+ return (uint32_t)reslo | (((uint64_t)reshi) << 32);
154
+}
155
+
156
uint64_t HELPER(neon_subl_u16)(uint64_t a, uint64_t b)
157
{
158
uint64_t mask;
159
--
160
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Pairwise addition with and without accumulation.
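
A scalar sketch of the shift trick used on the vector path (illustration
only; gen_saddlp_vec below does the same per element with shli/sari): within
each widened result element, the signed high half is an arithmetic right
shift and the signed low half is a shift up followed by an arithmetic shift
back down, so the pairwise sum is two shifts and an add.

/* One 32-bit result lane of SADDLP with 16-bit inputs, i.e. half == 16. */
static int32_t saddlp_lane32(uint32_t elt)
{
    int32_t hi = (int32_t)elt >> 16;            /* sign-extended high half */
    int32_t lo = (int32_t)(elt << 16) >> 16;    /* sign-extended low half  */
    return hi + lo;
}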
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-46-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.h | 2 -
11
target/arm/tcg/translate.h | 9 ++
12
target/arm/tcg/gengvec.c | 230 ++++++++++++++++++++++++++++++++
13
target/arm/tcg/neon_helper.c | 22 ---
14
target/arm/tcg/translate-neon.c | 150 +--------------------
15
5 files changed, 243 insertions(+), 170 deletions(-)
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_widen_s16, i64, i32)
22
23
DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
24
DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
25
-DEF_HELPER_2(neon_paddl_u16, i64, i64, i64)
26
-DEF_HELPER_2(neon_paddl_u32, i64, i64, i64)
27
DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
28
DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
29
DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
30
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/translate.h
33
+++ b/target/arm/tcg/translate.h
34
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
35
void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
36
uint32_t opr_sz, uint32_t max_sz);
37
38
+void gen_gvec_saddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
39
+ uint32_t opr_sz, uint32_t max_sz);
40
+void gen_gvec_sadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
41
+ uint32_t opr_sz, uint32_t max_sz);
42
+void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
43
+ uint32_t opr_sz, uint32_t max_sz);
44
+void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
45
+ uint32_t opr_sz, uint32_t max_sz);
46
+
47
/*
48
* Forward to the isar_feature_* tests given a DisasContext pointer.
49
*/
50
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/gengvec.c
53
+++ b/target/arm/tcg/gengvec.c
54
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rev64(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
g_assert_not_reached();
56
}
57
}
58
+
59
+static void gen_saddlp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
60
+{
61
+ int half = 4 << vece;
62
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
63
+
64
+ tcg_gen_shli_vec(vece, t, n, half);
65
+ tcg_gen_sari_vec(vece, d, n, half);
66
+ tcg_gen_sari_vec(vece, t, t, half);
67
+ tcg_gen_add_vec(vece, d, d, t);
68
+}
69
+
70
+static void gen_saddlp_s_i64(TCGv_i64 d, TCGv_i64 n)
71
+{
72
+ TCGv_i64 t = tcg_temp_new_i64();
73
+
74
+ tcg_gen_ext32s_i64(t, n);
75
+ tcg_gen_sari_i64(d, n, 32);
76
+ tcg_gen_add_i64(d, d, t);
77
+}
78
+
79
+void gen_gvec_saddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
80
+ uint32_t opr_sz, uint32_t max_sz)
81
+{
82
+ static const TCGOpcode vecop_list[] = {
83
+ INDEX_op_sari_vec, INDEX_op_shli_vec, INDEX_op_add_vec, 0
84
+ };
85
+ static const GVecGen2 g[] = {
86
+ { .fniv = gen_saddlp_vec,
87
+ .fni8 = gen_helper_neon_addlp_s8,
88
+ .opt_opc = vecop_list,
89
+ .vece = MO_16 },
90
+ { .fniv = gen_saddlp_vec,
91
+ .fni8 = gen_helper_neon_addlp_s16,
92
+ .opt_opc = vecop_list,
93
+ .vece = MO_32 },
94
+ { .fniv = gen_saddlp_vec,
95
+ .fni8 = gen_saddlp_s_i64,
96
+ .opt_opc = vecop_list,
97
+ .vece = MO_64 },
98
+ };
99
+ assert(vece <= MO_32);
100
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
101
+}
102
+
103
+static void gen_sadalp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
104
+{
105
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
106
+
107
+ gen_saddlp_vec(vece, t, n);
108
+ tcg_gen_add_vec(vece, d, d, t);
109
+}
110
+
111
+static void gen_sadalp_b_i64(TCGv_i64 d, TCGv_i64 n)
112
+{
113
+ TCGv_i64 t = tcg_temp_new_i64();
114
+
115
+ gen_helper_neon_addlp_s8(t, n);
116
+ tcg_gen_vec_add16_i64(d, d, t);
117
+}
118
+
119
+static void gen_sadalp_h_i64(TCGv_i64 d, TCGv_i64 n)
120
+{
121
+ TCGv_i64 t = tcg_temp_new_i64();
122
+
123
+ gen_helper_neon_addlp_s16(t, n);
124
+ tcg_gen_vec_add32_i64(d, d, t);
125
+}
126
+
127
+static void gen_sadalp_s_i64(TCGv_i64 d, TCGv_i64 n)
128
+{
129
+ TCGv_i64 t = tcg_temp_new_i64();
130
+
131
+ gen_saddlp_s_i64(t, n);
132
+ tcg_gen_add_i64(d, d, t);
133
+}
134
+
135
+void gen_gvec_sadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
136
+ uint32_t opr_sz, uint32_t max_sz)
137
+{
138
+ static const TCGOpcode vecop_list[] = {
139
+ INDEX_op_sari_vec, INDEX_op_shli_vec, INDEX_op_add_vec, 0
140
+ };
141
+ static const GVecGen2 g[] = {
142
+ { .fniv = gen_sadalp_vec,
143
+ .fni8 = gen_sadalp_b_i64,
144
+ .opt_opc = vecop_list,
145
+ .load_dest = true,
146
+ .vece = MO_16 },
147
+ { .fniv = gen_sadalp_vec,
148
+ .fni8 = gen_sadalp_h_i64,
149
+ .opt_opc = vecop_list,
150
+ .load_dest = true,
151
+ .vece = MO_32 },
152
+ { .fniv = gen_sadalp_vec,
153
+ .fni8 = gen_sadalp_s_i64,
154
+ .opt_opc = vecop_list,
155
+ .load_dest = true,
156
+ .vece = MO_64 },
157
+ };
158
+ assert(vece <= MO_32);
159
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
160
+}
161
+
162
+static void gen_uaddlp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
163
+{
164
+ int half = 4 << vece;
165
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
166
+ TCGv_vec m = tcg_constant_vec_matching(d, vece, MAKE_64BIT_MASK(0, half));
167
+
168
+ tcg_gen_shri_vec(vece, t, n, half);
169
+ tcg_gen_and_vec(vece, d, n, m);
170
+ tcg_gen_add_vec(vece, d, d, t);
171
+}
172
+
173
+static void gen_uaddlp_b_i64(TCGv_i64 d, TCGv_i64 n)
174
+{
175
+ TCGv_i64 t = tcg_temp_new_i64();
176
+ TCGv_i64 m = tcg_constant_i64(dup_const(MO_16, 0xff));
177
+
178
+ tcg_gen_shri_i64(t, n, 8);
179
+ tcg_gen_and_i64(d, n, m);
180
+ tcg_gen_and_i64(t, t, m);
181
+ /* No carry between widened unsigned elements. */
182
+ tcg_gen_add_i64(d, d, t);
183
+}
184
+
185
+static void gen_uaddlp_h_i64(TCGv_i64 d, TCGv_i64 n)
186
+{
187
+ TCGv_i64 t = tcg_temp_new_i64();
188
+ TCGv_i64 m = tcg_constant_i64(dup_const(MO_32, 0xffff));
189
+
190
+ tcg_gen_shri_i64(t, n, 16);
191
+ tcg_gen_and_i64(d, n, m);
192
+ tcg_gen_and_i64(t, t, m);
193
+ /* No carry between widened unsigned elements. */
194
+ tcg_gen_add_i64(d, d, t);
195
+}
196
+
197
+static void gen_uaddlp_s_i64(TCGv_i64 d, TCGv_i64 n)
198
+{
199
+ TCGv_i64 t = tcg_temp_new_i64();
200
+
201
+ tcg_gen_ext32u_i64(t, n);
202
+ tcg_gen_shri_i64(d, n, 32);
203
+ tcg_gen_add_i64(d, d, t);
204
+}
205
+
206
+void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
207
+ uint32_t opr_sz, uint32_t max_sz)
208
+{
209
+ static const TCGOpcode vecop_list[] = {
210
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
211
+ };
212
+ static const GVecGen2 g[] = {
213
+ { .fniv = gen_uaddlp_vec,
214
+ .fni8 = gen_uaddlp_b_i64,
215
+ .opt_opc = vecop_list,
216
+ .vece = MO_16 },
217
+ { .fniv = gen_uaddlp_vec,
218
+ .fni8 = gen_uaddlp_h_i64,
219
+ .opt_opc = vecop_list,
220
+ .vece = MO_32 },
221
+ { .fniv = gen_uaddlp_vec,
222
+ .fni8 = gen_uaddlp_s_i64,
223
+ .opt_opc = vecop_list,
224
+ .vece = MO_64 },
225
+ };
226
+ assert(vece <= MO_32);
227
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
228
+}
229
+
230
+static void gen_uadalp_vec(unsigned vece, TCGv_vec d, TCGv_vec n)
231
+{
232
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
233
+
234
+ gen_uaddlp_vec(vece, t, n);
235
+ tcg_gen_add_vec(vece, d, d, t);
236
+}
237
+
238
+static void gen_uadalp_b_i64(TCGv_i64 d, TCGv_i64 n)
239
+{
240
+ TCGv_i64 t = tcg_temp_new_i64();
241
+
242
+ gen_uaddlp_b_i64(t, n);
243
+ tcg_gen_vec_add16_i64(d, d, t);
244
+}
245
+
246
+static void gen_uadalp_h_i64(TCGv_i64 d, TCGv_i64 n)
247
+{
248
+ TCGv_i64 t = tcg_temp_new_i64();
249
+
250
+ gen_uaddlp_h_i64(t, n);
251
+ tcg_gen_vec_add32_i64(d, d, t);
252
+}
253
+
254
+static void gen_uadalp_s_i64(TCGv_i64 d, TCGv_i64 n)
255
+{
256
+ TCGv_i64 t = tcg_temp_new_i64();
257
+
258
+ gen_uaddlp_s_i64(t, n);
259
+ tcg_gen_add_i64(d, d, t);
260
+}
261
+
262
+void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
263
+ uint32_t opr_sz, uint32_t max_sz)
264
+{
265
+ static const TCGOpcode vecop_list[] = {
266
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
267
+ };
268
+ static const GVecGen2 g[] = {
269
+ { .fniv = gen_uadalp_vec,
270
+ .fni8 = gen_uadalp_b_i64,
271
+ .load_dest = true,
272
+ .opt_opc = vecop_list,
273
+ .vece = MO_16 },
274
+ { .fniv = gen_uadalp_vec,
275
+ .fni8 = gen_uadalp_h_i64,
276
+ .load_dest = true,
277
+ .opt_opc = vecop_list,
278
+ .vece = MO_32 },
279
+ { .fniv = gen_uadalp_vec,
280
+ .fni8 = gen_uadalp_s_i64,
281
+ .load_dest = true,
282
+ .opt_opc = vecop_list,
283
+ .vece = MO_64 },
284
+ };
285
+ assert(vece <= MO_32);
286
+ tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
287
+}
288
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
289
index XXXXXXX..XXXXXXX 100644
290
--- a/target/arm/tcg/neon_helper.c
291
+++ b/target/arm/tcg/neon_helper.c
292
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addl_u32)(uint64_t a, uint64_t b)
293
return (a + b) ^ mask;
294
}
295
296
-uint64_t HELPER(neon_paddl_u16)(uint64_t a, uint64_t b)
297
-{
298
- uint64_t tmp;
299
- uint64_t tmp2;
300
-
301
- tmp = a & 0x0000ffff0000ffffull;
302
- tmp += (a >> 16) & 0x0000ffff0000ffffull;
303
- tmp2 = b & 0xffff0000ffff0000ull;
304
- tmp2 += (b << 16) & 0xffff0000ffff0000ull;
305
- return ( tmp & 0xffff)
306
- | ((tmp >> 16) & 0xffff0000ull)
307
- | ((tmp2 << 16) & 0xffff00000000ull)
308
- | ( tmp2 & 0xffff000000000000ull);
309
-}
310
-
311
-uint64_t HELPER(neon_paddl_u32)(uint64_t a, uint64_t b)
312
-{
313
- uint32_t low = a + (a >> 32);
314
- uint32_t high = b + (b >> 32);
315
- return low + ((uint64_t)high << 32);
316
-}
317
-
318
/* Pairwise long add: add pairs of adjacent elements into
319
* double-width elements in the result (eg _s8 is an 8x8->16 op)
320
*/
321
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/tcg/translate-neon.c
324
+++ b/target/arm/tcg/translate-neon.c
325
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
326
return true;
327
}
328
329
-static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
330
- NeonGenWidenFn *widenfn,
331
- NeonGenTwo64OpFn *opfn,
332
- NeonGenTwo64OpFn *accfn)
333
-{
334
- /*
335
- * Pairwise long operations: widen both halves of the pair,
336
- * combine the pairs with the opfn, and then possibly accumulate
337
- * into the destination with the accfn.
338
- */
339
- int pass;
340
-
341
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
342
- return false;
343
- }
344
-
345
- /* UNDEF accesses to D16-D31 if they don't exist. */
346
- if (!dc_isar_feature(aa32_simd_r32, s) &&
347
- ((a->vd | a->vm) & 0x10)) {
348
- return false;
349
- }
350
-
351
- if ((a->vd | a->vm) & a->q) {
352
- return false;
353
- }
354
-
355
- if (!widenfn) {
356
- return false;
357
- }
358
-
359
- if (!vfp_access_check(s)) {
360
- return true;
361
- }
362
-
363
- for (pass = 0; pass < a->q + 1; pass++) {
364
- TCGv_i32 tmp;
365
- TCGv_i64 rm0_64, rm1_64, rd_64;
366
-
367
- rm0_64 = tcg_temp_new_i64();
368
- rm1_64 = tcg_temp_new_i64();
369
- rd_64 = tcg_temp_new_i64();
370
-
371
- tmp = tcg_temp_new_i32();
372
- read_neon_element32(tmp, a->vm, pass * 2, MO_32);
373
- widenfn(rm0_64, tmp);
374
- read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
375
- widenfn(rm1_64, tmp);
376
-
377
- opfn(rd_64, rm0_64, rm1_64);
378
-
379
- if (accfn) {
380
- TCGv_i64 tmp64 = tcg_temp_new_i64();
381
- read_neon_element64(tmp64, a->vd, pass, MO_64);
382
- accfn(rd_64, tmp64, rd_64);
383
- }
384
- write_neon_element64(rd_64, a->vd, pass, MO_64);
385
- }
386
- return true;
387
-}
388
-
389
-static bool trans_VPADDL_S(DisasContext *s, arg_2misc *a)
390
-{
391
- static NeonGenWidenFn * const widenfn[] = {
392
- gen_helper_neon_widen_s8,
393
- gen_helper_neon_widen_s16,
394
- tcg_gen_ext_i32_i64,
395
- NULL,
396
- };
397
- static NeonGenTwo64OpFn * const opfn[] = {
398
- gen_helper_neon_paddl_u16,
399
- gen_helper_neon_paddl_u32,
400
- tcg_gen_add_i64,
401
- NULL,
402
- };
403
-
404
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size], NULL);
405
-}
406
-
407
-static bool trans_VPADDL_U(DisasContext *s, arg_2misc *a)
408
-{
409
- static NeonGenWidenFn * const widenfn[] = {
410
- gen_helper_neon_widen_u8,
411
- gen_helper_neon_widen_u16,
412
- tcg_gen_extu_i32_i64,
413
- NULL,
414
- };
415
- static NeonGenTwo64OpFn * const opfn[] = {
416
- gen_helper_neon_paddl_u16,
417
- gen_helper_neon_paddl_u32,
418
- tcg_gen_add_i64,
419
- NULL,
420
- };
421
-
422
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size], NULL);
423
-}
424
-
425
-static bool trans_VPADAL_S(DisasContext *s, arg_2misc *a)
426
-{
427
- static NeonGenWidenFn * const widenfn[] = {
428
- gen_helper_neon_widen_s8,
429
- gen_helper_neon_widen_s16,
430
- tcg_gen_ext_i32_i64,
431
- NULL,
432
- };
433
- static NeonGenTwo64OpFn * const opfn[] = {
434
- gen_helper_neon_paddl_u16,
435
- gen_helper_neon_paddl_u32,
436
- tcg_gen_add_i64,
437
- NULL,
438
- };
439
- static NeonGenTwo64OpFn * const accfn[] = {
440
- gen_helper_neon_addl_u16,
441
- gen_helper_neon_addl_u32,
442
- tcg_gen_add_i64,
443
- NULL,
444
- };
445
-
446
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size],
447
- accfn[a->size]);
448
-}
449
-
450
-static bool trans_VPADAL_U(DisasContext *s, arg_2misc *a)
451
-{
452
- static NeonGenWidenFn * const widenfn[] = {
453
- gen_helper_neon_widen_u8,
454
- gen_helper_neon_widen_u16,
455
- tcg_gen_extu_i32_i64,
456
- NULL,
457
- };
458
- static NeonGenTwo64OpFn * const opfn[] = {
459
- gen_helper_neon_paddl_u16,
460
- gen_helper_neon_paddl_u32,
461
- tcg_gen_add_i64,
462
- NULL,
463
- };
464
- static NeonGenTwo64OpFn * const accfn[] = {
465
- gen_helper_neon_addl_u16,
466
- gen_helper_neon_addl_u32,
467
- tcg_gen_add_i64,
468
- NULL,
469
- };
470
-
471
- return do_2misc_pairwise(s, a, widenfn[a->size], opfn[a->size],
472
- accfn[a->size]);
473
-}
474
-
475
typedef void ZipFn(TCGv_ptr, TCGv_ptr);
476
477
static bool do_zip_uzp(DisasContext *s, arg_2misc *a,
478
@@ -XXX,XX +XXX,XX @@ DO_2MISC_VEC(VCLT0, gen_gvec_clt0)
479
DO_2MISC_VEC(VCLS, gen_gvec_cls)
480
DO_2MISC_VEC(VCLZ, gen_gvec_clz)
481
DO_2MISC_VEC(VREV64, gen_gvec_rev64)
482
+DO_2MISC_VEC(VPADDL_S, gen_gvec_saddlp)
483
+DO_2MISC_VEC(VPADDL_U, gen_gvec_uaddlp)
484
+DO_2MISC_VEC(VPADAL_S, gen_gvec_sadalp)
485
+DO_2MISC_VEC(VPADAL_U, gen_gvec_uadalp)
486
487
static bool trans_VMVN(DisasContext *s, arg_2misc *a)
488
{
489
--
490
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This includes SADDLP, UADDLP, SADALP, UADALP.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241211163036.2297116-47-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/helper-a64.h | 2 -
11
target/arm/tcg/a64.decode | 5 ++
12
target/arm/tcg/helper-a64.c | 18 --------
13
target/arm/tcg/translate-a64.c | 84 +++-------------------------------
14
4 files changed, 11 insertions(+), 98 deletions(-)
15
16
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/helper-a64.h
19
+++ b/target/arm/tcg/helper-a64.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
21
DEF_HELPER_FLAGS_3(rsqrtsf_f16, TCG_CALL_NO_RWG, f16, f16, f16, ptr)
22
DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
23
DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
24
-DEF_HELPER_FLAGS_1(neon_addlp_u8, TCG_CALL_NO_RWG_SE, i64, i64)
25
-DEF_HELPER_FLAGS_1(neon_addlp_u16, TCG_CALL_NO_RWG_SE, i64, i64)
26
DEF_HELPER_FLAGS_2(frecpx_f64, TCG_CALL_NO_RWG, f64, f64, ptr)
27
DEF_HELPER_FLAGS_2(frecpx_f32, TCG_CALL_NO_RWG, f32, f32, ptr)
28
DEF_HELPER_FLAGS_2(frecpx_f16, TCG_CALL_NO_RWG, f16, f16, ptr)
29
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/a64.decode
32
+++ b/target/arm/tcg/a64.decode
33
@@ -XXX,XX +XXX,XX @@ CMLT0_v 0.00 1110 ..1 00000 10101 0 ..... ..... @qrr_e
34
REV16_v 0.00 1110 001 00000 00011 0 ..... ..... @qrr_b
35
REV32_v 0.10 1110 0.1 00000 00001 0 ..... ..... @qrr_bh
36
REV64_v 0.00 1110 ..1 00000 00001 0 ..... ..... @qrr_e
37
+
38
+SADDLP_v 0.00 1110 ..1 00000 00101 0 ..... ..... @qrr_e
39
+UADDLP_v 0.10 1110 ..1 00000 00101 0 ..... ..... @qrr_e
40
+SADALP_v 0.00 1110 ..1 00000 01101 0 ..... ..... @qrr_e
41
+UADALP_v 0.10 1110 ..1 00000 01101 0 ..... ..... @qrr_e
42
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/helper-a64.c
45
+++ b/target/arm/tcg/helper-a64.c
46
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
47
return float64_muladd(a, b, float64_three, float_muladd_halve_result, fpst);
48
}
49
50
-uint64_t HELPER(neon_addlp_u8)(uint64_t a)
51
-{
52
- uint64_t tmp;
53
-
54
- tmp = a & 0x00ff00ff00ff00ffULL;
55
- tmp += (a >> 8) & 0x00ff00ff00ff00ffULL;
56
- return tmp;
57
-}
58
-
59
-uint64_t HELPER(neon_addlp_u16)(uint64_t a)
60
-{
61
- uint64_t tmp;
62
-
63
- tmp = a & 0x0000ffff0000ffffULL;
64
- tmp += (a >> 16) & 0x0000ffff0000ffffULL;
65
- return tmp;
66
-}
67
-
68
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
69
uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
70
{
71
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/tcg/translate-a64.c
74
+++ b/target/arm/tcg/translate-a64.c
75
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
76
TRANS(CLS_v, do_gvec_fn2_bhs, a, gen_gvec_cls)
77
TRANS(CLZ_v, do_gvec_fn2_bhs, a, gen_gvec_clz)
78
TRANS(REV64_v, do_gvec_fn2_bhs, a, gen_gvec_rev64)
79
+TRANS(SADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_saddlp)
80
+TRANS(UADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_uaddlp)
81
+TRANS(SADALP_v, do_gvec_fn2_bhs, a, gen_gvec_sadalp)
82
+TRANS(UADALP_v, do_gvec_fn2_bhs, a, gen_gvec_uadalp)
83
84
/* Common vector code for handling integer to FP conversion */
85
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
86
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
87
}
88
}
89
90
-static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
91
- bool is_q, int size, int rn, int rd)
92
-{
93
- /* Implement the pairwise operations from 2-misc:
94
- * SADDLP, UADDLP, SADALP, UADALP.
95
- * These all add pairs of elements in the input to produce a
96
- * double-width result element in the output (possibly accumulating).
97
- */
98
- bool accum = (opcode == 0x6);
99
- int maxpass = is_q ? 2 : 1;
100
- int pass;
101
- TCGv_i64 tcg_res[2];
102
-
103
- if (size == 2) {
104
- /* 32 + 32 -> 64 op */
105
- MemOp memop = size + (u ? 0 : MO_SIGN);
106
-
107
- for (pass = 0; pass < maxpass; pass++) {
108
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
109
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
110
-
111
- tcg_res[pass] = tcg_temp_new_i64();
112
-
113
- read_vec_element(s, tcg_op1, rn, pass * 2, memop);
114
- read_vec_element(s, tcg_op2, rn, pass * 2 + 1, memop);
115
- tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
116
- if (accum) {
117
- read_vec_element(s, tcg_op1, rd, pass, MO_64);
118
- tcg_gen_add_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
119
- }
120
- }
121
- } else {
122
- for (pass = 0; pass < maxpass; pass++) {
123
- TCGv_i64 tcg_op = tcg_temp_new_i64();
124
- NeonGenOne64OpFn *genfn;
125
- static NeonGenOne64OpFn * const fns[2][2] = {
126
- { gen_helper_neon_addlp_s8, gen_helper_neon_addlp_u8 },
127
- { gen_helper_neon_addlp_s16, gen_helper_neon_addlp_u16 },
128
- };
129
-
130
- genfn = fns[size][u];
131
-
132
- tcg_res[pass] = tcg_temp_new_i64();
133
-
134
- read_vec_element(s, tcg_op, rn, pass, MO_64);
135
- genfn(tcg_res[pass], tcg_op);
136
-
137
- if (accum) {
138
- read_vec_element(s, tcg_op, rd, pass, MO_64);
139
- if (size == 0) {
140
- gen_helper_neon_addl_u16(tcg_res[pass],
141
- tcg_res[pass], tcg_op);
142
- } else {
143
- gen_helper_neon_addl_u32(tcg_res[pass],
144
- tcg_res[pass], tcg_op);
145
- }
146
- }
147
- }
148
- }
149
- if (!is_q) {
150
- tcg_res[1] = tcg_constant_i64(0);
151
- }
152
- for (pass = 0; pass < 2; pass++) {
153
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
154
- }
155
-}
156
-
157
static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
158
{
159
/* Implement SHLL and SHLL2 */
160
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
161
162
handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
163
return;
164
- case 0x2: /* SADDLP, UADDLP */
165
- case 0x6: /* SADALP, UADALP */
166
- if (size == 3) {
167
- unallocated_encoding(s);
168
- return;
169
- }
170
- if (!fp_access_check(s)) {
171
- return;
172
- }
173
- handle_2misc_pairwise(s, opcode, u, is_q, size, rn, rd);
174
- return;
175
case 0x13: /* SHLL, SHLL2 */
176
if (u == 0 || size == 3) {
177
unallocated_encoding(s);
178
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
179
default:
180
case 0x0: /* REV64, REV32 */
181
case 0x1: /* REV16 */
182
+ case 0x2: /* SADDLP, UADDLP */
183
case 0x3: /* SUQADD, USQADD */
184
case 0x4: /* CLS, CLZ */
185
case 0x5: /* CNT, NOT, RBIT */
186
+ case 0x6: /* SADALP, UADALP */
187
case 0x7: /* SQABS, SQNEG */
188
case 0x8: /* CMGT, CMGE */
189
case 0x9: /* CMEQ, CMLE */
190
--
191
2.34.1
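
For reference, the pairwise ops handled in the patch above (SADDLP, UADDLP,
SADALP, UADALP) add adjacent lanes into double-width lanes. Below is a minimal
standalone C sketch -- illustrative only, not QEMU code -- of the unsigned byte
case, mirroring the mask trick in the neon_addlp_u8 helper that the patch
removes:

/*
 * Reference sketch of the unsigned pairwise long add that UADDLP
 * performs on one 64-bit register: each pair of adjacent 8-bit lanes
 * is summed into a 16-bit lane.
 */
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

static uint64_t uaddlp_u8_masked(uint64_t a)
{
    uint64_t tmp = a & 0x00ff00ff00ff00ffULL;   /* even byte lanes */
    tmp += (a >> 8) & 0x00ff00ff00ff00ffULL;    /* plus odd byte lanes */
    return tmp;                                 /* four 16-bit sums */
}

static uint64_t uaddlp_u8_loop(uint64_t a)
{
    uint64_t r = 0;
    for (int i = 0; i < 4; i++) {
        uint16_t lo = (a >> (16 * i)) & 0xff;
        uint16_t hi = (a >> (16 * i + 8)) & 0xff;
        r |= (uint64_t)(uint16_t)(lo + hi) << (16 * i);
    }
    return r;
}

int main(void)
{
    uint64_t x = 0x0102030405060708ULL;
    assert(uaddlp_u8_masked(x) == uaddlp_u8_loop(x));
    printf("%016llx\n", (unsigned long long)uaddlp_u8_masked(x));
    return 0;
}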
From: Richard Henderson <richard.henderson@linaro.org>

These have generic equivalents: tcg_gen_vec_{add,sub}{16,32}_i64.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-48-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h             |  4 ----
 target/arm/tcg/neon_helper.c    | 36 ---------------------------------
 target/arm/tcg/translate-neon.c | 22 ++++++++++----------
 3 files changed, 11 insertions(+), 51 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
18
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(neon_widen_s8, i64, i32)
20
DEF_HELPER_1(neon_widen_u16, i64, i32)
21
DEF_HELPER_1(neon_widen_s16, i64, i32)
22
23
-DEF_HELPER_2(neon_addl_u16, i64, i64, i64)
24
-DEF_HELPER_2(neon_addl_u32, i64, i64, i64)
25
DEF_HELPER_FLAGS_1(neon_addlp_s8, TCG_CALL_NO_RWG_SE, i64, i64)
26
DEF_HELPER_FLAGS_1(neon_addlp_s16, TCG_CALL_NO_RWG_SE, i64, i64)
27
-DEF_HELPER_2(neon_subl_u16, i64, i64, i64)
28
-DEF_HELPER_2(neon_subl_u32, i64, i64, i64)
29
DEF_HELPER_3(neon_addl_saturate_s32, i64, env, i64, i64)
30
DEF_HELPER_3(neon_addl_saturate_s64, i64, env, i64, i64)
31
DEF_HELPER_2(neon_abdl_u16, i64, i32, i32)
32
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/tcg/neon_helper.c
35
+++ b/target/arm/tcg/neon_helper.c
36
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_widen_s16)(uint32_t x)
37
return ((uint32_t)(int16_t)x) | (high << 32);
38
}
39
40
-uint64_t HELPER(neon_addl_u16)(uint64_t a, uint64_t b)
41
-{
42
- uint64_t mask;
43
- mask = (a ^ b) & 0x8000800080008000ull;
44
- a &= ~0x8000800080008000ull;
45
- b &= ~0x8000800080008000ull;
46
- return (a + b) ^ mask;
47
-}
48
-
49
-uint64_t HELPER(neon_addl_u32)(uint64_t a, uint64_t b)
50
-{
51
- uint64_t mask;
52
- mask = (a ^ b) & 0x8000000080000000ull;
53
- a &= ~0x8000000080000000ull;
54
- b &= ~0x8000000080000000ull;
55
- return (a + b) ^ mask;
56
-}
57
-
58
/* Pairwise long add: add pairs of adjacent elements into
59
* double-width elements in the result (eg _s8 is an 8x8->16 op)
60
*/
61
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_s16)(uint64_t a)
62
return (uint32_t)reslo | (((uint64_t)reshi) << 32);
63
}
64
65
-uint64_t HELPER(neon_subl_u16)(uint64_t a, uint64_t b)
66
-{
67
- uint64_t mask;
68
- mask = (a ^ ~b) & 0x8000800080008000ull;
69
- a |= 0x8000800080008000ull;
70
- b &= ~0x8000800080008000ull;
71
- return (a - b) ^ mask;
72
-}
73
-
74
-uint64_t HELPER(neon_subl_u32)(uint64_t a, uint64_t b)
75
-{
76
- uint64_t mask;
77
- mask = (a ^ ~b) & 0x8000000080000000ull;
78
- a |= 0x8000000080000000ull;
79
- b &= ~0x8000000080000000ull;
80
- return (a - b) ^ mask;
81
-}
82
-
83
uint64_t HELPER(neon_addl_saturate_s32)(CPUARMState *env, uint64_t a, uint64_t b)
84
{
85
uint32_t x, y;
86
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/arm/tcg/translate-neon.c
89
+++ b/target/arm/tcg/translate-neon.c
90
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
91
NULL, NULL, \
92
}; \
93
static NeonGenTwo64OpFn * const addfn[] = { \
94
- gen_helper_neon_##OP##l_u16, \
95
- gen_helper_neon_##OP##l_u32, \
96
+ tcg_gen_vec_##OP##16_i64, \
97
+ tcg_gen_vec_##OP##32_i64, \
98
tcg_gen_##OP##_i64, \
99
NULL, \
100
}; \
101
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
102
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
103
{ \
104
static NeonGenTwo64OpFn * const addfn[] = { \
105
- gen_helper_neon_##OP##l_u16, \
106
- gen_helper_neon_##OP##l_u32, \
107
+ tcg_gen_vec_##OP##16_i64, \
108
+ tcg_gen_vec_##OP##32_i64, \
109
tcg_gen_##OP##_i64, \
110
NULL, \
111
}; \
112
@@ -XXX,XX +XXX,XX @@ static bool trans_VABAL_S_3d(DisasContext *s, arg_3diff *a)
113
NULL,
114
};
115
static NeonGenTwo64OpFn * const addfn[] = {
116
- gen_helper_neon_addl_u16,
117
- gen_helper_neon_addl_u32,
118
+ tcg_gen_vec_add16_i64,
119
+ tcg_gen_vec_add32_i64,
120
tcg_gen_add_i64,
121
NULL,
122
};
123
@@ -XXX,XX +XXX,XX @@ static bool trans_VABAL_U_3d(DisasContext *s, arg_3diff *a)
124
NULL,
125
};
126
static NeonGenTwo64OpFn * const addfn[] = {
127
- gen_helper_neon_addl_u16,
128
- gen_helper_neon_addl_u32,
129
+ tcg_gen_vec_add16_i64,
130
+ tcg_gen_vec_add32_i64,
131
tcg_gen_add_i64,
132
NULL,
133
};
134
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_U_3d(DisasContext *s, arg_3diff *a)
135
NULL, \
136
}; \
137
static NeonGenTwo64OpFn * const accfn[] = { \
138
- gen_helper_neon_##ACC##l_u16, \
139
- gen_helper_neon_##ACC##l_u32, \
140
+ tcg_gen_vec_##ACC##16_i64, \
141
+ tcg_gen_vec_##ACC##32_i64, \
142
tcg_gen_##ACC##_i64, \
143
NULL, \
144
}; \
145
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_U_2sc(DisasContext *s, arg_2scalar *a)
146
}; \
147
static NeonGenTwo64OpFn * const accfn[] = { \
148
NULL, \
149
- gen_helper_neon_##ACC##l_u32, \
150
+ tcg_gen_vec_##ACC##32_i64, \
151
tcg_gen_##ACC##_i64, \
152
NULL, \
153
}; \
154
--
155
2.34.1
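
The helpers removed above did per-lane add/sub inside a 64-bit scalar by
suppressing the carry out of each lane's top bit, which is exactly the
semantics the generic tcg_gen_vec_add16_i64/tcg_gen_vec_sub16_i64 ops provide.
A minimal standalone sketch (plain C, not QEMU code) of the carry-suppression
trick for four 16-bit lanes:

/*
 * Per-lane 16-bit add packed in a uint64_t, without letting carries
 * cross lane boundaries: clear the top bit of each lane, add, then
 * patch the true top bits back in via XOR.
 */
#include <stdint.h>
#include <assert.h>

static uint64_t add16_lanes(uint64_t a, uint64_t b)
{
    uint64_t mask = (a ^ b) & 0x8000800080008000ULL;
    a &= ~0x8000800080008000ULL;
    b &= ~0x8000800080008000ULL;
    return (a + b) ^ mask;
}

static uint64_t add16_ref(uint64_t a, uint64_t b)
{
    uint64_t r = 0;
    for (int i = 0; i < 4; i++) {
        uint16_t la = a >> (16 * i), lb = b >> (16 * i);
        r |= (uint64_t)(uint16_t)(la + lb) << (16 * i);
    }
    return r;
}

int main(void)
{
    uint64_t a = 0xffff00017fff8000ULL, b = 0x0001ffff00018000ULL;
    assert(add16_lanes(a, b) == add16_ref(a, b));
    return 0;
}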
From: Richard Henderson <richard.henderson@linaro.org>

In a couple of places, clearing the entire vector before storing one
element is the easiest solution. Wrap that into a helper function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-49-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
19
return v;
20
}
21
22
-/* Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
23
+static void clear_vec(DisasContext *s, int rd)
24
+{
25
+ unsigned ofs = fp_reg_offset(s, rd, MO_64);
26
+ unsigned vsz = vec_full_reg_size(s);
27
+
28
+ tcg_gen_gvec_dup_imm(MO_64, ofs, vsz, vsz, 0);
29
+}
30
+
31
+/*
32
+ * Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
33
* If SVE is not enabled, then there are only 128 bits in the vector.
34
*/
35
static void clear_vec_high(DisasContext *s, bool is_q, int rd)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
37
TCGv_i32 tcg_op2 = tcg_temp_new_i32();
38
TCGv_i32 tcg_op3 = tcg_temp_new_i32();
39
TCGv_i32 tcg_res = tcg_temp_new_i32();
40
- unsigned vsz, dofs;
41
42
read_vec_element_i32(s, tcg_op1, a->rn, 3, MO_32);
43
read_vec_element_i32(s, tcg_op2, a->rm, 3, MO_32);
44
@@ -XXX,XX +XXX,XX @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
45
tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
46
47
/* Clear the whole register first, then store bits [127:96]. */
48
- vsz = vec_full_reg_size(s);
49
- dofs = vec_full_reg_offset(s, a->rd);
50
- tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
51
+ clear_vec(s, a->rd);
52
write_vec_element_i32(s, tcg_res, a->rd, 3, MO_32);
53
}
54
return true;
55
@@ -XXX,XX +XXX,XX @@ static bool do_scalar_muladd_widening_idx(DisasContext *s, arg_rrx_e *a,
56
TCGv_i64 t0 = tcg_temp_new_i64();
57
TCGv_i64 t1 = tcg_temp_new_i64();
58
TCGv_i64 t2 = tcg_temp_new_i64();
59
- unsigned vsz, dofs;
60
61
if (acc) {
62
read_vec_element(s, t0, a->rd, 0, a->esz + 1);
63
@@ -XXX,XX +XXX,XX @@ static bool do_scalar_muladd_widening_idx(DisasContext *s, arg_rrx_e *a,
64
fn(t0, t1, t2);
65
66
/* Clear the whole register first, then store scalar. */
67
- vsz = vec_full_reg_size(s);
68
- dofs = vec_full_reg_offset(s, a->rd);
69
- tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
70
+ clear_vec(s, a->rd);
71
write_vec_element(s, t0, a->rd, 0, a->esz + 1);
72
}
73
return true;
74
--
75
2.34.1
76
77
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-50-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |   9 ++
 target/arm/tcg/translate-a64.c | 153 ++++++++++++++++++++-------------
 2 files changed, 102 insertions(+), 60 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ CMEQ0_s 0101 1110 111 00000 10011 0 ..... ..... @rr
17
CMLE0_s 0111 1110 111 00000 10011 0 ..... ..... @rr
18
CMLT0_s 0101 1110 111 00000 10101 0 ..... ..... @rr
19
20
+SQXTUN_s 0111 1110 ..1 00001 00101 0 ..... ..... @rr_e
21
+SQXTN_s 0101 1110 ..1 00001 01001 0 ..... ..... @rr_e
22
+UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
23
+
24
# Advanced SIMD two-register miscellaneous
25
26
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
27
@@ -XXX,XX +XXX,XX @@ SADDLP_v 0.00 1110 ..1 00000 00101 0 ..... ..... @qrr_e
28
UADDLP_v 0.10 1110 ..1 00000 00101 0 ..... ..... @qrr_e
29
SADALP_v 0.00 1110 ..1 00000 01101 0 ..... ..... @qrr_e
30
UADALP_v 0.10 1110 ..1 00000 01101 0 ..... ..... @qrr_e
31
+
32
+XTN 0.00 1110 ..1 00001 00101 0 ..... ..... @qrr_e
33
+SQXTUN_v 0.10 1110 ..1 00001 00101 0 ..... ..... @qrr_e
34
+SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
35
+UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
36
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/tcg/translate-a64.c
39
+++ b/target/arm/tcg/translate-a64.c
40
@@ -XXX,XX +XXX,XX @@ TRANS(CMLE0_s, do_cmop0_d, a, TCG_COND_LE)
41
TRANS(CMLT0_s, do_cmop0_d, a, TCG_COND_LT)
42
TRANS(CMEQ0_s, do_cmop0_d, a, TCG_COND_EQ)
43
44
+static bool do_2misc_narrow_scalar(DisasContext *s, arg_rr_e *a,
45
+ ArithOneOp * const fn[3])
46
+{
47
+ if (a->esz == MO_64) {
48
+ return false;
49
+ }
50
+ if (fp_access_check(s)) {
51
+ TCGv_i64 t = tcg_temp_new_i64();
52
+
53
+ read_vec_element(s, t, a->rn, 0, a->esz + 1);
54
+ fn[a->esz](t, t);
55
+ clear_vec(s, a->rd);
56
+ write_vec_element(s, t, a->rd, 0, a->esz);
57
+ }
58
+ return true;
59
+}
60
+
61
+#define WRAP_ENV(NAME) \
62
+ static void gen_##NAME(TCGv_i64 d, TCGv_i64 n) \
63
+ { gen_helper_##NAME(d, tcg_env, n); }
64
+
65
+WRAP_ENV(neon_unarrow_sat8)
66
+WRAP_ENV(neon_unarrow_sat16)
67
+WRAP_ENV(neon_unarrow_sat32)
68
+
69
+static ArithOneOp * const f_scalar_sqxtun[] = {
70
+ gen_neon_unarrow_sat8,
71
+ gen_neon_unarrow_sat16,
72
+ gen_neon_unarrow_sat32,
73
+};
74
+TRANS(SQXTUN_s, do_2misc_narrow_scalar, a, f_scalar_sqxtun)
75
+
76
+WRAP_ENV(neon_narrow_sat_s8)
77
+WRAP_ENV(neon_narrow_sat_s16)
78
+WRAP_ENV(neon_narrow_sat_s32)
79
+
80
+static ArithOneOp * const f_scalar_sqxtn[] = {
81
+ gen_neon_narrow_sat_s8,
82
+ gen_neon_narrow_sat_s16,
83
+ gen_neon_narrow_sat_s32,
84
+};
85
+TRANS(SQXTN_s, do_2misc_narrow_scalar, a, f_scalar_sqxtn)
86
+
87
+WRAP_ENV(neon_narrow_sat_u8)
88
+WRAP_ENV(neon_narrow_sat_u16)
89
+WRAP_ENV(neon_narrow_sat_u32)
90
+
91
+static ArithOneOp * const f_scalar_uqxtn[] = {
92
+ gen_neon_narrow_sat_u8,
93
+ gen_neon_narrow_sat_u16,
94
+ gen_neon_narrow_sat_u32,
95
+};
96
+TRANS(UQXTN_s, do_2misc_narrow_scalar, a, f_scalar_uqxtn)
97
+
98
+#undef WRAP_ENV
99
+
100
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
101
{
102
if (!a->q && a->esz == MO_64) {
103
@@ -XXX,XX +XXX,XX @@ TRANS(UADDLP_v, do_gvec_fn2_bhs, a, gen_gvec_uaddlp)
104
TRANS(SADALP_v, do_gvec_fn2_bhs, a, gen_gvec_sadalp)
105
TRANS(UADALP_v, do_gvec_fn2_bhs, a, gen_gvec_uadalp)
106
107
+static bool do_2misc_narrow_vector(DisasContext *s, arg_qrr_e *a,
108
+ ArithOneOp * const fn[3])
109
+{
110
+ if (a->esz == MO_64) {
111
+ return false;
112
+ }
113
+ if (fp_access_check(s)) {
114
+ TCGv_i64 t0 = tcg_temp_new_i64();
115
+ TCGv_i64 t1 = tcg_temp_new_i64();
116
+
117
+ read_vec_element(s, t0, a->rn, 0, MO_64);
118
+ read_vec_element(s, t1, a->rn, 1, MO_64);
119
+ fn[a->esz](t0, t0);
120
+ fn[a->esz](t1, t1);
121
+ write_vec_element(s, t0, a->rd, a->q ? 2 : 0, MO_32);
122
+ write_vec_element(s, t1, a->rd, a->q ? 3 : 1, MO_32);
123
+ clear_vec_high(s, a->q, a->rd);
124
+ }
125
+ return true;
126
+}
127
+
128
+static ArithOneOp * const f_scalar_xtn[] = {
129
+ gen_helper_neon_narrow_u8,
130
+ gen_helper_neon_narrow_u16,
131
+ tcg_gen_ext32u_i64,
132
+};
133
+TRANS(XTN, do_2misc_narrow_vector, a, f_scalar_xtn)
134
+TRANS(SQXTUN_v, do_2misc_narrow_vector, a, f_scalar_sqxtun)
135
+TRANS(SQXTN_v, do_2misc_narrow_vector, a, f_scalar_sqxtn)
136
+TRANS(UQXTN_v, do_2misc_narrow_vector, a, f_scalar_uqxtn)
137
+
138
/* Common vector code for handling integer to FP conversion */
139
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
140
int elements, int is_signed,
141
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
142
tcg_res[pass] = tcg_temp_new_i64();
143
144
switch (opcode) {
145
- case 0x12: /* XTN, SQXTUN */
146
- {
147
- static NeonGenOne64OpFn * const xtnfns[3] = {
148
- gen_helper_neon_narrow_u8,
149
- gen_helper_neon_narrow_u16,
150
- tcg_gen_ext32u_i64,
151
- };
152
- static NeonGenOne64OpEnvFn * const sqxtunfns[3] = {
153
- gen_helper_neon_unarrow_sat8,
154
- gen_helper_neon_unarrow_sat16,
155
- gen_helper_neon_unarrow_sat32,
156
- };
157
- if (u) {
158
- genenvfn = sqxtunfns[size];
159
- } else {
160
- genfn = xtnfns[size];
161
- }
162
- break;
163
- }
164
- case 0x14: /* SQXTN, UQXTN */
165
- {
166
- static NeonGenOne64OpEnvFn * const fns[3][2] = {
167
- { gen_helper_neon_narrow_sat_s8,
168
- gen_helper_neon_narrow_sat_u8 },
169
- { gen_helper_neon_narrow_sat_s16,
170
- gen_helper_neon_narrow_sat_u16 },
171
- { gen_helper_neon_narrow_sat_s32,
172
- gen_helper_neon_narrow_sat_u32 },
173
- };
174
- genenvfn = fns[size][u];
175
- break;
176
- }
177
case 0x16: /* FCVTN, FCVTN2 */
178
/* 32 bit to 16 bit or 64 bit to 32 bit float conversion */
179
if (size == 2) {
180
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
181
}
182
break;
183
default:
184
+ case 0x12: /* XTN, SQXTUN */
185
+ case 0x14: /* SQXTN, UQXTN */
186
g_assert_not_reached();
187
}
188
189
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
190
TCGv_ptr tcg_fpstatus;
191
192
switch (opcode) {
193
- case 0x12: /* SQXTUN */
194
- if (!u) {
195
- unallocated_encoding(s);
196
- return;
197
- }
198
- /* fall through */
199
- case 0x14: /* SQXTN, UQXTN */
200
- if (size == 3) {
201
- unallocated_encoding(s);
202
- return;
203
- }
204
- if (!fp_access_check(s)) {
205
- return;
206
- }
207
- handle_2misc_narrow(s, true, opcode, u, false, size, rn, rd);
208
- return;
209
case 0xc ... 0xf:
210
case 0x16 ... 0x1d:
211
case 0x1f:
212
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
213
case 0x9: /* CMEQ, CMLE */
214
case 0xa: /* CMLT */
215
case 0xb: /* ABS, NEG */
216
+ case 0x12: /* SQXTUN */
217
+ case 0x14: /* SQXTN, UQXTN */
218
unallocated_encoding(s);
219
return;
220
}
221
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
222
TCGv_ptr tcg_fpstatus;
223
224
switch (opcode) {
225
- case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
226
- case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
227
- if (size == 3) {
228
- unallocated_encoding(s);
229
- return;
230
- }
231
- if (!fp_access_check(s)) {
232
- return;
233
- }
234
-
235
- handle_2misc_narrow(s, false, opcode, u, is_q, size, rn, rd);
236
- return;
237
case 0x13: /* SHLL, SHLL2 */
238
if (u == 0 || size == 3) {
239
unallocated_encoding(s);
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
241
case 0x9: /* CMEQ, CMLE */
242
case 0xa: /* CMLT */
243
case 0xb: /* ABS, NEG */
244
+ case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
245
+ case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
246
unallocated_encoding(s);
247
return;
248
}
249
--
250
2.34.1
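
The narrowing ops converted above (SQXTN, UQXTN, SQXTUN) saturate each
double-width source element into the half-width destination range. A minimal
standalone sketch (plain C, not QEMU code) of the signed 32->16 case; the qc
variable here only stands in for the sticky saturation flag that the real
narrow_sat helpers record:

/*
 * Signed saturating extract-narrow, 32-bit -> 16-bit: clamp to the
 * int16_t range and note whether clamping happened.
 */
#include <stdint.h>
#include <stdio.h>

static int16_t sqxtn_32_to_16(int32_t x, int *qc)
{
    if (x > INT16_MAX) {
        *qc = 1;
        return INT16_MAX;
    }
    if (x < INT16_MIN) {
        *qc = 1;
        return INT16_MIN;
    }
    return (int16_t)x;
}

int main(void)
{
    int qc = 0;
    printf("%d %d %d qc=%d\n",
           sqxtn_32_to_16(100000, &qc),    /* clamps to 32767  */
           sqxtn_32_to_16(-100000, &qc),   /* clamps to -32768 */
           sqxtn_32_to_16(1234, &qc),      /* passes through   */
           qc);
    return 0;
}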
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-51-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  5 ++
 target/arm/tcg/translate-a64.c | 89 ++++++++++++++++++----------------
 2 files changed, 52 insertions(+), 42 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@
17
18
%rd 0:5
19
%esz_sd 22:1 !function=plus_2
20
+%esz_hs 22:1 !function=plus_1
21
%esz_hsd 22:2 !function=xor_2
22
%hl 11:1 21:1
23
%hlm 11:1 20:2
24
@@ -XXX,XX +XXX,XX @@
25
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
26
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
27
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
28
+@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
29
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
30
31
@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
32
@@ -XXX,XX +XXX,XX @@ XTN 0.00 1110 ..1 00001 00101 0 ..... ..... @qrr_e
33
SQXTUN_v 0.10 1110 ..1 00001 00101 0 ..... ..... @qrr_e
34
SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
35
UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
36
+
37
+FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
38
+BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ TRANS(SQXTUN_v, do_2misc_narrow_vector, a, f_scalar_sqxtun)
44
TRANS(SQXTN_v, do_2misc_narrow_vector, a, f_scalar_sqxtn)
45
TRANS(UQXTN_v, do_2misc_narrow_vector, a, f_scalar_uqxtn)
46
47
+static void gen_fcvtn_hs(TCGv_i64 d, TCGv_i64 n)
48
+{
49
+ TCGv_i32 tcg_lo = tcg_temp_new_i32();
50
+ TCGv_i32 tcg_hi = tcg_temp_new_i32();
51
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
52
+ TCGv_i32 ahp = get_ahp_flag();
53
+
54
+ tcg_gen_extr_i64_i32(tcg_lo, tcg_hi, n);
55
+ gen_helper_vfp_fcvt_f32_to_f16(tcg_lo, tcg_lo, fpst, ahp);
56
+ gen_helper_vfp_fcvt_f32_to_f16(tcg_hi, tcg_hi, fpst, ahp);
57
+ tcg_gen_deposit_i32(tcg_lo, tcg_lo, tcg_hi, 16, 16);
58
+ tcg_gen_extu_i32_i64(d, tcg_lo);
59
+}
60
+
61
+static void gen_fcvtn_sd(TCGv_i64 d, TCGv_i64 n)
62
+{
63
+ TCGv_i32 tmp = tcg_temp_new_i32();
64
+ gen_helper_vfp_fcvtsd(tmp, n, tcg_env);
65
+ tcg_gen_extu_i32_i64(d, tmp);
66
+}
67
+
68
+static ArithOneOp * const f_vector_fcvtn[] = {
69
+ NULL,
70
+ gen_fcvtn_hs,
71
+ gen_fcvtn_sd,
72
+};
73
+TRANS(FCVTN_v, do_2misc_narrow_vector, a, f_vector_fcvtn)
74
+
75
+static void gen_bfcvtn_hs(TCGv_i64 d, TCGv_i64 n)
76
+{
77
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
78
+ TCGv_i32 tmp = tcg_temp_new_i32();
79
+ gen_helper_bfcvt_pair(tmp, n, fpst);
80
+ tcg_gen_extu_i32_i64(d, tmp);
81
+}
82
+
83
+static ArithOneOp * const f_vector_bfcvtn[] = {
84
+ NULL,
85
+ gen_bfcvtn_hs,
86
+ NULL,
87
+};
88
+TRANS_FEAT(BFCVTN_v, aa64_bf16, do_2misc_narrow_vector, a, f_vector_bfcvtn)
89
+
90
/* Common vector code for handling integer to FP conversion */
91
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
92
int elements, int is_signed,
93
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
94
tcg_res[pass] = tcg_temp_new_i64();
95
96
switch (opcode) {
97
- case 0x16: /* FCVTN, FCVTN2 */
98
- /* 32 bit to 16 bit or 64 bit to 32 bit float conversion */
99
- if (size == 2) {
100
- TCGv_i32 tmp = tcg_temp_new_i32();
101
- gen_helper_vfp_fcvtsd(tmp, tcg_op, tcg_env);
102
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
103
- } else {
104
- TCGv_i32 tcg_lo = tcg_temp_new_i32();
105
- TCGv_i32 tcg_hi = tcg_temp_new_i32();
106
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
107
- TCGv_i32 ahp = get_ahp_flag();
108
-
109
- tcg_gen_extr_i64_i32(tcg_lo, tcg_hi, tcg_op);
110
- gen_helper_vfp_fcvt_f32_to_f16(tcg_lo, tcg_lo, fpst, ahp);
111
- gen_helper_vfp_fcvt_f32_to_f16(tcg_hi, tcg_hi, fpst, ahp);
112
- tcg_gen_deposit_i32(tcg_lo, tcg_lo, tcg_hi, 16, 16);
113
- tcg_gen_extu_i32_i64(tcg_res[pass], tcg_lo);
114
- }
115
- break;
116
- case 0x36: /* BFCVTN, BFCVTN2 */
117
- {
118
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
119
- TCGv_i32 tmp = tcg_temp_new_i32();
120
- gen_helper_bfcvt_pair(tmp, tcg_op, fpst);
121
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
122
- }
123
- break;
124
case 0x56: /* FCVTXN, FCVTXN2 */
125
{
126
/*
127
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
128
default:
129
case 0x12: /* XTN, SQXTUN */
130
case 0x14: /* SQXTN, UQXTN */
131
+ case 0x16: /* FCVTN, FCVTN2 */
132
+ case 0x36: /* BFCVTN, BFCVTN2 */
133
g_assert_not_reached();
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
137
unallocated_encoding(s);
138
return;
139
}
140
- /* fall through */
141
- case 0x16: /* FCVTN, FCVTN2 */
142
- /* handle_2misc_narrow does a 2*size -> size operation, but these
143
- * instructions encode the source size rather than dest size.
144
- */
145
- if (!fp_access_check(s)) {
146
- return;
147
- }
148
- handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
149
- return;
150
- case 0x36: /* BFCVTN, BFCVTN2 */
151
- if (!dc_isar_feature(aa64_bf16, s) || size != 2) {
152
- unallocated_encoding(s);
153
- return;
154
- }
155
if (!fp_access_check(s)) {
156
return;
157
}
158
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
159
}
160
break;
161
default:
162
+ case 0x16: /* FCVTN, FCVTN2 */
163
+ case 0x36: /* BFCVTN, BFCVTN2 */
164
unallocated_encoding(s);
165
return;
166
}
167
--
168
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_2misc_narrow as this was the last insn decoded
by that function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-52-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |   4 ++
 target/arm/tcg/translate-a64.c | 101 +++++++--------------------------
 2 files changed, 24 insertions(+), 81 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/helper.c
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
19
@@ -XXX,XX +XXX,XX @@
20
21
@qrr_b . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=0
22
@qrr_h . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=1
23
+@qrr_s . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=2
24
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
25
@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
26
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
27
@@ -XXX,XX +XXX,XX @@ SQXTUN_s 0111 1110 ..1 00001 00101 0 ..... ..... @rr_e
28
SQXTN_s 0101 1110 ..1 00001 01001 0 ..... ..... @rr_e
29
UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
30
31
+FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
32
+
33
# Advanced SIMD two-register miscellaneous
34
35
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
36
@@ -XXX,XX +XXX,XX @@ SQXTN_v 0.00 1110 ..1 00001 01001 0 ..... ..... @qrr_e
37
UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
38
39
FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
40
+FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
41
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
42
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-a64.c
45
+++ b/target/arm/tcg/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_scalar_uqxtn[] = {
47
};
48
TRANS(UQXTN_s, do_2misc_narrow_scalar, a, f_scalar_uqxtn)
49
50
+static void gen_fcvtxn_sd(TCGv_i64 d, TCGv_i64 n)
51
+{
52
+ /*
53
+ * 64 bit to 32 bit float conversion
54
+ * with von Neumann rounding (round to odd)
55
+ */
56
+ TCGv_i32 tmp = tcg_temp_new_i32();
57
+ gen_helper_fcvtx_f64_to_f32(tmp, n, tcg_env);
58
+ tcg_gen_extu_i32_i64(d, tmp);
59
+}
60
+
61
+static ArithOneOp * const f_scalar_fcvtxn[] = {
62
+ NULL,
63
+ NULL,
64
+ gen_fcvtxn_sd,
65
+};
66
+TRANS(FCVTXN_s, do_2misc_narrow_scalar, a, f_scalar_fcvtxn)
67
+
68
#undef WRAP_ENV
69
70
static bool do_gvec_fn2(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
71
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_vector_fcvtn[] = {
72
gen_fcvtn_sd,
73
};
74
TRANS(FCVTN_v, do_2misc_narrow_vector, a, f_vector_fcvtn)
75
+TRANS(FCVTXN_v, do_2misc_narrow_vector, a, f_scalar_fcvtxn)
76
77
static void gen_bfcvtn_hs(TCGv_i64 d, TCGv_i64 n)
78
{
79
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
20
}
80
}
21
if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
81
}
22
int sme_el = sme_exception_el(env, el);
82
23
+ bool sm = FIELD_EX64(env->svcr, SVCR, SM);
83
-static void handle_2misc_narrow(DisasContext *s, bool scalar,
24
84
- int opcode, bool u, bool is_q,
25
DP_TBFLAG_A64(flags, SMEEXC_EL, sme_el);
85
- int size, int rn, int rd)
26
if (sme_el == 0) {
86
-{
27
/* Similarly, do not compute SVL if SME is disabled. */
87
- /* Handle 2-reg-misc ops which are narrowing (so each 2*size element
28
- DP_TBFLAG_A64(flags, SVL, sve_vqm1_for_el_sm(env, el, true));
88
- * in the source becomes a size element in the destination).
29
+ int svl = sve_vqm1_for_el_sm(env, el, true);
89
- */
30
+ DP_TBFLAG_A64(flags, SVL, svl);
90
- int pass;
31
+ if (sm) {
91
- TCGv_i64 tcg_res[2];
32
+ /* If SVE is disabled, we will not have set VL above. */
92
- int destelt = is_q ? 2 : 0;
33
+ DP_TBFLAG_A64(flags, VL, svl);
93
- int passes = scalar ? 1 : 2;
34
+ }
94
-
35
}
95
- if (scalar) {
36
- if (FIELD_EX64(env->svcr, SVCR, SM)) {
96
- tcg_res[1] = tcg_constant_i64(0);
37
+ if (sm) {
97
- }
38
DP_TBFLAG_A64(flags, PSTATE_SM, 1);
98
-
39
DP_TBFLAG_A64(flags, SME_TRAP_NONSTREAMING, !sme_fa64(env, el));
99
- for (pass = 0; pass < passes; pass++) {
100
- TCGv_i64 tcg_op = tcg_temp_new_i64();
101
- NeonGenOne64OpFn *genfn = NULL;
102
- NeonGenOne64OpEnvFn *genenvfn = NULL;
103
-
104
- if (scalar) {
105
- read_vec_element(s, tcg_op, rn, pass, size + 1);
106
- } else {
107
- read_vec_element(s, tcg_op, rn, pass, MO_64);
108
- }
109
- tcg_res[pass] = tcg_temp_new_i64();
110
-
111
- switch (opcode) {
112
- case 0x56: /* FCVTXN, FCVTXN2 */
113
- {
114
- /*
115
- * 64 bit to 32 bit float conversion
116
- * with von Neumann rounding (round to odd)
117
- */
118
- TCGv_i32 tmp = tcg_temp_new_i32();
119
- assert(size == 2);
120
- gen_helper_fcvtx_f64_to_f32(tmp, tcg_op, tcg_env);
121
- tcg_gen_extu_i32_i64(tcg_res[pass], tmp);
122
- }
123
- break;
124
- default:
125
- case 0x12: /* XTN, SQXTUN */
126
- case 0x14: /* SQXTN, UQXTN */
127
- case 0x16: /* FCVTN, FCVTN2 */
128
- case 0x36: /* BFCVTN, BFCVTN2 */
129
- g_assert_not_reached();
130
- }
131
-
132
- if (genfn) {
133
- genfn(tcg_res[pass], tcg_op);
134
- } else if (genenvfn) {
135
- genenvfn(tcg_res[pass], tcg_env, tcg_op);
136
- }
137
- }
138
-
139
- for (pass = 0; pass < 2; pass++) {
140
- write_vec_element(s, tcg_res[pass], rd, destelt + pass, MO_32);
141
- }
142
- clear_vec_high(s, is_q, rd);
143
-}
144
-
145
/* AdvSIMD scalar two reg misc
146
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
147
* +-----+---+-----------+------+-----------+--------+-----+------+------+
148
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
149
rmode = FPROUNDING_TIEAWAY;
150
break;
151
case 0x56: /* FCVTXN, FCVTXN2 */
152
- if (size == 2) {
153
- unallocated_encoding(s);
154
- return;
155
- }
156
- if (!fp_access_check(s)) {
157
- return;
158
- }
159
- handle_2misc_narrow(s, true, opcode, u, false, size - 1, rn, rd);
160
- return;
161
default:
162
unallocated_encoding(s);
163
return;
164
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
165
}
166
handle_2misc_reciprocal(s, opcode, false, u, is_q, size, rn, rd);
167
return;
168
- case 0x56: /* FCVTXN, FCVTXN2 */
169
- if (size == 2) {
170
- unallocated_encoding(s);
171
- return;
172
- }
173
- if (!fp_access_check(s)) {
174
- return;
175
- }
176
- handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
177
- return;
178
case 0x17: /* FCVTL, FCVTL2 */
179
if (!fp_access_check(s)) {
180
return;
181
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
182
default:
183
case 0x16: /* FCVTN, FCVTN2 */
184
case 0x36: /* BFCVTN, BFCVTN2 */
185
+ case 0x56: /* FCVTXN, FCVTXN2 */
186
unallocated_encoding(s);
187
return;
40
}
188
}
41
--
189
--
42
2.25.1
190
2.34.1
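
The comment carried into gen_fcvtxn_sd() above names "von Neumann rounding
(round to odd)": whenever discarded bits are non-zero, the least significant
retained bit is forced to 1, so a later narrowing step cannot double-round.
A minimal standalone sketch (plain C, not QEMU or IEEE code) of the idea on
integers; the FP helper applies the same principle to the significand during
the f64 -> f32 conversion:

/* Round to odd: drop low bits, fold "inexact" into bit 0. */
#include <stdint.h>
#include <assert.h>

static uint64_t round_to_odd(uint64_t x, unsigned drop_bits)
{
    uint64_t kept = x >> drop_bits;
    uint64_t lost = x & ((1ULL << drop_bits) - 1);

    return lost ? (kept | 1) : kept;
}

int main(void)
{
    assert(round_to_odd(0x100, 4) == 0x10);   /* exact: unchanged        */
    assert(round_to_odd(0x101, 4) == 0x11);   /* inexact: low bit forced */
    assert(round_to_odd(0x10f, 4) == 0x11);   /* any lost bits count     */
    return 0;
}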
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-53-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  2 +
 target/arm/tcg/translate-a64.c | 75 +++++++++++++++++-----------------
 2 files changed, 40 insertions(+), 37 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ UQXTN_v 0.10 1110 ..1 00001 01001 0 ..... ..... @qrr_e
17
FCVTN_v 0.00 1110 0.1 00001 01101 0 ..... ..... @qrr_hs
18
FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
19
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
20
+
21
+SHLL_v 0.10 1110 ..1 00001 00111 0 ..... ..... @qrr_e
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/tcg/translate-a64.c
25
+++ b/target/arm/tcg/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@ static ArithOneOp * const f_vector_bfcvtn[] = {
27
};
28
TRANS_FEAT(BFCVTN_v, aa64_bf16, do_2misc_narrow_vector, a, f_vector_bfcvtn)
29
30
+static bool trans_SHLL_v(DisasContext *s, arg_qrr_e *a)
31
+{
32
+ static NeonGenWidenFn * const widenfns[3] = {
33
+ gen_helper_neon_widen_u8,
34
+ gen_helper_neon_widen_u16,
35
+ tcg_gen_extu_i32_i64,
36
+ };
37
+ NeonGenWidenFn *widenfn;
38
+ TCGv_i64 tcg_res[2];
39
+ TCGv_i32 tcg_op;
40
+ int part, pass;
41
+
42
+ if (a->esz == MO_64) {
43
+ return false;
44
+ }
45
+ if (!fp_access_check(s)) {
46
+ return true;
47
+ }
48
+
49
+ tcg_op = tcg_temp_new_i32();
50
+ widenfn = widenfns[a->esz];
51
+ part = a->q ? 2 : 0;
52
+
53
+ for (pass = 0; pass < 2; pass++) {
54
+ read_vec_element_i32(s, tcg_op, a->rn, part + pass, MO_32);
55
+ tcg_res[pass] = tcg_temp_new_i64();
56
+ widenfn(tcg_res[pass], tcg_op);
57
+ tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << a->esz);
58
+ }
59
+
60
+ for (pass = 0; pass < 2; pass++) {
61
+ write_vec_element(s, tcg_res[pass], a->rd, pass, MO_64);
62
+ }
63
+ return true;
64
+}
65
+
66
+
67
/* Common vector code for handling integer to FP conversion */
68
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
69
int elements, int is_signed,
70
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
71
}
72
}
73
74
-static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
75
-{
76
- /* Implement SHLL and SHLL2 */
77
- int pass;
78
- int part = is_q ? 2 : 0;
79
- TCGv_i64 tcg_res[2];
80
-
81
- for (pass = 0; pass < 2; pass++) {
82
- static NeonGenWidenFn * const widenfns[3] = {
83
- gen_helper_neon_widen_u8,
84
- gen_helper_neon_widen_u16,
85
- tcg_gen_extu_i32_i64,
86
- };
87
- NeonGenWidenFn *widenfn = widenfns[size];
88
- TCGv_i32 tcg_op = tcg_temp_new_i32();
89
-
90
- read_vec_element_i32(s, tcg_op, rn, part + pass, MO_32);
91
- tcg_res[pass] = tcg_temp_new_i64();
92
- widenfn(tcg_res[pass], tcg_op);
93
- tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << size);
94
- }
95
-
96
- for (pass = 0; pass < 2; pass++) {
97
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
98
- }
99
-}
100
-
101
/* AdvSIMD two reg misc
102
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
103
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
105
TCGv_ptr tcg_fpstatus;
106
107
switch (opcode) {
108
- case 0x13: /* SHLL, SHLL2 */
109
- if (u == 0 || size == 3) {
110
- unallocated_encoding(s);
111
- return;
112
- }
113
- if (!fp_access_check(s)) {
114
- return;
115
- }
116
- handle_shll(s, is_q, size, rn, rd);
117
- return;
118
case 0xc ... 0xf:
119
case 0x16 ... 0x1f:
120
{
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
122
case 0xa: /* CMLT */
123
case 0xb: /* ABS, NEG */
124
case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
125
+ case 0x13: /* SHLL, SHLL2 */
126
case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
127
unallocated_encoding(s);
128
return;
129
--
130
2.34.1
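
trans_SHLL_v() above widens each source lane and then shifts it left by the
element size (the widenfn followed by tcg_gen_shli_i64(..., 8 << a->esz)).
A minimal standalone sketch (plain C, not QEMU code) of one pass of the byte
case, where a 32-bit source half becomes four shifted 16-bit lanes:

/* SHLL for byte elements: widen u8 -> u16, then shift left by 8. */
#include <stdint.h>
#include <assert.h>

static uint64_t shll_u8(uint32_t src)   /* one 32-bit source half */
{
    uint64_t res = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t lane = (src >> (8 * i)) & 0xff;   /* widen          */
        res |= (lane << 8) << (16 * i);            /* shift by esize */
    }
    return res;
}

int main(void)
{
    /* 0x04030201 -> lanes 0x0100, 0x0200, 0x0300, 0x0400 */
    assert(shll_u8(0x04030201) == 0x0400030002000100ULL);
    return 0;
}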
From: Richard Henderson <richard.henderson@linaro.org>

Move the current implementation out of translate-neon.c,
and extend to handle all element sizes.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-54-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate.h      |  6 ++++++
 target/arm/tcg/gengvec.c        | 14 ++++++++++++++
 target/arm/tcg/translate-neon.c | 20 ++------------------
 3 files changed, 22 insertions(+), 18 deletions(-)

16
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/translate.h
19
+++ b/target/arm/tcg/translate.h
20
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uaddlp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
21
void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
22
uint32_t opr_sz, uint32_t max_sz);
23
24
+/* These exclusively manipulate the sign bit. */
25
+void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
26
+ uint32_t oprsz, uint32_t maxsz);
27
+void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
28
+ uint32_t oprsz, uint32_t maxsz);
29
+
30
/*
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
32
*/
33
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/gengvec.c
36
+++ b/target/arm/tcg/gengvec.c
37
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uadalp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
38
assert(vece <= MO_32);
39
tcg_gen_gvec_2(rd_ofs, rn_ofs, opr_sz, max_sz, &g[vece]);
40
}
41
+
42
+void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
43
+ uint32_t oprsz, uint32_t maxsz)
44
+{
45
+ uint64_t s_bit = 1ull << ((8 << vece) - 1);
46
+ tcg_gen_gvec_andi(vece, dofs, aofs, s_bit - 1, oprsz, maxsz);
47
+}
48
+
49
+void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
50
+ uint32_t oprsz, uint32_t maxsz)
51
+{
52
+ uint64_t s_bit = 1ull << ((8 << vece) - 1);
53
+ tcg_gen_gvec_xori(vece, dofs, aofs, s_bit, oprsz, maxsz);
54
+}
55
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/tcg/translate-neon.c
58
+++ b/target/arm/tcg/translate-neon.c
59
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
60
return true;
61
}
62
63
-static void gen_VABS_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
64
- uint32_t oprsz, uint32_t maxsz)
65
-{
66
- tcg_gen_gvec_andi(vece, rd_ofs, rm_ofs,
67
- vece == MO_16 ? 0x7fff : 0x7fffffff,
68
- oprsz, maxsz);
69
-}
70
-
71
static bool trans_VABS_F(DisasContext *s, arg_2misc *a)
72
{
73
if (a->size == MO_16) {
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VABS_F(DisasContext *s, arg_2misc *a)
75
} else if (a->size != MO_32) {
76
return false;
77
}
78
- return do_2misc_vec(s, a, gen_VABS_F);
79
-}
80
-
81
-static void gen_VNEG_F(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
82
- uint32_t oprsz, uint32_t maxsz)
83
-{
84
- tcg_gen_gvec_xori(vece, rd_ofs, rm_ofs,
85
- vece == MO_16 ? 0x8000 : 0x80000000,
86
- oprsz, maxsz);
87
+ return do_2misc_vec(s, a, gen_gvec_fabs);
88
}
89
90
static bool trans_VNEG_F(DisasContext *s, arg_2misc *a)
91
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_F(DisasContext *s, arg_2misc *a)
92
} else if (a->size != MO_32) {
93
return false;
94
}
95
- return do_2misc_vec(s, a, gen_VNEG_F);
96
+ return do_2misc_vec(s, a, gen_gvec_fneg);
97
}
98
99
static bool trans_VRECPE(DisasContext *s, arg_2misc *a)
100
--
101
2.34.1
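
As the new comment says, gen_gvec_fabs/gen_gvec_fneg above exclusively
manipulate the sign bit: the mask 1ull << ((8 << vece) - 1) picks the top bit
of each lane, so FABS is an AND and FNEG is an XOR regardless of float width.
A minimal standalone sketch (plain C, not QEMU code) of the 32-bit case:

/* Float abs/neg as pure sign-bit operations on the raw encoding. */
#include <stdint.h>
#include <string.h>
#include <assert.h>

static uint32_t fabs_bits32(uint32_t x) { return x & 0x7fffffffu; }
static uint32_t fneg_bits32(uint32_t x) { return x ^ 0x80000000u; }

int main(void)
{
    float f = -1.5f;
    uint32_t bits, a, n;
    float fa, fn;

    memcpy(&bits, &f, sizeof(bits));
    a = fabs_bits32(bits);
    n = fneg_bits32(bits);
    memcpy(&fa, &a, sizeof(fa));
    memcpy(&fn, &n, sizeof(fn));

    assert(fa == 1.5f && fn == 1.5f);   /* |-1.5| and -(-1.5) */
    return 0;
}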
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-55-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  7 +++++
 target/arm/tcg/translate-a64.c | 54 +++++++++++++++-------------------
 2 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
23
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
25
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
26
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
16
@@ -XXX,XX +XXX,XX @@
27
17
@qrr_s . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=2
28
/*
18
@qrr_bh . q:1 ...... . esz:1 ...... ...... rn:5 rd:5 &qrr_e
29
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
19
@qrr_hs . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_hs
30
+ * Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
20
+@qrr_sd . q:1 ...... .. ...... ...... rn:5 rd:5 &qrr_e esz=%esz_sd
31
+ * mprotect but PROT_BTI may be cleared. C.f. the kernel's VM_ARCH_CLEAR.
21
@qrr_e . q:1 ...... esz:2 ...... ...... rn:5 rd:5 &qrr_e
32
*/
22
33
-#define PAGE_BTI PAGE_TARGET_1
23
@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
34
-#define PAGE_MTE PAGE_TARGET_2
24
@@ -XXX,XX +XXX,XX @@ FCVTXN_v 0.10 1110 011 00001 01101 0 ..... ..... @qrr_s
35
+#define PAGE_BTI PAGE_TARGET_1
25
BFCVTN_v 0.00 1110 101 00001 01101 0 ..... ..... @qrr_h
36
+#define PAGE_MTE PAGE_TARGET_2
26
37
+#define PAGE_TARGET_STICKY PAGE_MTE
27
SHLL_v 0.10 1110 ..1 00001 00111 0 ..... ..... @qrr_e
38
28
+
39
#ifdef TARGET_TAGGED_ADDRESSES
29
+FABS_v 0.00 1110 111 11000 11111 0 ..... ..... @qrr_h
40
/**
30
+FABS_v 0.00 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
31
+
32
+FNEG_v 0.10 1110 111 11000 11111 0 ..... ..... @qrr_h
33
+FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
42
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/translate-all.c
36
--- a/target/arm/tcg/translate-a64.c
44
+++ b/accel/tcg/translate-all.c
37
+++ b/target/arm/tcg/translate-a64.c
45
@@ -XXX,XX +XXX,XX @@ int page_get_flags(target_ulong address)
38
@@ -XXX,XX +XXX,XX @@ static bool trans_SHLL_v(DisasContext *s, arg_qrr_e *a)
46
return p->flags;
39
return true;
47
}
40
}
48
41
49
+/*
42
+static bool do_fabs_fneg_v(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
50
+ * Allow the target to decide if PAGE_TARGET_[12] may be reset.
43
+{
51
+ * By default, they are not kept.
44
+ int check = fp_access_check_vector_hsd(s, a->q, a->esz);
52
+ */
53
+#ifndef PAGE_TARGET_STICKY
54
+#define PAGE_TARGET_STICKY 0
55
+#endif
56
+#define PAGE_STICKY (PAGE_ANON | PAGE_TARGET_STICKY)
57
+
45
+
58
/* Modify the flags of a page and invalidate the code if necessary.
46
+ if (check <= 0) {
59
The flag PAGE_WRITE_ORG is positioned automatically depending
47
+ return check == 0;
60
on PAGE_WRITE. The mmap_lock should already be held. */
48
+ }
61
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
49
+
62
p->target_data = NULL;
50
+ gen_gvec_fn2(s, a->q, a->rd, a->rn, fn, a->esz);
63
p->flags = flags;
51
+ return true;
64
} else {
52
+}
65
- /* Using mprotect on a page does not change MAP_ANON. */
53
+
66
- p->flags = (p->flags & PAGE_ANON) | flags;
54
+TRANS(FABS_v, do_fabs_fneg_v, a, gen_gvec_fabs)
67
+ /* Using mprotect on a page does not change sticky bits. */
55
+TRANS(FNEG_v, do_fabs_fneg_v, a, gen_gvec_fneg)
68
+ p->flags = (p->flags & PAGE_STICKY) | flags;
56
69
}
57
/* Common vector code for handling integer to FP conversion */
58
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
59
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
60
* requires them.
61
*/
62
switch (opcode) {
63
- case 0x2f: /* FABS */
64
- gen_vfp_absd(tcg_rd, tcg_rn);
65
- break;
66
- case 0x6f: /* FNEG */
67
- gen_vfp_negd(tcg_rd, tcg_rn);
68
- break;
69
case 0x7f: /* FSQRT */
70
gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
71
break;
72
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
73
case 0x9: /* CMEQ, CMLE */
74
case 0xa: /* CMLT */
75
case 0xb: /* ABS, NEG */
76
+ case 0x2f: /* FABS */
77
+ case 0x6f: /* FNEG */
78
g_assert_not_reached();
70
}
79
}
71
}
80
}
81
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
82
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
83
size = is_double ? 3 : 2;
84
switch (opcode) {
85
- case 0x2f: /* FABS */
86
- case 0x6f: /* FNEG */
87
- if (size == 3 && !is_q) {
88
- unallocated_encoding(s);
89
- return;
90
- }
91
- break;
92
case 0x1d: /* SCVTF */
93
case 0x5d: /* UCVTF */
94
{
95
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
96
case 0x16: /* FCVTN, FCVTN2 */
97
case 0x36: /* BFCVTN, BFCVTN2 */
98
case 0x56: /* FCVTXN, FCVTXN2 */
99
+ case 0x2f: /* FABS */
100
+ case 0x6f: /* FNEG */
101
unallocated_encoding(s);
102
return;
103
}
104
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
105
{
106
/* Special cases for 32 bit elements */
107
switch (opcode) {
108
- case 0x2f: /* FABS */
109
- gen_vfp_abss(tcg_res, tcg_op);
110
- break;
111
- case 0x6f: /* FNEG */
112
- gen_vfp_negs(tcg_res, tcg_op);
113
- break;
114
case 0x7f: /* FSQRT */
115
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
116
break;
117
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
118
break;
119
default:
120
case 0x7: /* SQABS, SQNEG */
121
+ case 0x2f: /* FABS */
122
+ case 0x6f: /* FNEG */
123
g_assert_not_reached();
124
}
125
}
126
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
127
case 0x7b: /* FCVTZU */
128
rmode = FPROUNDING_ZERO;
129
break;
130
- case 0x2f: /* FABS */
131
- case 0x6f: /* FNEG */
132
- only_in_vector = true;
133
- need_fpst = false;
134
- break;
135
case 0x7d: /* FRSQRTE */
136
break;
137
case 0x7f: /* FSQRT (vector) */
138
only_in_vector = true;
139
break;
140
default:
141
+ case 0x2f: /* FABS */
142
+ case 0x6f: /* FNEG */
143
unallocated_encoding(s);
144
return;
145
}
146
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
147
case 0x59: /* FRINTX */
148
gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, tcg_fpstatus);
149
break;
150
- case 0x2f: /* FABS */
151
- tcg_gen_andi_i32(tcg_res, tcg_op, 0x7fff);
152
- break;
153
- case 0x6f: /* FNEG */
154
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
155
- break;
156
case 0x7d: /* FRSQRTE */
157
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
158
break;
159
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
160
gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
161
break;
162
default:
163
+ case 0x2f: /* FABS */
164
+ case 0x6f: /* FNEG */
165
g_assert_not_reached();
166
}
167
72
--
168
--
73
2.25.1
169
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-56-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  3 ++
 target/arm/tcg/translate-a64.c | 69 ++++++++++++++++++----------------
 2 files changed, 53 insertions(+), 19 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FABS_v 0.00 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
17
18
FNEG_v 0.10 1110 111 11000 11111 0 ..... ..... @qrr_h
19
FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
20
+
21
+FSQRT_v 0.10 1110 111 11001 11111 0 ..... ..... @qrr_h
22
+FSQRT_v 0.10 1110 1.1 00001 11111 0 ..... ..... @qrr_sd
23
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/tcg/translate-a64.c
26
+++ b/target/arm/tcg/translate-a64.c
27
@@ -XXX,XX +XXX,XX @@ static bool do_fabs_fneg_v(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
28
TRANS(FABS_v, do_fabs_fneg_v, a, gen_gvec_fabs)
29
TRANS(FNEG_v, do_fabs_fneg_v, a, gen_gvec_fneg)
30
31
+static bool do_fp1_vector(DisasContext *s, arg_qrr_e *a,
32
+ const FPScalar1 *f, int rmode)
33
+{
34
+ TCGv_i32 tcg_rmode = NULL;
35
+ TCGv_ptr fpst;
36
+ int check = fp_access_check_vector_hsd(s, a->q, a->esz);
37
+
38
+ if (check <= 0) {
39
+ return check == 0;
40
+ }
41
+
42
+ fpst = fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
43
+ if (rmode >= 0) {
44
+ tcg_rmode = gen_set_rmode(rmode, fpst);
45
+ }
46
+
47
+ if (a->esz == MO_64) {
48
+ TCGv_i64 t64 = tcg_temp_new_i64();
49
+
50
+ for (int pass = 0; pass < 2; ++pass) {
51
+ read_vec_element(s, t64, a->rn, pass, MO_64);
52
+ f->gen_d(t64, t64, fpst);
53
+ write_vec_element(s, t64, a->rd, pass, MO_64);
54
+ }
55
+ } else {
56
+ TCGv_i32 t32 = tcg_temp_new_i32();
57
+ void (*gen)(TCGv_i32, TCGv_i32, TCGv_ptr)
58
+ = (a->esz == MO_16 ? f->gen_h : f->gen_s);
59
+
60
+ for (int pass = 0, n = (a->q ? 16 : 8) >> a->esz; pass < n; ++pass) {
61
+ read_vec_element_i32(s, t32, a->rn, pass, a->esz);
62
+ gen(t32, t32, fpst);
63
+ write_vec_element_i32(s, t32, a->rd, pass, a->esz);
64
+ }
65
+ }
66
+ clear_vec_high(s, a->q, a->rd);
67
+
68
+ if (rmode >= 0) {
69
+ gen_restore_rmode(tcg_rmode, fpst);
70
+ }
71
+ return true;
72
+}
73
+
74
+TRANS(FSQRT_v, do_fp1_vector, a, &f_scalar_fsqrt, -1)
75
+
76
/* Common vector code for handling integer to FP conversion */
77
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
78
int elements, int is_signed,
79
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
80
* requires them.
81
*/
82
switch (opcode) {
83
- case 0x7f: /* FSQRT */
84
- gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_fpstatus);
85
- break;
86
case 0x1a: /* FCVTNS */
87
case 0x1b: /* FCVTMS */
88
case 0x1c: /* FCVTAS */
89
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
90
case 0xb: /* ABS, NEG */
91
case 0x2f: /* FABS */
92
case 0x6f: /* FNEG */
93
+ case 0x7f: /* FSQRT */
94
g_assert_not_reached();
95
}
96
}
97
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
98
}
99
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
100
return;
101
- case 0x7f: /* FSQRT */
102
- need_fpstatus = true;
103
- if (size == 3 && !is_q) {
104
- unallocated_encoding(s);
105
- return;
106
- }
107
- break;
108
case 0x1a: /* FCVTNS */
109
case 0x1b: /* FCVTMS */
110
case 0x3a: /* FCVTPS */
111
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
112
case 0x56: /* FCVTXN, FCVTXN2 */
113
case 0x2f: /* FABS */
114
case 0x6f: /* FNEG */
115
+ case 0x7f: /* FSQRT */
116
unallocated_encoding(s);
117
return;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
120
{
121
/* Special cases for 32 bit elements */
122
switch (opcode) {
123
- case 0x7f: /* FSQRT */
124
- gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_fpstatus);
125
- break;
126
case 0x1a: /* FCVTNS */
127
case 0x1b: /* FCVTMS */
128
case 0x1c: /* FCVTAS */
129
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
130
case 0x7: /* SQABS, SQNEG */
131
case 0x2f: /* FABS */
132
case 0x6f: /* FNEG */
133
+ case 0x7f: /* FSQRT */
134
g_assert_not_reached();
135
}
136
}
137
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
138
break;
139
case 0x7d: /* FRSQRTE */
140
break;
141
- case 0x7f: /* FSQRT (vector) */
142
- only_in_vector = true;
143
- break;
144
default:
145
case 0x2f: /* FABS */
146
case 0x6f: /* FNEG */
147
+ case 0x7f: /* FSQRT (vector) */
148
unallocated_encoding(s);
149
return;
150
}
151
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
152
case 0x7d: /* FRSQRTE */
153
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
154
break;
155
- case 0x7f: /* FSQRT */
156
- gen_helper_vfp_sqrth(tcg_res, tcg_op, tcg_fpstatus);
157
- break;
158
default:
159
case 0x2f: /* FABS */
160
case 0x6f: /* FNEG */
161
+ case 0x7f: /* FSQRT */
162
g_assert_not_reached();
163
}
164
165
--
166
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-57-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  26 +++++
 target/arm/tcg/translate-a64.c | 176 ++++++++++++---------------------
 2 files changed, 88 insertions(+), 114 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FNEG_v 0.10 1110 1.1 00000 11111 0 ..... ..... @qrr_sd
17
18
FSQRT_v 0.10 1110 111 11001 11111 0 ..... ..... @qrr_h
19
FSQRT_v 0.10 1110 1.1 00001 11111 0 ..... ..... @qrr_sd
20
+
21
+FRINTN_v 0.00 1110 011 11001 10001 0 ..... ..... @qrr_h
22
+FRINTN_v 0.00 1110 0.1 00001 10001 0 ..... ..... @qrr_sd
23
+
24
+FRINTM_v 0.00 1110 011 11001 10011 0 ..... ..... @qrr_h
25
+FRINTM_v 0.00 1110 0.1 00001 10011 0 ..... ..... @qrr_sd
26
+
27
+FRINTP_v 0.00 1110 111 11001 10001 0 ..... ..... @qrr_h
28
+FRINTP_v 0.00 1110 1.1 00001 10001 0 ..... ..... @qrr_sd
29
+
30
+FRINTZ_v 0.00 1110 111 11001 10011 0 ..... ..... @qrr_h
31
+FRINTZ_v 0.00 1110 1.1 00001 10011 0 ..... ..... @qrr_sd
32
+
33
+FRINTA_v 0.10 1110 011 11001 10001 0 ..... ..... @qrr_h
34
+FRINTA_v 0.10 1110 0.1 00001 10001 0 ..... ..... @qrr_sd
35
+
36
+FRINTX_v 0.10 1110 011 11001 10011 0 ..... ..... @qrr_h
37
+FRINTX_v 0.10 1110 0.1 00001 10011 0 ..... ..... @qrr_sd
38
+
39
+FRINTI_v 0.10 1110 111 11001 10011 0 ..... ..... @qrr_h
40
+FRINTI_v 0.10 1110 1.1 00001 10011 0 ..... ..... @qrr_sd
41
+
42
+FRINT32Z_v 0.00 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
43
+FRINT32X_v 0.10 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
44
+FRINT64Z_v 0.00 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
45
+FRINT64X_v 0.10 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
46
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/tcg/translate-a64.c
49
+++ b/target/arm/tcg/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ static bool do_fp1_vector(DisasContext *s, arg_qrr_e *a,
51
52
TRANS(FSQRT_v, do_fp1_vector, a, &f_scalar_fsqrt, -1)
53
54
+TRANS(FRINTN_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_TIEEVEN)
55
+TRANS(FRINTP_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_POSINF)
56
+TRANS(FRINTM_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_NEGINF)
57
+TRANS(FRINTZ_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_ZERO)
58
+TRANS(FRINTA_v, do_fp1_vector, a, &f_scalar_frint, FPROUNDING_TIEAWAY)
59
+TRANS(FRINTI_v, do_fp1_vector, a, &f_scalar_frint, -1)
60
+TRANS(FRINTX_v, do_fp1_vector, a, &f_scalar_frintx, -1)
61
+
62
+TRANS_FEAT(FRINT32Z_v, aa64_frint, do_fp1_vector, a,
63
+ &f_scalar_frint32, FPROUNDING_ZERO)
64
+TRANS_FEAT(FRINT32X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint32, -1)
65
+TRANS_FEAT(FRINT64Z_v, aa64_frint, do_fp1_vector, a,
66
+ &f_scalar_frint64, FPROUNDING_ZERO)
67
+TRANS_FEAT(FRINT64X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint64, -1)
68
+
69
/* Common vector code for handling integer to FP conversion */
70
static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
71
int elements, int is_signed,
72
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
73
case 0x7b: /* FCVTZU */
74
gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
75
break;
76
- case 0x18: /* FRINTN */
77
- case 0x19: /* FRINTM */
78
- case 0x38: /* FRINTP */
79
- case 0x39: /* FRINTZ */
80
- case 0x58: /* FRINTA */
81
- case 0x79: /* FRINTI */
82
- gen_helper_rintd(tcg_rd, tcg_rn, tcg_fpstatus);
83
- break;
84
- case 0x59: /* FRINTX */
85
- gen_helper_rintd_exact(tcg_rd, tcg_rn, tcg_fpstatus);
86
- break;
87
- case 0x1e: /* FRINT32Z */
88
- case 0x5e: /* FRINT32X */
89
- gen_helper_frint32_d(tcg_rd, tcg_rn, tcg_fpstatus);
90
- break;
91
- case 0x1f: /* FRINT64Z */
92
- case 0x5f: /* FRINT64X */
93
- gen_helper_frint64_d(tcg_rd, tcg_rn, tcg_fpstatus);
94
- break;
95
default:
96
case 0x4: /* CLS, CLZ */
97
case 0x5: /* NOT */
98
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
99
case 0x2f: /* FABS */
100
case 0x6f: /* FNEG */
101
case 0x7f: /* FSQRT */
102
+ case 0x18: /* FRINTN */
103
+ case 0x19: /* FRINTM */
104
+ case 0x38: /* FRINTP */
105
+ case 0x39: /* FRINTZ */
106
+ case 0x58: /* FRINTA */
107
+ case 0x79: /* FRINTI */
108
+ case 0x59: /* FRINTX */
109
+ case 0x1e: /* FRINT32Z */
110
+ case 0x5e: /* FRINT32X */
111
+ case 0x1f: /* FRINT64Z */
112
+ case 0x5f: /* FRINT64X */
113
g_assert_not_reached();
114
}
115
}
116
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
117
}
118
handle_2misc_widening(s, opcode, is_q, size, rn, rd);
119
return;
120
- case 0x18: /* FRINTN */
121
- case 0x19: /* FRINTM */
122
- case 0x38: /* FRINTP */
123
- case 0x39: /* FRINTZ */
124
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
125
- /* fall through */
126
- case 0x59: /* FRINTX */
127
- case 0x79: /* FRINTI */
128
- need_fpstatus = true;
129
- if (size == 3 && !is_q) {
130
- unallocated_encoding(s);
131
- return;
132
- }
133
- break;
134
- case 0x58: /* FRINTA */
135
- rmode = FPROUNDING_TIEAWAY;
136
- need_fpstatus = true;
137
- if (size == 3 && !is_q) {
138
- unallocated_encoding(s);
139
- return;
140
- }
141
- break;
142
case 0x7c: /* URSQRTE */
143
if (size == 3) {
144
unallocated_encoding(s);
145
return;
146
}
147
break;
148
- case 0x1e: /* FRINT32Z */
149
- case 0x1f: /* FRINT64Z */
150
- rmode = FPROUNDING_ZERO;
151
- /* fall through */
152
- case 0x5e: /* FRINT32X */
153
- case 0x5f: /* FRINT64X */
154
- need_fpstatus = true;
155
- if ((size == 3 && !is_q) || !dc_isar_feature(aa64_frint, s)) {
156
- unallocated_encoding(s);
157
- return;
158
- }
159
- break;
160
default:
161
case 0x16: /* FCVTN, FCVTN2 */
162
case 0x36: /* BFCVTN, BFCVTN2 */
163
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
164
case 0x2f: /* FABS */
165
case 0x6f: /* FNEG */
166
case 0x7f: /* FSQRT */
167
+ case 0x18: /* FRINTN */
168
+ case 0x19: /* FRINTM */
169
+ case 0x38: /* FRINTP */
170
+ case 0x39: /* FRINTZ */
171
+ case 0x59: /* FRINTX */
172
+ case 0x79: /* FRINTI */
173
+ case 0x58: /* FRINTA */
174
+ case 0x1e: /* FRINT32Z */
175
+ case 0x1f: /* FRINT64Z */
176
+ case 0x5e: /* FRINT32X */
177
+ case 0x5f: /* FRINT64X */
178
unallocated_encoding(s);
179
return;
180
}
181
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
182
gen_helper_vfp_touls(tcg_res, tcg_op,
183
tcg_constant_i32(0), tcg_fpstatus);
184
break;
185
- case 0x18: /* FRINTN */
186
- case 0x19: /* FRINTM */
187
- case 0x38: /* FRINTP */
188
- case 0x39: /* FRINTZ */
189
- case 0x58: /* FRINTA */
190
- case 0x79: /* FRINTI */
191
- gen_helper_rints(tcg_res, tcg_op, tcg_fpstatus);
192
- break;
193
- case 0x59: /* FRINTX */
194
- gen_helper_rints_exact(tcg_res, tcg_op, tcg_fpstatus);
195
- break;
196
case 0x7c: /* URSQRTE */
197
gen_helper_rsqrte_u32(tcg_res, tcg_op);
198
break;
199
- case 0x1e: /* FRINT32Z */
200
- case 0x5e: /* FRINT32X */
201
- gen_helper_frint32_s(tcg_res, tcg_op, tcg_fpstatus);
202
- break;
203
- case 0x1f: /* FRINT64Z */
204
- case 0x5f: /* FRINT64X */
205
- gen_helper_frint64_s(tcg_res, tcg_op, tcg_fpstatus);
206
- break;
207
default:
208
case 0x7: /* SQABS, SQNEG */
209
case 0x2f: /* FABS */
210
case 0x6f: /* FNEG */
211
case 0x7f: /* FSQRT */
212
+ case 0x18: /* FRINTN */
213
+ case 0x19: /* FRINTM */
214
+ case 0x38: /* FRINTP */
215
+ case 0x39: /* FRINTZ */
216
+ case 0x58: /* FRINTA */
217
+ case 0x79: /* FRINTI */
218
+ case 0x59: /* FRINTX */
219
+ case 0x1e: /* FRINT32Z */
220
+ case 0x5e: /* FRINT32X */
221
+ case 0x1f: /* FRINT64Z */
222
+ case 0x5f: /* FRINT64X */
223
g_assert_not_reached();
224
}
225
}
226
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
227
int rn, rd;
228
bool is_q;
229
bool is_scalar;
230
- bool only_in_vector = false;
231
232
int pass;
233
TCGv_i32 tcg_rmode = NULL;
234
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
235
case 0x3d: /* FRECPE */
236
case 0x3f: /* FRECPX */
237
break;
238
- case 0x18: /* FRINTN */
239
- only_in_vector = true;
240
- rmode = FPROUNDING_TIEEVEN;
241
- break;
242
- case 0x19: /* FRINTM */
243
- only_in_vector = true;
244
- rmode = FPROUNDING_NEGINF;
245
- break;
246
- case 0x38: /* FRINTP */
247
- only_in_vector = true;
248
- rmode = FPROUNDING_POSINF;
249
- break;
250
- case 0x39: /* FRINTZ */
251
- only_in_vector = true;
252
- rmode = FPROUNDING_ZERO;
253
- break;
254
- case 0x58: /* FRINTA */
255
- only_in_vector = true;
256
- rmode = FPROUNDING_TIEAWAY;
257
- break;
258
- case 0x59: /* FRINTX */
259
- case 0x79: /* FRINTI */
260
- only_in_vector = true;
261
- /* current rounding mode */
262
- break;
263
case 0x1a: /* FCVTNS */
264
rmode = FPROUNDING_TIEEVEN;
265
break;
266
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
267
case 0x2f: /* FABS */
268
case 0x6f: /* FNEG */
269
case 0x7f: /* FSQRT (vector) */
270
+ case 0x18: /* FRINTN */
271
+ case 0x19: /* FRINTM */
272
+ case 0x38: /* FRINTP */
273
+ case 0x39: /* FRINTZ */
274
+ case 0x58: /* FRINTA */
275
+ case 0x59: /* FRINTX */
276
+ case 0x79: /* FRINTI */
277
unallocated_encoding(s);
278
return;
279
}
280
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
281
unallocated_encoding(s);
282
return;
283
}
284
- /* FRINTxx is only in the vector form */
285
- if (only_in_vector) {
286
- unallocated_encoding(s);
287
- return;
288
- }
289
}
290
291
if (!fp_access_check(s)) {
292
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
293
case 0x7b: /* FCVTZU */
294
gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
295
break;
296
- case 0x18: /* FRINTN */
297
- case 0x19: /* FRINTM */
298
- case 0x38: /* FRINTP */
299
- case 0x39: /* FRINTZ */
300
- case 0x58: /* FRINTA */
301
- case 0x79: /* FRINTI */
302
- gen_helper_advsimd_rinth(tcg_res, tcg_op, tcg_fpstatus);
303
- break;
304
- case 0x59: /* FRINTX */
305
- gen_helper_advsimd_rinth_exact(tcg_res, tcg_op, tcg_fpstatus);
306
- break;
307
case 0x7d: /* FRSQRTE */
308
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
309
break;
310
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
311
case 0x2f: /* FABS */
312
case 0x6f: /* FNEG */
313
case 0x7f: /* FSQRT */
314
+ case 0x18: /* FRINTN */
315
+ case 0x19: /* FRINTM */
316
+ case 0x38: /* FRINTP */
317
+ case 0x39: /* FRINTZ */
318
+ case 0x58: /* FRINTA */
319
+ case 0x79: /* FRINTI */
320
+ case 0x59: /* FRINTX */
321
g_assert_not_reached();
322
}
323
324
--
325
2.34.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

The Arm naming is a little silly here: these are the scalar insns
described as part of the vector instructions, as distinct from the
"regular" scalar insns, which output to general registers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-58-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 30 ++++++++
 target/arm/tcg/translate-a64.c | 133 ++++++++++++++-------------------
 2 files changed, 86 insertions(+), 77 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
21
22
FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
23
24
+@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
25
+ &fcvt sf=0 esz=1 shift=0
26
+@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
27
+ &fcvt sf=0 esz=%esz_sd shift=0
28
+
29
+FCVTNS_f 0101 1110 011 11001 10101 0 ..... ..... @icvt_h
30
+FCVTNS_f 0101 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
31
+FCVTNU_f 0111 1110 011 11001 10101 0 ..... ..... @icvt_h
32
+FCVTNU_f 0111 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
33
+
34
+FCVTPS_f 0101 1110 111 11001 10101 0 ..... ..... @icvt_h
35
+FCVTPS_f 0101 1110 1.1 00001 10101 0 ..... ..... @icvt_sd
36
+FCVTPU_f 0111 1110 111 11001 10101 0 ..... ..... @icvt_h
37
+FCVTPU_f 0111 1110 1.1 00001 10101 0 ..... ..... @icvt_sd
38
+
39
+FCVTMS_f 0101 1110 011 11001 10111 0 ..... ..... @icvt_h
40
+FCVTMS_f 0101 1110 0.1 00001 10111 0 ..... ..... @icvt_sd
41
+FCVTMU_f 0111 1110 011 11001 10111 0 ..... ..... @icvt_h
42
+FCVTMU_f 0111 1110 0.1 00001 10111 0 ..... ..... @icvt_sd
43
+
44
+FCVTZS_f 0101 1110 111 11001 10111 0 ..... ..... @icvt_h
45
+FCVTZS_f 0101 1110 1.1 00001 10111 0 ..... ..... @icvt_sd
46
+FCVTZU_f 0111 1110 111 11001 10111 0 ..... ..... @icvt_h
47
+FCVTZU_f 0111 1110 1.1 00001 10111 0 ..... ..... @icvt_sd
48
+
49
+FCVTAS_f 0101 1110 011 11001 11001 0 ..... ..... @icvt_h
50
+FCVTAS_f 0101 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
51
+FCVTAU_f 0111 1110 011 11001 11001 0 ..... ..... @icvt_h
52
+FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
53
+
54
# Advanced SIMD two-register miscellaneous
55
56
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
57
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/tcg/translate-a64.c
60
+++ b/target/arm/tcg/translate-a64.c
61
@@ -XXX,XX +XXX,XX @@ static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
62
tcg_shift, tcg_fpstatus);
63
tcg_gen_extu_i32_i64(tcg_out, tcg_single);
64
break;
65
+ case MO_16 | MO_SIGN:
66
+ gen_helper_vfp_toshh(tcg_single, tcg_single,
67
+ tcg_shift, tcg_fpstatus);
68
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
69
+ break;
70
+ case MO_16:
71
+ gen_helper_vfp_touhh(tcg_single, tcg_single,
72
+ tcg_shift, tcg_fpstatus);
73
+ tcg_gen_extu_i32_i64(tcg_out, tcg_single);
74
+ break;
75
default:
76
g_assert_not_reached();
77
}
78
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTZU_g, do_fcvt_g, a, FPROUNDING_ZERO, false)
79
TRANS(FCVTAS_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, true)
80
TRANS(FCVTAU_g, do_fcvt_g, a, FPROUNDING_TIEAWAY, false)
81
82
+/*
83
+ * FCVT* (vector), scalar version.
84
+ * Which sounds weird, but really just means output to fp register
85
+ * instead of output to general register. Input and output element
86
+ * size are always equal.
87
+ */
88
+static bool do_fcvt_f(DisasContext *s, arg_fcvt *a,
89
+ ARMFPRounding rmode, bool is_signed)
90
+{
91
+ TCGv_i64 tcg_int;
92
+ int check = fp_access_check_scalar_hsd(s, a->esz);
93
+
94
+ if (check <= 0) {
95
+ return check == 0;
96
+ }
97
+
98
+ tcg_int = tcg_temp_new_i64();
99
+ do_fcvt_scalar(s, a->esz | (is_signed ? MO_SIGN : 0),
100
+ a->esz, tcg_int, a->shift, a->rn, rmode);
101
+
102
+ clear_vec(s, a->rd);
103
+ write_vec_element(s, tcg_int, a->rd, 0, a->esz);
104
+ return true;
105
+}
106
+
107
+TRANS(FCVTNS_f, do_fcvt_f, a, FPROUNDING_TIEEVEN, true)
108
+TRANS(FCVTNU_f, do_fcvt_f, a, FPROUNDING_TIEEVEN, false)
109
+TRANS(FCVTPS_f, do_fcvt_f, a, FPROUNDING_POSINF, true)
110
+TRANS(FCVTPU_f, do_fcvt_f, a, FPROUNDING_POSINF, false)
111
+TRANS(FCVTMS_f, do_fcvt_f, a, FPROUNDING_NEGINF, true)
112
+TRANS(FCVTMU_f, do_fcvt_f, a, FPROUNDING_NEGINF, false)
113
+TRANS(FCVTZS_f, do_fcvt_f, a, FPROUNDING_ZERO, true)
114
+TRANS(FCVTZU_f, do_fcvt_f, a, FPROUNDING_ZERO, false)
115
+TRANS(FCVTAS_f, do_fcvt_f, a, FPROUNDING_TIEAWAY, true)
116
+TRANS(FCVTAU_f, do_fcvt_f, a, FPROUNDING_TIEAWAY, false)
117
+
118
static bool trans_FJCVTZS(DisasContext *s, arg_FJCVTZS *a)
119
{
120
if (!dc_isar_feature(aa64_jscvt, s)) {
121
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
122
int opcode = extract32(insn, 12, 5);
123
int size = extract32(insn, 22, 2);
124
bool u = extract32(insn, 29, 1);
125
- bool is_fcvt = false;
126
- int rmode;
127
- TCGv_i32 tcg_rmode;
128
- TCGv_ptr tcg_fpstatus;
129
130
switch (opcode) {
131
case 0xc ... 0xf:
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
133
case 0x5b: /* FCVTMU */
134
case 0x7a: /* FCVTPU */
135
case 0x7b: /* FCVTZU */
136
- is_fcvt = true;
137
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
138
- break;
139
case 0x1c: /* FCVTAS */
140
case 0x5c: /* FCVTAU */
141
- /* TIEAWAY doesn't fit in the usual rounding mode encoding */
142
- is_fcvt = true;
143
- rmode = FPROUNDING_TIEAWAY;
144
- break;
145
case 0x56: /* FCVTXN, FCVTXN2 */
146
default:
147
unallocated_encoding(s);
148
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
149
unallocated_encoding(s);
150
return;
151
}
152
-
153
- if (!fp_access_check(s)) {
154
- return;
155
- }
156
-
157
- if (is_fcvt) {
158
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
159
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
160
- } else {
161
- tcg_fpstatus = NULL;
162
- tcg_rmode = NULL;
163
- }
164
-
165
- if (size == 3) {
166
- TCGv_i64 tcg_rn = read_fp_dreg(s, rn);
167
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
168
-
169
- handle_2misc_64(s, opcode, u, tcg_rd, tcg_rn, tcg_rmode, tcg_fpstatus);
170
- write_fp_dreg(s, rd, tcg_rd);
171
- } else {
172
- TCGv_i32 tcg_rn = tcg_temp_new_i32();
173
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
174
-
175
- read_vec_element_i32(s, tcg_rn, rn, 0, size);
176
-
177
- switch (opcode) {
178
- case 0x1a: /* FCVTNS */
179
- case 0x1b: /* FCVTMS */
180
- case 0x1c: /* FCVTAS */
181
- case 0x3a: /* FCVTPS */
182
- case 0x3b: /* FCVTZS */
183
- gen_helper_vfp_tosls(tcg_rd, tcg_rn, tcg_constant_i32(0),
184
- tcg_fpstatus);
185
- break;
186
- case 0x5a: /* FCVTNU */
187
- case 0x5b: /* FCVTMU */
188
- case 0x5c: /* FCVTAU */
189
- case 0x7a: /* FCVTPU */
190
- case 0x7b: /* FCVTZU */
191
- gen_helper_vfp_touls(tcg_rd, tcg_rn, tcg_constant_i32(0),
192
- tcg_fpstatus);
193
- break;
194
- default:
195
- case 0x7: /* SQABS, SQNEG */
196
- g_assert_not_reached();
197
- }
198
-
199
- write_fp_sreg(s, rd, tcg_rd);
200
- }
201
-
202
- if (is_fcvt) {
203
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
204
- }
205
+ g_assert_not_reached();
206
}
207
208
/* AdvSIMD shift by immediate
209
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
210
TCGv_i32 tcg_res = tcg_temp_new_i32();
211
212
switch (fpop) {
213
- case 0x1a: /* FCVTNS */
214
- case 0x1b: /* FCVTMS */
215
- case 0x1c: /* FCVTAS */
216
- case 0x3a: /* FCVTPS */
217
- case 0x3b: /* FCVTZS */
218
- gen_helper_advsimd_f16tosinth(tcg_res, tcg_op, tcg_fpstatus);
219
- break;
220
case 0x3d: /* FRECPE */
221
gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
222
break;
223
case 0x3f: /* FRECPX */
224
gen_helper_frecpx_f16(tcg_res, tcg_op, tcg_fpstatus);
225
break;
226
+ case 0x7d: /* FRSQRTE */
227
+ gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
228
+ break;
229
+ default:
230
+ case 0x1a: /* FCVTNS */
231
+ case 0x1b: /* FCVTMS */
232
+ case 0x1c: /* FCVTAS */
233
+ case 0x3a: /* FCVTPS */
234
+ case 0x3b: /* FCVTZS */
235
case 0x5a: /* FCVTNU */
236
case 0x5b: /* FCVTMU */
237
case 0x5c: /* FCVTAU */
238
case 0x7a: /* FCVTPU */
239
case 0x7b: /* FCVTZU */
240
- gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
241
- break;
242
- case 0x7d: /* FRSQRTE */
243
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
244
- break;
245
- default:
246
g_assert_not_reached();
247
}
248
249
--
250
2.34.1
diff view generated by jsdifflib
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-59-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 19 +++++++++++++++++++
 target/arm/tcg/translate-a64.c | 4 +---
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ FCVTAS_f 0101 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
17
FCVTAU_f 0111 1110 011 11001 11001 0 ..... ..... @icvt_h
18
FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
19
20
+%fcvt_f_sh_h 16:4 !function=rsub_16
21
+%fcvt_f_sh_s 16:5 !function=rsub_32
22
+%fcvt_f_sh_d 16:6 !function=rsub_64
23
+
24
+@fcvt_fixed_h .... .... . 001 .... ...... rn:5 rd:5 \
25
+ &fcvt sf=0 esz=1 shift=%fcvt_f_sh_h
26
+@fcvt_fixed_s .... .... . 01 ..... ...... rn:5 rd:5 \
27
+ &fcvt sf=0 esz=2 shift=%fcvt_f_sh_s
28
+@fcvt_fixed_d .... .... . 1 ...... ...... rn:5 rd:5 \
29
+ &fcvt sf=0 esz=3 shift=%fcvt_f_sh_d
30
+
31
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
32
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
33
+FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
34
+
35
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
36
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
37
+FCVTZU_f 0111 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
38
+
39
# Advanced SIMD two-register miscellaneous
40
41
SQABS_v 0.00 1110 ..1 00000 01111 0 ..... ..... @qrr_e
42
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-a64.c
45
+++ b/target/arm/tcg/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
47
handle_simd_shift_intfp_conv(s, true, false, is_u, immh, immb,
48
opcode, rn, rd);
49
break;
50
- case 0x1f: /* FCVTZS, FCVTZU */
51
- handle_simd_shift_fpint_conv(s, true, false, is_u, immh, immb, rn, rd);
52
- break;
53
default:
54
case 0x00: /* SSHR / USHR */
55
case 0x02: /* SSRA / USRA */
56
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
57
case 0x11: /* SQRSHRUN */
58
case 0x12: /* SQSHRN, UQSHRN */
59
case 0x13: /* SQRSHRN, UQRSHRN */
60
+ case 0x1f: /* FCVTZS, FCVTZU */
61
unallocated_encoding(s);
62
break;
63
}
64
--
65
2.34.1
1
From: Andrey Makarov <ph.makarov@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
There is nothing in the specs on DMA engine interrupt lines: it should have
4
been in the "BCM2835 ARM Peripherals" datasheet but the appropriate
5
"ARM peripherals interrupt table" (p.113) is nearly empty.
6
7
All Raspberry Pi models 1-3 (based on bcm2835) have
8
Linux device tree (arch/arm/boot/dts/bcm2835-common.dtsi +25):
9
10
/* dma channel 11-14 share one irq */
11
12
This information is repeated in the driver code
13
(drivers/dma/bcm2835-dma.c +1344):
14
15
/*
16
* in case of channel >= 11
17
* use the 11th interrupt and that is shared
18
*/
19
20
In this patch channels 0--10 and 11--14 are handled separately.
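
A minimal sketch of that mapping (illustration only;
bcm2835_dma_irq_for_channel() is a hypothetical helper, not part of
this patch):

#include <stdio.h>

/* Channels 0..10 each get their own GPU IRQ line;
 * channels 11..14 all raise the line of channel 11. */
static int bcm2835_dma_irq_for_channel(int channel)
{
    return channel <= 10 ? channel : 11;
}

int main(void)
{
    for (int ch = 0; ch <= 14; ch++) {
        printf("DMA channel %2d -> GPU IRQ INTERRUPT_DMA0 + %d\n",
               ch, bcm2835_dma_irq_for_channel(ch));
    }
    return 0;
}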
21
22
Signed-off-by: Andrey Makarov <andrey.makarov@auriga.com>
23
Message-id: 20220716113210.349153-1-andrey.makarov@auriga.com
24
[PMM: fixed checkpatch nits]
25
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241211163036.2297116-60-richard.henderson@linaro.org
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
---
7
---
28
include/hw/arm/bcm2835_peripherals.h | 2 +
8
target/arm/tcg/a64.decode | 6 ++++++
29
hw/arm/bcm2835_peripherals.c | 26 +++++-
9
target/arm/tcg/translate-a64.c | 35 ++++++++++++++++++++++++----------
30
tests/qtest/bcm2835-dma-test.c | 118 +++++++++++++++++++++++++++
10
2 files changed, 31 insertions(+), 10 deletions(-)
31
tests/qtest/meson.build | 3 +-
32
4 files changed, 147 insertions(+), 2 deletions(-)
33
create mode 100644 tests/qtest/bcm2835-dma-test.c
34
11
35
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
36
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
37
--- a/include/hw/arm/bcm2835_peripherals.h
14
--- a/target/arm/tcg/a64.decode
38
+++ b/include/hw/arm/bcm2835_peripherals.h
15
+++ b/target/arm/tcg/a64.decode
39
@@ -XXX,XX +XXX,XX @@
16
@@ -XXX,XX +XXX,XX @@ FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
40
#include "hw/char/bcm2835_aux.h"
17
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
41
#include "hw/display/bcm2835_fb.h"
18
&fcvt sf=0 esz=%esz_sd shift=0
42
#include "hw/dma/bcm2835_dma.h"
19
43
+#include "hw/or-irq.h"
20
+SCVTF_f 0101 1110 011 11001 11011 0 ..... ..... @icvt_h
44
#include "hw/intc/bcm2835_ic.h"
21
+SCVTF_f 0101 1110 0.1 00001 11011 0 ..... ..... @icvt_sd
45
#include "hw/misc/bcm2835_property.h"
22
+
46
#include "hw/misc/bcm2835_rng.h"
23
+UCVTF_f 0111 1110 011 11001 11011 0 ..... ..... @icvt_h
47
@@ -XXX,XX +XXX,XX @@ struct BCM2835PeripheralState {
24
+UCVTF_f 0111 1110 0.1 00001 11011 0 ..... ..... @icvt_sd
48
BCM2835AuxState aux;
25
+
49
BCM2835FBState fb;
26
FCVTNS_f 0101 1110 011 11001 10101 0 ..... ..... @icvt_h
50
BCM2835DMAState dma;
27
FCVTNS_f 0101 1110 0.1 00001 10101 0 ..... ..... @icvt_sd
51
+ qemu_or_irq orgated_dma_irq;
28
FCVTNU_f 0111 1110 011 11001 10101 0 ..... ..... @icvt_h
52
BCM2835ICState ic;
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
53
BCM2835PropertyState property;
54
BCM2835RngState rng;
55
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
56
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/bcm2835_peripherals.c
31
--- a/target/arm/tcg/translate-a64.c
58
+++ b/hw/arm/bcm2835_peripherals.c
32
+++ b/target/arm/tcg/translate-a64.c
59
@@ -XXX,XX +XXX,XX @@
33
@@ -XXX,XX +XXX,XX @@ static bool do_cvtf_g(DisasContext *s, arg_fcvt *a, bool is_signed)
60
/* Capabilities for SD controller: no DMA, high-speed, default clocks etc. */
34
TRANS(SCVTF_g, do_cvtf_g, a, true)
61
#define BCM2835_SDHC_CAPAREG 0x52134b4
35
TRANS(UCVTF_g, do_cvtf_g, a, false)
62
36
63
+/*
37
+/*
64
+ * According to Linux driver & DTS, dma channels 0--10 have separate IRQ,
38
+ * [US]CVTF (vector), scalar version.
65
+ * while channels 11--14 share one IRQ:
39
+ * Which sounds weird, but really just means input from fp register
40
+ * instead of input from general register. Input and output element
41
+ * size are always equal.
66
+ */
42
+ */
67
+#define SEPARATE_DMA_IRQ_MAX 10
43
+static bool do_cvtf_f(DisasContext *s, arg_fcvt *a, bool is_signed)
68
+#define ORGATED_DMA_IRQ_COUNT 4
44
+{
45
+ TCGv_i64 tcg_int;
46
+ int check = fp_access_check_scalar_hsd(s, a->esz);
69
+
47
+
70
static void create_unimp(BCM2835PeripheralState *ps,
48
+ if (check <= 0) {
71
UnimplementedDeviceState *uds,
49
+ return check == 0;
72
const char *name, hwaddr ofs, hwaddr size)
73
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
74
/* DMA Channels */
75
object_initialize_child(obj, "dma", &s->dma, TYPE_BCM2835_DMA);
76
77
+ object_initialize_child(obj, "orgated-dma-irq",
78
+ &s->orgated_dma_irq, TYPE_OR_IRQ);
79
+ object_property_set_int(OBJECT(&s->orgated_dma_irq), "num-lines",
80
+ ORGATED_DMA_IRQ_COUNT, &error_abort);
81
+
82
object_property_add_const_link(OBJECT(&s->dma), "dma-mr",
83
OBJECT(&s->gpu_bus_mr));
84
85
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
86
memory_region_add_subregion(&s->peri_mr, DMA15_OFFSET,
87
sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dma), 1));
88
89
- for (n = 0; n <= 12; n++) {
90
+ for (n = 0; n <= SEPARATE_DMA_IRQ_MAX; n++) {
91
sysbus_connect_irq(SYS_BUS_DEVICE(&s->dma), n,
92
qdev_get_gpio_in_named(DEVICE(&s->ic),
93
BCM2835_IC_GPU_IRQ,
94
INTERRUPT_DMA0 + n));
95
}
96
+ if (!qdev_realize(DEVICE(&s->orgated_dma_irq), NULL, errp)) {
97
+ return;
98
+ }
99
+ for (n = 0; n < ORGATED_DMA_IRQ_COUNT; n++) {
100
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->dma),
101
+ SEPARATE_DMA_IRQ_MAX + 1 + n,
102
+ qdev_get_gpio_in(DEVICE(&s->orgated_dma_irq), n));
103
+ }
104
+ qdev_connect_gpio_out(DEVICE(&s->orgated_dma_irq), 0,
105
+ qdev_get_gpio_in_named(DEVICE(&s->ic),
106
+ BCM2835_IC_GPU_IRQ,
107
+ INTERRUPT_DMA0 + SEPARATE_DMA_IRQ_MAX + 1));
108
109
/* THERMAL */
110
if (!sysbus_realize(SYS_BUS_DEVICE(&s->thermal), errp)) {
111
diff --git a/tests/qtest/bcm2835-dma-test.c b/tests/qtest/bcm2835-dma-test.c
112
new file mode 100644
113
index XXXXXXX..XXXXXXX
114
--- /dev/null
115
+++ b/tests/qtest/bcm2835-dma-test.c
116
@@ -XXX,XX +XXX,XX @@
117
+/*
118
+ * QTest testcase for BCM283x DMA engine (on Raspberry Pi 3)
119
+ * and its interrupts coming to Interrupt Controller.
120
+ *
121
+ * Copyright (c) 2022 Auriga LLC
122
+ *
123
+ * SPDX-License-Identifier: GPL-2.0-or-later
124
+ */
125
+
126
+#include "qemu/osdep.h"
127
+#include "libqtest-single.h"
128
+
129
+/* Offsets in raspi3b platform: */
130
+#define RASPI3_DMA_BASE 0x3f007000
131
+#define RASPI3_IC_BASE 0x3f00b200
132
+
133
+/* Used register/fields definitions */
134
+
135
+/* DMA engine registers: */
136
+#define BCM2708_DMA_CS 0
137
+#define BCM2708_DMA_ACTIVE (1 << 0)
138
+#define BCM2708_DMA_INT (1 << 2)
139
+
140
+#define BCM2708_DMA_ADDR 0x04
141
+
142
+#define BCM2708_DMA_INT_STATUS 0xfe0
143
+
144
+/* DMA Trasfer Info fields: */
145
+#define BCM2708_DMA_INT_EN (1 << 0)
146
+#define BCM2708_DMA_D_INC (1 << 4)
147
+#define BCM2708_DMA_S_INC (1 << 8)
148
+
149
+/* Interrupt controller registers: */
150
+#define IRQ_PENDING_BASIC 0x00
151
+#define IRQ_GPU_PENDING1_AGGR (1 << 8)
152
+#define IRQ_PENDING_1 0x04
153
+#define IRQ_ENABLE_1 0x10
154
+
155
+/* Data for the test: */
156
+#define SCB_ADDR 256
157
+#define S_ADDR 32
158
+#define D_ADDR 64
159
+#define TXFR_LEN 32
160
+const uint32_t check_data = 0x12345678;
161
+
162
+static void bcm2835_dma_test_interrupt(int dma_c, int irq_line)
163
+{
164
+ uint64_t dma_base = RASPI3_DMA_BASE + dma_c * 0x100;
165
+ int gpu_irq_line = 16 + irq_line;
166
+
167
+ /* Check that interrupts are silent by default: */
168
+ writel(RASPI3_IC_BASE + IRQ_ENABLE_1, 1 << gpu_irq_line);
169
+ int isr = readl(dma_base + BCM2708_DMA_INT_STATUS);
170
+ g_assert_cmpint(isr, ==, 0);
171
+ uint32_t reg0 = readl(dma_base + BCM2708_DMA_CS);
172
+ g_assert_cmpint(reg0, ==, 0);
173
+ uint32_t ic_pending = readl(RASPI3_IC_BASE + IRQ_PENDING_BASIC);
174
+ g_assert_cmpint(ic_pending, ==, 0);
175
+ uint32_t gpu_pending1 = readl(RASPI3_IC_BASE + IRQ_PENDING_1);
176
+ g_assert_cmpint(gpu_pending1, ==, 0);
177
+
178
+ /* Prepare Control Block: */
179
+ writel(SCB_ADDR + 0, BCM2708_DMA_S_INC | BCM2708_DMA_D_INC |
180
+ BCM2708_DMA_INT_EN); /* transfer info */
181
+ writel(SCB_ADDR + 4, S_ADDR); /* source address */
182
+ writel(SCB_ADDR + 8, D_ADDR); /* destination address */
183
+ writel(SCB_ADDR + 12, TXFR_LEN); /* transfer length */
184
+ writel(dma_base + BCM2708_DMA_ADDR, SCB_ADDR);
185
+
186
+ writel(S_ADDR, check_data);
187
+ for (int word = S_ADDR + 4; word < S_ADDR + TXFR_LEN; word += 4) {
188
+ writel(word, ~check_data);
189
+ }
190
+ /* Perform the transfer: */
191
+ writel(dma_base + BCM2708_DMA_CS, BCM2708_DMA_ACTIVE);
192
+
193
+ /* Check that destination == source: */
194
+ uint32_t data = readl(D_ADDR);
195
+ g_assert_cmpint(data, ==, check_data);
196
+ for (int word = D_ADDR + 4; word < D_ADDR + TXFR_LEN; word += 4) {
197
+ data = readl(word);
198
+ g_assert_cmpint(data, ==, ~check_data);
199
+ }
50
+ }
200
+
51
+
201
+ /* Check that interrupt status is set both in DMA and IC controllers: */
52
+ tcg_int = tcg_temp_new_i64();
202
+ isr = readl(RASPI3_DMA_BASE + BCM2708_DMA_INT_STATUS);
53
+ read_vec_element(s, tcg_int, a->rn, 0, a->esz | (is_signed ? MO_SIGN : 0));
203
+ g_assert_cmpint(isr, ==, 1 << dma_c);
54
+ return do_cvtf_scalar(s, a->esz, a->rd, a->shift, tcg_int, is_signed);
204
+
205
+ ic_pending = readl(RASPI3_IC_BASE + IRQ_PENDING_BASIC);
206
+ g_assert_cmpint(ic_pending, ==, IRQ_GPU_PENDING1_AGGR);
207
+
208
+ gpu_pending1 = readl(RASPI3_IC_BASE + IRQ_PENDING_1);
209
+ g_assert_cmpint(gpu_pending1, ==, 1 << gpu_irq_line);
210
+
211
+ /* Clean up, clear interrupt: */
212
+ writel(dma_base + BCM2708_DMA_CS, BCM2708_DMA_INT);
213
+}
55
+}
214
+
56
+
215
+static void bcm2835_dma_test_interrupts(void)
57
+TRANS(SCVTF_f, do_cvtf_f, a, true)
216
+{
58
+TRANS(UCVTF_f, do_cvtf_f, a, false)
217
+ /* DMA engines 0--10 have separate IRQ lines, 11--14 - only one: */
218
+ bcm2835_dma_test_interrupt(0, 0);
219
+ bcm2835_dma_test_interrupt(10, 10);
220
+ bcm2835_dma_test_interrupt(11, 11);
221
+ bcm2835_dma_test_interrupt(14, 11);
222
+}
223
+
59
+
224
+int main(int argc, char **argv)
60
static void do_fcvt_scalar(DisasContext *s, MemOp out, MemOp esz,
225
+{
61
TCGv_i64 tcg_out, int shift, int rn,
226
+ int ret;
62
ARMFPRounding rmode)
227
+ g_test_init(&argc, &argv, NULL);
63
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
228
+ qtest_add_func("/bcm2835/dma/test_interrupts",
64
case 0x6d: /* FCMLE (zero) */
229
+ bcm2835_dma_test_interrupts);
65
handle_2misc_fcmp_zero(s, opcode, true, u, true, size, rn, rd);
230
+ qtest_start("-machine raspi3b");
66
return;
231
+ ret = g_test_run();
67
- case 0x1d: /* SCVTF */
232
+ qtest_end();
68
- case 0x5d: /* UCVTF */
233
+ return ret;
69
- {
234
+}
70
- bool is_signed = (opcode == 0x1d);
235
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
71
- if (!fp_access_check(s)) {
236
index XXXXXXX..XXXXXXX 100644
72
- return;
237
--- a/tests/qtest/meson.build
73
- }
238
+++ b/tests/qtest/meson.build
74
- handle_simd_intfp_conv(s, rd, rn, 1, is_signed, 0, size);
239
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
75
- return;
240
['arm-cpu-features',
76
- }
241
'numa-test',
77
case 0x3d: /* FRECPE */
242
'boot-serial-test',
78
case 0x3f: /* FRECPX */
243
- 'migration-test']
79
case 0x7d: /* FRSQRTE */
244
+ 'migration-test',
80
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
245
+ 'bcm2835-dma-test']
81
case 0x1c: /* FCVTAS */
246
82
case 0x5c: /* FCVTAU */
247
qtests_s390x = \
83
case 0x56: /* FCVTXN, FCVTXN2 */
248
(slirp.found() ? ['pxe-test', 'test-netfilter'] : []) + \
84
+ case 0x1d: /* SCVTF */
85
+ case 0x5d: /* UCVTF */
86
default:
87
unallocated_encoding(s);
88
return;
249
--
89
--
250
2.25.1
90
2.34.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Remove disas_simd_scalar_shift_imm as these were the
last insns decoded by that function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-61-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 8 ++++++
 target/arm/tcg/translate-a64.c | 47 ----------------------------------
 2 files changed, 8 insertions(+), 47 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FCVTAU_f 0111 1110 0.1 00001 11001 0 ..... ..... @icvt_sd
20
@fcvt_fixed_d .... .... . 1 ...... ...... rn:5 rd:5 \
21
&fcvt sf=0 esz=3 shift=%fcvt_f_sh_d
22
23
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_h
24
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_s
25
+SCVTF_f 0101 1111 0 ....... 111001 ..... ..... @fcvt_fixed_d
26
+
27
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_h
28
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_s
29
+UCVTF_f 0111 1111 0 ....... 111001 ..... ..... @fcvt_fixed_d
30
+
31
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_h
32
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_s
33
FCVTZS_f 0101 1111 0 ....... 111111 ..... ..... @fcvt_fixed_d
34
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/translate-a64.c
37
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
39
gen_restore_rmode(tcg_rmode, tcg_fpstatus);
40
}
41
42
-/* AdvSIMD scalar shift by immediate
43
- * 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
44
- * +-----+---+-------------+------+------+--------+---+------+------+
45
- * | 0 1 | U | 1 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
46
- * +-----+---+-------------+------+------+--------+---+------+------+
47
- *
48
- * This is the scalar version so it works on a fixed sized registers
49
- */
50
-static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
51
-{
52
- int rd = extract32(insn, 0, 5);
53
- int rn = extract32(insn, 5, 5);
54
- int opcode = extract32(insn, 11, 5);
55
- int immb = extract32(insn, 16, 3);
56
- int immh = extract32(insn, 19, 4);
57
- bool is_u = extract32(insn, 29, 1);
58
-
59
- if (immh == 0) {
60
- unallocated_encoding(s);
61
- return;
62
- }
63
-
64
- switch (opcode) {
65
- case 0x1c: /* SCVTF, UCVTF */
66
- handle_simd_shift_intfp_conv(s, true, false, is_u, immh, immb,
67
- opcode, rn, rd);
68
- break;
69
- default:
70
- case 0x00: /* SSHR / USHR */
71
- case 0x02: /* SSRA / USRA */
72
- case 0x04: /* SRSHR / URSHR */
73
- case 0x06: /* SRSRA / URSRA */
74
- case 0x08: /* SRI */
75
- case 0x0a: /* SHL / SLI */
76
- case 0x0c: /* SQSHLU */
77
- case 0x0e: /* SQSHL, UQSHL */
78
- case 0x10: /* SQSHRUN */
79
- case 0x11: /* SQRSHRUN */
80
- case 0x12: /* SQSHRN, UQSHRN */
81
- case 0x13: /* SQRSHRN, UQRSHRN */
82
- case 0x1f: /* FCVTZS, FCVTZU */
83
- unallocated_encoding(s);
84
- break;
85
- }
86
-}
87
-
88
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
89
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
90
TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
91
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
92
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
93
{ 0x0f000400, 0x9f800400, disas_simd_shift_imm },
94
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
95
- { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
96
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
97
{ 0x00000000, 0x00000000, NULL }
98
};
99
--
100
2.34.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Emphasize that these functions use round-to-zero mode.
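
As a rough sketch of the two directions these fixed-point helpers
cover (standalone C, not the QEMU implementation, and ignoring the
saturation and NaN handling the real vfp helpers perform):
fixed-point to float divides by 2^fracbits, while the renamed "_rz_"
direction multiplies by 2^fracbits and truncates toward zero.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

static double fixed_to_float(int32_t x, int fracbits)
{
    return ldexp((double)x, -fracbits);         /* x / 2^fracbits */
}

static int32_t float_to_fixed_rz(double x, int fracbits)
{
    return (int32_t)trunc(ldexp(x, fracbits));  /* round toward zero */
}

int main(void)
{
    printf("%f\n", fixed_to_float(0x180, 8));          /* 1.500000 */
    printf("%d\n", (int)float_to_fixed_rz(1.999, 8));  /* 511 */
    printf("%d\n", (int)float_to_fixed_rz(-1.75, 8));  /* -448 */
    return 0;
}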

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-62-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 8 ++++----
 target/arm/tcg/translate-neon.c | 8 ++++----
 target/arm/tcg/vec_helper.c | 8 ++++----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
18
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_touizs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
21
DEF_HELPER_FLAGS_4(gvec_vcvt_sf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(gvec_vcvt_uf, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
-DEF_HELPER_FLAGS_4(gvec_vcvt_fs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
-DEF_HELPER_FLAGS_4(gvec_vcvt_fu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_fu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
28
DEF_HELPER_FLAGS_4(gvec_vcvt_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
-DEF_HELPER_FLAGS_4(gvec_vcvt_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
-DEF_HELPER_FLAGS_4(gvec_vcvt_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-neon.c
40
+++ b/target/arm/tcg/translate-neon.c
41
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
42
43
DO_FP_2SH(VCVT_SF, gen_helper_gvec_vcvt_sf)
44
DO_FP_2SH(VCVT_UF, gen_helper_gvec_vcvt_uf)
45
-DO_FP_2SH(VCVT_FS, gen_helper_gvec_vcvt_fs)
46
-DO_FP_2SH(VCVT_FU, gen_helper_gvec_vcvt_fu)
47
+DO_FP_2SH(VCVT_FS, gen_helper_gvec_vcvt_rz_fs)
48
+DO_FP_2SH(VCVT_FU, gen_helper_gvec_vcvt_rz_fu)
49
50
DO_FP_2SH(VCVT_SH, gen_helper_gvec_vcvt_sh)
51
DO_FP_2SH(VCVT_UH, gen_helper_gvec_vcvt_uh)
52
-DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_hs)
53
-DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_hu)
54
+DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_rz_hs)
55
+DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_rz_hu)
56
57
static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
58
GVecGen2iFn *fn)
59
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/tcg/vec_helper.c
62
+++ b/target/arm/tcg/vec_helper.c
63
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
64
65
DO_VCVT_FIXED(gvec_vcvt_sf, helper_vfp_sltos, uint32_t)
66
DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
67
-DO_VCVT_FIXED(gvec_vcvt_fs, helper_vfp_tosls_round_to_zero, uint32_t)
68
-DO_VCVT_FIXED(gvec_vcvt_fu, helper_vfp_touls_round_to_zero, uint32_t)
69
+DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
70
+DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
71
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
72
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
73
-DO_VCVT_FIXED(gvec_vcvt_hs, helper_vfp_toshh_round_to_zero, uint16_t)
74
-DO_VCVT_FIXED(gvec_vcvt_hu, helper_vfp_touhh_round_to_zero, uint16_t)
75
+DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
76
+DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
77
78
#undef DO_VCVT_FIXED
79
80
--
81
2.34.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_simd_intfp_conv and handle_simd_shift_intfp_conv
as these were the last insns decoded by those functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-63-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 3 +
 target/arm/tcg/a64.decode | 22 ++++
 target/arm/tcg/translate-a64.c | 201 ++++++---------------------------
 target/arm/tcg/vec_helper.c | 7 +-
 4 files changed, 66 insertions(+), 167 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_uh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+
28
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/tcg/a64.decode
34
+++ b/target/arm/tcg/a64.decode
35
@@ -XXX,XX +XXX,XX @@ FRINT32Z_v 0.00 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
36
FRINT32X_v 0.10 1110 0.1 00001 11101 0 ..... ..... @qrr_sd
37
FRINT64Z_v 0.00 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
38
FRINT64X_v 0.10 1110 0.1 00001 11111 0 ..... ..... @qrr_sd
39
+
40
+SCVTF_vi 0.00 1110 011 11001 11011 0 ..... ..... @qrr_h
41
+SCVTF_vi 0.00 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
42
+
43
+UCVTF_vi 0.10 1110 011 11001 11011 0 ..... ..... @qrr_h
44
+UCVTF_vi 0.10 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
45
+
46
+&fcvt_q rd rn esz q shift
47
+@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
48
+ &fcvt_q esz=1 shift=%fcvt_f_sh_h
49
+@fcvtq_s . q:1 . ...... 01 ..... ...... rn:5 rd:5 \
50
+ &fcvt_q esz=2 shift=%fcvt_f_sh_s
51
+@fcvtq_d . q:1 . ...... 1 ...... ...... rn:5 rd:5 \
52
+ &fcvt_q esz=3 shift=%fcvt_f_sh_d
53
+
54
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_h
55
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_s
56
+SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_d
57
+
58
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_h
59
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_s
60
+UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_d
61
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/tcg/translate-a64.c
64
+++ b/target/arm/tcg/translate-a64.c
65
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_v, aa64_frint, do_fp1_vector, a,
66
&f_scalar_frint64, FPROUNDING_ZERO)
67
TRANS_FEAT(FRINT64X_v, aa64_frint, do_fp1_vector, a, &f_scalar_frint64, -1)
68
69
-/* Common vector code for handling integer to FP conversion */
70
-static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
71
- int elements, int is_signed,
72
- int fracbits, int size)
73
+static bool do_gvec_op2_fpst(DisasContext *s, MemOp esz, bool is_q,
74
+ int rd, int rn, int data,
75
+ gen_helper_gvec_2_ptr * const fns[3])
76
{
77
- TCGv_ptr tcg_fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
78
- TCGv_i32 tcg_shift = NULL;
79
+ int check = fp_access_check_vector_hsd(s, is_q, esz);
80
+ TCGv_ptr fpst;
81
82
- MemOp mop = size | (is_signed ? MO_SIGN : 0);
83
- int pass;
84
-
85
- if (fracbits || size == MO_64) {
86
- tcg_shift = tcg_constant_i32(fracbits);
87
+ if (check <= 0) {
88
+ return check == 0;
89
}
90
91
- if (size == MO_64) {
92
- TCGv_i64 tcg_int64 = tcg_temp_new_i64();
93
- TCGv_i64 tcg_double = tcg_temp_new_i64();
94
-
95
- for (pass = 0; pass < elements; pass++) {
96
- read_vec_element(s, tcg_int64, rn, pass, mop);
97
-
98
- if (is_signed) {
99
- gen_helper_vfp_sqtod(tcg_double, tcg_int64,
100
- tcg_shift, tcg_fpst);
101
- } else {
102
- gen_helper_vfp_uqtod(tcg_double, tcg_int64,
103
- tcg_shift, tcg_fpst);
104
- }
105
- if (elements == 1) {
106
- write_fp_dreg(s, rd, tcg_double);
107
- } else {
108
- write_vec_element(s, tcg_double, rd, pass, MO_64);
109
- }
110
- }
111
- } else {
112
- TCGv_i32 tcg_int32 = tcg_temp_new_i32();
113
- TCGv_i32 tcg_float = tcg_temp_new_i32();
114
-
115
- for (pass = 0; pass < elements; pass++) {
116
- read_vec_element_i32(s, tcg_int32, rn, pass, mop);
117
-
118
- switch (size) {
119
- case MO_32:
120
- if (fracbits) {
121
- if (is_signed) {
122
- gen_helper_vfp_sltos(tcg_float, tcg_int32,
123
- tcg_shift, tcg_fpst);
124
- } else {
125
- gen_helper_vfp_ultos(tcg_float, tcg_int32,
126
- tcg_shift, tcg_fpst);
127
- }
128
- } else {
129
- if (is_signed) {
130
- gen_helper_vfp_sitos(tcg_float, tcg_int32, tcg_fpst);
131
- } else {
132
- gen_helper_vfp_uitos(tcg_float, tcg_int32, tcg_fpst);
133
- }
134
- }
135
- break;
136
- case MO_16:
137
- if (fracbits) {
138
- if (is_signed) {
139
- gen_helper_vfp_sltoh(tcg_float, tcg_int32,
140
- tcg_shift, tcg_fpst);
141
- } else {
142
- gen_helper_vfp_ultoh(tcg_float, tcg_int32,
143
- tcg_shift, tcg_fpst);
144
- }
145
- } else {
146
- if (is_signed) {
147
- gen_helper_vfp_sitoh(tcg_float, tcg_int32, tcg_fpst);
148
- } else {
149
- gen_helper_vfp_uitoh(tcg_float, tcg_int32, tcg_fpst);
150
- }
151
- }
152
- break;
153
- default:
154
- g_assert_not_reached();
155
- }
156
-
157
- if (elements == 1) {
158
- write_fp_sreg(s, rd, tcg_float);
159
- } else {
160
- write_vec_element_i32(s, tcg_float, rd, pass, size);
161
- }
162
- }
163
- }
164
-
165
- clear_vec_high(s, elements << size == 16, rd);
166
+ fpst = fpstatus_ptr(esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
167
+ tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, rd),
168
+ vec_full_reg_offset(s, rn), fpst,
169
+ is_q ? 16 : 8, vec_full_reg_size(s),
170
+ data, fns[esz - 1]);
171
+ return true;
172
}
173
174
-/* UCVTF/SCVTF - Integer to FP conversion */
175
-static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
176
- bool is_q, bool is_u,
177
- int immh, int immb, int opcode,
178
- int rn, int rd)
179
-{
180
- int size, elements, fracbits;
181
- int immhb = immh << 3 | immb;
182
+static gen_helper_gvec_2_ptr * const f_scvtf_v[] = {
183
+ gen_helper_gvec_vcvt_sh,
184
+ gen_helper_gvec_vcvt_sf,
185
+ gen_helper_gvec_vcvt_sd,
186
+};
187
+TRANS(SCVTF_vi, do_gvec_op2_fpst,
188
+ a->esz, a->q, a->rd, a->rn, 0, f_scvtf_v)
189
+TRANS(SCVTF_vf, do_gvec_op2_fpst,
190
+ a->esz, a->q, a->rd, a->rn, a->shift, f_scvtf_v)
191
192
- if (immh & 8) {
193
- size = MO_64;
194
- if (!is_scalar && !is_q) {
195
- unallocated_encoding(s);
196
- return;
197
- }
198
- } else if (immh & 4) {
199
- size = MO_32;
200
- } else if (immh & 2) {
201
- size = MO_16;
202
- if (!dc_isar_feature(aa64_fp16, s)) {
203
- unallocated_encoding(s);
204
- return;
205
- }
206
- } else {
207
- /* immh == 0 would be a failure of the decode logic */
208
- g_assert(immh == 1);
209
- unallocated_encoding(s);
210
- return;
211
- }
212
-
213
- if (is_scalar) {
214
- elements = 1;
215
- } else {
216
- elements = (8 << is_q) >> size;
217
- }
218
- fracbits = (16 << size) - immhb;
219
-
220
- if (!fp_access_check(s)) {
221
- return;
222
- }
223
-
224
- handle_simd_intfp_conv(s, rd, rn, elements, !is_u, fracbits, size);
225
-}
226
+static gen_helper_gvec_2_ptr * const f_ucvtf_v[] = {
227
+ gen_helper_gvec_vcvt_uh,
228
+ gen_helper_gvec_vcvt_uf,
229
+ gen_helper_gvec_vcvt_ud,
230
+};
231
+TRANS(UCVTF_vi, do_gvec_op2_fpst,
232
+ a->esz, a->q, a->rd, a->rn, 0, f_ucvtf_v)
233
+TRANS(UCVTF_vf, do_gvec_op2_fpst,
234
+ a->esz, a->q, a->rd, a->rn, a->shift, f_ucvtf_v)
235
236
/* FCVTZS, FVCVTZU - FP to fixedpoint conversion */
237
static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
238
@@ -XXX,XX +XXX,XX @@ static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
239
}
240
241
switch (opcode) {
242
- case 0x1c: /* SCVTF / UCVTF */
243
- handle_simd_shift_intfp_conv(s, false, is_q, is_u, immh, immb,
244
- opcode, rn, rd);
245
- break;
246
case 0x1f: /* FCVTZS/ FCVTZU */
247
handle_simd_shift_fpint_conv(s, false, is_q, is_u, immh, immb, rn, rd);
248
return;
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
250
case 0x12: /* SQSHRN / UQSHRN */
251
case 0x13: /* SQRSHRN / UQRSHRN */
252
case 0x14: /* SSHLL / USHLL */
253
+ case 0x1c: /* SCVTF / UCVTF */
254
unallocated_encoding(s);
255
return;
256
}
257
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
258
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
259
size = is_double ? 3 : 2;
260
switch (opcode) {
261
- case 0x1d: /* SCVTF */
262
- case 0x5d: /* UCVTF */
263
- {
264
- bool is_signed = (opcode == 0x1d) ? true : false;
265
- int elements = is_double ? 2 : is_q ? 4 : 2;
266
- if (is_double && !is_q) {
267
- unallocated_encoding(s);
268
- return;
269
- }
270
- if (!fp_access_check(s)) {
271
- return;
272
- }
273
- handle_simd_intfp_conv(s, rd, rn, elements, is_signed, 0, size);
274
- return;
275
- }
276
case 0x2c: /* FCMGT (zero) */
277
case 0x2d: /* FCMEQ (zero) */
278
case 0x2e: /* FCMLT (zero) */
279
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
280
case 0x1f: /* FRINT64Z */
281
case 0x5e: /* FRINT32X */
282
case 0x5f: /* FRINT64X */
283
+ case 0x1d: /* SCVTF */
284
+ case 0x5d: /* UCVTF */
285
unallocated_encoding(s);
286
return;
287
}
288
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
289
fpop = deposit32(fpop, 6, 1, u);
290
291
switch (fpop) {
292
- case 0x1d: /* SCVTF */
293
- case 0x5d: /* UCVTF */
294
- {
295
- int elements;
296
-
297
- if (is_scalar) {
298
- elements = 1;
299
- } else {
300
- elements = (is_q ? 8 : 4);
301
- }
302
-
303
- if (!fp_access_check(s)) {
304
- return;
305
- }
306
- handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
307
- return;
308
- }
309
- break;
310
case 0x2c: /* FCMGT (zero) */
311
case 0x2d: /* FCMEQ (zero) */
312
case 0x2e: /* FCMLT (zero) */
313
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
314
case 0x58: /* FRINTA */
315
case 0x59: /* FRINTX */
316
case 0x79: /* FRINTI */
317
+ case 0x1d: /* SCVTF */
318
+ case 0x5d: /* UCVTF */
319
unallocated_encoding(s);
320
return;
321
}
322
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
323
index XXXXXXX..XXXXXXX 100644
324
--- a/target/arm/tcg/vec_helper.c
325
+++ b/target/arm/tcg/vec_helper.c
326
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
327
clear_tail(d, oprsz, simd_maxsz(desc)); \
328
}
329
330
+DO_VCVT_FIXED(gvec_vcvt_sd, helper_vfp_sqtod, uint64_t)
331
+DO_VCVT_FIXED(gvec_vcvt_ud, helper_vfp_uqtod, uint64_t)
332
DO_VCVT_FIXED(gvec_vcvt_sf, helper_vfp_sltos, uint32_t)
333
DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
334
-DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
335
-DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
336
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
337
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
338
+
339
+DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
340
+DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
341
DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
342
DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
343
344
--
345
2.34.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_simd_shift_fpint_conv and disas_simd_shift_imm
as these were the last insns decoded by those functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-64-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 4 +
 target/arm/tcg/a64.decode | 8 ++
 target/arm/tcg/translate-a64.c | 160 +++------------------------------
 target/arm/tcg/vec_helper.c | 2 +
 target/arm/vfp_helper.c | 4 +
 5 files changed, 32 insertions(+), 146 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_touhs_round_to_zero, i32, f32, i32, ptr)
23
DEF_HELPER_3(vfp_touls_round_to_zero, i32, f32, i32, ptr)
24
DEF_HELPER_3(vfp_toshd_round_to_zero, i64, f64, i32, ptr)
25
DEF_HELPER_3(vfp_tosld_round_to_zero, i64, f64, i32, ptr)
26
+DEF_HELPER_3(vfp_tosqd_round_to_zero, i64, f64, i32, ptr)
27
DEF_HELPER_3(vfp_touhd_round_to_zero, i64, f64, i32, ptr)
28
DEF_HELPER_3(vfp_tould_round_to_zero, i64, f64, i32, ptr)
29
+DEF_HELPER_3(vfp_touqd_round_to_zero, i64, f64, i32, ptr)
30
DEF_HELPER_3(vfp_touhh, i32, f16, i32, ptr)
31
DEF_HELPER_3(vfp_toshh, i32, f16, i32, ptr)
32
DEF_HELPER_3(vfp_toulh, i32, f16, i32, ptr)
33
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_rz_hu, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_4(gvec_vcvt_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
40
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/a64.decode
45
+++ b/target/arm/tcg/a64.decode
46
@@ -XXX,XX +XXX,XX @@ SCVTF_vf 0.00 11110 ....... 111001 ..... ..... @fcvtq_d
47
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_h
48
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_s
49
UCVTF_vf 0.10 11110 ....... 111001 ..... ..... @fcvtq_d
50
+
51
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_h
52
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_s
53
+FCVTZS_vf 0.00 11110 ....... 111111 ..... ..... @fcvtq_d
54
+
55
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_h
56
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_s
57
+FCVTZU_vf 0.10 11110 ....... 111111 ..... ..... @fcvtq_d
58
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/tcg/translate-a64.c
61
+++ b/target/arm/tcg/translate-a64.c
62
@@ -XXX,XX +XXX,XX @@ TRANS(UCVTF_vi, do_gvec_op2_fpst,
63
TRANS(UCVTF_vf, do_gvec_op2_fpst,
64
a->esz, a->q, a->rd, a->rn, a->shift, f_ucvtf_v)
65
66
-/* FCVTZS, FVCVTZU - FP to fixedpoint conversion */
67
-static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
68
- bool is_q, bool is_u,
69
- int immh, int immb, int rn, int rd)
70
-{
71
- int immhb = immh << 3 | immb;
72
- int pass, size, fracbits;
73
- TCGv_ptr tcg_fpstatus;
74
- TCGv_i32 tcg_rmode, tcg_shift;
75
+static gen_helper_gvec_2_ptr * const f_fcvtzs_vf[] = {
76
+ gen_helper_gvec_vcvt_rz_hs,
77
+ gen_helper_gvec_vcvt_rz_fs,
78
+ gen_helper_gvec_vcvt_rz_ds,
79
+};
80
+TRANS(FCVTZS_vf, do_gvec_op2_fpst,
81
+ a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzs_vf)
82
83
- if (immh & 0x8) {
84
- size = MO_64;
85
- if (!is_scalar && !is_q) {
86
- unallocated_encoding(s);
87
- return;
88
- }
89
- } else if (immh & 0x4) {
90
- size = MO_32;
91
- } else if (immh & 0x2) {
92
- size = MO_16;
93
- if (!dc_isar_feature(aa64_fp16, s)) {
94
- unallocated_encoding(s);
95
- return;
96
- }
97
- } else {
98
- /* Should have split out AdvSIMD modified immediate earlier. */
99
- assert(immh == 1);
100
- unallocated_encoding(s);
101
- return;
102
- }
103
-
104
- if (!fp_access_check(s)) {
105
- return;
106
- }
107
-
108
- assert(!(is_scalar && is_q));
109
-
110
- tcg_fpstatus = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
111
- tcg_rmode = gen_set_rmode(FPROUNDING_ZERO, tcg_fpstatus);
112
- fracbits = (16 << size) - immhb;
113
- tcg_shift = tcg_constant_i32(fracbits);
114
-
115
- if (size == MO_64) {
116
- int maxpass = is_scalar ? 1 : 2;
117
-
118
- for (pass = 0; pass < maxpass; pass++) {
119
- TCGv_i64 tcg_op = tcg_temp_new_i64();
120
-
121
- read_vec_element(s, tcg_op, rn, pass, MO_64);
122
- if (is_u) {
123
- gen_helper_vfp_touqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
124
- } else {
125
- gen_helper_vfp_tosqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
126
- }
127
- write_vec_element(s, tcg_op, rd, pass, MO_64);
128
- }
129
- clear_vec_high(s, is_q, rd);
130
- } else {
131
- void (*fn)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
132
- int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);
133
-
134
- switch (size) {
135
- case MO_16:
136
- if (is_u) {
137
- fn = gen_helper_vfp_touhh;
138
- } else {
139
- fn = gen_helper_vfp_toshh;
140
- }
141
- break;
142
- case MO_32:
143
- if (is_u) {
144
- fn = gen_helper_vfp_touls;
145
- } else {
146
- fn = gen_helper_vfp_tosls;
147
- }
148
- break;
149
- default:
150
- g_assert_not_reached();
151
- }
152
-
153
- for (pass = 0; pass < maxpass; pass++) {
154
- TCGv_i32 tcg_op = tcg_temp_new_i32();
155
-
156
- read_vec_element_i32(s, tcg_op, rn, pass, size);
157
- fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
158
- if (is_scalar) {
159
- if (size == MO_16 && !is_u) {
160
- tcg_gen_ext16u_i32(tcg_op, tcg_op);
161
- }
162
- write_fp_sreg(s, rd, tcg_op);
163
- } else {
164
- write_vec_element_i32(s, tcg_op, rd, pass, size);
165
- }
166
- }
167
- if (!is_scalar) {
168
- clear_vec_high(s, is_q, rd);
169
- }
170
- }
171
-
172
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
173
-}
174
+static gen_helper_gvec_2_ptr * const f_fcvtzu_vf[] = {
175
+ gen_helper_gvec_vcvt_rz_hu,
176
+ gen_helper_gvec_vcvt_rz_fu,
177
+ gen_helper_gvec_vcvt_rz_du,
178
+};
179
+TRANS(FCVTZU_vf, do_gvec_op2_fpst,
180
+ a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzu_vf)
181
182
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
183
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
184
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
185
g_assert_not_reached();
186
}
187
188
-/* AdvSIMD shift by immediate
189
- * 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
190
- * +---+---+---+-------------+------+------+--------+---+------+------+
191
- * | 0 | Q | U | 0 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
192
- * +---+---+---+-------------+------+------+--------+---+------+------+
193
- */
194
-static void disas_simd_shift_imm(DisasContext *s, uint32_t insn)
195
-{
196
- int rd = extract32(insn, 0, 5);
197
- int rn = extract32(insn, 5, 5);
198
- int opcode = extract32(insn, 11, 5);
199
- int immb = extract32(insn, 16, 3);
200
- int immh = extract32(insn, 19, 4);
201
- bool is_u = extract32(insn, 29, 1);
202
- bool is_q = extract32(insn, 30, 1);
203
-
204
- if (immh == 0) {
205
- unallocated_encoding(s);
206
- return;
207
- }
208
-
209
- switch (opcode) {
210
- case 0x1f: /* FCVTZS/ FCVTZU */
211
- handle_simd_shift_fpint_conv(s, false, is_q, is_u, immh, immb, rn, rd);
212
- return;
213
- default:
214
- case 0x00: /* SSHR / USHR */
215
- case 0x02: /* SSRA / USRA (accumulate) */
216
- case 0x04: /* SRSHR / URSHR (rounding) */
217
- case 0x06: /* SRSRA / URSRA (accum + rounding) */
218
- case 0x08: /* SRI */
219
- case 0x0a: /* SHL / SLI */
220
- case 0x0c: /* SQSHLU */
221
- case 0x0e: /* SQSHL, UQSHL */
222
- case 0x10: /* SHRN / SQSHRUN */
223
- case 0x11: /* RSHRN / SQRSHRUN */
224
- case 0x12: /* SQSHRN / UQSHRN */
225
- case 0x13: /* SQRSHRN / UQRSHRN */
226
- case 0x14: /* SSHLL / USHLL */
227
- case 0x1c: /* SCVTF / UCVTF */
228
- unallocated_encoding(s);
229
- return;
230
- }
231
-}
232
-
233
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
234
int size, int rn, int rd)
235
{
236
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
237
static const AArch64DecodeTable data_proc_simd[] = {
238
/* pattern , mask , fn */
239
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
240
- { 0x0f000400, 0x9f800400, disas_simd_shift_imm },
241
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
242
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
243
{ 0x00000000, 0x00000000, NULL }
244
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
245
index XXXXXXX..XXXXXXX 100644
246
--- a/target/arm/tcg/vec_helper.c
247
+++ b/target/arm/tcg/vec_helper.c
248
@@ -XXX,XX +XXX,XX @@ DO_VCVT_FIXED(gvec_vcvt_uf, helper_vfp_ultos, uint32_t)
249
DO_VCVT_FIXED(gvec_vcvt_sh, helper_vfp_shtoh, uint16_t)
250
DO_VCVT_FIXED(gvec_vcvt_uh, helper_vfp_uhtoh, uint16_t)
251
252
+DO_VCVT_FIXED(gvec_vcvt_rz_ds, helper_vfp_tosqd_round_to_zero, uint64_t)
253
+DO_VCVT_FIXED(gvec_vcvt_rz_du, helper_vfp_touqd_round_to_zero, uint64_t)
254
DO_VCVT_FIXED(gvec_vcvt_rz_fs, helper_vfp_tosls_round_to_zero, uint32_t)
255
DO_VCVT_FIXED(gvec_vcvt_rz_fu, helper_vfp_touls_round_to_zero, uint32_t)
256
DO_VCVT_FIXED(gvec_vcvt_rz_hs, helper_vfp_toshh_round_to_zero, uint16_t)
257
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
258
index XXXXXXX..XXXXXXX 100644
259
--- a/target/arm/vfp_helper.c
260
+++ b/target/arm/vfp_helper.c
261
@@ -XXX,XX +XXX,XX @@ VFP_CONV_FIX_A64(sq, h, 16, dh_ctype_f16, 64, int64)
262
VFP_CONV_FIX(uh, h, 16, dh_ctype_f16, 32, uint16)
263
VFP_CONV_FIX(ul, h, 16, dh_ctype_f16, 32, uint32)
264
VFP_CONV_FIX_A64(uq, h, 16, dh_ctype_f16, 64, uint64)
265
+VFP_CONV_FLOAT_FIX_ROUND(sq, d, 64, float64, 64, int64,
266
+ float_round_to_zero, _round_to_zero)
267
+VFP_CONV_FLOAT_FIX_ROUND(uq, d, 64, float64, 64, uint64,
268
+ float_round_to_zero, _round_to_zero)
269
270
#undef VFP_CONV_FIX
271
#undef VFP_CONV_FIX_FLOAT
272
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_2misc_64 as these were the last insns decoded
by that function. Remove helper_advsimd_f16to[su]inth as unused;
we now always go through helper_vfp_to[su]hh or a specialized
vector function instead.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-65-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 2 +
 target/arm/tcg/helper-a64.h | 2 -
 target/arm/tcg/a64.decode | 25 ++++
 target/arm/tcg/helper-a64.c | 32 -----
 target/arm/tcg/translate-a64.c | 227 +++++++++++----------------------
 target/arm/tcg/vec_helper.c | 2 +
 6 files changed, 102 insertions(+), 188 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.h
24
+++ b/target/arm/helper.h
25
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_vcvt_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_ds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(gvec_vcvt_rz_du, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
29
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sd, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ud, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_ss, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_us, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_4(gvec_vcvt_rm_sh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/helper-a64.h
37
+++ b/target/arm/tcg/helper-a64.h
38
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(advsimd_mulx2h, i32, i32, i32, ptr)
39
DEF_HELPER_4(advsimd_muladd2h, i32, i32, i32, i32, ptr)
40
DEF_HELPER_2(advsimd_rinth_exact, f16, f16, ptr)
41
DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
42
-DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
43
-DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
44
45
DEF_HELPER_2(exception_return, void, env, i64)
46
DEF_HELPER_FLAGS_2(dc_zva, TCG_CALL_NO_WG, void, env, i64)
47
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/tcg/a64.decode
50
+++ b/target/arm/tcg/a64.decode
51
@@ -XXX,XX +XXX,XX @@ SCVTF_vi 0.00 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
52
UCVTF_vi 0.10 1110 011 11001 11011 0 ..... ..... @qrr_h
53
UCVTF_vi 0.10 1110 0.1 00001 11011 0 ..... ..... @qrr_sd
54
55
+FCVTNS_vi 0.00 1110 011 11001 10101 0 ..... ..... @qrr_h
56
+FCVTNS_vi 0.00 1110 0.1 00001 10101 0 ..... ..... @qrr_sd
57
+FCVTNU_vi 0.10 1110 011 11001 10101 0 ..... ..... @qrr_h
58
+FCVTNU_vi 0.10 1110 0.1 00001 10101 0 ..... ..... @qrr_sd
59
+
60
+FCVTPS_vi 0.00 1110 111 11001 10101 0 ..... ..... @qrr_h
61
+FCVTPS_vi 0.00 1110 1.1 00001 10101 0 ..... ..... @qrr_sd
62
+FCVTPU_vi 0.10 1110 111 11001 10101 0 ..... ..... @qrr_h
63
+FCVTPU_vi 0.10 1110 1.1 00001 10101 0 ..... ..... @qrr_sd
64
+
65
+FCVTMS_vi 0.00 1110 011 11001 10111 0 ..... ..... @qrr_h
66
+FCVTMS_vi 0.00 1110 0.1 00001 10111 0 ..... ..... @qrr_sd
67
+FCVTMU_vi 0.10 1110 011 11001 10111 0 ..... ..... @qrr_h
68
+FCVTMU_vi 0.10 1110 0.1 00001 10111 0 ..... ..... @qrr_sd
69
+
70
+FCVTZS_vi 0.00 1110 111 11001 10111 0 ..... ..... @qrr_h
71
+FCVTZS_vi 0.00 1110 1.1 00001 10111 0 ..... ..... @qrr_sd
72
+FCVTZU_vi 0.10 1110 111 11001 10111 0 ..... ..... @qrr_h
73
+FCVTZU_vi 0.10 1110 1.1 00001 10111 0 ..... ..... @qrr_sd
74
+
75
+FCVTAS_vi 0.00 1110 011 11001 11001 0 ..... ..... @qrr_h
76
+FCVTAS_vi 0.00 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
77
+FCVTAU_vi 0.10 1110 011 11001 11001 0 ..... ..... @qrr_h
78
+FCVTAU_vi 0.10 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
79
+
80
&fcvt_q rd rn esz q shift
81
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
82
&fcvt_q esz=1 shift=%fcvt_f_sh_h
83
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/tcg/helper-a64.c
86
+++ b/target/arm/tcg/helper-a64.c
87
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
88
return ret;
89
}
90
91
-/*
92
- * Half-precision floating point conversion functions
93
- *
94
- * There are a multitude of conversion functions with various
95
- * different rounding modes. This is dealt with by the calling code
96
- * setting the mode appropriately before calling the helper.
97
- */
98
-
99
-uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
100
-{
101
- float_status *fpst = fpstp;
102
-
103
- /* Invalid if we are passed a NaN */
104
- if (float16_is_any_nan(a)) {
105
- float_raise(float_flag_invalid, fpst);
106
- return 0;
107
- }
108
- return float16_to_int16(a, fpst);
109
-}
110
-
111
-uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
112
-{
113
- float_status *fpst = fpstp;
114
-
115
- /* Invalid if we are passed a NaN */
116
- if (float16_is_any_nan(a)) {
117
- float_raise(float_flag_invalid, fpst);
118
- return 0;
119
- }
120
- return float16_to_uint16(a, fpst);
121
-}
122
-
123
static int el_from_spsr(uint32_t spsr)
124
{
125
/* Return the exception level that this SPSR is requesting a return to,
126
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/target/arm/tcg/translate-a64.c
129
+++ b/target/arm/tcg/translate-a64.c
130
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_fcvtzu_vf[] = {
131
TRANS(FCVTZU_vf, do_gvec_op2_fpst,
132
a->esz, a->q, a->rd, a->rn, a->shift, f_fcvtzu_vf)
133
134
-static void handle_2misc_64(DisasContext *s, int opcode, bool u,
135
- TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
136
- TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
137
-{
138
- /* Handle 64->64 opcodes which are shared between the scalar and
139
- * vector 2-reg-misc groups. We cover every integer opcode where size == 3
140
- * is valid in either group and also the double-precision fp ops.
141
- * The caller only need provide tcg_rmode and tcg_fpstatus if the op
142
- * requires them.
143
- */
144
- switch (opcode) {
145
- case 0x1a: /* FCVTNS */
146
- case 0x1b: /* FCVTMS */
147
- case 0x1c: /* FCVTAS */
148
- case 0x3a: /* FCVTPS */
149
- case 0x3b: /* FCVTZS */
150
- gen_helper_vfp_tosqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
151
- break;
152
- case 0x5a: /* FCVTNU */
153
- case 0x5b: /* FCVTMU */
154
- case 0x5c: /* FCVTAU */
155
- case 0x7a: /* FCVTPU */
156
- case 0x7b: /* FCVTZU */
157
- gen_helper_vfp_touqd(tcg_rd, tcg_rn, tcg_constant_i32(0), tcg_fpstatus);
158
- break;
159
- default:
160
- case 0x4: /* CLS, CLZ */
161
- case 0x5: /* NOT */
162
- case 0x7: /* SQABS, SQNEG */
163
- case 0x8: /* CMGT, CMGE */
164
- case 0x9: /* CMEQ, CMLE */
165
- case 0xa: /* CMLT */
166
- case 0xb: /* ABS, NEG */
167
- case 0x2f: /* FABS */
168
- case 0x6f: /* FNEG */
169
- case 0x7f: /* FSQRT */
170
- case 0x18: /* FRINTN */
171
- case 0x19: /* FRINTM */
172
- case 0x38: /* FRINTP */
173
- case 0x39: /* FRINTZ */
174
- case 0x58: /* FRINTA */
175
- case 0x79: /* FRINTI */
176
- case 0x59: /* FRINTX */
177
- case 0x1e: /* FRINT32Z */
178
- case 0x5e: /* FRINT32X */
179
- case 0x1f: /* FRINT64Z */
180
- case 0x5f: /* FRINT64X */
181
- g_assert_not_reached();
182
- }
183
-}
184
+static gen_helper_gvec_2_ptr * const f_fcvt_s_vi[] = {
185
+ gen_helper_gvec_vcvt_rm_sh,
186
+ gen_helper_gvec_vcvt_rm_ss,
187
+ gen_helper_gvec_vcvt_rm_sd,
188
+};
189
+
190
+static gen_helper_gvec_2_ptr * const f_fcvt_u_vi[] = {
191
+ gen_helper_gvec_vcvt_rm_uh,
192
+ gen_helper_gvec_vcvt_rm_us,
193
+ gen_helper_gvec_vcvt_rm_ud,
194
+};
195
+
196
+TRANS(FCVTNS_vi, do_gvec_op2_fpst,
197
+ a->esz, a->q, a->rd, a->rn, float_round_nearest_even, f_fcvt_s_vi)
198
+TRANS(FCVTNU_vi, do_gvec_op2_fpst,
199
+ a->esz, a->q, a->rd, a->rn, float_round_nearest_even, f_fcvt_u_vi)
200
+TRANS(FCVTPS_vi, do_gvec_op2_fpst,
201
+ a->esz, a->q, a->rd, a->rn, float_round_up, f_fcvt_s_vi)
202
+TRANS(FCVTPU_vi, do_gvec_op2_fpst,
203
+ a->esz, a->q, a->rd, a->rn, float_round_up, f_fcvt_u_vi)
204
+TRANS(FCVTMS_vi, do_gvec_op2_fpst,
205
+ a->esz, a->q, a->rd, a->rn, float_round_down, f_fcvt_s_vi)
206
+TRANS(FCVTMU_vi, do_gvec_op2_fpst,
207
+ a->esz, a->q, a->rd, a->rn, float_round_down, f_fcvt_u_vi)
208
+TRANS(FCVTZS_vi, do_gvec_op2_fpst,
209
+ a->esz, a->q, a->rd, a->rn, float_round_to_zero, f_fcvt_s_vi)
210
+TRANS(FCVTZU_vi, do_gvec_op2_fpst,
211
+ a->esz, a->q, a->rd, a->rn, float_round_to_zero, f_fcvt_u_vi)
212
+TRANS(FCVTAS_vi, do_gvec_op2_fpst,
213
+ a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_s_vi)
214
+TRANS(FCVTAU_vi, do_gvec_op2_fpst,
215
+ a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_u_vi)
216
217
static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
218
bool is_scalar, bool is_u, bool is_q,
219
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
220
}
221
handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
222
return;
223
- case 0x1a: /* FCVTNS */
224
- case 0x1b: /* FCVTMS */
225
- case 0x3a: /* FCVTPS */
226
- case 0x3b: /* FCVTZS */
227
- case 0x5a: /* FCVTNU */
228
- case 0x5b: /* FCVTMU */
229
- case 0x7a: /* FCVTPU */
230
- case 0x7b: /* FCVTZU */
231
- need_fpstatus = true;
232
- rmode = extract32(opcode, 5, 1) | (extract32(opcode, 0, 1) << 1);
233
- if (size == 3 && !is_q) {
234
- unallocated_encoding(s);
235
- return;
236
- }
237
- break;
238
- case 0x5c: /* FCVTAU */
239
- case 0x1c: /* FCVTAS */
240
- need_fpstatus = true;
241
- rmode = FPROUNDING_TIEAWAY;
242
- if (size == 3 && !is_q) {
243
- unallocated_encoding(s);
244
- return;
245
- }
246
- break;
247
case 0x3c: /* URECPE */
248
if (size == 3) {
249
unallocated_encoding(s);
250
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
251
case 0x5f: /* FRINT64X */
252
case 0x1d: /* SCVTF */
253
case 0x5d: /* UCVTF */
254
+ case 0x1a: /* FCVTNS */
255
+ case 0x1b: /* FCVTMS */
256
+ case 0x3a: /* FCVTPS */
257
+ case 0x3b: /* FCVTZS */
258
+ case 0x5a: /* FCVTNU */
259
+ case 0x5b: /* FCVTMU */
260
+ case 0x7a: /* FCVTPU */
261
+ case 0x7b: /* FCVTZU */
262
+ case 0x5c: /* FCVTAU */
263
+ case 0x1c: /* FCVTAS */
264
unallocated_encoding(s);
265
return;
266
}
267
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
268
tcg_rmode = NULL;
269
}
270
271
- if (size == 3) {
272
- /* All 64-bit element operations can be shared with scalar 2misc */
273
- int pass;
274
-
275
- /* Coverity claims (size == 3 && !is_q) has been eliminated
276
- * from all paths leading to here.
277
- */
278
- tcg_debug_assert(is_q);
279
- for (pass = 0; pass < 2; pass++) {
280
- TCGv_i64 tcg_op = tcg_temp_new_i64();
281
- TCGv_i64 tcg_res = tcg_temp_new_i64();
282
-
283
- read_vec_element(s, tcg_op, rn, pass, MO_64);
284
-
285
- handle_2misc_64(s, opcode, u, tcg_res, tcg_op,
286
- tcg_rmode, tcg_fpstatus);
287
-
288
- write_vec_element(s, tcg_res, rd, pass, MO_64);
289
- }
290
- } else {
291
+ {
292
int pass;
293
294
assert(size == 2);
295
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
296
{
297
/* Special cases for 32 bit elements */
298
switch (opcode) {
299
- case 0x1a: /* FCVTNS */
300
- case 0x1b: /* FCVTMS */
301
- case 0x1c: /* FCVTAS */
302
- case 0x3a: /* FCVTPS */
303
- case 0x3b: /* FCVTZS */
304
- gen_helper_vfp_tosls(tcg_res, tcg_op,
305
- tcg_constant_i32(0), tcg_fpstatus);
306
- break;
307
- case 0x5a: /* FCVTNU */
308
- case 0x5b: /* FCVTMU */
309
- case 0x5c: /* FCVTAU */
310
- case 0x7a: /* FCVTPU */
311
- case 0x7b: /* FCVTZU */
312
- gen_helper_vfp_touls(tcg_res, tcg_op,
313
- tcg_constant_i32(0), tcg_fpstatus);
314
- break;
315
case 0x7c: /* URSQRTE */
316
gen_helper_rsqrte_u32(tcg_res, tcg_op);
317
break;
318
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
319
case 0x5e: /* FRINT32X */
320
case 0x1f: /* FRINT64Z */
321
case 0x5f: /* FRINT64X */
322
+ case 0x1a: /* FCVTNS */
323
+ case 0x1b: /* FCVTMS */
324
+ case 0x1c: /* FCVTAS */
325
+ case 0x3a: /* FCVTPS */
326
+ case 0x3b: /* FCVTZS */
327
+ case 0x5a: /* FCVTNU */
328
+ case 0x5b: /* FCVTMU */
329
+ case 0x5c: /* FCVTAU */
330
+ case 0x7a: /* FCVTPU */
331
+ case 0x7b: /* FCVTZU */
332
g_assert_not_reached();
333
}
334
}
335
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
336
case 0x3d: /* FRECPE */
337
case 0x3f: /* FRECPX */
338
break;
339
- case 0x1a: /* FCVTNS */
340
- rmode = FPROUNDING_TIEEVEN;
341
- break;
342
- case 0x1b: /* FCVTMS */
343
- rmode = FPROUNDING_NEGINF;
344
- break;
345
- case 0x1c: /* FCVTAS */
346
- rmode = FPROUNDING_TIEAWAY;
347
- break;
348
- case 0x3a: /* FCVTPS */
349
- rmode = FPROUNDING_POSINF;
350
- break;
351
- case 0x3b: /* FCVTZS */
352
- rmode = FPROUNDING_ZERO;
353
- break;
354
- case 0x5a: /* FCVTNU */
355
- rmode = FPROUNDING_TIEEVEN;
356
- break;
357
- case 0x5b: /* FCVTMU */
358
- rmode = FPROUNDING_NEGINF;
359
- break;
360
- case 0x5c: /* FCVTAU */
361
- rmode = FPROUNDING_TIEAWAY;
362
- break;
363
- case 0x7a: /* FCVTPU */
364
- rmode = FPROUNDING_POSINF;
365
- break;
366
- case 0x7b: /* FCVTZU */
367
- rmode = FPROUNDING_ZERO;
368
- break;
369
case 0x7d: /* FRSQRTE */
370
break;
371
default:
372
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
373
case 0x79: /* FRINTI */
374
case 0x1d: /* SCVTF */
375
case 0x5d: /* UCVTF */
376
+ case 0x1a: /* FCVTNS */
377
+ case 0x1b: /* FCVTMS */
378
+ case 0x1c: /* FCVTAS */
379
+ case 0x3a: /* FCVTPS */
380
+ case 0x3b: /* FCVTZS */
381
+ case 0x5a: /* FCVTNU */
382
+ case 0x5b: /* FCVTMU */
383
+ case 0x5c: /* FCVTAU */
384
+ case 0x7a: /* FCVTPU */
385
+ case 0x7b: /* FCVTZU */
386
unallocated_encoding(s);
387
return;
388
}
389
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
390
read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
391
392
switch (fpop) {
393
- case 0x1a: /* FCVTNS */
394
- case 0x1b: /* FCVTMS */
395
- case 0x1c: /* FCVTAS */
396
- case 0x3a: /* FCVTPS */
397
- case 0x3b: /* FCVTZS */
398
- gen_helper_advsimd_f16tosinth(tcg_res, tcg_op, tcg_fpstatus);
399
- break;
400
case 0x3d: /* FRECPE */
401
gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
402
break;
403
- case 0x5a: /* FCVTNU */
404
- case 0x5b: /* FCVTMU */
405
- case 0x5c: /* FCVTAU */
406
- case 0x7a: /* FCVTPU */
407
- case 0x7b: /* FCVTZU */
408
- gen_helper_advsimd_f16touinth(tcg_res, tcg_op, tcg_fpstatus);
409
- break;
410
case 0x7d: /* FRSQRTE */
411
gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
412
break;
413
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
414
case 0x58: /* FRINTA */
415
case 0x79: /* FRINTI */
416
case 0x59: /* FRINTX */
417
+ case 0x1a: /* FCVTNS */
418
+ case 0x1b: /* FCVTMS */
419
+ case 0x1c: /* FCVTAS */
420
+ case 0x3a: /* FCVTPS */
421
+ case 0x3b: /* FCVTZS */
422
+ case 0x5a: /* FCVTNU */
423
+ case 0x5b: /* FCVTMU */
424
+ case 0x5c: /* FCVTAU */
425
+ case 0x7a: /* FCVTPU */
426
+ case 0x7b: /* FCVTZU */
427
g_assert_not_reached();
428
}
429
430
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
431
index XXXXXXX..XXXXXXX 100644
432
--- a/target/arm/tcg/vec_helper.c
433
+++ b/target/arm/tcg/vec_helper.c
434
@@ -XXX,XX +XXX,XX @@ DO_VCVT_FIXED(gvec_vcvt_rz_hu, helper_vfp_touhh_round_to_zero, uint16_t)
435
clear_tail(d, oprsz, simd_maxsz(desc)); \
436
}
437
438
+DO_VCVT_RMODE(gvec_vcvt_rm_sd, helper_vfp_tosqd, uint64_t)
439
+DO_VCVT_RMODE(gvec_vcvt_rm_ud, helper_vfp_touqd, uint64_t)
440
DO_VCVT_RMODE(gvec_vcvt_rm_ss, helper_vfp_tosls, uint32_t)
441
DO_VCVT_RMODE(gvec_vcvt_rm_us, helper_vfp_touls, uint32_t)
442
DO_VCVT_RMODE(gvec_vcvt_rm_sh, helper_vfp_toshh, uint16_t)
443
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This includes FCMEQ, FCMGT, FCMGE, FCMLT, FCMLE.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-66-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 5 +
 target/arm/tcg/a64.decode | 30 ++++
 target/arm/tcg/translate-a64.c | 249 +++++++++++++--------------------
 target/arm/tcg/vec_helper.c | 4 +-
 4 files changed, 138 insertions(+), 150 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.h
19
+++ b/target/arm/helper.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_frsqrte_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
DEF_HELPER_FLAGS_4(gvec_fcgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(gvec_fcgt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(gvec_fcgt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
26
DEF_HELPER_FLAGS_4(gvec_fcge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(gvec_fcge0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(gvec_fcge0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
30
DEF_HELPER_FLAGS_4(gvec_fceq0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(gvec_fceq0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_fceq0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
34
DEF_HELPER_FLAGS_4(gvec_fcle0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_4(gvec_fcle0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(gvec_fcle0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
38
DEF_HELPER_FLAGS_4(gvec_fclt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
DEF_HELPER_FLAGS_4(gvec_fclt0_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(gvec_fclt0_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/tcg/a64.decode
47
+++ b/target/arm/tcg/a64.decode
48
@@ -XXX,XX +XXX,XX @@ UQXTN_s 0111 1110 ..1 00001 01001 0 ..... ..... @rr_e
49
50
FCVTXN_s 0111 1110 011 00001 01101 0 ..... ..... @rr_s
51
52
+FCMGT0_s 0101 1110 111 11000 11001 0 ..... ..... @rr_h
53
+FCMGT0_s 0101 1110 1.1 00000 11001 0 ..... ..... @rr_sd
54
+
55
+FCMGE0_s 0111 1110 111 11000 11001 0 ..... ..... @rr_h
56
+FCMGE0_s 0111 1110 1.1 00000 11001 0 ..... ..... @rr_sd
57
+
58
+FCMEQ0_s 0101 1110 111 11000 11011 0 ..... ..... @rr_h
59
+FCMEQ0_s 0101 1110 1.1 00000 11011 0 ..... ..... @rr_sd
60
+
61
+FCMLE0_s 0111 1110 111 11000 11011 0 ..... ..... @rr_h
62
+FCMLE0_s 0111 1110 1.1 00000 11011 0 ..... ..... @rr_sd
63
+
64
+FCMLT0_s 0101 1110 111 11000 11101 0 ..... ..... @rr_h
65
+FCMLT0_s 0101 1110 1.1 00000 11101 0 ..... ..... @rr_sd
66
+
67
@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
68
&fcvt sf=0 esz=1 shift=0
69
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
70
@@ -XXX,XX +XXX,XX @@ FCVTAS_vi 0.00 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
71
FCVTAU_vi 0.10 1110 011 11001 11001 0 ..... ..... @qrr_h
72
FCVTAU_vi 0.10 1110 0.1 00001 11001 0 ..... ..... @qrr_sd
73
74
+FCMGT0_v 0.00 1110 111 11000 11001 0 ..... ..... @qrr_h
75
+FCMGT0_v 0.00 1110 1.1 00000 11001 0 ..... ..... @qrr_sd
76
+
77
+FCMGE0_v 0.10 1110 111 11000 11001 0 ..... ..... @qrr_h
78
+FCMGE0_v 0.10 1110 1.1 00000 11001 0 ..... ..... @qrr_sd
79
+
80
+FCMEQ0_v 0.00 1110 111 11000 11011 0 ..... ..... @qrr_h
81
+FCMEQ0_v 0.00 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
82
+
83
+FCMLE0_v 0.10 1110 111 11000 11011 0 ..... ..... @qrr_h
84
+FCMLE0_v 0.10 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
85
+
86
+FCMLT0_v 0.00 1110 111 11000 11101 0 ..... ..... @qrr_h
87
+FCMLT0_v 0.00 1110 1.1 00000 11101 0 ..... ..... @qrr_sd
88
+
89
&fcvt_q rd rn esz q shift
90
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
91
&fcvt_q esz=1 shift=%fcvt_f_sh_h
92
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/tcg/translate-a64.c
95
+++ b/target/arm/tcg/translate-a64.c
96
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_frsqrts = {
97
};
98
TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
99
100
+static bool do_fcmp0_s(DisasContext *s, arg_rr_e *a,
101
+ const FPScalar *f, bool swap)
102
+{
103
+ switch (a->esz) {
104
+ case MO_64:
105
+ if (fp_access_check(s)) {
106
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
107
+ TCGv_i64 t1 = tcg_constant_i64(0);
108
+ if (swap) {
109
+ f->gen_d(t0, t1, t0, fpstatus_ptr(FPST_FPCR));
110
+ } else {
111
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
112
+ }
113
+ write_fp_dreg(s, a->rd, t0);
114
+ }
115
+ break;
116
+ case MO_32:
117
+ if (fp_access_check(s)) {
118
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
119
+ TCGv_i32 t1 = tcg_constant_i32(0);
120
+ if (swap) {
121
+ f->gen_s(t0, t1, t0, fpstatus_ptr(FPST_FPCR));
122
+ } else {
123
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
124
+ }
125
+ write_fp_sreg(s, a->rd, t0);
126
+ }
127
+ break;
128
+ case MO_16:
129
+ if (!dc_isar_feature(aa64_fp16, s)) {
130
+ return false;
131
+ }
132
+ if (fp_access_check(s)) {
133
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
134
+ TCGv_i32 t1 = tcg_constant_i32(0);
135
+ if (swap) {
136
+ f->gen_h(t0, t1, t0, fpstatus_ptr(FPST_FPCR_F16));
137
+ } else {
138
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
139
+ }
140
+ write_fp_sreg(s, a->rd, t0);
141
+ }
142
+ break;
143
+ default:
144
+ return false;
145
+ }
146
+ return true;
147
+}
148
+
149
+TRANS(FCMEQ0_s, do_fcmp0_s, a, &f_scalar_fcmeq, false)
150
+TRANS(FCMGT0_s, do_fcmp0_s, a, &f_scalar_fcmgt, false)
151
+TRANS(FCMGE0_s, do_fcmp0_s, a, &f_scalar_fcmge, false)
152
+TRANS(FCMLT0_s, do_fcmp0_s, a, &f_scalar_fcmgt, true)
153
+TRANS(FCMLE0_s, do_fcmp0_s, a, &f_scalar_fcmge, true)
154
+
155
static bool do_satacc_s(DisasContext *s, arg_rrr_e *a,
156
MemOp sgn_n, MemOp sgn_m,
157
void (*gen_bhs)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp),
158
@@ -XXX,XX +XXX,XX @@ TRANS(FCVTAS_vi, do_gvec_op2_fpst,
159
TRANS(FCVTAU_vi, do_gvec_op2_fpst,
160
a->esz, a->q, a->rd, a->rn, float_round_ties_away, f_fcvt_u_vi)
161
162
-static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
163
- bool is_scalar, bool is_u, bool is_q,
164
- int size, int rn, int rd)
165
-{
166
- bool is_double = (size == MO_64);
167
- TCGv_ptr fpst;
168
+static gen_helper_gvec_2_ptr * const f_fceq0[] = {
169
+ gen_helper_gvec_fceq0_h,
170
+ gen_helper_gvec_fceq0_s,
171
+ gen_helper_gvec_fceq0_d,
172
+};
173
+TRANS(FCMEQ0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fceq0)
174
175
- if (!fp_access_check(s)) {
176
- return;
177
- }
178
+static gen_helper_gvec_2_ptr * const f_fcgt0[] = {
179
+ gen_helper_gvec_fcgt0_h,
180
+ gen_helper_gvec_fcgt0_s,
181
+ gen_helper_gvec_fcgt0_d,
182
+};
183
+TRANS(FCMGT0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcgt0)
184
185
- fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
186
+static gen_helper_gvec_2_ptr * const f_fcge0[] = {
187
+ gen_helper_gvec_fcge0_h,
188
+ gen_helper_gvec_fcge0_s,
189
+ gen_helper_gvec_fcge0_d,
190
+};
191
+TRANS(FCMGE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcge0)
192
193
- if (is_double) {
194
- TCGv_i64 tcg_op = tcg_temp_new_i64();
195
- TCGv_i64 tcg_zero = tcg_constant_i64(0);
196
- TCGv_i64 tcg_res = tcg_temp_new_i64();
197
- NeonGenTwoDoubleOpFn *genfn;
198
- bool swap = false;
199
- int pass;
200
+static gen_helper_gvec_2_ptr * const f_fclt0[] = {
201
+ gen_helper_gvec_fclt0_h,
202
+ gen_helper_gvec_fclt0_s,
203
+ gen_helper_gvec_fclt0_d,
204
+};
205
+TRANS(FCMLT0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fclt0)
206
207
- switch (opcode) {
208
- case 0x2e: /* FCMLT (zero) */
209
- swap = true;
210
- /* fallthrough */
211
- case 0x2c: /* FCMGT (zero) */
212
- genfn = gen_helper_neon_cgt_f64;
213
- break;
214
- case 0x2d: /* FCMEQ (zero) */
215
- genfn = gen_helper_neon_ceq_f64;
216
- break;
217
- case 0x6d: /* FCMLE (zero) */
218
- swap = true;
219
- /* fall through */
220
- case 0x6c: /* FCMGE (zero) */
221
- genfn = gen_helper_neon_cge_f64;
222
- break;
223
- default:
224
- g_assert_not_reached();
225
- }
226
-
227
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
228
- read_vec_element(s, tcg_op, rn, pass, MO_64);
229
- if (swap) {
230
- genfn(tcg_res, tcg_zero, tcg_op, fpst);
231
- } else {
232
- genfn(tcg_res, tcg_op, tcg_zero, fpst);
233
- }
234
- write_vec_element(s, tcg_res, rd, pass, MO_64);
235
- }
236
-
237
- clear_vec_high(s, !is_scalar, rd);
238
- } else {
239
- TCGv_i32 tcg_op = tcg_temp_new_i32();
240
- TCGv_i32 tcg_zero = tcg_constant_i32(0);
241
- TCGv_i32 tcg_res = tcg_temp_new_i32();
242
- NeonGenTwoSingleOpFn *genfn;
243
- bool swap = false;
244
- int pass, maxpasses;
245
-
246
- if (size == MO_16) {
247
- switch (opcode) {
248
- case 0x2e: /* FCMLT (zero) */
249
- swap = true;
250
- /* fall through */
251
- case 0x2c: /* FCMGT (zero) */
252
- genfn = gen_helper_advsimd_cgt_f16;
253
- break;
254
- case 0x2d: /* FCMEQ (zero) */
255
- genfn = gen_helper_advsimd_ceq_f16;
256
- break;
257
- case 0x6d: /* FCMLE (zero) */
258
- swap = true;
259
- /* fall through */
260
- case 0x6c: /* FCMGE (zero) */
261
- genfn = gen_helper_advsimd_cge_f16;
262
- break;
263
- default:
264
- g_assert_not_reached();
265
- }
266
- } else {
267
- switch (opcode) {
268
- case 0x2e: /* FCMLT (zero) */
269
- swap = true;
270
- /* fall through */
271
- case 0x2c: /* FCMGT (zero) */
272
- genfn = gen_helper_neon_cgt_f32;
273
- break;
274
- case 0x2d: /* FCMEQ (zero) */
275
- genfn = gen_helper_neon_ceq_f32;
276
- break;
277
- case 0x6d: /* FCMLE (zero) */
278
- swap = true;
279
- /* fall through */
280
- case 0x6c: /* FCMGE (zero) */
281
- genfn = gen_helper_neon_cge_f32;
282
- break;
283
- default:
284
- g_assert_not_reached();
285
- }
286
- }
287
-
288
- if (is_scalar) {
289
- maxpasses = 1;
290
- } else {
291
- int vector_size = 8 << is_q;
292
- maxpasses = vector_size >> size;
293
- }
294
-
295
- for (pass = 0; pass < maxpasses; pass++) {
296
- read_vec_element_i32(s, tcg_op, rn, pass, size);
297
- if (swap) {
298
- genfn(tcg_res, tcg_zero, tcg_op, fpst);
299
- } else {
300
- genfn(tcg_res, tcg_op, tcg_zero, fpst);
301
- }
302
- if (is_scalar) {
303
- write_fp_sreg(s, rd, tcg_res);
304
- } else {
305
- write_vec_element_i32(s, tcg_res, rd, pass, size);
306
- }
307
- }
308
-
309
- if (!is_scalar) {
310
- clear_vec_high(s, is_q, rd);
311
- }
312
- }
313
-}
314
+static gen_helper_gvec_2_ptr * const f_fcle0[] = {
315
+ gen_helper_gvec_fcle0_h,
316
+ gen_helper_gvec_fcle0_s,
317
+ gen_helper_gvec_fcle0_d,
318
+};
319
+TRANS(FCMLE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcle0)
320
321
static void handle_2misc_reciprocal(DisasContext *s, int opcode,
322
bool is_scalar, bool is_u, bool is_q,
323
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
324
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
325
size = extract32(size, 0, 1) ? 3 : 2;
326
switch (opcode) {
327
- case 0x2c: /* FCMGT (zero) */
328
- case 0x2d: /* FCMEQ (zero) */
329
- case 0x2e: /* FCMLT (zero) */
330
- case 0x6c: /* FCMGE (zero) */
331
- case 0x6d: /* FCMLE (zero) */
332
- handle_2misc_fcmp_zero(s, opcode, true, u, true, size, rn, rd);
333
- return;
334
case 0x3d: /* FRECPE */
335
case 0x3f: /* FRECPX */
336
case 0x7d: /* FRSQRTE */
337
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
338
case 0x56: /* FCVTXN, FCVTXN2 */
339
case 0x1d: /* SCVTF */
340
case 0x5d: /* UCVTF */
341
+ case 0x2c: /* FCMGT (zero) */
342
+ case 0x2d: /* FCMEQ (zero) */
343
+ case 0x2e: /* FCMLT (zero) */
344
+ case 0x6c: /* FCMGE (zero) */
345
+ case 0x6d: /* FCMLE (zero) */
346
default:
347
unallocated_encoding(s);
348
return;
349
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
350
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
351
size = is_double ? 3 : 2;
352
switch (opcode) {
353
- case 0x2c: /* FCMGT (zero) */
354
- case 0x2d: /* FCMEQ (zero) */
355
- case 0x2e: /* FCMLT (zero) */
356
- case 0x6c: /* FCMGE (zero) */
357
- case 0x6d: /* FCMLE (zero) */
358
- if (size == 3 && !is_q) {
359
- unallocated_encoding(s);
360
- return;
361
- }
362
- handle_2misc_fcmp_zero(s, opcode, false, u, is_q, size, rn, rd);
363
- return;
364
case 0x3c: /* URECPE */
365
if (size == 3) {
366
unallocated_encoding(s);
367
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
368
case 0x7b: /* FCVTZU */
369
case 0x5c: /* FCVTAU */
370
case 0x1c: /* FCVTAS */
371
+ case 0x2c: /* FCMGT (zero) */
372
+ case 0x2d: /* FCMEQ (zero) */
373
+ case 0x2e: /* FCMLT (zero) */
374
+ case 0x6c: /* FCMGE (zero) */
375
+ case 0x6d: /* FCMLE (zero) */
376
unallocated_encoding(s);
377
return;
378
}
379
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
380
fpop = deposit32(fpop, 6, 1, u);
381
382
switch (fpop) {
383
- case 0x2c: /* FCMGT (zero) */
384
- case 0x2d: /* FCMEQ (zero) */
385
- case 0x2e: /* FCMLT (zero) */
386
- case 0x6c: /* FCMGE (zero) */
387
- case 0x6d: /* FCMLE (zero) */
388
- handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
389
- return;
390
case 0x3d: /* FRECPE */
391
case 0x3f: /* FRECPX */
392
break;
393
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
394
case 0x5c: /* FCVTAU */
395
case 0x7a: /* FCVTPU */
396
case 0x7b: /* FCVTZU */
397
+ case 0x2c: /* FCMGT (zero) */
398
+ case 0x2d: /* FCMEQ (zero) */
399
+ case 0x2e: /* FCMLT (zero) */
400
+ case 0x6c: /* FCMGE (zero) */
401
+ case 0x6d: /* FCMLE (zero) */
402
unallocated_encoding(s);
403
return;
404
}
405
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
406
index XXXXXXX..XXXXXXX 100644
407
--- a/target/arm/tcg/vec_helper.c
408
+++ b/target/arm/tcg/vec_helper.c
409
@@ -XXX,XX +XXX,XX @@ DO_2OP(gvec_touszh, vfp_touszh, float16)
410
#define DO_2OP_CMP0(FN, CMPOP, DIRN) \
411
WRAP_CMP0_##DIRN(FN, CMPOP, float16) \
412
WRAP_CMP0_##DIRN(FN, CMPOP, float32) \
413
+ WRAP_CMP0_##DIRN(FN, CMPOP, float64) \
414
DO_2OP(gvec_f##FN##0_h, float16_##FN##0, float16) \
415
- DO_2OP(gvec_f##FN##0_s, float32_##FN##0, float32)
416
+ DO_2OP(gvec_f##FN##0_s, float32_##FN##0, float32) \
417
+ DO_2OP(gvec_f##FN##0_d, float64_##FN##0, float64)
418
419
DO_2OP_CMP0(cgt, cgt, FWD)
420
DO_2OP_CMP0(cge, cge, FWD)
421
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove disas_simd_scalar_two_reg_misc and
disas_simd_two_reg_misc_fp16 as these were the
last insns decoded by those functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-67-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 15 ++
 target/arm/tcg/translate-a64.c | 329 ++++-----------------------------
 2 files changed, 53 insertions(+), 291 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ FCMLE0_s 0111 1110 1.1 00000 11011 0 ..... ..... @rr_sd
21
FCMLT0_s 0101 1110 111 11000 11101 0 ..... ..... @rr_h
22
FCMLT0_s 0101 1110 1.1 00000 11101 0 ..... ..... @rr_sd
23
24
+FRECPE_s 0101 1110 111 11001 11011 0 ..... ..... @rr_h
25
+FRECPE_s 0101 1110 1.1 00001 11011 0 ..... ..... @rr_sd
26
+
27
+FRECPX_s 0101 1110 111 11001 11111 0 ..... ..... @rr_h
28
+FRECPX_s 0101 1110 1.1 00001 11111 0 ..... ..... @rr_sd
29
+
30
+FRSQRTE_s 0111 1110 111 11001 11011 0 ..... ..... @rr_h
31
+FRSQRTE_s 0111 1110 1.1 00001 11011 0 ..... ..... @rr_sd
32
+
33
@icvt_h . ....... .. ...... ...... rn:5 rd:5 \
34
&fcvt sf=0 esz=1 shift=0
35
@icvt_sd . ....... .. ...... ...... rn:5 rd:5 \
36
@@ -XXX,XX +XXX,XX @@ FCMLE0_v 0.10 1110 1.1 00000 11011 0 ..... ..... @qrr_sd
37
FCMLT0_v 0.00 1110 111 11000 11101 0 ..... ..... @qrr_h
38
FCMLT0_v 0.00 1110 1.1 00000 11101 0 ..... ..... @qrr_sd
39
40
+FRECPE_v 0.00 1110 111 11001 11011 0 ..... ..... @qrr_h
41
+FRECPE_v 0.00 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
42
+
43
+FRSQRTE_v 0.10 1110 111 11001 11011 0 ..... ..... @qrr_h
44
+FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
45
+
46
&fcvt_q rd rn esz q shift
47
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
48
&fcvt_q esz=1 shift=%fcvt_f_sh_h
49
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/tcg/translate-a64.c
52
+++ b/target/arm/tcg/translate-a64.c
53
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FRINT64Z_s, aa64_frint, do_fp1_scalar, a,
54
&f_scalar_frint64, FPROUNDING_ZERO)
55
TRANS_FEAT(FRINT64X_s, aa64_frint, do_fp1_scalar, a, &f_scalar_frint64, -1)
56
57
+static const FPScalar1 f_scalar_frecpe = {
58
+ gen_helper_recpe_f16,
59
+ gen_helper_recpe_f32,
60
+ gen_helper_recpe_f64,
61
+};
62
+TRANS(FRECPE_s, do_fp1_scalar, a, &f_scalar_frecpe, -1)
63
+
64
+static const FPScalar1 f_scalar_frecpx = {
65
+ gen_helper_frecpx_f16,
66
+ gen_helper_frecpx_f32,
67
+ gen_helper_frecpx_f64,
68
+};
69
+TRANS(FRECPX_s, do_fp1_scalar, a, &f_scalar_frecpx, -1)
70
+
71
+static const FPScalar1 f_scalar_frsqrte = {
72
+ gen_helper_rsqrte_f16,
73
+ gen_helper_rsqrte_f32,
74
+ gen_helper_rsqrte_f64,
75
+};
76
+TRANS(FRSQRTE_s, do_fp1_scalar, a, &f_scalar_frsqrte, -1)
77
+
78
static bool trans_FCVT_s_ds(DisasContext *s, arg_rr *a)
79
{
80
if (fp_access_check(s)) {
81
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_fcle0[] = {
82
};
83
TRANS(FCMLE0_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_fcle0)
84
85
+static gen_helper_gvec_2_ptr * const f_frecpe[] = {
86
+ gen_helper_gvec_frecpe_h,
87
+ gen_helper_gvec_frecpe_s,
88
+ gen_helper_gvec_frecpe_d,
89
+};
90
+TRANS(FRECPE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frecpe)
91
+
92
+static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
93
+ gen_helper_gvec_frsqrte_h,
94
+ gen_helper_gvec_frsqrte_s,
95
+ gen_helper_gvec_frsqrte_d,
96
+};
97
+TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
98
+
99
static void handle_2misc_reciprocal(DisasContext *s, int opcode,
100
bool is_scalar, bool is_u, bool is_q,
101
int size, int rn, int rd)
102
{
103
bool is_double = (size == 3);
104
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
105
106
if (is_double) {
107
- TCGv_i64 tcg_op = tcg_temp_new_i64();
108
- TCGv_i64 tcg_res = tcg_temp_new_i64();
109
- int pass;
110
-
111
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
112
- read_vec_element(s, tcg_op, rn, pass, MO_64);
113
- switch (opcode) {
114
- case 0x3d: /* FRECPE */
115
- gen_helper_recpe_f64(tcg_res, tcg_op, fpst);
116
- break;
117
- case 0x3f: /* FRECPX */
118
- gen_helper_frecpx_f64(tcg_res, tcg_op, fpst);
119
- break;
120
- case 0x7d: /* FRSQRTE */
121
- gen_helper_rsqrte_f64(tcg_res, tcg_op, fpst);
122
- break;
123
- default:
124
- g_assert_not_reached();
125
- }
126
- write_vec_element(s, tcg_res, rd, pass, MO_64);
127
- }
128
- clear_vec_high(s, !is_scalar, rd);
129
+ g_assert_not_reached();
130
} else {
131
TCGv_i32 tcg_op = tcg_temp_new_i32();
132
TCGv_i32 tcg_res = tcg_temp_new_i32();
133
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
134
gen_helper_recpe_u32(tcg_res, tcg_op);
135
break;
136
case 0x3d: /* FRECPE */
137
- gen_helper_recpe_f32(tcg_res, tcg_op, fpst);
138
- break;
139
case 0x3f: /* FRECPX */
140
- gen_helper_frecpx_f32(tcg_res, tcg_op, fpst);
141
- break;
142
case 0x7d: /* FRSQRTE */
143
- gen_helper_rsqrte_f32(tcg_res, tcg_op, fpst);
144
- break;
145
default:
146
g_assert_not_reached();
147
}
148
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
149
}
150
}
151
152
-/* AdvSIMD scalar two reg misc
153
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
154
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
155
- * | 0 1 | U | 1 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
156
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
157
- */
158
-static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
159
-{
160
- int rd = extract32(insn, 0, 5);
161
- int rn = extract32(insn, 5, 5);
162
- int opcode = extract32(insn, 12, 5);
163
- int size = extract32(insn, 22, 2);
164
- bool u = extract32(insn, 29, 1);
165
-
166
- switch (opcode) {
167
- case 0xc ... 0xf:
168
- case 0x16 ... 0x1d:
169
- case 0x1f:
170
- /* Floating point: U, size[1] and opcode indicate operation;
171
- * size[0] indicates single or double precision.
172
- */
173
- opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
174
- size = extract32(size, 0, 1) ? 3 : 2;
175
- switch (opcode) {
176
- case 0x3d: /* FRECPE */
177
- case 0x3f: /* FRECPX */
178
- case 0x7d: /* FRSQRTE */
179
- if (!fp_access_check(s)) {
180
- return;
181
- }
182
- handle_2misc_reciprocal(s, opcode, true, u, true, size, rn, rd);
183
- return;
184
- case 0x1a: /* FCVTNS */
185
- case 0x1b: /* FCVTMS */
186
- case 0x3a: /* FCVTPS */
187
- case 0x3b: /* FCVTZS */
188
- case 0x5a: /* FCVTNU */
189
- case 0x5b: /* FCVTMU */
190
- case 0x7a: /* FCVTPU */
191
- case 0x7b: /* FCVTZU */
192
- case 0x1c: /* FCVTAS */
193
- case 0x5c: /* FCVTAU */
194
- case 0x56: /* FCVTXN, FCVTXN2 */
195
- case 0x1d: /* SCVTF */
196
- case 0x5d: /* UCVTF */
197
- case 0x2c: /* FCMGT (zero) */
198
- case 0x2d: /* FCMEQ (zero) */
199
- case 0x2e: /* FCMLT (zero) */
200
- case 0x6c: /* FCMGE (zero) */
201
- case 0x6d: /* FCMLE (zero) */
202
- default:
203
- unallocated_encoding(s);
204
- return;
205
- }
206
- break;
207
- default:
208
- case 0x3: /* USQADD / SUQADD */
209
- case 0x7: /* SQABS / SQNEG */
210
- case 0x8: /* CMGT, CMGE */
211
- case 0x9: /* CMEQ, CMLE */
212
- case 0xa: /* CMLT */
213
- case 0xb: /* ABS, NEG */
214
- case 0x12: /* SQXTUN */
215
- case 0x14: /* SQXTN, UQXTN */
216
- unallocated_encoding(s);
217
- return;
218
- }
219
- g_assert_not_reached();
220
-}
221
-
222
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
223
int size, int rn, int rd)
224
{
225
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
226
unallocated_encoding(s);
227
return;
228
}
229
- /* fall through */
230
- case 0x3d: /* FRECPE */
231
- case 0x7d: /* FRSQRTE */
232
- if (size == 3 && !is_q) {
233
- unallocated_encoding(s);
234
- return;
235
- }
236
if (!fp_access_check(s)) {
237
return;
238
}
239
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
240
case 0x2e: /* FCMLT (zero) */
241
case 0x6c: /* FCMGE (zero) */
242
case 0x6d: /* FCMLE (zero) */
243
+ case 0x3d: /* FRECPE */
244
+ case 0x7d: /* FRSQRTE */
245
unallocated_encoding(s);
246
return;
247
}
248
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
249
}
250
}
251
252
-/* AdvSIMD [scalar] two register miscellaneous (FP16)
253
- *
254
- * 31 30 29 28 27 24 23 22 21 17 16 12 11 10 9 5 4 0
255
- * +---+---+---+---+---------+---+-------------+--------+-----+------+------+
256
- * | 0 | Q | U | S | 1 1 1 0 | a | 1 1 1 1 0 0 | opcode | 1 0 | Rn | Rd |
257
- * +---+---+---+---+---------+---+-------------+--------+-----+------+------+
258
- * mask: 1000 1111 0111 1110 0000 1100 0000 0000 0x8f7e 0c00
259
- * val: 0000 1110 0111 1000 0000 1000 0000 0000 0x0e78 0800
260
- *
261
- * This actually covers two groups where scalar access is governed by
262
- * bit 28. A bunch of the instructions (float to integral) only exist
263
- * in the vector form and are un-allocated for the scalar decode. Also
264
- * in the scalar decode Q is always 1.
265
- */
266
-static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
267
-{
268
- int fpop, opcode, a, u;
269
- int rn, rd;
270
- bool is_q;
271
- bool is_scalar;
272
-
273
- int pass;
274
- TCGv_i32 tcg_rmode = NULL;
275
- TCGv_ptr tcg_fpstatus = NULL;
276
- bool need_fpst = true;
277
- int rmode = -1;
278
-
279
- if (!dc_isar_feature(aa64_fp16, s)) {
280
- unallocated_encoding(s);
281
- return;
282
- }
283
-
284
- rd = extract32(insn, 0, 5);
285
- rn = extract32(insn, 5, 5);
286
-
287
- a = extract32(insn, 23, 1);
288
- u = extract32(insn, 29, 1);
289
- is_scalar = extract32(insn, 28, 1);
290
- is_q = extract32(insn, 30, 1);
291
-
292
- opcode = extract32(insn, 12, 5);
293
- fpop = deposit32(opcode, 5, 1, a);
294
- fpop = deposit32(fpop, 6, 1, u);
295
-
296
- switch (fpop) {
297
- case 0x3d: /* FRECPE */
298
- case 0x3f: /* FRECPX */
299
- break;
300
- case 0x7d: /* FRSQRTE */
301
- break;
302
- default:
303
- case 0x2f: /* FABS */
304
- case 0x6f: /* FNEG */
305
- case 0x7f: /* FSQRT (vector) */
306
- case 0x18: /* FRINTN */
307
- case 0x19: /* FRINTM */
308
- case 0x38: /* FRINTP */
309
- case 0x39: /* FRINTZ */
310
- case 0x58: /* FRINTA */
311
- case 0x59: /* FRINTX */
312
- case 0x79: /* FRINTI */
313
- case 0x1d: /* SCVTF */
314
- case 0x5d: /* UCVTF */
315
- case 0x1a: /* FCVTNS */
316
- case 0x1b: /* FCVTMS */
317
- case 0x1c: /* FCVTAS */
318
- case 0x3a: /* FCVTPS */
319
- case 0x3b: /* FCVTZS */
320
- case 0x5a: /* FCVTNU */
321
- case 0x5b: /* FCVTMU */
322
- case 0x5c: /* FCVTAU */
323
- case 0x7a: /* FCVTPU */
324
- case 0x7b: /* FCVTZU */
325
- case 0x2c: /* FCMGT (zero) */
326
- case 0x2d: /* FCMEQ (zero) */
327
- case 0x2e: /* FCMLT (zero) */
328
- case 0x6c: /* FCMGE (zero) */
329
- case 0x6d: /* FCMLE (zero) */
330
- unallocated_encoding(s);
331
- return;
332
- }
333
-
334
-
335
- /* Check additional constraints for the scalar encoding */
336
- if (is_scalar) {
337
- if (!is_q) {
338
- unallocated_encoding(s);
339
- return;
340
- }
341
- }
342
-
343
- if (!fp_access_check(s)) {
344
- return;
345
- }
346
-
347
- if (rmode >= 0 || need_fpst) {
348
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR_F16);
349
- }
350
-
351
- if (rmode >= 0) {
352
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
353
- }
354
-
355
- if (is_scalar) {
356
- TCGv_i32 tcg_op = read_fp_hreg(s, rn);
357
- TCGv_i32 tcg_res = tcg_temp_new_i32();
358
-
359
- switch (fpop) {
360
- case 0x3d: /* FRECPE */
361
- gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
362
- break;
363
- case 0x3f: /* FRECPX */
364
- gen_helper_frecpx_f16(tcg_res, tcg_op, tcg_fpstatus);
365
- break;
366
- case 0x7d: /* FRSQRTE */
367
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
368
- break;
369
- default:
370
- case 0x1a: /* FCVTNS */
371
- case 0x1b: /* FCVTMS */
372
- case 0x1c: /* FCVTAS */
373
- case 0x3a: /* FCVTPS */
374
- case 0x3b: /* FCVTZS */
375
- case 0x5a: /* FCVTNU */
376
- case 0x5b: /* FCVTMU */
377
- case 0x5c: /* FCVTAU */
378
- case 0x7a: /* FCVTPU */
379
- case 0x7b: /* FCVTZU */
380
- g_assert_not_reached();
381
- }
382
-
383
- /* limit any sign extension going on */
384
- tcg_gen_andi_i32(tcg_res, tcg_res, 0xffff);
385
- write_fp_sreg(s, rd, tcg_res);
386
- } else {
387
- for (pass = 0; pass < (is_q ? 8 : 4); pass++) {
388
- TCGv_i32 tcg_op = tcg_temp_new_i32();
389
- TCGv_i32 tcg_res = tcg_temp_new_i32();
390
-
391
- read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
392
-
393
- switch (fpop) {
394
- case 0x3d: /* FRECPE */
395
- gen_helper_recpe_f16(tcg_res, tcg_op, tcg_fpstatus);
396
- break;
397
- case 0x7d: /* FRSQRTE */
398
- gen_helper_rsqrte_f16(tcg_res, tcg_op, tcg_fpstatus);
399
- break;
400
- default:
401
- case 0x2f: /* FABS */
402
- case 0x6f: /* FNEG */
403
- case 0x7f: /* FSQRT */
404
- case 0x18: /* FRINTN */
405
- case 0x19: /* FRINTM */
406
- case 0x38: /* FRINTP */
407
- case 0x39: /* FRINTZ */
408
- case 0x58: /* FRINTA */
409
- case 0x79: /* FRINTI */
410
- case 0x59: /* FRINTX */
411
- case 0x1a: /* FCVTNS */
412
- case 0x1b: /* FCVTMS */
413
- case 0x1c: /* FCVTAS */
414
- case 0x3a: /* FCVTPS */
415
- case 0x3b: /* FCVTZS */
416
- case 0x5a: /* FCVTNU */
417
- case 0x5b: /* FCVTMU */
418
- case 0x5c: /* FCVTAU */
419
- case 0x7a: /* FCVTPU */
420
- case 0x7b: /* FCVTZU */
421
- g_assert_not_reached();
422
- }
423
-
424
- write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
425
- }
426
-
427
- clear_vec_high(s, is_q, rd);
428
- }
429
-
430
- if (tcg_rmode) {
431
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
432
- }
433
-}
434
-
435
/* C3.6 Data processing - SIMD, inc Crypto
436
*
437
* As the decode gets a little complex we are using a table based
438
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
439
static const AArch64DecodeTable data_proc_simd[] = {
440
/* pattern , mask , fn */
441
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
442
- { 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
443
- { 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
444
{ 0x00000000, 0x00000000, NULL }
445
};
446
447
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-68-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 3 +++
 target/arm/tcg/translate.h | 5 +++++
 target/arm/tcg/gengvec.c | 16 ++++++++++++++++
 target/arm/tcg/translate-neon.c | 4 ++--
 target/arm/tcg/vec_helper.c | 22 ++++++++++++++++++++++
 5 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
18
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_uminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_4(gvec_uminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
DEF_HELPER_FLAGS_4(gvec_uminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
23
+DEF_HELPER_FLAGS_3(gvec_urecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(gvec_ursqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+
26
#ifdef TARGET_AARCH64
27
#include "tcg/helper-a64.h"
28
#include "tcg/helper-sve.h"
29
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate.h
32
+++ b/target/arm/tcg/translate.h
33
@@ -XXX,XX +XXX,XX @@ void gen_gvec_fabs(unsigned vece, uint32_t dofs, uint32_t aofs,
34
void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
35
uint32_t oprsz, uint32_t maxsz);
36
37
+void gen_gvec_urecpe(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
38
+ uint32_t opr_sz, uint32_t max_sz);
39
+void gen_gvec_ursqrte(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
+ uint32_t opr_sz, uint32_t max_sz);
41
+
42
/*
43
* Forward to the isar_feature_* tests given a DisasContext pointer.
44
*/
45
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/tcg/gengvec.c
48
+++ b/target/arm/tcg/gengvec.c
49
@@ -XXX,XX +XXX,XX @@ void gen_gvec_fneg(unsigned vece, uint32_t dofs, uint32_t aofs,
50
uint64_t s_bit = 1ull << ((8 << vece) - 1);
51
tcg_gen_gvec_xori(vece, dofs, aofs, s_bit, oprsz, maxsz);
52
}
53
+
54
+void gen_gvec_urecpe(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
+ uint32_t opr_sz, uint32_t max_sz)
56
+{
57
+ assert(vece == MO_32);
58
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
59
+ gen_helper_gvec_urecpe_s);
60
+}
61
+
62
+void gen_gvec_ursqrte(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
63
+ uint32_t opr_sz, uint32_t max_sz)
64
+{
65
+ assert(vece == MO_32);
66
+ tcg_gen_gvec_2_ool(rd_ofs, rn_ofs, opr_sz, max_sz, 0,
67
+ gen_helper_gvec_ursqrte_s);
68
+}
69
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/tcg/translate-neon.c
72
+++ b/target/arm/tcg/translate-neon.c
73
@@ -XXX,XX +XXX,XX @@ static bool trans_VRECPE(DisasContext *s, arg_2misc *a)
74
if (a->size != 2) {
75
return false;
76
}
77
- return do_2misc(s, a, gen_helper_recpe_u32);
78
+ return do_2misc_vec(s, a, gen_gvec_urecpe);
79
}
80
81
static bool trans_VRSQRTE(DisasContext *s, arg_2misc *a)
82
@@ -XXX,XX +XXX,XX @@ static bool trans_VRSQRTE(DisasContext *s, arg_2misc *a)
83
if (a->size != 2) {
84
return false;
85
}
86
- return do_2misc(s, a, gen_helper_rsqrte_u32);
87
+ return do_2misc_vec(s, a, gen_gvec_ursqrte);
88
}
89
90
#define WRAP_1OP_ENV_FN(WRAPNAME, FUNC) \
91
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/tcg/vec_helper.c
94
+++ b/target/arm/tcg/vec_helper.c
95
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_rbit_b)(void *vd, void *vn, uint32_t desc)
96
}
97
clear_tail(d, opr_sz, simd_maxsz(desc));
98
}
99
+
100
+void HELPER(gvec_urecpe_s)(void *vd, void *vn, uint32_t desc)
101
+{
102
+ intptr_t i, opr_sz = simd_oprsz(desc);
103
+ uint32_t *d = vd, *n = vn;
104
+
105
+ for (i = 0; i < opr_sz / 4; ++i) {
106
+ d[i] = helper_recpe_u32(n[i]);
107
+ }
108
+ clear_tail(d, opr_sz, simd_maxsz(desc));
109
+}
110
+
111
+void HELPER(gvec_ursqrte_s)(void *vd, void *vn, uint32_t desc)
112
+{
113
+ intptr_t i, opr_sz = simd_oprsz(desc);
114
+ uint32_t *d = vd, *n = vn;
115
+
116
+ for (i = 0; i < opr_sz / 4; ++i) {
117
+ d[i] = helper_rsqrte_u32(n[i]);
118
+ }
119
+ clear_tail(d, opr_sz, simd_maxsz(desc));
120
+}
121
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove handle_2misc_reciprocal as these were the last
insns decoded by that function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241211163036.2297116-69-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 3 +
 target/arm/tcg/translate-a64.c | 139 ++-------------------------------
 2 files changed, 8 insertions(+), 134 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ FRECPE_v 0.00 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
20
FRSQRTE_v 0.10 1110 111 11001 11011 0 ..... ..... @qrr_h
21
FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
22
23
+URECPE_v 0.00 1110 101 00001 11001 0 ..... ..... @qrr_s
24
+URSQRTE_v 0.10 1110 101 00001 11001 0 ..... ..... @qrr_s
25
+
26
&fcvt_q rd rn esz q shift
27
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
28
&fcvt_q esz=1 shift=%fcvt_f_sh_h
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ TRANS(CMLE0_v, do_gvec_fn2, a, gen_gvec_cle0)
34
TRANS(CMEQ0_v, do_gvec_fn2, a, gen_gvec_ceq0)
35
TRANS(REV16_v, do_gvec_fn2, a, gen_gvec_rev16)
36
TRANS(REV32_v, do_gvec_fn2, a, gen_gvec_rev32)
37
+TRANS(URECPE_v, do_gvec_fn2, a, gen_gvec_urecpe)
38
+TRANS(URSQRTE_v, do_gvec_fn2, a, gen_gvec_ursqrte)
39
40
static bool do_gvec_fn2_bhs(DisasContext *s, arg_qrr_e *a, GVecGen2Fn *fn)
41
{
42
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
43
};
44
TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
45
46
-static void handle_2misc_reciprocal(DisasContext *s, int opcode,
47
- bool is_scalar, bool is_u, bool is_q,
48
- int size, int rn, int rd)
49
-{
50
- bool is_double = (size == 3);
51
-
52
- if (is_double) {
53
- g_assert_not_reached();
54
- } else {
55
- TCGv_i32 tcg_op = tcg_temp_new_i32();
56
- TCGv_i32 tcg_res = tcg_temp_new_i32();
57
- int pass, maxpasses;
58
-
59
- if (is_scalar) {
60
- maxpasses = 1;
61
- } else {
62
- maxpasses = is_q ? 4 : 2;
63
- }
64
-
65
- for (pass = 0; pass < maxpasses; pass++) {
66
- read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
67
-
68
- switch (opcode) {
69
- case 0x3c: /* URECPE */
70
- gen_helper_recpe_u32(tcg_res, tcg_op);
71
- break;
72
- case 0x3d: /* FRECPE */
73
- case 0x3f: /* FRECPX */
74
- case 0x7d: /* FRSQRTE */
75
- default:
76
- g_assert_not_reached();
77
- }
78
-
79
- if (is_scalar) {
80
- write_fp_sreg(s, rd, tcg_res);
81
- } else {
82
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
83
- }
84
- }
85
- if (!is_scalar) {
86
- clear_vec_high(s, is_q, rd);
87
- }
88
- }
89
-}
90
-
91
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
92
int size, int rn, int rd)
93
{
94
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
95
bool is_q = extract32(insn, 30, 1);
96
int rn = extract32(insn, 5, 5);
97
int rd = extract32(insn, 0, 5);
98
- bool need_fpstatus = false;
99
- int rmode = -1;
100
- TCGv_i32 tcg_rmode;
101
- TCGv_ptr tcg_fpstatus;
102
103
switch (opcode) {
104
case 0xc ... 0xf:
105
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
106
opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
107
size = is_double ? 3 : 2;
108
switch (opcode) {
109
- case 0x3c: /* URECPE */
110
- if (size == 3) {
111
- unallocated_encoding(s);
112
- return;
113
- }
114
- if (!fp_access_check(s)) {
115
- return;
116
- }
117
- handle_2misc_reciprocal(s, opcode, false, u, is_q, size, rn, rd);
118
- return;
119
case 0x17: /* FCVTL, FCVTL2 */
120
if (!fp_access_check(s)) {
121
return;
122
}
123
handle_2misc_widening(s, opcode, is_q, size, rn, rd);
124
return;
125
- case 0x7c: /* URSQRTE */
126
- if (size == 3) {
127
- unallocated_encoding(s);
128
- return;
129
- }
130
- break;
131
default:
132
case 0x16: /* FCVTN, FCVTN2 */
133
case 0x36: /* BFCVTN, BFCVTN2 */
134
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
135
case 0x6d: /* FCMLE (zero) */
136
case 0x3d: /* FRECPE */
137
case 0x7d: /* FRSQRTE */
138
+ case 0x3c: /* URECPE */
139
+ case 0x7c: /* URSQRTE */
140
unallocated_encoding(s);
141
return;
142
}
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
144
unallocated_encoding(s);
145
return;
146
}
147
-
148
- if (!fp_access_check(s)) {
149
- return;
150
- }
151
-
152
- if (need_fpstatus || rmode >= 0) {
153
- tcg_fpstatus = fpstatus_ptr(FPST_FPCR);
154
- } else {
155
- tcg_fpstatus = NULL;
156
- }
157
- if (rmode >= 0) {
158
- tcg_rmode = gen_set_rmode(rmode, tcg_fpstatus);
159
- } else {
160
- tcg_rmode = NULL;
161
- }
162
-
163
- {
164
- int pass;
165
-
166
- assert(size == 2);
167
- for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
168
- TCGv_i32 tcg_op = tcg_temp_new_i32();
169
- TCGv_i32 tcg_res = tcg_temp_new_i32();
170
-
171
- read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
172
-
173
- {
174
- /* Special cases for 32 bit elements */
175
- switch (opcode) {
176
- case 0x7c: /* URSQRTE */
177
- gen_helper_rsqrte_u32(tcg_res, tcg_op);
178
- break;
179
- default:
180
- case 0x7: /* SQABS, SQNEG */
181
- case 0x2f: /* FABS */
182
- case 0x6f: /* FNEG */
183
- case 0x7f: /* FSQRT */
184
- case 0x18: /* FRINTN */
185
- case 0x19: /* FRINTM */
186
- case 0x38: /* FRINTP */
187
- case 0x39: /* FRINTZ */
188
- case 0x58: /* FRINTA */
189
- case 0x79: /* FRINTI */
190
- case 0x59: /* FRINTX */
191
- case 0x1e: /* FRINT32Z */
192
- case 0x5e: /* FRINT32X */
193
- case 0x1f: /* FRINT64Z */
194
- case 0x5f: /* FRINT64X */
195
- case 0x1a: /* FCVTNS */
196
- case 0x1b: /* FCVTMS */
197
- case 0x1c: /* FCVTAS */
198
- case 0x3a: /* FCVTPS */
199
- case 0x3b: /* FCVTZS */
200
- case 0x5a: /* FCVTNU */
201
- case 0x5b: /* FCVTMU */
202
- case 0x5c: /* FCVTAU */
203
- case 0x7a: /* FCVTPU */
204
- case 0x7b: /* FCVTZU */
205
- g_assert_not_reached();
206
- }
207
- }
208
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
209
- }
210
- }
211
- clear_vec_high(s, is_q, rd);
212
-
213
- if (tcg_rmode) {
214
- gen_restore_rmode(tcg_rmode, tcg_fpstatus);
215
- }
216
+ g_assert_not_reached();
217
}
218
219
/* C3.6 Data processing - SIMD, inc Crypto
220
--
221
2.34.1
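
For readers following the decodetree conversion, the TRANS() lines added above are essentially the whole translation path for these insns. A rough sketch of what decodetree plus the TRANS() macro boil down to (the struct layout and expansion here are paraphrased, not copied from the generated decode-a64.c.inc):

    /* decodetree collects the fields named in the @qrr_s format into an
     * argument struct, roughly: */
    typedef struct arg_qrr_e { int q, rd, rn, esz; } arg_qrr_e;

    /* TRANS(URECPE_v, do_gvec_fn2, a, gen_gvec_urecpe) then expands to
     * approximately: */
    static bool trans_URECPE_v(DisasContext *s, arg_qrr_e *a)
    {
        return do_gvec_fn2(s, a, gen_gvec_urecpe);
    }

so the per-insn C code is reduced to choosing which gvec expander to call.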
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remove lookup_disas_fn, handle_2misc_widening,
4
disas_simd_two_reg_misc, disas_data_proc_simd,
5
disas_data_proc_simd_fp, disas_a64_legacy, as
6
this is the final insn to be converted.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20241211163036.2297116-70-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/tcg/a64.decode | 2 +
14
target/arm/tcg/translate-a64.c | 202 +++------------------------------
15
2 files changed, 18 insertions(+), 186 deletions(-)
16
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/tcg/a64.decode
20
+++ b/target/arm/tcg/a64.decode
21
@@ -XXX,XX +XXX,XX @@ FRSQRTE_v 0.10 1110 1.1 00001 11011 0 ..... ..... @qrr_sd
22
URECPE_v 0.00 1110 101 00001 11001 0 ..... ..... @qrr_s
23
URSQRTE_v 0.10 1110 101 00001 11001 0 ..... ..... @qrr_s
24
25
+FCVTL_v 0.00 1110 0.1 00001 01111 0 ..... ..... @qrr_sd
26
+
27
&fcvt_q rd rn esz q shift
28
@fcvtq_h . q:1 . ...... 001 .... ...... rn:5 rd:5 \
29
&fcvt_q esz=1 shift=%fcvt_f_sh_h
30
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/translate-a64.c
33
+++ b/target/arm/tcg/translate-a64.c
34
@@ -XXX,XX +XXX,XX @@ static inline void gen_check_sp_alignment(DisasContext *s)
35
*/
36
}
37
38
-/*
39
- * This provides a simple table based table lookup decoder. It is
40
- * intended to be used when the relevant bits for decode are too
41
- * awkwardly placed and switch/if based logic would be confusing and
42
- * deeply nested. Since it's a linear search through the table, tables
43
- * should be kept small.
44
- *
45
- * It returns the first handler where insn & mask == pattern, or
46
- * NULL if there is no match.
47
- * The table is terminated by an empty mask (i.e. 0)
48
- */
49
-static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
50
- uint32_t insn)
51
-{
52
- const AArch64DecodeTable *tptr = table;
53
-
54
- while (tptr->mask) {
55
- if ((insn & tptr->mask) == tptr->pattern) {
56
- return tptr->disas_fn;
57
- }
58
- tptr++;
59
- }
60
- return NULL;
61
-}
62
-
63
/*
64
* The instruction disassembly implemented here matches
65
* the instruction encoding classifications in chapter C4
66
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2_ptr * const f_frsqrte[] = {
67
};
68
TRANS(FRSQRTE_v, do_gvec_op2_fpst, a->esz, a->q, a->rd, a->rn, 0, f_frsqrte)
69
70
-static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
71
- int size, int rn, int rd)
72
+static bool trans_FCVTL_v(DisasContext *s, arg_qrr_e *a)
73
{
74
/* Handle 2-reg-misc ops which are widening (so each size element
75
* in the source becomes a 2*size element in the destination.
76
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
77
*/
78
int pass;
79
80
- if (size == 3) {
81
+ if (!fp_access_check(s)) {
82
+ return true;
83
+ }
84
+
85
+ if (a->esz == MO_64) {
86
/* 32 -> 64 bit fp conversion */
87
TCGv_i64 tcg_res[2];
88
- int srcelt = is_q ? 2 : 0;
89
+ TCGv_i32 tcg_op = tcg_temp_new_i32();
90
+ int srcelt = a->q ? 2 : 0;
91
92
for (pass = 0; pass < 2; pass++) {
93
- TCGv_i32 tcg_op = tcg_temp_new_i32();
94
tcg_res[pass] = tcg_temp_new_i64();
95
-
96
- read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_32);
97
+ read_vec_element_i32(s, tcg_op, a->rn, srcelt + pass, MO_32);
98
gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, tcg_env);
99
}
100
for (pass = 0; pass < 2; pass++) {
101
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
102
+ write_vec_element(s, tcg_res[pass], a->rd, pass, MO_64);
103
}
104
} else {
105
/* 16 -> 32 bit fp conversion */
106
- int srcelt = is_q ? 4 : 0;
107
+ int srcelt = a->q ? 4 : 0;
108
TCGv_i32 tcg_res[4];
109
TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
110
TCGv_i32 ahp = get_ahp_flag();
111
112
for (pass = 0; pass < 4; pass++) {
113
tcg_res[pass] = tcg_temp_new_i32();
114
-
115
- read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
116
+ read_vec_element_i32(s, tcg_res[pass], a->rn, srcelt + pass, MO_16);
117
gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
118
fpst, ahp);
119
}
120
for (pass = 0; pass < 4; pass++) {
121
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
122
+ write_vec_element_i32(s, tcg_res[pass], a->rd, pass, MO_32);
123
}
124
}
125
-}
126
-
127
-/* AdvSIMD two reg misc
128
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
129
- * +---+---+---+-----------+------+-----------+--------+-----+------+------+
130
- * | 0 | Q | U | 0 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
131
- * +---+---+---+-----------+------+-----------+--------+-----+------+------+
132
- */
133
-static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
134
-{
135
- int size = extract32(insn, 22, 2);
136
- int opcode = extract32(insn, 12, 5);
137
- bool u = extract32(insn, 29, 1);
138
- bool is_q = extract32(insn, 30, 1);
139
- int rn = extract32(insn, 5, 5);
140
- int rd = extract32(insn, 0, 5);
141
-
142
- switch (opcode) {
143
- case 0xc ... 0xf:
144
- case 0x16 ... 0x1f:
145
- {
146
- /* Floating point: U, size[1] and opcode indicate operation;
147
- * size[0] indicates single or double precision.
148
- */
149
- int is_double = extract32(size, 0, 1);
150
- opcode |= (extract32(size, 1, 1) << 5) | (u << 6);
151
- size = is_double ? 3 : 2;
152
- switch (opcode) {
153
- case 0x17: /* FCVTL, FCVTL2 */
154
- if (!fp_access_check(s)) {
155
- return;
156
- }
157
- handle_2misc_widening(s, opcode, is_q, size, rn, rd);
158
- return;
159
- default:
160
- case 0x16: /* FCVTN, FCVTN2 */
161
- case 0x36: /* BFCVTN, BFCVTN2 */
162
- case 0x56: /* FCVTXN, FCVTXN2 */
163
- case 0x2f: /* FABS */
164
- case 0x6f: /* FNEG */
165
- case 0x7f: /* FSQRT */
166
- case 0x18: /* FRINTN */
167
- case 0x19: /* FRINTM */
168
- case 0x38: /* FRINTP */
169
- case 0x39: /* FRINTZ */
170
- case 0x59: /* FRINTX */
171
- case 0x79: /* FRINTI */
172
- case 0x58: /* FRINTA */
173
- case 0x1e: /* FRINT32Z */
174
- case 0x1f: /* FRINT64Z */
175
- case 0x5e: /* FRINT32X */
176
- case 0x5f: /* FRINT64X */
177
- case 0x1d: /* SCVTF */
178
- case 0x5d: /* UCVTF */
179
- case 0x1a: /* FCVTNS */
180
- case 0x1b: /* FCVTMS */
181
- case 0x3a: /* FCVTPS */
182
- case 0x3b: /* FCVTZS */
183
- case 0x5a: /* FCVTNU */
184
- case 0x5b: /* FCVTMU */
185
- case 0x7a: /* FCVTPU */
186
- case 0x7b: /* FCVTZU */
187
- case 0x5c: /* FCVTAU */
188
- case 0x1c: /* FCVTAS */
189
- case 0x2c: /* FCMGT (zero) */
190
- case 0x2d: /* FCMEQ (zero) */
191
- case 0x2e: /* FCMLT (zero) */
192
- case 0x6c: /* FCMGE (zero) */
193
- case 0x6d: /* FCMLE (zero) */
194
- case 0x3d: /* FRECPE */
195
- case 0x7d: /* FRSQRTE */
196
- case 0x3c: /* URECPE */
197
- case 0x7c: /* URSQRTE */
198
- unallocated_encoding(s);
199
- return;
200
- }
201
- break;
202
- }
203
- default:
204
- case 0x0: /* REV64, REV32 */
205
- case 0x1: /* REV16 */
206
- case 0x2: /* SADDLP, UADDLP */
207
- case 0x3: /* SUQADD, USQADD */
208
- case 0x4: /* CLS, CLZ */
209
- case 0x5: /* CNT, NOT, RBIT */
210
- case 0x6: /* SADALP, UADALP */
211
- case 0x7: /* SQABS, SQNEG */
212
- case 0x8: /* CMGT, CMGE */
213
- case 0x9: /* CMEQ, CMLE */
214
- case 0xa: /* CMLT */
215
- case 0xb: /* ABS, NEG */
216
- case 0x12: /* XTN, XTN2, SQXTUN, SQXTUN2 */
217
- case 0x13: /* SHLL, SHLL2 */
218
- case 0x14: /* SQXTN, SQXTN2, UQXTN, UQXTN2 */
219
- unallocated_encoding(s);
220
- return;
221
- }
222
- g_assert_not_reached();
223
-}
224
-
225
-/* C3.6 Data processing - SIMD, inc Crypto
226
- *
227
- * As the decode gets a little complex we are using a table based
228
- * approach for this part of the decode.
229
- */
230
-static const AArch64DecodeTable data_proc_simd[] = {
231
- /* pattern , mask , fn */
232
- { 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
233
- { 0x00000000, 0x00000000, NULL }
234
-};
235
-
236
-static void disas_data_proc_simd(DisasContext *s, uint32_t insn)
237
-{
238
- /* Note that this is called with all non-FP cases from
239
- * table C3-6 so it must UNDEF for entries not specifically
240
- * allocated to instructions in that table.
241
- */
242
- AArch64DecodeFn *fn = lookup_disas_fn(&data_proc_simd[0], insn);
243
- if (fn) {
244
- fn(s, insn);
245
- } else {
246
- unallocated_encoding(s);
247
- }
248
-}
249
-
250
-/* C3.6 Data processing - SIMD and floating point */
251
-static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
252
-{
253
- if (extract32(insn, 28, 1) == 1 && extract32(insn, 30, 1) == 0) {
254
- unallocated_encoding(s); /* in decodetree */
255
- } else {
256
- /* SIMD, including crypto */
257
- disas_data_proc_simd(s, insn);
258
- }
259
+ clear_vec_high(s, true, a->rd);
260
+ return true;
261
}
262
263
static bool trans_OK(DisasContext *s, arg_OK *a)
264
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
265
return false;
266
}
267
268
-/* C3.1 A64 instruction index by encoding */
269
-static void disas_a64_legacy(DisasContext *s, uint32_t insn)
270
-{
271
- switch (extract32(insn, 25, 4)) {
272
- case 0x7:
273
- case 0xf: /* Data processing - SIMD and floating point */
274
- disas_data_proc_simd_fp(s, insn);
275
- break;
276
- default:
277
- unallocated_encoding(s);
278
- break;
279
- }
280
-}
281
-
282
static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
283
CPUState *cpu)
284
{
285
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
286
if (!disas_a64(s, insn) &&
287
!disas_sme(s, insn) &&
288
!disas_sve(s, insn)) {
289
- disas_a64_legacy(s, insn);
290
+ unallocated_encoding(s);
291
}
292
293
/*
294
--
295
2.34.1
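
The srcelt selection in the new trans_FCVTL_v() above is the only subtle part of the conversion: FCVTL reads the low half of the source vector and FCVTL2 (Q bit set) reads the high half. A standalone model of that element selection for the 32-to-64-bit case, written as plain C rather than QEMU translator code, just to make the indexing concrete:

    #include <stdio.h>

    /* Illustrative model (not QEMU code) of FCVTL/FCVTL2 element selection:
     * widen two 32-bit source lanes into two 64-bit destination lanes,
     * taking the low half for FCVTL (q=0) and the high half for FCVTL2
     * (q=1). */
    static void fcvtl_model(double dst[2], const float src[4], int q)
    {
        int srcelt = q ? 2 : 0;
        for (int pass = 0; pass < 2; pass++) {
            dst[pass] = (double)src[srcelt + pass];
        }
    }

    int main(void)
    {
        const float src[4] = {1.5f, 2.5f, 3.5f, 4.5f};
        double lo[2], hi[2];
        fcvtl_model(lo, src, 0);   /* FCVTL  -> {1.5, 2.5} */
        fcvtl_model(hi, src, 1);   /* FCVTL2 -> {3.5, 4.5} */
        printf("%g %g / %g %g\n", lo[0], lo[1], hi[0], hi[1]);
        return 0;
    }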
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We were only checking for SVE disabled and not taking into
3
Softfloat has native support for round-to-odd. Use it.
4
account PSTATE.SM to check SME disabled, which resulted in
5
vectors being incorrectly truncated.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220713045848.217364-3-richard.henderson@linaro.org
6
Message-id: 20241206031428.78634-1-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
target/arm/helper.c | 31 +++++++++++++++++++++++++------
10
target/arm/tcg/helper-a64.c | 18 ++++--------------
13
1 file changed, 25 insertions(+), 6 deletions(-)
11
1 file changed, 4 insertions(+), 14 deletions(-)
14
12
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
15
--- a/target/arm/tcg/helper-a64.c
18
+++ b/target/arm/helper.c
16
+++ b/target/arm/tcg/helper-a64.c
19
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
17
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
20
}
18
19
float32 HELPER(fcvtx_f64_to_f32)(float64 a, CPUARMState *env)
20
{
21
- /* Von Neumann rounding is implemented by using round-to-zero
22
- * and then setting the LSB of the result if Inexact was raised.
23
- */
24
float32 r;
25
float_status *fpst = &env->vfp.fp_status;
26
- float_status tstat = *fpst;
27
- int exflags;
28
+ int old = get_float_rounding_mode(fpst);
29
30
- set_float_rounding_mode(float_round_to_zero, &tstat);
31
- set_float_exception_flags(0, &tstat);
32
- r = float64_to_float32(a, &tstat);
33
- exflags = get_float_exception_flags(&tstat);
34
- if (exflags & float_flag_inexact) {
35
- r = make_float32(float32_val(r) | 1);
36
- }
37
- exflags |= get_float_exception_flags(fpst);
38
- set_float_exception_flags(exflags, fpst);
39
+ set_float_rounding_mode(float_round_to_odd, fpst);
40
+ r = float64_to_float32(a, fpst);
41
+ set_float_rounding_mode(old, fpst);
42
return r;
21
}
43
}
22
44
23
+static uint32_t sve_vqm1_for_el_sm_ena(CPUARMState *env, int el, bool sm)
24
+{
25
+ int exc_el;
26
+
27
+ if (sm) {
28
+ exc_el = sme_exception_el(env, el);
29
+ } else {
30
+ exc_el = sve_exception_el(env, el);
31
+ }
32
+ if (exc_el) {
33
+ return 0; /* disabled */
34
+ }
35
+ return sve_vqm1_for_el_sm(env, el, sm);
36
+}
37
+
38
/*
39
* Notice a change in SVE vector size when changing EL.
40
*/
41
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
42
{
43
ARMCPU *cpu = env_archcpu(env);
44
int old_len, new_len;
45
- bool old_a64, new_a64;
46
+ bool old_a64, new_a64, sm;
47
48
/* Nothing to do if no SVE. */
49
if (!cpu_isar_feature(aa64_sve, cpu)) {
50
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
51
* invoke ResetSVEState when taking an exception from, or
52
* returning to, AArch32 state when PSTATE.SM is enabled.
53
*/
54
- if (old_a64 != new_a64 && FIELD_EX64(env->svcr, SVCR, SM)) {
55
+ sm = FIELD_EX64(env->svcr, SVCR, SM);
56
+ if (old_a64 != new_a64 && sm) {
57
arm_reset_sve_state(env);
58
return;
59
}
60
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
61
* we already have the correct register contents when encountering the
62
* vq0->vq0 transition between EL0->EL1.
63
*/
64
- old_len = (old_a64 && !sve_exception_el(env, old_el)
65
- ? sve_vqm1_for_el(env, old_el) : 0);
66
- new_len = (new_a64 && !sve_exception_el(env, new_el)
67
- ? sve_vqm1_for_el(env, new_el) : 0);
68
+ old_len = new_len = 0;
69
+ if (old_a64) {
70
+ old_len = sve_vqm1_for_el_sm_ena(env, old_el, sm);
71
+ }
72
+ if (new_a64) {
73
+ new_len = sve_vqm1_for_el_sm_ena(env, new_el, sm);
74
+ }
75
76
/* When changing vector length, clear inaccessible state. */
77
if (new_len < old_len) {
78
--
45
--
79
2.25.1
46
2.34.1
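
The code deleted from HELPER(fcvtx_f64_to_f32) above is a compact description of what round-to-odd means, so a short restatement may help readers who have not met it before (this is background, not part of the patch): round-to-odd, also called Von Neumann rounding, truncates toward zero and then forces the lowest result bit to 1 whenever any precision was lost. Keeping that sticky low bit is what allows a later narrowing step (for example a subsequent conversion to a lower precision after FCVTXN) to still round correctly instead of suffering double rounding. A minimal sketch of the old emulation, with placeholder inputs:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: emulate round-to-odd on top of a round-to-zero
     * conversion, the way the old helper did before softfloat's
     * float_round_to_odd mode was used directly.  'rz_bits' is the
     * round-to-zero result and 'inexact' the inexact flag it raised. */
    static uint32_t round_to_odd_bits(uint32_t rz_bits, bool inexact)
    {
        return inexact ? (rz_bits | 1u) : rz_bits;
    }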
New patch
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
1
2
3
www.orangepi.org does not support https, it's expected to stick to http.
4
5
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
6
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
7
Message-id: 20241206192254.3889131-2-pierrick.bouvier@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
docs/system/arm/orangepi.rst | 4 ++--
11
1 file changed, 2 insertions(+), 2 deletions(-)
12
13
diff --git a/docs/system/arm/orangepi.rst b/docs/system/arm/orangepi.rst
14
index XXXXXXX..XXXXXXX 100644
15
--- a/docs/system/arm/orangepi.rst
16
+++ b/docs/system/arm/orangepi.rst
17
@@ -XXX,XX +XXX,XX @@ Orange Pi PC images
18
Note that the mainline kernel does not have a root filesystem. You may provide it
19
with an official Orange Pi PC image from the official website:
20
21
- http://www.orangepi.org/downloadresources/
22
+ http://www.orangepi.org/html/serviceAndSupport/index.html
23
24
Another possibility is to run an Armbian image for Orange Pi PC which
25
can be downloaded from:
26
@@ -XXX,XX +XXX,XX @@ including the Orange Pi PC. NetBSD 9.0 is known to work best for the Orange Pi P
27
board and provides a fully working system with serial console, networking and storage.
28
For the Orange Pi PC machine, get the 'evbarm-earmv7hf' based image from:
29
30
- https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.0/evbarm-earmv7hf/binary/gzimg/armv7.img.gz
31
+ https://archive.netbsd.org/pub/NetBSD-archive/NetBSD-9.0/evbarm-earmv7hf/binary/gzimg/armv7.img.gz
32
33
The image requires manually installing U-Boot in the image. Build U-Boot with
34
the orangepi_pc_defconfig configuration as described in the previous section.
35
--
36
2.34.1
1
From: Hao Wu <wuhaotsh@google.com>
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
2
2
3
The correct bit for the CONV bit in NPCM7XX ADC is bit 13. This patch
3
Reviewed-by: Cédric Le Goater <clg@redhat.com>
4
fixes that in the module, and also lowers the IRQ when the guest
4
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
5
is done handling an interrupt event from the ADC module.
5
Message-id: 20241206192254.3889131-3-pierrick.bouvier@linaro.org
6
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
8
Reviewed-by: Patrick Venture<venture@google.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20220714182836.89602-4-wuhaotsh@google.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
7
---
13
hw/adc/npcm7xx_adc.c | 2 +-
8
docs/system/arm/fby35.rst | 5 +++++
14
tests/qtest/npcm7xx_adc-test.c | 2 +-
9
1 file changed, 5 insertions(+)
15
2 files changed, 2 insertions(+), 2 deletions(-)
16
10
17
diff --git a/hw/adc/npcm7xx_adc.c b/hw/adc/npcm7xx_adc.c
11
diff --git a/docs/system/arm/fby35.rst b/docs/system/arm/fby35.rst
18
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/adc/npcm7xx_adc.c
13
--- a/docs/system/arm/fby35.rst
20
+++ b/hw/adc/npcm7xx_adc.c
14
+++ b/docs/system/arm/fby35.rst
21
@@ -XXX,XX +XXX,XX @@ REG32(NPCM7XX_ADC_DATA, 0x4)
15
@@ -XXX,XX +XXX,XX @@ process starts.
22
#define NPCM7XX_ADC_CON_INT BIT(18)
16
$ screen /dev/tty0 # In a separate TMUX pane, terminal window, etc.
23
#define NPCM7XX_ADC_CON_EN BIT(17)
17
$ screen /dev/tty1
24
#define NPCM7XX_ADC_CON_RST BIT(16)
18
$ (qemu) c         # Start the boot process once screen is setup.
25
-#define NPCM7XX_ADC_CON_CONV BIT(14)
19
+
26
+#define NPCM7XX_ADC_CON_CONV BIT(13)
20
+This machine model supports emulation of the boot from the CE0 flash device by
27
#define NPCM7XX_ADC_CON_DIV(rv) extract32(rv, 1, 8)
21
+setting option ``execute-in-place``. When using this option, the CPU fetches
28
22
+instructions to execute by reading CE0 and not from a preloaded ROM
29
#define NPCM7XX_ADC_MAX_RESULT 1023
23
+initialized at machine init time. As a result, execution will be slower.
30
diff --git a/tests/qtest/npcm7xx_adc-test.c b/tests/qtest/npcm7xx_adc-test.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/tests/qtest/npcm7xx_adc-test.c
33
+++ b/tests/qtest/npcm7xx_adc-test.c
34
@@ -XXX,XX +XXX,XX @@
35
#define CON_INT BIT(18)
36
#define CON_EN BIT(17)
37
#define CON_RST BIT(16)
38
-#define CON_CONV BIT(14)
39
+#define CON_CONV BIT(13)
40
#define CON_DIV(rv) extract32(rv, 1, 8)
41
42
#define FST_RDST BIT(1)
43
--
24
--
44
2.25.1
25
2.34.1
26
27
diff view generated by jsdifflib
New patch
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
1
2
3
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
4
Message-id: 20241206192254.3889131-4-pierrick.bouvier@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
docs/system/arm/xlnx-versal-virt.rst | 3 +++
9
1 file changed, 3 insertions(+)
10
11
diff --git a/docs/system/arm/xlnx-versal-virt.rst b/docs/system/arm/xlnx-versal-virt.rst
12
index XXXXXXX..XXXXXXX 100644
13
--- a/docs/system/arm/xlnx-versal-virt.rst
14
+++ b/docs/system/arm/xlnx-versal-virt.rst
15
@@ -XXX,XX +XXX,XX @@ Run the following at the U-Boot prompt:
16
fdt set /chosen/dom0 reg <0x00000000 0x40000000 0x0 0x03100000>
17
booti 30000000 - 20000000
18
19
+It's possible to change the OSPI flash model emulated by using the machine model
20
+option ``ospi-flash``.
21
+
22
BBRAM File Backend
23
""""""""""""""""""
24
BBRAM can have an optional file backend, which must be a seekable
25
--
26
2.34.1
New patch
1
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
1
2
3
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
4
Message-id: 20241206192254.3889131-5-pierrick.bouvier@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
docs/system/arm/virt.rst | 16 ++++++++++++++++
9
1 file changed, 16 insertions(+)
10
11
diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
12
index XXXXXXX..XXXXXXX 100644
13
--- a/docs/system/arm/virt.rst
14
+++ b/docs/system/arm/virt.rst
15
@@ -XXX,XX +XXX,XX @@ iommu
16
``smmuv3``
17
Create an SMMUv3
18
19
+default-bus-bypass-iommu
20
+ Set ``on``/``off`` to enable/disable `bypass_iommu
21
+ <https://gitlab.com/qemu-project/qemu/-/blob/master/docs/bypass-iommu.txt>`_
22
+ for default root bus.
23
+
24
ras
25
Set ``on``/``off`` to enable/disable reporting host memory errors to a guest
26
using ACPI and guest external abort exceptions. The default is off.
27
28
+acpi
29
+ Set ``on``/``off``/``auto`` to enable/disable ACPI.
30
+
31
dtb-randomness
32
Set ``on``/``off`` to pass random seeds via the guest DTB
33
rng-seed and kaslr-seed nodes (in both "/chosen" and
34
@@ -XXX,XX +XXX,XX @@ dtb-randomness
35
dtb-kaslr-seed
36
A deprecated synonym for dtb-randomness.
37
38
+x-oem-id
39
+ Set string (up to 6 bytes) to override the default value of field OEMID in ACPI
40
+ table header.
41
+
42
+x-oem-table-id
43
+ Set string (up to 8 bytes) to override the default value of field OEM Table ID
44
+ in ACPI table header.
45
+
46
Linux guest kernel configuration
47
""""""""""""""""""""""""""""""""
48
49
--
50
2.34.1
1
From: Hao Wu <wuhaotsh@google.com>
1
From: Brian Cain <brian.cain@oss.qualcomm.com>
2
2
3
Our sensor test requires both reading and writing from a sensor's
3
Mea culpa, I don't know how I got this wrong in 2dfe93699c. Still
4
QOM property. So we need to make the input of ADC module R/W instead
4
getting used to the new address, I suppose. Somehow I got it right in the
5
of write only for that to work.
5
mailmap, though.
6
6
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
7
Signed-off-by: Brian Cain <brian.cain@oss.qualcomm.com>
8
Reviewed-by: Titus Rwantare <titusr@google.com>
8
Message-id: 20241209181242.1434231-1-brian.cain@oss.qualcomm.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20220714182836.89602-5-wuhaotsh@google.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/adc/npcm7xx_adc.c | 2 +-
12
MAINTAINERS | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
15
14
16
diff --git a/hw/adc/npcm7xx_adc.c b/hw/adc/npcm7xx_adc.c
15
diff --git a/MAINTAINERS b/MAINTAINERS
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/adc/npcm7xx_adc.c
17
--- a/MAINTAINERS
19
+++ b/hw/adc/npcm7xx_adc.c
18
+++ b/MAINTAINERS
20
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_adc_init(Object *obj)
19
@@ -XXX,XX +XXX,XX @@ F: target/avr/
21
20
F: tests/functional/test_avr_mega2560.py
22
for (i = 0; i < NPCM7XX_ADC_NUM_INPUTS; ++i) {
21
23
object_property_add_uint32_ptr(obj, "adci[*]",
22
Hexagon TCG CPUs
24
- &s->adci[i], OBJ_PROP_FLAG_WRITE);
23
-M: Brian Cain <bcain@oss.qualcomm.com>
25
+ &s->adci[i], OBJ_PROP_FLAG_READWRITE);
24
+M: Brian Cain <brian.cain@oss.qualcomm.com>
26
}
25
S: Supported
27
object_property_add_uint32_ptr(obj, "vref",
26
F: target/hexagon/
28
&s->vref, OBJ_PROP_FLAG_WRITE);
27
X: target/hexagon/idef-parser/
29
--
28
--
30
2.25.1
29
2.34.1
1
Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
1
target/arm/helper.c is very large and unwieldy. One subset of code
2
the CPU state struct from struct TCR to uint64_t.
2
that we can pull out into its own file is the cpreg arrays and
3
corresponding functions for the TLBI instructions.
4
5
Because these are instructions they are only relevant for TCG and we
6
can make the new file only be built for CONFIG_TCG.
7
8
In this commit we move the AArch32 instructions from:
9
not_v7_cp_reginfo[]
10
v7_cp_reginfo[]
11
v7mp_cp_reginfo[]
12
v8_cp_reginfo[]
13
into a new file target/arm/tcg/tlb-insns.c.
14
15
A few small functions are used both by functions we haven't yet moved
16
across and by functions we have already moved. We temporarily make
17
these global with a prototype in cpregs.h; when the move of all TLBI
18
insns is complete these will return to being file-local.
19
20
For CONFIG_TCG, this is just moving code around. For a KVM only
21
build, these cpregs will no longer be added to the cpregs hashtable
22
for the CPU. However this should not be a behaviour change, because:
23
* we never try to migration sync or otherwise include
24
ARM_CP_NO_RAW cpregs
25
* for migration we treat the kernel's list of system registers
26
as the authoritative one, so these TLBI insns were never
27
in it anyway
28
The no-tcg stub of define_tlb_insn_regs() therefore does nothing.
3
29
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org
32
Message-id: 20241210160452.2427965-2-peter.maydell@linaro.org
7
---
33
---
8
target/arm/cpu.h | 4 ++--
34
target/arm/cpregs.h | 14 +++
9
target/arm/internals.h | 4 ++--
35
target/arm/internals.h | 3 +
10
target/arm/helper.c | 4 +---
36
target/arm/helper.c | 231 ++--------------------------------
11
target/arm/ptw.c | 14 +++++++-------
37
target/arm/tcg-stubs.c | 5 +
12
4 files changed, 12 insertions(+), 14 deletions(-)
38
target/arm/tcg/tlb-insns.c | 246 +++++++++++++++++++++++++++++++++++++
39
target/arm/tcg/meson.build | 1 +
40
6 files changed, 280 insertions(+), 220 deletions(-)
41
create mode 100644 target/arm/tcg/tlb-insns.c
13
42
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
43
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
45
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpu.h
46
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
47
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
19
uint64_t vsttbr_el2; /* Secure Virtualization Translation Table. */
48
return ri->opc1 == 4 || ri->opc1 == 5;
20
/* MMU translation table base control. */
49
}
21
TCR tcr_el[4];
50
22
- TCR vtcr_el2; /* Virtualization Translation Control. */
51
+/*
23
- TCR vstcr_el2; /* Secure Virtualization Translation Control. */
52
+ * Temporary declarations of functions until the move to tlb_insn_helper.c
24
+ uint64_t vtcr_el2; /* Virtualization Translation Control. */
53
+ * is complete and we can make the functions static again
25
+ uint64_t vstcr_el2; /* Secure Virtualization Translation Control. */
54
+ */
26
uint32_t c2_data; /* MPU data cacheable bits. */
55
+CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
27
uint32_t c2_insn; /* MPU instruction cacheable bits. */
56
+ bool isread);
28
union { /* MMU domain access control register
57
+CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
58
+ bool isread);
59
+bool tlb_force_broadcast(CPUARMState *env);
60
+void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
61
+ uint64_t value);
62
+void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
63
+ uint64_t value);
64
+
65
#endif /* TARGET_ARM_CPREGS_H */
29
diff --git a/target/arm/internals.h b/target/arm/internals.h
66
diff --git a/target/arm/internals.h b/target/arm/internals.h
30
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/internals.h
68
--- a/target/arm/internals.h
32
+++ b/target/arm/internals.h
69
+++ b/target/arm/internals.h
33
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
70
@@ -XXX,XX +XXX,XX @@ static inline uint64_t pauth_ptr_mask(ARMVAParameters param)
34
static inline uint64_t regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
71
/* Add the cpreg definitions for debug related system registers */
35
{
72
void define_debug_regs(ARMCPU *cpu);
36
if (mmu_idx == ARMMMUIdx_Stage2) {
73
37
- return env->cp15.vtcr_el2.raw_tcr;
74
+/* Add the cpreg definitions for TLBI instructions */
38
+ return env->cp15.vtcr_el2;
75
+void define_tlb_insn_regs(ARMCPU *cpu);
39
}
76
+
40
if (mmu_idx == ARMMMUIdx_Stage2_S) {
77
/* Effective value of MDCR_EL2 */
41
/*
78
static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
42
* Note: Secure stage 2 nominally shares fields from VTCR_EL2, but
79
{
43
* those are not currently used by QEMU, so just return VSTCR_EL2.
44
*/
45
- return env->cp15.vstcr_el2.raw_tcr;
46
+ return env->cp15.vstcr_el2;
47
}
48
return env->cp15.tcr_el[regime_el(env, mmu_idx)].raw_tcr;
49
}
50
diff --git a/target/arm/helper.c b/target/arm/helper.c
80
diff --git a/target/arm/helper.c b/target/arm/helper.c
51
index XXXXXXX..XXXXXXX 100644
81
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/helper.c
82
--- a/target/arm/helper.c
53
+++ b/target/arm/helper.c
83
+++ b/target/arm/helper.c
54
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
84
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
55
{ .name = "VTCR_EL2", .state = ARM_CP_STATE_AA64,
85
}
56
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
86
57
.access = PL2_RW,
87
/* Check for traps from EL1 due to HCR_EL2.TTLB. */
58
- /* no .writefn needed as this can't cause an ASID change;
88
-static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
59
- * no .raw_writefn or .resetfn needed as we never use mask/base_mask
89
- bool isread)
60
- */
90
+CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
61
+ /* no .writefn needed as this can't cause an ASID change */
91
+ bool isread)
62
.fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) },
92
{
63
{ .name = "VTTBR", .state = ARM_CP_STATE_AA32,
93
if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
64
.cp = 15, .opc1 = 6, .crm = 2,
94
return CP_ACCESS_TRAP_EL2;
65
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
95
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
96
}
97
98
/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
99
-static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
100
- bool isread)
101
+CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
102
+ bool isread)
103
{
104
if (arm_current_el(env) == 1 &&
105
(arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
106
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
107
ARMMMUIdxBit_Stage2_S);
108
}
109
110
-
111
-/* IS variants of TLB operations must affect all cores */
112
-static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
113
- uint64_t value)
114
-{
115
- CPUState *cs = env_cpu(env);
116
-
117
- tlb_flush_all_cpus_synced(cs);
118
-}
119
-
120
-static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
121
- uint64_t value)
122
-{
123
- CPUState *cs = env_cpu(env);
124
-
125
- tlb_flush_all_cpus_synced(cs);
126
-}
127
-
128
-static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
129
- uint64_t value)
130
-{
131
- CPUState *cs = env_cpu(env);
132
-
133
- tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
134
-}
135
-
136
-static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
- uint64_t value)
138
-{
139
- CPUState *cs = env_cpu(env);
140
-
141
- tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
142
-}
143
-
144
/*
145
* Non-IS variants of TLB operations are upgraded to
146
* IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
147
* force broadcast of these operations.
148
*/
149
-static bool tlb_force_broadcast(CPUARMState *env)
150
+bool tlb_force_broadcast(CPUARMState *env)
151
{
152
return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
153
}
154
155
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
156
- uint64_t value)
157
-{
158
- /* Invalidate all (TLBIALL) */
159
- CPUState *cs = env_cpu(env);
160
-
161
- if (tlb_force_broadcast(env)) {
162
- tlb_flush_all_cpus_synced(cs);
163
- } else {
164
- tlb_flush(cs);
165
- }
166
-}
167
-
168
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
169
- uint64_t value)
170
-{
171
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
172
- CPUState *cs = env_cpu(env);
173
-
174
- value &= TARGET_PAGE_MASK;
175
- if (tlb_force_broadcast(env)) {
176
- tlb_flush_page_all_cpus_synced(cs, value);
177
- } else {
178
- tlb_flush_page(cs, value);
179
- }
180
-}
181
-
182
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
183
- uint64_t value)
184
-{
185
- /* Invalidate by ASID (TLBIASID) */
186
- CPUState *cs = env_cpu(env);
187
-
188
- if (tlb_force_broadcast(env)) {
189
- tlb_flush_all_cpus_synced(cs);
190
- } else {
191
- tlb_flush(cs);
192
- }
193
-}
194
-
195
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
- uint64_t value)
197
-{
198
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
199
- CPUState *cs = env_cpu(env);
200
-
201
- value &= TARGET_PAGE_MASK;
202
- if (tlb_force_broadcast(env)) {
203
- tlb_flush_page_all_cpus_synced(cs, value);
204
- } else {
205
- tlb_flush_page(cs, value);
206
- }
207
-}
208
-
209
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
210
uint64_t value)
211
{
212
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
214
}
215
216
-static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
217
- uint64_t value)
218
+void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
+ uint64_t value)
220
{
221
CPUState *cs = env_cpu(env);
222
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
223
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
224
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
225
}
226
227
-static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
228
- uint64_t value)
229
+void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
230
+ uint64_t value)
231
{
232
CPUState *cs = env_cpu(env);
233
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
234
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
235
ARMMMUIdxBit_E2);
236
}
237
238
-static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
239
- uint64_t value)
240
-{
241
- CPUState *cs = env_cpu(env);
242
- uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
243
-
244
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
245
-}
246
-
247
-static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
248
- uint64_t value)
249
-{
250
- CPUState *cs = env_cpu(env);
251
- uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
252
-
253
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
254
-}
255
-
256
static const ARMCPRegInfo cp_reginfo[] = {
257
/*
258
* Define the secure and non-secure FCSE identifier CP registers
259
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
260
*/
261
{ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
262
.access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 },
263
- /*
264
- * MMU TLB control. Note that the wildcarding means we cover not just
265
- * the unified TLB ops but also the dside/iside/inner-shareable variants.
266
- */
267
- { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
268
- .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write,
269
- .type = ARM_CP_NO_RAW },
270
- { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY,
271
- .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write,
272
- .type = ARM_CP_NO_RAW },
273
- { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY,
274
- .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write,
275
- .type = ARM_CP_NO_RAW },
276
- { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY,
277
- .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write,
278
- .type = ARM_CP_NO_RAW },
279
{ .name = "PRRR", .cp = 15, .crn = 10, .crm = 2,
280
.opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
281
{ .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
282
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
283
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0,
284
.fgt = FGT_ISR_EL1,
285
.type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read },
286
- /* 32 bit ITLB invalidates */
287
- { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
288
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
289
- .writefn = tlbiall_write },
290
- { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
291
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
292
- .writefn = tlbimva_write },
293
- { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2,
294
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
295
- .writefn = tlbiasid_write },
296
- /* 32 bit DTLB invalidates */
297
- { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0,
298
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
299
- .writefn = tlbiall_write },
300
- { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
301
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
302
- .writefn = tlbimva_write },
303
- { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2,
304
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
305
- .writefn = tlbiasid_write },
306
- /* 32 bit TLB invalidates */
307
- { .name = "TLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
308
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
309
- .writefn = tlbiall_write },
310
- { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
311
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
312
- .writefn = tlbimva_write },
313
- { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
314
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
315
- .writefn = tlbiasid_write },
316
- { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
317
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
318
- .writefn = tlbimvaa_write },
319
-};
320
-
321
-static const ARMCPRegInfo v7mp_cp_reginfo[] = {
322
- /* 32 bit TLB invalidates, Inner Shareable */
323
- { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
324
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
325
- .writefn = tlbiall_is_write },
326
- { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
327
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
328
- .writefn = tlbimva_is_write },
329
- { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
330
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
331
- .writefn = tlbiasid_is_write },
332
- { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
333
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
334
- .writefn = tlbimvaa_is_write },
335
};
336
337
static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
338
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
339
.fieldoffset = offsetof(CPUARMState, cp15.par_el[1]),
340
.writefn = par_write },
341
#endif
342
- /* TLB invalidate last level of translation table walk */
343
- { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
344
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
345
- .writefn = tlbimva_is_write },
346
- { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
347
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
348
- .writefn = tlbimvaa_is_write },
349
- { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
350
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
351
- .writefn = tlbimva_write },
352
- { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
353
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
354
- .writefn = tlbimvaa_write },
355
- { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
356
- .type = ARM_CP_NO_RAW, .access = PL2_W,
357
- .writefn = tlbimva_hyp_write },
358
- { .name = "TLBIMVALHIS",
359
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
360
- .type = ARM_CP_NO_RAW, .access = PL2_W,
361
- .writefn = tlbimva_hyp_is_write },
362
- { .name = "TLBIIPAS2",
363
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
364
- .type = ARM_CP_NO_RAW, .access = PL2_W,
365
- .writefn = tlbiipas2_hyp_write },
366
- { .name = "TLBIIPAS2IS",
367
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
368
- .type = ARM_CP_NO_RAW, .access = PL2_W,
369
- .writefn = tlbiipas2is_hyp_write },
370
- { .name = "TLBIIPAS2L",
371
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
372
- .type = ARM_CP_NO_RAW, .access = PL2_W,
373
- .writefn = tlbiipas2_hyp_write },
374
- { .name = "TLBIIPAS2LIS",
375
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
376
- .type = ARM_CP_NO_RAW, .access = PL2_W,
377
- .writefn = tlbiipas2is_hyp_write },
378
/* 32 bit cache operations */
379
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
380
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_ticab },
381
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
382
define_arm_cp_regs(cpu, not_v8_cp_reginfo);
383
}
384
385
+ define_tlb_insn_regs(cpu);
386
+
387
if (arm_feature(env, ARM_FEATURE_V6)) {
388
/* The ID registers all have impdef reset values */
389
ARMCPRegInfo v6_idregs[] = {
390
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
391
if (arm_feature(env, ARM_FEATURE_V6K)) {
392
define_arm_cp_regs(cpu, v6k_cp_reginfo);
393
}
394
- if (arm_feature(env, ARM_FEATURE_V7MP) &&
395
- !arm_feature(env, ARM_FEATURE_PMSA)) {
396
- define_arm_cp_regs(cpu, v7mp_cp_reginfo);
397
- }
398
if (arm_feature(env, ARM_FEATURE_V7VE)) {
399
define_arm_cp_regs(cpu, pmovsset_cp_reginfo);
400
}
401
diff --git a/target/arm/tcg-stubs.c b/target/arm/tcg-stubs.c
66
index XXXXXXX..XXXXXXX 100644
402
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/ptw.c
403
--- a/target/arm/tcg-stubs.c
68
+++ b/target/arm/ptw.c
404
+++ b/target/arm/tcg-stubs.c
69
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
405
@@ -XXX,XX +XXX,XX @@ void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
70
if (arm_is_secure_below_el3(env)) {
406
void assert_hflags_rebuild_correctly(CPUARMState *env)
71
/* Check if page table walk is to secure or non-secure PA space. */
407
{
72
if (*is_secure) {
408
}
73
- *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
409
+
74
+ *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
410
+/* TLBI insns are only used by TCG, so we don't need to do anything for KVM */
75
} else {
411
+void define_tlb_insn_regs(ARMCPU *cpu)
76
- *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
412
+{
77
+ *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
413
+}
78
}
414
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
79
} else {
415
new file mode 100644
80
assert(!*is_secure);
416
index XXXXXXX..XXXXXXX
81
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
417
--- /dev/null
82
ipa_secure = attrs->secure;
418
+++ b/target/arm/tcg/tlb-insns.c
83
if (arm_is_secure_below_el3(env)) {
419
@@ -XXX,XX +XXX,XX @@
84
if (ipa_secure) {
420
+/*
85
- attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
421
+ * Helpers for TLBI insns
86
+ attrs->secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
422
+ *
87
} else {
423
+ * This code is licensed under the GNU GPL v2 or later.
88
- attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
424
+ *
89
+ attrs->secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
425
+ * SPDX-License-Identifier: GPL-2.0-or-later
90
}
426
+ */
91
} else {
427
+#include "qemu/osdep.h"
92
assert(!ipa_secure);
428
+#include "exec/exec-all.h"
93
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
429
+#include "cpu.h"
94
if (arm_is_secure_below_el3(env)) {
430
+#include "internals.h"
95
if (ipa_secure) {
431
+#include "cpu-features.h"
96
attrs->secure =
432
+#include "cpregs.h"
97
- !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
433
+
98
+ !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
434
+/* IS variants of TLB operations must affect all cores */
99
} else {
435
+static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
100
attrs->secure =
436
+ uint64_t value)
101
- !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
437
+{
102
- || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
438
+ CPUState *cs = env_cpu(env);
103
+ !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
439
+
104
+ || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
440
+ tlb_flush_all_cpus_synced(cs);
105
}
441
+}
106
}
442
+
107
return 0;
443
+static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
444
+ uint64_t value)
445
+{
446
+ CPUState *cs = env_cpu(env);
447
+
448
+ tlb_flush_all_cpus_synced(cs);
449
+}
450
+
451
+static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
452
+ uint64_t value)
453
+{
454
+ CPUState *cs = env_cpu(env);
455
+
456
+ tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
457
+}
458
+
459
+static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
460
+ uint64_t value)
461
+{
462
+ CPUState *cs = env_cpu(env);
463
+
464
+ tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
465
+}
466
+
467
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
468
+ uint64_t value)
469
+{
470
+ /* Invalidate all (TLBIALL) */
471
+ CPUState *cs = env_cpu(env);
472
+
473
+ if (tlb_force_broadcast(env)) {
474
+ tlb_flush_all_cpus_synced(cs);
475
+ } else {
476
+ tlb_flush(cs);
477
+ }
478
+}
479
+
480
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
481
+ uint64_t value)
482
+{
483
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
484
+ CPUState *cs = env_cpu(env);
485
+
486
+ value &= TARGET_PAGE_MASK;
487
+ if (tlb_force_broadcast(env)) {
488
+ tlb_flush_page_all_cpus_synced(cs, value);
489
+ } else {
490
+ tlb_flush_page(cs, value);
491
+ }
492
+}
493
+
494
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
495
+ uint64_t value)
496
+{
497
+ /* Invalidate by ASID (TLBIASID) */
498
+ CPUState *cs = env_cpu(env);
499
+
500
+ if (tlb_force_broadcast(env)) {
501
+ tlb_flush_all_cpus_synced(cs);
502
+ } else {
503
+ tlb_flush(cs);
504
+ }
505
+}
506
+
507
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
508
+ uint64_t value)
509
+{
510
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
511
+ CPUState *cs = env_cpu(env);
512
+
513
+ value &= TARGET_PAGE_MASK;
514
+ if (tlb_force_broadcast(env)) {
515
+ tlb_flush_page_all_cpus_synced(cs, value);
516
+ } else {
517
+ tlb_flush_page(cs, value);
518
+ }
519
+}
520
+
521
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
522
+ uint64_t value)
523
+{
524
+ CPUState *cs = env_cpu(env);
525
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
526
+
527
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
528
+}
529
+
530
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
531
+ uint64_t value)
532
+{
533
+ CPUState *cs = env_cpu(env);
534
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
535
+
536
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
+}
+
+static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
+ /*
+ * MMU TLB control. Note that the wildcarding means we cover not just
+ * the unified TLB ops but also the dside/iside/inner-shareable variants.
+ */
+ { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
+ .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write,
+ .type = ARM_CP_NO_RAW },
+ { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY,
+ .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write,
+ .type = ARM_CP_NO_RAW },
+ { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY,
+ .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write,
+ .type = ARM_CP_NO_RAW },
+ { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY,
+ .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write,
+ .type = ARM_CP_NO_RAW },
+};
+
+static const ARMCPRegInfo tlbi_v7_cp_reginfo[] = {
+ /* 32 bit ITLB invalidates */
+ { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiall_write },
+ { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimva_write },
+ { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiasid_write },
+ /* 32 bit DTLB invalidates */
+ { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiall_write },
+ { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimva_write },
+ { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiasid_write },
+ /* 32 bit TLB invalidates */
+ { .name = "TLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiall_write },
+ { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimva_write },
+ { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbiasid_write },
+ { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimvaa_write },
+};
+
+static const ARMCPRegInfo tlbi_v7mp_cp_reginfo[] = {
+ /* 32 bit TLB invalidates, Inner Shareable */
+ { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbiall_is_write },
+ { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbimva_is_write },
+ { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbiasid_is_write },
+ { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbimvaa_is_write },
+};
+
+static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
+ /* AArch32 TLB invalidate last level of translation table walk */
+ { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbimva_is_write },
+ { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
+ .writefn = tlbimvaa_is_write },
+ { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimva_write },
+ { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
+ .writefn = tlbimvaa_write },
+ { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbimva_hyp_write },
+ { .name = "TLBIMVALHIS",
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbimva_hyp_is_write },
+ { .name = "TLBIIPAS2",
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbiipas2_hyp_write },
+ { .name = "TLBIIPAS2IS",
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbiipas2is_hyp_write },
+ { .name = "TLBIIPAS2L",
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbiipas2_hyp_write },
+ { .name = "TLBIIPAS2LIS",
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
+ .writefn = tlbiipas2is_hyp_write },
+};
+
+void define_tlb_insn_regs(ARMCPU *cpu)
+{
+ CPUARMState *env = &cpu->env;
+
+ if (!arm_feature(env, ARM_FEATURE_V7)) {
+ define_arm_cp_regs(cpu, tlbi_not_v7_cp_reginfo);
+ } else {
+ define_arm_cp_regs(cpu, tlbi_v7_cp_reginfo);
+ }
+ if (arm_feature(env, ARM_FEATURE_V7MP) &&
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
+ define_arm_cp_regs(cpu, tlbi_v7mp_cp_reginfo);
+ }
+ if (arm_feature(env, ARM_FEATURE_V8)) {
+ define_arm_cp_regs(cpu, tlbi_v8_cp_reginfo);
+ }
+}
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
'op_helper.c',
'tlb_helper.c',
'vec_helper.c',
+ 'tlb-insns.c',
))

arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
--
2.34.1
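
Patch 1 above only quotes the tail of the new tlb-insns.c; the
define_tlb_insn_regs() hook it introduces is expected to be called once
per CPU from the existing cpreg registration path in target/arm/helper.c.
A minimal sketch of that wiring, assuming the call site is
register_cp_regs_for_features() (the actual call site is in the earlier,
unquoted part of the patch):

    /*
     * Sketch only: assumes define_tlb_insn_regs() is invoked from
     * register_cp_regs_for_features() in target/arm/helper.c; the
     * real call site is in the part of the patch not quoted here.
     */
    void register_cp_regs_for_features(ARMCPU *cpu)
    {
        /* ... existing define_arm_cp_regs() calls for other groups ... */

        /* TLBI insns are now registered from their own source file. */
        define_tlb_insn_regs(cpu);
    }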
Move the AArch32 TLBI insns for AArch32 EL2 to tlbi_insn_helper.c.
To keep this as an obviously pure code-movement, we retain the
same condition for registering tlbi_el2_cp_reginfo that we use for
el2_cp_reginfo. We'll be able to simplify this condition later,
since the need to define the reginfo for EL3-without-EL2 doesn't
apply for the TLBI ops specifically.

This move brings all the uses of tlbimva_hyp_write() and
tlbimva_hyp_is_write() back into a single file, so we can move those
also, and make them file-local again.

The helper alle1_tlbmask() is an exception to the pattern that we
only need to make these functions global temporarily, because once
this refactoring is complete it will be called by both code in
helper.c (vttbr_write()) and by code in tlb-insns.c. We therefore
put its prototype in a permanent home in internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-3-peter.maydell@linaro.org
---
target/arm/cpregs.h | 4 --
target/arm/internals.h | 6 +++
target/arm/helper.c | 74 +--------------------------------
target/arm/tcg/tlb-insns.c | 85 ++++++++++++++++++++++++++++++++++++++
4 files changed, 92 insertions(+), 77 deletions(-)

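To make the pattern described above concrete, here is a condensed sketch
of where the declarations land after this step (drawn from the hunks
below, which are authoritative):

    /* target/arm/tcg/tlb-insns.c: helpers that were only global during
     * the transition become file-local again.
     */
    static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value);
    static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                     uint64_t value);

    /* target/arm/internals.h: permanent prototype, because helper.c
     * (vttbr_write()) and tlb-insns.c both keep calling it.
     */
    int alle1_tlbmask(CPUARMState *env);
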
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpregs.h
31
+++ b/target/arm/cpregs.h
32
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
33
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
34
bool isread);
35
bool tlb_force_broadcast(CPUARMState *env);
36
-void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
37
- uint64_t value);
38
-void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
39
- uint64_t value);
40
41
#endif /* TARGET_ARM_CPREGS_H */
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
42
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/internals.h
44
--- a/target/arm/internals.h
18
+++ b/target/arm/internals.h
45
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
46
@@ -XXX,XX +XXX,XX @@ uint64_t gt_get_countervalue(CPUARMState *env);
20
return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
47
* and CNTVCT_EL0 (this will be either 0 or the value of CNTVOFF_EL2).
21
}
48
*/
22
49
uint64_t gt_virt_cnt_offset(CPUARMState *env);
23
-/* Return the TCR controlling this translation regime */
50
+
24
-static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
51
+/*
25
+/* Return the value of the TCR controlling this translation regime */
52
+ * Return mask of ARMMMUIdxBit values corresponding to an "invalidate
26
+static inline uint64_t regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
53
+ * all EL1" scope; this covers stage 1 and stage 2.
27
{
54
+ */
28
if (mmu_idx == ARMMMUIdx_Stage2) {
55
+int alle1_tlbmask(CPUARMState *env);
29
- return &env->cp15.vtcr_el2;
56
#endif
30
+ return env->cp15.vtcr_el2.raw_tcr;
31
}
32
if (mmu_idx == ARMMMUIdx_Stage2_S) {
33
/*
34
* Note: Secure stage 2 nominally shares fields from VTCR_EL2, but
35
* those are not currently used by QEMU, so just return VSTCR_EL2.
36
*/
37
- return &env->cp15.vstcr_el2;
38
+ return env->cp15.vstcr_el2.raw_tcr;
39
}
40
- return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
41
-}
42
-
43
-/* Return the raw value of the TCR controlling this translation regime */
44
-static inline uint64_t regime_tcr_value(CPUARMState *env, ARMMMUIdx mmu_idx)
45
-{
46
- return regime_tcr(env, mmu_idx)->raw_tcr;
47
+ return env->cp15.tcr_el[regime_el(env, mmu_idx)].raw_tcr;
48
}
49
50
/**
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
57
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/helper.c
59
--- a/target/arm/helper.c
54
+++ b/target/arm/helper.c
60
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
61
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
56
static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
62
raw_write(env, ri, value);
57
uint64_t addr)
63
}
64
65
-static int alle1_tlbmask(CPUARMState *env)
66
+int alle1_tlbmask(CPUARMState *env)
58
{
67
{
59
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
68
/*
60
+ uint64_t tcr = regime_tcr(env, mmu_idx);
69
* Note that the 'ALL' scope must invalidate both stage 1 and
61
int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
70
@@ -XXX,XX +XXX,XX @@ bool tlb_force_broadcast(CPUARMState *env)
62
int select = extract64(addr, 55, 1);
71
return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
63
72
}
64
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
73
65
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
74
-static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
66
ARMMMUIdx mmu_idx, bool data)
75
- uint64_t value)
76
-{
77
- CPUState *cs = env_cpu(env);
78
-
79
- tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
80
-}
81
-
82
-static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
- uint64_t value)
84
-{
85
- CPUState *cs = env_cpu(env);
86
-
87
- tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
88
-}
89
-
90
-
91
-static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
- uint64_t value)
93
-{
94
- CPUState *cs = env_cpu(env);
95
-
96
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
97
-}
98
-
99
-static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
100
- uint64_t value)
101
-{
102
- CPUState *cs = env_cpu(env);
103
-
104
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
105
-}
106
-
107
-void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
108
- uint64_t value)
109
-{
110
- CPUState *cs = env_cpu(env);
111
- uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
112
-
113
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
114
-}
115
-
116
-void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
117
- uint64_t value)
118
-{
119
- CPUState *cs = env_cpu(env);
120
- uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
121
-
122
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
123
- ARMMMUIdxBit_E2);
124
-}
125
-
126
static const ARMCPRegInfo cp_reginfo[] = {
127
/*
128
* Define the secure and non-secure FCSE identifier CP registers
129
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
130
{ .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
131
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
132
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
133
- { .name = "TLBIALLNSNH",
134
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
135
- .type = ARM_CP_NO_RAW, .access = PL2_W,
136
- .writefn = tlbiall_nsnh_write },
137
- { .name = "TLBIALLNSNHIS",
138
- .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
139
- .type = ARM_CP_NO_RAW, .access = PL2_W,
140
- .writefn = tlbiall_nsnh_is_write },
141
- { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
142
- .type = ARM_CP_NO_RAW, .access = PL2_W,
143
- .writefn = tlbiall_hyp_write },
144
- { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
145
- .type = ARM_CP_NO_RAW, .access = PL2_W,
146
- .writefn = tlbiall_hyp_is_write },
147
- { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
148
- .type = ARM_CP_NO_RAW, .access = PL2_W,
149
- .writefn = tlbimva_hyp_write },
150
- { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
151
- .type = ARM_CP_NO_RAW, .access = PL2_W,
152
- .writefn = tlbimva_hyp_is_write },
153
{ .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
154
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
155
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
156
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/target/arm/tcg/tlb-insns.c
159
+++ b/target/arm/tcg/tlb-insns.c
160
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
}
162
}
163
164
+static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
165
+ uint64_t value)
166
+{
167
+ CPUState *cs = env_cpu(env);
168
+ uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
169
+
170
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
171
+}
172
+
173
+static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
174
+ uint64_t value)
175
+{
176
+ CPUState *cs = env_cpu(env);
177
+ uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
178
+
179
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
180
+ ARMMMUIdxBit_E2);
181
+}
182
+
183
static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
184
uint64_t value)
67
{
185
{
68
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
186
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
69
+ uint64_t tcr = regime_tcr(env, mmu_idx);
187
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
70
bool epd, hpd, using16k, using64k, tsz_oob, ds;
188
}
71
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
189
72
ARMCPU *cpu = env_archcpu(env);
190
+static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
73
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
191
+ uint64_t value)
192
+{
193
+ CPUState *cs = env_cpu(env);
194
+
195
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
196
+}
197
+
198
+static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
199
+ uint64_t value)
200
+{
201
+ CPUState *cs = env_cpu(env);
202
+
203
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
204
+}
205
+
206
+
207
+static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
208
+ uint64_t value)
209
+{
210
+ CPUState *cs = env_cpu(env);
211
+
212
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
213
+}
214
+
215
+static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
216
+ uint64_t value)
217
+{
218
+ CPUState *cs = env_cpu(env);
219
+
220
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
221
+}
222
+
223
static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
224
/*
225
* MMU TLB control. Note that the wildcarding means we cover not just
226
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
227
.writefn = tlbiipas2is_hyp_write },
228
};
229
230
+static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
231
+ { .name = "TLBIALLNSNH",
232
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
233
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
234
+ .writefn = tlbiall_nsnh_write },
235
+ { .name = "TLBIALLNSNHIS",
236
+ .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
237
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
238
+ .writefn = tlbiall_nsnh_is_write },
239
+ { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
240
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
241
+ .writefn = tlbiall_hyp_write },
242
+ { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
243
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
244
+ .writefn = tlbiall_hyp_is_write },
245
+ { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
246
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
247
+ .writefn = tlbimva_hyp_write },
248
+ { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
249
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
250
+ .writefn = tlbimva_hyp_is_write },
251
+};
252
+
253
void define_tlb_insn_regs(ARMCPU *cpu)
74
{
254
{
75
CPUARMTBFlags flags = {};
255
CPUARMState *env = &cpu->env;
76
ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
256
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
77
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
257
if (arm_feature(env, ARM_FEATURE_V8)) {
78
+ uint64_t tcr = regime_tcr(env, mmu_idx);
258
define_arm_cp_regs(cpu, tlbi_v8_cp_reginfo);
79
uint64_t sctlr;
80
int tbii, tbid;
81
82
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/target/arm/ptw.c
85
+++ b/target/arm/ptw.c
86
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
87
uint32_t *table, uint32_t address)
88
{
89
/* Note that we can only get here for an AArch32 PL0/PL1 lookup */
90
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
91
+ uint64_t tcr = regime_tcr(env, mmu_idx);
92
int maskshift = extract32(tcr, 0, 3);
93
uint32_t mask = ~(((uint32_t)0xffffffffu) >> maskshift);
94
uint32_t base_mask;
95
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
96
static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
97
ARMMMUIdx mmu_idx)
98
{
99
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
100
+ uint64_t tcr = regime_tcr(env, mmu_idx);
101
uint32_t el = regime_el(env, mmu_idx);
102
int select, tsz;
103
bool epd, hpd;
104
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
105
uint32_t attrs;
106
int32_t stride;
107
int addrsize, inputsize, outputsize;
108
- uint64_t tcr = regime_tcr_value(env, mmu_idx);
109
+ uint64_t tcr = regime_tcr(env, mmu_idx);
110
int ap, ns, xn, pxn;
111
uint32_t el = regime_el(env, mmu_idx);
112
uint64_t descaddrmask;
113
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/tlb_helper.c
116
+++ b/target/arm/tlb_helper.c
117
@@ -XXX,XX +XXX,XX @@ bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
118
return true;
119
}
259
}
120
if (arm_feature(env, ARM_FEATURE_LPAE)
260
+ /*
121
- && (regime_tcr_value(env, mmu_idx) & TTBCR_EAE)) {
261
+ * We retain the existing logic for when to register these TLBI
122
+ && (regime_tcr(env, mmu_idx) & TTBCR_EAE)) {
262
+ * ops (i.e. matching the condition for el2_cp_reginfo[] in
123
return true;
263
+ * helper.c), but we will be able to simplify this later.
124
}
264
+ */
125
return false;
265
+ if (arm_feature(env, ARM_FEATURE_EL2)
266
+ || (arm_feature(env, ARM_FEATURE_EL3)
267
+ && arm_feature(env, ARM_FEATURE_V8))) {
268
+ define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
269
+ }
270
}
126
--
271
--
127
2.25.1
272
2.34.1
New patch
1
Move the AArch64 TLBI insns that are declared in v8_cp_reginfo[]
2
into tlb-insns.c.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241210160452.2427965-4-peter.maydell@linaro.org
7
---
8
target/arm/cpregs.h | 11 +++
9
target/arm/helper.c | 182 +++----------------------------------
10
target/arm/tcg/tlb-insns.c | 160 ++++++++++++++++++++++++++++++++
11
3 files changed, 182 insertions(+), 171 deletions(-)
12
13
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
18
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
19
bool isread);
20
bool tlb_force_broadcast(CPUARMState *env);
21
+int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
22
+ uint64_t addr);
23
+int vae1_tlbbits(CPUARMState *env, uint64_t addr);
24
+int vae1_tlbmask(CPUARMState *env);
25
+int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
26
+void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
+ uint64_t value);
28
+void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
+ uint64_t value);
30
+void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
+ uint64_t value);
32
33
#endif /* TARGET_ARM_CPREGS_H */
34
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/helper.c
37
+++ b/target/arm/helper.c
38
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
39
* Page D4-1736 (DDI0487A.b)
40
*/
41
42
-static int vae1_tlbmask(CPUARMState *env)
43
+int vae1_tlbmask(CPUARMState *env)
44
{
45
uint64_t hcr = arm_hcr_el2_eff(env);
46
uint16_t mask;
47
@@ -XXX,XX +XXX,XX @@ static int vae2_tlbmask(CPUARMState *env)
48
}
49
50
/* Return 56 if TBI is enabled, 64 otherwise. */
51
-static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
52
- uint64_t addr)
53
+int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
54
+ uint64_t addr)
55
{
56
uint64_t tcr = regime_tcr(env, mmu_idx);
57
int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
58
@@ -XXX,XX +XXX,XX @@ static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
59
return (tbi >> select) & 1 ? 56 : 64;
60
}
61
62
-static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
63
+int vae1_tlbbits(CPUARMState *env, uint64_t addr)
64
{
65
uint64_t hcr = arm_hcr_el2_eff(env);
66
ARMMMUIdx mmu_idx;
67
@@ -XXX,XX +XXX,XX @@ static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
68
return tlbbits_for_regime(env, mmu_idx, addr);
69
}
70
71
-static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
72
- uint64_t value)
73
+void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
74
+ uint64_t value)
75
{
76
CPUState *cs = env_cpu(env);
77
int mask = vae1_tlbmask(env);
78
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
79
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
80
}
81
82
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
- uint64_t value)
84
-{
85
- CPUState *cs = env_cpu(env);
86
- int mask = vae1_tlbmask(env);
87
-
88
- if (tlb_force_broadcast(env)) {
89
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
90
- } else {
91
- tlb_flush_by_mmuidx(cs, mask);
92
- }
93
-}
94
-
95
static int e2_tlbmask(CPUARMState *env)
96
{
97
return (ARMMMUIdxBit_E20_0 |
98
@@ -XXX,XX +XXX,XX @@ static int e2_tlbmask(CPUARMState *env)
99
ARMMMUIdxBit_E2);
100
}
101
102
-static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
- uint64_t value)
104
-{
105
- CPUState *cs = env_cpu(env);
106
- int mask = alle1_tlbmask(env);
107
-
108
- tlb_flush_by_mmuidx(cs, mask);
109
-}
110
-
111
static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
uint64_t value)
113
{
114
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
115
tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
116
}
117
118
-static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
119
- uint64_t value)
120
+void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
121
+ uint64_t value)
122
{
123
CPUState *cs = env_cpu(env);
124
int mask = alle1_tlbmask(env);
125
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
126
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
127
}
128
129
-static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
130
- uint64_t value)
131
+void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
132
+ uint64_t value)
133
{
134
CPUState *cs = env_cpu(env);
135
int mask = vae1_tlbmask(env);
136
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
137
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
138
}
139
140
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
141
- uint64_t value)
142
-{
143
- /*
144
- * Invalidate by VA, EL1&0 (AArch64 version).
145
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
146
- * since we don't support flush-for-specific-ASID-only or
147
- * flush-last-level-only.
148
- */
149
- CPUState *cs = env_cpu(env);
150
- int mask = vae1_tlbmask(env);
151
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
152
- int bits = vae1_tlbbits(env, pageaddr);
153
-
154
- if (tlb_force_broadcast(env)) {
155
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
156
- } else {
157
- tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
158
- }
159
-}
160
-
161
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
162
uint64_t value)
163
{
164
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
165
ARMMMUIdxBit_E3, bits);
166
}
167
168
-static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
169
+int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
170
{
171
/*
172
* The MSB of value is the NS field, which only applies if SEL2
173
@@ -XXX,XX +XXX,XX @@ static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
174
: ARMMMUIdxBit_Stage2);
175
}
176
177
-static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
178
- uint64_t value)
179
-{
180
- CPUState *cs = env_cpu(env);
181
- int mask = ipas2e1_tlbmask(env, value);
182
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
183
-
184
- if (tlb_force_broadcast(env)) {
185
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
186
- } else {
187
- tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
188
- }
189
-}
190
-
191
-static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
192
- uint64_t value)
193
-{
194
- CPUState *cs = env_cpu(env);
195
- int mask = ipas2e1_tlbmask(env, value);
196
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
197
-
198
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
199
-}
200
-
201
#ifdef TARGET_AARCH64
202
typedef struct {
203
uint64_t base;
204
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
205
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
206
.fgt = FGT_DCCISW,
207
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
208
- /* TLBI operations */
209
- { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
210
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
211
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
212
- .fgt = FGT_TLBIVMALLE1IS,
213
- .writefn = tlbi_aa64_vmalle1is_write },
214
- { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
215
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
216
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
217
- .fgt = FGT_TLBIVAE1IS,
218
- .writefn = tlbi_aa64_vae1is_write },
219
- { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
220
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
221
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
222
- .fgt = FGT_TLBIASIDE1IS,
223
- .writefn = tlbi_aa64_vmalle1is_write },
224
- { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
225
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
226
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
227
- .fgt = FGT_TLBIVAAE1IS,
228
- .writefn = tlbi_aa64_vae1is_write },
229
- { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
230
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
231
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
232
- .fgt = FGT_TLBIVALE1IS,
233
- .writefn = tlbi_aa64_vae1is_write },
234
- { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
235
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
236
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
237
- .fgt = FGT_TLBIVAALE1IS,
238
- .writefn = tlbi_aa64_vae1is_write },
239
- { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
240
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
241
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
242
- .fgt = FGT_TLBIVMALLE1,
243
- .writefn = tlbi_aa64_vmalle1_write },
244
- { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
245
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
246
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
247
- .fgt = FGT_TLBIVAE1,
248
- .writefn = tlbi_aa64_vae1_write },
249
- { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
250
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
251
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
252
- .fgt = FGT_TLBIASIDE1,
253
- .writefn = tlbi_aa64_vmalle1_write },
254
- { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
255
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
256
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
257
- .fgt = FGT_TLBIVAAE1,
258
- .writefn = tlbi_aa64_vae1_write },
259
- { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
260
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
261
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
262
- .fgt = FGT_TLBIVALE1,
263
- .writefn = tlbi_aa64_vae1_write },
264
- { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
265
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
266
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
267
- .fgt = FGT_TLBIVAALE1,
268
- .writefn = tlbi_aa64_vae1_write },
269
- { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
270
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
271
- .access = PL2_W, .type = ARM_CP_NO_RAW,
272
- .writefn = tlbi_aa64_ipas2e1is_write },
273
- { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
274
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
275
- .access = PL2_W, .type = ARM_CP_NO_RAW,
276
- .writefn = tlbi_aa64_ipas2e1is_write },
277
- { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
278
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
279
- .access = PL2_W, .type = ARM_CP_NO_RAW,
280
- .writefn = tlbi_aa64_alle1is_write },
281
- { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64,
282
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6,
283
- .access = PL2_W, .type = ARM_CP_NO_RAW,
284
- .writefn = tlbi_aa64_alle1is_write },
285
- { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
286
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
287
- .access = PL2_W, .type = ARM_CP_NO_RAW,
288
- .writefn = tlbi_aa64_ipas2e1_write },
289
- { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
290
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
291
- .access = PL2_W, .type = ARM_CP_NO_RAW,
292
- .writefn = tlbi_aa64_ipas2e1_write },
293
- { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
294
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
295
- .access = PL2_W, .type = ARM_CP_NO_RAW,
296
- .writefn = tlbi_aa64_alle1_write },
297
- { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64,
298
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6,
299
- .access = PL2_W, .type = ARM_CP_NO_RAW,
300
- .writefn = tlbi_aa64_alle1is_write },
301
#ifndef CONFIG_USER_ONLY
302
/* 64 bit address translation operations */
303
{ .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
304
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
305
index XXXXXXX..XXXXXXX 100644
306
--- a/target/arm/tcg/tlb-insns.c
307
+++ b/target/arm/tcg/tlb-insns.c
308
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
309
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
310
}
311
312
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
313
+ uint64_t value)
314
+{
315
+ CPUState *cs = env_cpu(env);
316
+ int mask = vae1_tlbmask(env);
317
+
318
+ if (tlb_force_broadcast(env)) {
319
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
320
+ } else {
321
+ tlb_flush_by_mmuidx(cs, mask);
322
+ }
323
+}
324
+
325
+static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
326
+ uint64_t value)
327
+{
328
+ CPUState *cs = env_cpu(env);
329
+ int mask = alle1_tlbmask(env);
330
+
331
+ tlb_flush_by_mmuidx(cs, mask);
332
+}
333
+
334
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
335
+ uint64_t value)
336
+{
337
+ /*
338
+ * Invalidate by VA, EL1&0 (AArch64 version).
339
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
340
+ * since we don't support flush-for-specific-ASID-only or
341
+ * flush-last-level-only.
342
+ */
343
+ CPUState *cs = env_cpu(env);
344
+ int mask = vae1_tlbmask(env);
345
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
346
+ int bits = vae1_tlbbits(env, pageaddr);
347
+
348
+ if (tlb_force_broadcast(env)) {
349
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
350
+ } else {
351
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
352
+ }
353
+}
354
+
355
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
356
+ uint64_t value)
357
+{
358
+ CPUState *cs = env_cpu(env);
359
+ int mask = ipas2e1_tlbmask(env, value);
360
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
361
+
362
+ if (tlb_force_broadcast(env)) {
363
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
364
+ } else {
365
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
366
+ }
367
+}
368
+
369
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
370
+ uint64_t value)
371
+{
372
+ CPUState *cs = env_cpu(env);
373
+ int mask = ipas2e1_tlbmask(env, value);
374
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
375
+
376
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
377
+}
378
+
379
static const ARMCPRegInfo tlbi_not_v7_cp_reginfo[] = {
380
/*
381
* MMU TLB control. Note that the wildcarding means we cover not just
382
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_v8_cp_reginfo[] = {
383
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
384
.type = ARM_CP_NO_RAW, .access = PL2_W,
385
.writefn = tlbiipas2is_hyp_write },
386
+ /* AArch64 TLBI operations */
387
+ { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
388
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
389
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
390
+ .fgt = FGT_TLBIVMALLE1IS,
391
+ .writefn = tlbi_aa64_vmalle1is_write },
392
+ { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
393
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
394
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
395
+ .fgt = FGT_TLBIVAE1IS,
396
+ .writefn = tlbi_aa64_vae1is_write },
397
+ { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
398
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
399
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
400
+ .fgt = FGT_TLBIASIDE1IS,
401
+ .writefn = tlbi_aa64_vmalle1is_write },
402
+ { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
403
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
404
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
405
+ .fgt = FGT_TLBIVAAE1IS,
406
+ .writefn = tlbi_aa64_vae1is_write },
407
+ { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
408
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
409
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
410
+ .fgt = FGT_TLBIVALE1IS,
411
+ .writefn = tlbi_aa64_vae1is_write },
412
+ { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
413
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
414
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
415
+ .fgt = FGT_TLBIVAALE1IS,
416
+ .writefn = tlbi_aa64_vae1is_write },
417
+ { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
418
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
419
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
420
+ .fgt = FGT_TLBIVMALLE1,
421
+ .writefn = tlbi_aa64_vmalle1_write },
422
+ { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
423
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
424
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
425
+ .fgt = FGT_TLBIVAE1,
426
+ .writefn = tlbi_aa64_vae1_write },
427
+ { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
428
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
429
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
430
+ .fgt = FGT_TLBIASIDE1,
431
+ .writefn = tlbi_aa64_vmalle1_write },
432
+ { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
433
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
434
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
435
+ .fgt = FGT_TLBIVAAE1,
436
+ .writefn = tlbi_aa64_vae1_write },
437
+ { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
438
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
439
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
440
+ .fgt = FGT_TLBIVALE1,
441
+ .writefn = tlbi_aa64_vae1_write },
442
+ { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
443
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
444
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
445
+ .fgt = FGT_TLBIVAALE1,
446
+ .writefn = tlbi_aa64_vae1_write },
447
+ { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
448
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
449
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
450
+ .writefn = tlbi_aa64_ipas2e1is_write },
451
+ { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
452
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
453
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
454
+ .writefn = tlbi_aa64_ipas2e1is_write },
455
+ { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
456
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
457
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
458
+ .writefn = tlbi_aa64_alle1is_write },
459
+ { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64,
460
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6,
461
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
462
+ .writefn = tlbi_aa64_alle1is_write },
463
+ { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
464
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
465
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
466
+ .writefn = tlbi_aa64_ipas2e1_write },
467
+ { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
468
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
469
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
470
+ .writefn = tlbi_aa64_ipas2e1_write },
471
+ { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
472
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
473
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
474
+ .writefn = tlbi_aa64_alle1_write },
475
+ { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64,
476
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6,
477
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
478
+ .writefn = tlbi_aa64_alle1is_write },
479
};
480
481
static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
482
--
483
2.34.1
diff view generated by jsdifflib
New patch
1
Move the AArch64 EL2 TLBI insn definitions that were
2
in el2_cp_reginfo[] across to tlb-insns.c.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241210160452.2427965-5-peter.maydell@linaro.org
7
---
8
target/arm/cpregs.h | 7 +++++
9
target/arm/helper.c | 61 ++++----------------------------------
10
target/arm/tcg/tlb-insns.c | 49 ++++++++++++++++++++++++++++++
11
3 files changed, 62 insertions(+), 55 deletions(-)
12
13
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ bool tlb_force_broadcast(CPUARMState *env);
18
int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
19
uint64_t addr);
20
int vae1_tlbbits(CPUARMState *env, uint64_t addr);
21
+int vae2_tlbbits(CPUARMState *env, uint64_t addr);
22
int vae1_tlbmask(CPUARMState *env);
23
+int vae2_tlbmask(CPUARMState *env);
24
int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
25
+int e2_tlbmask(CPUARMState *env);
26
void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
uint64_t value);
28
void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
29
uint64_t value);
30
void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
uint64_t value);
32
+void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
33
+ uint64_t value);
34
+void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
+ uint64_t value);
36
37
#endif /* TARGET_ARM_CPREGS_H */
38
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper.c
41
+++ b/target/arm/helper.c
42
@@ -XXX,XX +XXX,XX @@ int vae1_tlbmask(CPUARMState *env)
43
return mask;
44
}
45
46
-static int vae2_tlbmask(CPUARMState *env)
47
+int vae2_tlbmask(CPUARMState *env)
48
{
49
uint64_t hcr = arm_hcr_el2_eff(env);
50
uint16_t mask;
51
@@ -XXX,XX +XXX,XX @@ int vae1_tlbbits(CPUARMState *env, uint64_t addr)
52
return tlbbits_for_regime(env, mmu_idx, addr);
53
}
54
55
-static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
56
+int vae2_tlbbits(CPUARMState *env, uint64_t addr)
57
{
58
uint64_t hcr = arm_hcr_el2_eff(env);
59
ARMMMUIdx mmu_idx;
60
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
61
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
62
}
63
64
-static int e2_tlbmask(CPUARMState *env)
65
+int e2_tlbmask(CPUARMState *env)
66
{
67
return (ARMMMUIdxBit_E20_0 |
68
ARMMMUIdxBit_E20_2 |
69
@@ -XXX,XX +XXX,XX @@ static int e2_tlbmask(CPUARMState *env)
70
ARMMMUIdxBit_E2);
71
}
72
73
-static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
74
- uint64_t value)
75
-{
76
- CPUState *cs = env_cpu(env);
77
- int mask = e2_tlbmask(env);
78
-
79
- tlb_flush_by_mmuidx(cs, mask);
80
-}
81
-
82
static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
uint64_t value)
84
{
85
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
86
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
87
}
88
89
-static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
- uint64_t value)
91
+void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
+ uint64_t value)
93
{
94
CPUState *cs = env_cpu(env);
95
int mask = e2_tlbmask(env);
96
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
97
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
98
}
99
100
-static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
101
- uint64_t value)
102
-{
103
- /*
104
- * Invalidate by VA, EL2
105
- * Currently handles both VAE2 and VALE2, since we don't support
106
- * flush-last-level-only.
107
- */
108
- CPUState *cs = env_cpu(env);
109
- int mask = vae2_tlbmask(env);
110
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
111
- int bits = vae2_tlbbits(env, pageaddr);
112
-
113
- tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
114
-}
115
-
116
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
117
uint64_t value)
118
{
119
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
120
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
121
}
122
123
-static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
125
uint64_t value)
126
{
127
CPUState *cs = env_cpu(env);
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
129
{ .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
130
.access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
131
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
132
- { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
133
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
134
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
135
- .writefn = tlbi_aa64_alle2_write },
136
- { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
137
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
138
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
139
- .writefn = tlbi_aa64_vae2_write },
140
- { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
141
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
142
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
143
- .writefn = tlbi_aa64_vae2_write },
144
- { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
145
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
146
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
147
- .writefn = tlbi_aa64_alle2is_write },
148
- { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
149
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
150
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
151
- .writefn = tlbi_aa64_vae2is_write },
152
- { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
153
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
154
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
155
- .writefn = tlbi_aa64_vae2is_write },
156
#ifndef CONFIG_USER_ONLY
157
/*
158
* Unlike the other EL2-related AT operations, these must
159
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/target/arm/tcg/tlb-insns.c
162
+++ b/target/arm/tcg/tlb-insns.c
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
tlb_flush_by_mmuidx(cs, mask);
165
}
166
167
+static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
168
+ uint64_t value)
169
+{
170
+ CPUState *cs = env_cpu(env);
171
+ int mask = e2_tlbmask(env);
172
+
173
+ tlb_flush_by_mmuidx(cs, mask);
174
+}
175
+
176
+static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
177
+ uint64_t value)
178
+{
179
+ /*
180
+ * Invalidate by VA, EL2
181
+ * Currently handles both VAE2 and VALE2, since we don't support
182
+ * flush-last-level-only.
183
+ */
184
+ CPUState *cs = env_cpu(env);
185
+ int mask = vae2_tlbmask(env);
186
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
187
+ int bits = vae2_tlbbits(env, pageaddr);
188
+
189
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
190
+}
191
+
192
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
193
uint64_t value)
194
{
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
196
{ .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
197
.type = ARM_CP_NO_RAW, .access = PL2_W,
198
.writefn = tlbimva_hyp_is_write },
199
+ { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
200
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
201
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
202
+ .writefn = tlbi_aa64_alle2_write },
203
+ { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
204
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
205
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
206
+ .writefn = tlbi_aa64_vae2_write },
207
+ { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
208
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
209
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
210
+ .writefn = tlbi_aa64_vae2_write },
211
+ { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
212
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
213
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
214
+ .writefn = tlbi_aa64_alle2is_write },
215
+ { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
216
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
217
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
218
+ .writefn = tlbi_aa64_vae2is_write },
219
+ { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
220
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
221
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
222
+ .writefn = tlbi_aa64_vae2is_write },
223
};
224
225
void define_tlb_insn_regs(ARMCPU *cpu)
226
--
227
2.34.1
New patch
1
Move the AArch64 EL3 TLBI insns from el3_cp_reginfo[] across
2
to tlb-insns.c.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241210160452.2427965-6-peter.maydell@linaro.org
7
---
8
target/arm/cpregs.h | 4 +++
9
target/arm/helper.c | 56 +++-----------------------------------
10
target/arm/tcg/tlb-insns.c | 54 ++++++++++++++++++++++++++++++++++++
11
3 files changed, 62 insertions(+), 52 deletions(-)
12
13
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpregs.h
16
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
uint64_t value);
19
void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
20
uint64_t value);
21
+void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
22
+ uint64_t value);
23
+void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
+ uint64_t value);
25
26
#endif /* TARGET_ARM_CPREGS_H */
27
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/helper.c
30
+++ b/target/arm/helper.c
31
@@ -XXX,XX +XXX,XX @@ int e2_tlbmask(CPUARMState *env)
32
ARMMMUIdxBit_E2);
33
}
34
35
-static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
36
- uint64_t value)
37
-{
38
- ARMCPU *cpu = env_archcpu(env);
39
- CPUState *cs = CPU(cpu);
40
-
41
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
42
-}
43
-
44
void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
45
uint64_t value)
46
{
47
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
49
}
50
51
-static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
52
- uint64_t value)
53
+void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
+ uint64_t value)
55
{
56
CPUState *cs = env_cpu(env);
57
58
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
59
}
60
61
-static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
62
- uint64_t value)
63
-{
64
- /*
65
- * Invalidate by VA, EL3
66
- * Currently handles both VAE3 and VALE3, since we don't support
67
- * flush-last-level-only.
68
- */
69
- ARMCPU *cpu = env_archcpu(env);
70
- CPUState *cs = CPU(cpu);
71
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
72
-
73
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
74
-}
75
-
76
void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
77
uint64_t value)
78
{
79
@@ -XXX,XX +XXX,XX @@ void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
80
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
81
}
82
83
-static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
84
- uint64_t value)
85
+void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
86
+ uint64_t value)
87
{
88
CPUState *cs = env_cpu(env);
89
uint64_t pageaddr = sextract64(value << 12, 0, 56);
90
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
91
.opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 1,
92
.access = PL3_RW, .type = ARM_CP_CONST,
93
.resetvalue = 0 },
94
- { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64,
95
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0,
96
- .access = PL3_W, .type = ARM_CP_NO_RAW,
97
- .writefn = tlbi_aa64_alle3is_write },
98
- { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64,
99
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1,
100
- .access = PL3_W, .type = ARM_CP_NO_RAW,
101
- .writefn = tlbi_aa64_vae3is_write },
102
- { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64,
103
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5,
104
- .access = PL3_W, .type = ARM_CP_NO_RAW,
105
- .writefn = tlbi_aa64_vae3is_write },
106
- { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64,
107
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0,
108
- .access = PL3_W, .type = ARM_CP_NO_RAW,
109
- .writefn = tlbi_aa64_alle3_write },
110
- { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64,
111
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1,
112
- .access = PL3_W, .type = ARM_CP_NO_RAW,
113
- .writefn = tlbi_aa64_vae3_write },
114
- { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64,
115
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
116
- .access = PL3_W, .type = ARM_CP_NO_RAW,
117
- .writefn = tlbi_aa64_vae3_write },
118
};
119
120
#ifndef CONFIG_USER_ONLY
121
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/tcg/tlb-insns.c
124
+++ b/target/arm/tcg/tlb-insns.c
125
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
126
tlb_flush_by_mmuidx(cs, mask);
127
}
128
129
+static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
130
+ uint64_t value)
131
+{
132
+ ARMCPU *cpu = env_archcpu(env);
133
+ CPUState *cs = CPU(cpu);
134
+
135
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
136
+}
137
+
138
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
139
uint64_t value)
140
{
141
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
142
tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
143
}
144
145
+static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
146
+ uint64_t value)
147
+{
148
+ /*
149
+ * Invalidate by VA, EL3
150
+ * Currently handles both VAE3 and VALE3, since we don't support
151
+ * flush-last-level-only.
152
+ */
153
+ ARMCPU *cpu = env_archcpu(env);
154
+ CPUState *cs = CPU(cpu);
155
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
156
+
157
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
158
+}
159
+
160
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el2_cp_reginfo[] = {
164
.writefn = tlbi_aa64_vae2is_write },
165
};
166
167
+static const ARMCPRegInfo tlbi_el3_cp_reginfo[] = {
168
+ { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64,
169
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0,
170
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
171
+ .writefn = tlbi_aa64_alle3is_write },
172
+ { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64,
173
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1,
174
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
175
+ .writefn = tlbi_aa64_vae3is_write },
176
+ { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64,
177
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5,
178
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
179
+ .writefn = tlbi_aa64_vae3is_write },
180
+ { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64,
181
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0,
182
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
183
+ .writefn = tlbi_aa64_alle3_write },
184
+ { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64,
185
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1,
186
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
187
+ .writefn = tlbi_aa64_vae3_write },
188
+ { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64,
189
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
190
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
191
+ .writefn = tlbi_aa64_vae3_write },
192
+};
193
+
194
void define_tlb_insn_regs(ARMCPU *cpu)
195
{
196
CPUARMState *env = &cpu->env;
197
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
198
&& arm_feature(env, ARM_FEATURE_V8))) {
199
define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
200
}
201
+ if (arm_feature(env, ARM_FEATURE_EL3)) {
202
+ define_arm_cp_regs(cpu, tlbi_el3_cp_reginfo);
203
+ }
204
}
205
--
206
2.34.1
Move the TLBI invalidate-range insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-7-peter.maydell@linaro.org
---
 target/arm/cpregs.h        |   2 +
 target/arm/helper.c        | 330 +------------------------------------
 target/arm/tcg/tlb-insns.c | 329 ++++++++++++++++++++++++++++++++++++
 3 files changed, 333 insertions(+), 328 deletions(-)

In regime_tcr() we return the appropriate TCR register for the
translation regime. For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2. When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.

Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org
---
 target/arm/cpu.h       | 19 +++++++++++++++++++
 target/arm/internals.h | 22 +++++++++++++++++++---
 2 files changed, 38 insertions(+), 3 deletions(-)
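The merge is essentially a bit splice of the two registers: fields that
only exist in VTCR_EL2 are taken from there, and everything else comes
from VSTCR_EL2. A rough sketch of the idea (illustrative only; the patch
itself uses a VTCR_SHARED_FIELD_MASK constant built from the new FIELD()
definitions, and the helper name here is hypothetical, not QEMU code):

    /* Illustrative sketch, not the actual QEMU function. */
    static uint64_t secure_stage2_tcr(uint64_t vstcr_el2, uint64_t vtcr_el2,
                                      uint64_t shared_mask)
    {
        uint64_t v = vstcr_el2 & ~shared_mask; /* fields VSTCR_EL2 defines itself */
        v |= vtcr_el2 & shared_mask;           /* fields present only in VTCR_EL2 */
        return v;
    }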
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
22
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
14
--- a/target/arm/cpregs.h
24
+++ b/target/arm/cpu.h
15
+++ b/target/arm/cpregs.h
25
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
16
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
26
#define TTBCR_SH1 (1U << 28)
17
bool isread);
27
#define TTBCR_EAE (1U << 31)
18
CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
28
19
bool isread);
29
+FIELD(VTCR, T0SZ, 0, 6)
20
+CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
30
+FIELD(VTCR, SL0, 6, 2)
21
+ bool isread);
31
+FIELD(VTCR, IRGN0, 8, 2)
22
bool tlb_force_broadcast(CPUARMState *env);
32
+FIELD(VTCR, ORGN0, 10, 2)
23
int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
33
+FIELD(VTCR, SH0, 12, 2)
24
uint64_t addr);
34
+FIELD(VTCR, TG0, 14, 2)
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
+FIELD(VTCR, PS, 16, 3)
36
+FIELD(VTCR, VS, 19, 1)
37
+FIELD(VTCR, HA, 21, 1)
38
+FIELD(VTCR, HD, 22, 1)
39
+FIELD(VTCR, HWU59, 25, 1)
40
+FIELD(VTCR, HWU60, 26, 1)
41
+FIELD(VTCR, HWU61, 27, 1)
42
+FIELD(VTCR, HWU62, 28, 1)
43
+FIELD(VTCR, NSW, 29, 1)
44
+FIELD(VTCR, NSA, 30, 1)
45
+FIELD(VTCR, DS, 32, 1)
46
+FIELD(VTCR, SL2, 33, 1)
47
+
48
/* Bit definitions for ARMv8 SPSR (PSTATE) format.
49
* Only these are valid when in AArch64 mode; in
50
* AArch32 mode SPSRs are basically CPSR-format.
51
diff --git a/target/arm/internals.h b/target/arm/internals.h
52
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/internals.h
27
--- a/target/arm/helper.c
54
+++ b/target/arm/internals.h
28
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
29
@@ -XXX,XX +XXX,XX @@ CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
56
return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
30
31
#ifdef TARGET_AARCH64
32
/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
33
-static CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
34
- bool isread)
35
+CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
36
+ bool isread)
37
{
38
if (arm_current_el(env) == 1 &&
39
(arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
40
@@ -XXX,XX +XXX,XX @@ int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
41
: ARMMMUIdxBit_Stage2);
57
}
42
}
58
43
59
+/*
44
-#ifdef TARGET_AARCH64
60
+ * These are the fields in VTCR_EL2 which affect both the Secure stage 2
45
-typedef struct {
61
+ * and the Non-Secure stage 2 translation regimes (and hence which are
46
- uint64_t base;
62
+ * not present in VSTCR_EL2).
47
- uint64_t length;
63
+ */
48
-} TLBIRange;
64
+#define VTCR_SHARED_FIELD_MASK \
49
-
65
+ (R_VTCR_IRGN0_MASK | R_VTCR_ORGN0_MASK | R_VTCR_SH0_MASK | \
50
-static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
66
+ R_VTCR_PS_MASK | R_VTCR_VS_MASK | R_VTCR_HA_MASK | R_VTCR_HD_MASK | \
51
-{
67
+ R_VTCR_DS_MASK)
52
- /*
68
+
53
- * Note that the TLBI range TG field encoding differs from both
69
/* Return the value of the TCR controlling this translation regime */
54
- * TG0 and TG1 encodings.
70
static inline uint64_t regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
55
- */
56
- switch (tg) {
57
- case 1:
58
- return Gran4K;
59
- case 2:
60
- return Gran16K;
61
- case 3:
62
- return Gran64K;
63
- default:
64
- return GranInvalid;
65
- }
66
-}
67
-
68
-static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
69
- uint64_t value)
70
-{
71
- unsigned int page_size_granule, page_shift, num, scale, exponent;
72
- /* Extract one bit to represent the va selector in use. */
73
- uint64_t select = sextract64(value, 36, 1);
74
- ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true, false);
75
- TLBIRange ret = { };
76
- ARMGranuleSize gran;
77
-
78
- page_size_granule = extract64(value, 46, 2);
79
- gran = tlbi_range_tg_to_gran_size(page_size_granule);
80
-
81
- /* The granule encoded in value must match the granule in use. */
82
- if (gran != param.gran) {
83
- qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
84
- page_size_granule);
85
- return ret;
86
- }
87
-
88
- page_shift = arm_granule_bits(gran);
89
- num = extract64(value, 39, 5);
90
- scale = extract64(value, 44, 2);
91
- exponent = (5 * scale) + 1;
92
-
93
- ret.length = (num + 1) << (exponent + page_shift);
94
-
95
- if (param.select) {
96
- ret.base = sextract64(value, 0, 37);
97
- } else {
98
- ret.base = extract64(value, 0, 37);
99
- }
100
- if (param.ds) {
101
- /*
102
- * With DS=1, BaseADDR is always shifted 16 so that it is able
103
- * to address all 52 va bits. The input address is perforce
104
- * aligned on a 64k boundary regardless of translation granule.
105
- */
106
- page_shift = 16;
107
- }
108
- ret.base <<= page_shift;
109
-
110
- return ret;
111
-}
112
-
113
-static void do_rvae_write(CPUARMState *env, uint64_t value,
114
- int idxmap, bool synced)
115
-{
116
- ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap);
117
- TLBIRange range;
118
- int bits;
119
-
120
- range = tlbi_aa64_get_range(env, one_idx, value);
121
- bits = tlbbits_for_regime(env, one_idx, range.base);
122
-
123
- if (synced) {
124
- tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env),
125
- range.base,
126
- range.length,
127
- idxmap,
128
- bits);
129
- } else {
130
- tlb_flush_range_by_mmuidx(env_cpu(env), range.base,
131
- range.length, idxmap, bits);
132
- }
133
-}
134
-
135
-static void tlbi_aa64_rvae1_write(CPUARMState *env,
136
- const ARMCPRegInfo *ri,
137
- uint64_t value)
138
-{
139
- /*
140
- * Invalidate by VA range, EL1&0.
141
- * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
142
- * since we don't support flush-for-specific-ASID-only or
143
- * flush-last-level-only.
144
- */
145
-
146
- do_rvae_write(env, value, vae1_tlbmask(env),
147
- tlb_force_broadcast(env));
148
-}
149
-
150
-static void tlbi_aa64_rvae1is_write(CPUARMState *env,
151
- const ARMCPRegInfo *ri,
152
- uint64_t value)
153
-{
154
- /*
155
- * Invalidate by VA range, Inner/Outer Shareable EL1&0.
156
- * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
157
- * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
158
- * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
159
- * shareable specific flushes.
160
- */
161
-
162
- do_rvae_write(env, value, vae1_tlbmask(env), true);
163
-}
164
-
165
-static void tlbi_aa64_rvae2_write(CPUARMState *env,
166
- const ARMCPRegInfo *ri,
167
- uint64_t value)
168
-{
169
- /*
170
- * Invalidate by VA range, EL2.
171
- * Currently handles all of RVAE2 and RVALE2,
172
- * since we don't support flush-for-specific-ASID-only or
173
- * flush-last-level-only.
174
- */
175
-
176
- do_rvae_write(env, value, vae2_tlbmask(env),
177
- tlb_force_broadcast(env));
178
-
179
-
180
-}
181
-
182
-static void tlbi_aa64_rvae2is_write(CPUARMState *env,
183
- const ARMCPRegInfo *ri,
184
- uint64_t value)
185
-{
186
- /*
187
- * Invalidate by VA range, Inner/Outer Shareable, EL2.
188
- * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS,
189
- * since we don't support flush-for-specific-ASID-only,
190
- * flush-last-level-only or inner/outer shareable specific flushes.
191
- */
192
-
193
- do_rvae_write(env, value, vae2_tlbmask(env), true);
194
-
195
-}
196
-
197
-static void tlbi_aa64_rvae3_write(CPUARMState *env,
198
- const ARMCPRegInfo *ri,
199
- uint64_t value)
200
-{
201
- /*
202
- * Invalidate by VA range, EL3.
203
- * Currently handles all of RVAE3 and RVALE3,
204
- * since we don't support flush-for-specific-ASID-only or
205
- * flush-last-level-only.
206
- */
207
-
208
- do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
209
-}
210
-
211
-static void tlbi_aa64_rvae3is_write(CPUARMState *env,
212
- const ARMCPRegInfo *ri,
213
- uint64_t value)
214
-{
215
- /*
216
- * Invalidate by VA range, EL3, Inner/Outer Shareable.
217
- * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS,
218
- * since we don't support flush-for-specific-ASID-only,
219
- * flush-last-level-only or inner/outer specific flushes.
220
- */
221
-
222
- do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
223
-}
224
-
225
-static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
226
- uint64_t value)
227
-{
228
- do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
229
- tlb_force_broadcast(env));
230
-}
231
-
232
-static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
233
- const ARMCPRegInfo *ri,
234
- uint64_t value)
235
-{
236
- do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
237
-}
238
-#endif
239
-
240
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
241
bool isread)
71
{
242
{
72
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
243
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
244
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
245
};
246
247
-static const ARMCPRegInfo tlbirange_reginfo[] = {
248
- { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
249
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
250
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
251
- .fgt = FGT_TLBIRVAE1IS,
252
- .writefn = tlbi_aa64_rvae1is_write },
253
- { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
254
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
255
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
256
- .fgt = FGT_TLBIRVAAE1IS,
257
- .writefn = tlbi_aa64_rvae1is_write },
258
- { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
259
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
260
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
261
- .fgt = FGT_TLBIRVALE1IS,
262
- .writefn = tlbi_aa64_rvae1is_write },
263
- { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
264
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
265
- .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
266
- .fgt = FGT_TLBIRVAALE1IS,
267
- .writefn = tlbi_aa64_rvae1is_write },
268
- { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
269
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
270
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
271
- .fgt = FGT_TLBIRVAE1OS,
272
- .writefn = tlbi_aa64_rvae1is_write },
273
- { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
274
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
275
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
276
- .fgt = FGT_TLBIRVAAE1OS,
277
- .writefn = tlbi_aa64_rvae1is_write },
278
- { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
279
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
280
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
281
- .fgt = FGT_TLBIRVALE1OS,
282
- .writefn = tlbi_aa64_rvae1is_write },
283
- { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
284
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
285
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
286
- .fgt = FGT_TLBIRVAALE1OS,
287
- .writefn = tlbi_aa64_rvae1is_write },
288
- { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
289
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
290
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
291
- .fgt = FGT_TLBIRVAE1,
292
- .writefn = tlbi_aa64_rvae1_write },
293
- { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
294
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
295
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
296
- .fgt = FGT_TLBIRVAAE1,
297
- .writefn = tlbi_aa64_rvae1_write },
298
- { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
299
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
300
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
301
- .fgt = FGT_TLBIRVALE1,
302
- .writefn = tlbi_aa64_rvae1_write },
303
- { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
304
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
305
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
306
- .fgt = FGT_TLBIRVAALE1,
307
- .writefn = tlbi_aa64_rvae1_write },
308
- { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
309
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
310
- .access = PL2_W, .type = ARM_CP_NO_RAW,
311
- .writefn = tlbi_aa64_ripas2e1is_write },
312
- { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
313
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
314
- .access = PL2_W, .type = ARM_CP_NO_RAW,
315
- .writefn = tlbi_aa64_ripas2e1is_write },
316
- { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
317
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
318
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
319
- .writefn = tlbi_aa64_rvae2is_write },
320
- { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
321
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
322
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
323
- .writefn = tlbi_aa64_rvae2is_write },
324
- { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
325
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
326
- .access = PL2_W, .type = ARM_CP_NO_RAW,
327
- .writefn = tlbi_aa64_ripas2e1_write },
328
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
329
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
330
- .access = PL2_W, .type = ARM_CP_NO_RAW,
331
- .writefn = tlbi_aa64_ripas2e1_write },
332
- { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
333
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
334
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
335
- .writefn = tlbi_aa64_rvae2is_write },
336
- { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
337
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
338
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
339
- .writefn = tlbi_aa64_rvae2is_write },
340
- { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
341
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
342
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
343
- .writefn = tlbi_aa64_rvae2_write },
344
- { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
345
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
346
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
347
- .writefn = tlbi_aa64_rvae2_write },
348
- { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
349
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
350
- .access = PL3_W, .type = ARM_CP_NO_RAW,
351
- .writefn = tlbi_aa64_rvae3is_write },
352
- { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
353
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
354
- .access = PL3_W, .type = ARM_CP_NO_RAW,
355
- .writefn = tlbi_aa64_rvae3is_write },
356
- { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
357
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
358
- .access = PL3_W, .type = ARM_CP_NO_RAW,
359
- .writefn = tlbi_aa64_rvae3is_write },
360
- { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
361
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
362
- .access = PL3_W, .type = ARM_CP_NO_RAW,
363
- .writefn = tlbi_aa64_rvae3is_write },
364
- { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
365
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
366
- .access = PL3_W, .type = ARM_CP_NO_RAW,
367
- .writefn = tlbi_aa64_rvae3_write },
368
- { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
369
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
370
- .access = PL3_W, .type = ARM_CP_NO_RAW,
371
- .writefn = tlbi_aa64_rvae3_write },
372
-};
373
-
374
static const ARMCPRegInfo tlbios_reginfo[] = {
375
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
376
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
377
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
378
if (cpu_isar_feature(aa64_rndr, cpu)) {
379
define_arm_cp_regs(cpu, rndr_reginfo);
73
}
380
}
74
if (mmu_idx == ARMMMUIdx_Stage2_S) {
381
- if (cpu_isar_feature(aa64_tlbirange, cpu)) {
75
/*
382
- define_arm_cp_regs(cpu, tlbirange_reginfo);
76
- * Note: Secure stage 2 nominally shares fields from VTCR_EL2, but
383
- }
77
- * those are not currently used by QEMU, so just return VSTCR_EL2.
384
if (cpu_isar_feature(aa64_tlbios, cpu)) {
78
+ * Secure stage 2 shares fields from VTCR_EL2. We merge those
385
define_arm_cp_regs(cpu, tlbios_reginfo);
79
+ * in with the VSTCR_EL2 value to synthesize a single VTCR_EL2 format
80
+ * value so the callers don't need to special case this.
81
+ *
82
+ * If a future architecture change defines bits in VSTCR_EL2 that
83
+ * overlap with these VTCR_EL2 fields we may need to revisit this.
84
*/
85
- return env->cp15.vstcr_el2;
86
+ uint64_t v = env->cp15.vstcr_el2 & ~VTCR_SHARED_FIELD_MASK;
87
+ v |= env->cp15.vtcr_el2 & VTCR_SHARED_FIELD_MASK;
88
+ return v;
89
}
386
}
90
return env->cp15.tcr_el[regime_el(env, mmu_idx)];
387
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
388
index XXXXXXX..XXXXXXX 100644
389
--- a/target/arm/tcg/tlb-insns.c
390
+++ b/target/arm/tcg/tlb-insns.c
391
@@ -XXX,XX +XXX,XX @@
392
* SPDX-License-Identifier: GPL-2.0-or-later
393
*/
394
#include "qemu/osdep.h"
395
+#include "qemu/log.h"
396
#include "exec/exec-all.h"
397
#include "cpu.h"
398
#include "internals.h"
399
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbi_el3_cp_reginfo[] = {
400
.writefn = tlbi_aa64_vae3_write },
401
};
402
403
+#ifdef TARGET_AARCH64
404
+typedef struct {
405
+ uint64_t base;
406
+ uint64_t length;
407
+} TLBIRange;
408
+
409
+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
410
+{
411
+ /*
412
+ * Note that the TLBI range TG field encoding differs from both
413
+ * TG0 and TG1 encodings.
414
+ */
415
+ switch (tg) {
416
+ case 1:
417
+ return Gran4K;
418
+ case 2:
419
+ return Gran16K;
420
+ case 3:
421
+ return Gran64K;
422
+ default:
423
+ return GranInvalid;
424
+ }
425
+}
426
+
427
+static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
428
+ uint64_t value)
429
+{
430
+ unsigned int page_size_granule, page_shift, num, scale, exponent;
431
+ /* Extract one bit to represent the va selector in use. */
432
+ uint64_t select = sextract64(value, 36, 1);
433
+ ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true, false);
434
+ TLBIRange ret = { };
435
+ ARMGranuleSize gran;
436
+
437
+ page_size_granule = extract64(value, 46, 2);
438
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);
439
+
440
+ /* The granule encoded in value must match the granule in use. */
441
+ if (gran != param.gran) {
442
+ qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
443
+ page_size_granule);
444
+ return ret;
445
+ }
446
+
447
+ page_shift = arm_granule_bits(gran);
448
+ num = extract64(value, 39, 5);
449
+ scale = extract64(value, 44, 2);
450
+ exponent = (5 * scale) + 1;
451
+
452
+ ret.length = (num + 1) << (exponent + page_shift);
453
+
454
+ if (param.select) {
455
+ ret.base = sextract64(value, 0, 37);
456
+ } else {
457
+ ret.base = extract64(value, 0, 37);
458
+ }
459
+ if (param.ds) {
460
+ /*
461
+ * With DS=1, BaseADDR is always shifted 16 so that it is able
462
+ * to address all 52 va bits. The input address is perforce
463
+ * aligned on a 64k boundary regardless of translation granule.
464
+ */
465
+ page_shift = 16;
466
+ }
467
+ ret.base <<= page_shift;
468
+
469
+ return ret;
470
+}
471
+
472
+static void do_rvae_write(CPUARMState *env, uint64_t value,
473
+ int idxmap, bool synced)
474
+{
475
+ ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap);
476
+ TLBIRange range;
477
+ int bits;
478
+
479
+ range = tlbi_aa64_get_range(env, one_idx, value);
480
+ bits = tlbbits_for_regime(env, one_idx, range.base);
481
+
482
+ if (synced) {
483
+ tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env),
484
+ range.base,
485
+ range.length,
486
+ idxmap,
487
+ bits);
488
+ } else {
489
+ tlb_flush_range_by_mmuidx(env_cpu(env), range.base,
490
+ range.length, idxmap, bits);
491
+ }
492
+}
493
+
494
+static void tlbi_aa64_rvae1_write(CPUARMState *env,
495
+ const ARMCPRegInfo *ri,
496
+ uint64_t value)
497
+{
498
+ /*
499
+ * Invalidate by VA range, EL1&0.
500
+ * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1,
501
+ * since we don't support flush-for-specific-ASID-only or
502
+ * flush-last-level-only.
503
+ */
504
+
505
+ do_rvae_write(env, value, vae1_tlbmask(env),
506
+ tlb_force_broadcast(env));
507
+}
508
+
509
+static void tlbi_aa64_rvae1is_write(CPUARMState *env,
510
+ const ARMCPRegInfo *ri,
511
+ uint64_t value)
512
+{
513
+ /*
514
+ * Invalidate by VA range, Inner/Outer Shareable EL1&0.
515
+ * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS,
516
+ * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support
517
+ * flush-for-specific-ASID-only, flush-last-level-only or inner/outer
518
+ * shareable specific flushes.
519
+ */
520
+
521
+ do_rvae_write(env, value, vae1_tlbmask(env), true);
522
+}
523
+
524
+static void tlbi_aa64_rvae2_write(CPUARMState *env,
525
+ const ARMCPRegInfo *ri,
526
+ uint64_t value)
527
+{
528
+ /*
529
+ * Invalidate by VA range, EL2.
530
+ * Currently handles all of RVAE2 and RVALE2,
531
+ * since we don't support flush-for-specific-ASID-only or
532
+ * flush-last-level-only.
533
+ */
534
+
535
+ do_rvae_write(env, value, vae2_tlbmask(env),
536
+ tlb_force_broadcast(env));
537
+
538
+
539
+}
540
+
541
+static void tlbi_aa64_rvae2is_write(CPUARMState *env,
542
+ const ARMCPRegInfo *ri,
543
+ uint64_t value)
544
+{
545
+ /*
546
+ * Invalidate by VA range, Inner/Outer Shareable, EL2.
547
+ * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS,
548
+ * since we don't support flush-for-specific-ASID-only,
549
+ * flush-last-level-only or inner/outer shareable specific flushes.
550
+ */
551
+
552
+ do_rvae_write(env, value, vae2_tlbmask(env), true);
553
+
554
+}
555
+
556
+static void tlbi_aa64_rvae3_write(CPUARMState *env,
557
+ const ARMCPRegInfo *ri,
558
+ uint64_t value)
559
+{
560
+ /*
561
+ * Invalidate by VA range, EL3.
562
+ * Currently handles all of RVAE3 and RVALE3,
563
+ * since we don't support flush-for-specific-ASID-only or
564
+ * flush-last-level-only.
565
+ */
566
+
567
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
568
+}
569
+
570
+static void tlbi_aa64_rvae3is_write(CPUARMState *env,
571
+ const ARMCPRegInfo *ri,
572
+ uint64_t value)
573
+{
574
+ /*
575
+ * Invalidate by VA range, EL3, Inner/Outer Shareable.
576
+ * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS,
577
+ * since we don't support flush-for-specific-ASID-only,
578
+ * flush-last-level-only or inner/outer specific flushes.
579
+ */
580
+
581
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
582
+}
583
+
584
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
585
+ uint64_t value)
586
+{
587
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
588
+ tlb_force_broadcast(env));
589
+}
590
+
591
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
592
+ const ARMCPRegInfo *ri,
593
+ uint64_t value)
594
+{
595
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
596
+}
597
+
598
+static const ARMCPRegInfo tlbirange_reginfo[] = {
599
+ { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
600
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
601
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
602
+ .fgt = FGT_TLBIRVAE1IS,
603
+ .writefn = tlbi_aa64_rvae1is_write },
604
+ { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
605
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
606
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
607
+ .fgt = FGT_TLBIRVAAE1IS,
608
+ .writefn = tlbi_aa64_rvae1is_write },
609
+ { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
610
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
611
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
612
+ .fgt = FGT_TLBIRVALE1IS,
613
+ .writefn = tlbi_aa64_rvae1is_write },
614
+ { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
615
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
616
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
617
+ .fgt = FGT_TLBIRVAALE1IS,
618
+ .writefn = tlbi_aa64_rvae1is_write },
619
+ { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
620
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
621
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
622
+ .fgt = FGT_TLBIRVAE1OS,
623
+ .writefn = tlbi_aa64_rvae1is_write },
624
+ { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
625
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
626
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
627
+ .fgt = FGT_TLBIRVAAE1OS,
628
+ .writefn = tlbi_aa64_rvae1is_write },
629
+ { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
630
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
631
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
632
+ .fgt = FGT_TLBIRVALE1OS,
633
+ .writefn = tlbi_aa64_rvae1is_write },
634
+ { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
635
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
636
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
637
+ .fgt = FGT_TLBIRVAALE1OS,
638
+ .writefn = tlbi_aa64_rvae1is_write },
639
+ { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
640
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
641
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
642
+ .fgt = FGT_TLBIRVAE1,
643
+ .writefn = tlbi_aa64_rvae1_write },
644
+ { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
645
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
646
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
647
+ .fgt = FGT_TLBIRVAAE1,
648
+ .writefn = tlbi_aa64_rvae1_write },
649
+ { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
650
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
651
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
652
+ .fgt = FGT_TLBIRVALE1,
653
+ .writefn = tlbi_aa64_rvae1_write },
654
+ { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
655
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
656
+ .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
657
+ .fgt = FGT_TLBIRVAALE1,
658
+ .writefn = tlbi_aa64_rvae1_write },
659
+ { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
660
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
661
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
662
+ .writefn = tlbi_aa64_ripas2e1is_write },
663
+ { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
664
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
665
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
666
+ .writefn = tlbi_aa64_ripas2e1is_write },
667
+ { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
668
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
669
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
670
+ .writefn = tlbi_aa64_rvae2is_write },
671
+ { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
672
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
673
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
674
+ .writefn = tlbi_aa64_rvae2is_write },
675
+ { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
676
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
677
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
678
+ .writefn = tlbi_aa64_ripas2e1_write },
679
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
680
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
681
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
682
+ .writefn = tlbi_aa64_ripas2e1_write },
683
+ { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
684
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
685
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
686
+ .writefn = tlbi_aa64_rvae2is_write },
687
+ { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
688
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
689
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
690
+ .writefn = tlbi_aa64_rvae2is_write },
691
+ { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
692
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
693
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
694
+ .writefn = tlbi_aa64_rvae2_write },
695
+ { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
696
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
697
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
698
+ .writefn = tlbi_aa64_rvae2_write },
699
+ { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
700
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
701
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
702
+ .writefn = tlbi_aa64_rvae3is_write },
703
+ { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64,
704
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5,
705
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
706
+ .writefn = tlbi_aa64_rvae3is_write },
707
+ { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64,
708
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1,
709
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
710
+ .writefn = tlbi_aa64_rvae3is_write },
711
+ { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64,
712
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5,
713
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
714
+ .writefn = tlbi_aa64_rvae3is_write },
715
+ { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64,
716
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1,
717
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
718
+ .writefn = tlbi_aa64_rvae3_write },
719
+ { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64,
720
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
721
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
722
+ .writefn = tlbi_aa64_rvae3_write },
723
+};
724
+#endif
725
+
726
void define_tlb_insn_regs(ARMCPU *cpu)
727
{
728
CPUARMState *env = &cpu->env;
729
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
730
if (arm_feature(env, ARM_FEATURE_EL3)) {
731
define_arm_cp_regs(cpu, tlbi_el3_cp_reginfo);
732
}
733
+#ifdef TARGET_AARCH64
734
+ if (cpu_isar_feature(aa64_tlbirange, cpu)) {
735
+ define_arm_cp_regs(cpu, tlbirange_reginfo);
736
+ }
737
+#endif
91
}
738
}
--
2.25.1

--
2.34.1
Move the TLBI OS insns across to tlb-insns.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-8-peter.maydell@linaro.org
---
 target/arm/helper.c        | 80 --------------------------------------
 target/arm/tcg/tlb-insns.c | 80 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+), 80 deletions(-)

We have a bug in our handling of accesses to the AArch32 VTCR
register on big-endian hosts: we were not adjusting the part of the
uint64_t field within TCR that the generated code would access. That
can be done with offsetoflow32(), by using an ARM_CP_STATE_BOTH cpreg
struct, or by defining a full set of read/write/reset functions --
the various other TCR cpreg structs used one or another of those
strategies, but for VTCR we did not, so on a big-endian host VTCR
accesses would touch the wrong half of the register.

Use offsetoflow32() in the VTCR register struct. This works even
though the field in the CPU struct is currently a struct TCR, because
the first field in that struct is the uint64_t raw_tcr.

None of the other TCR registers have this bug -- either they are
AArch64 only, or else they define resetfn, writefn, etc, and
expect to be passed the full struct pointer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
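The underlying issue is host byte order: a 32-bit view of a uint64_t
field sits at byte offset 0 on a little-endian host but at byte offset 4
on a big-endian one, which is what offsetoflow32() accounts for. A small
stand-alone illustration of the layout argument (not QEMU code):

    #include <stdint.h>

    union reg64 {
        uint64_t q;    /* the full 64-bit register value */
        uint32_t w[2]; /* its two 32-bit halves, in host byte order */
    };

    /*
     * With u.q = 0x1122334455667788ULL:
     *   little-endian host: bits [31:0] (0x55667788) live in w[0], offset 0
     *   big-endian host:    bits [31:0] (0x55667788) live in w[1], offset 4
     * so a 32-bit fieldoffset into a 64-bit register must be chosen per host.
     */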
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
15
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
30
.cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
16
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
31
.type = ARM_CP_ALIAS,
17
};
32
.access = PL2_RW, .accessfn = access_el3_aa32ns,
18
33
- .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) },
19
-static const ARMCPRegInfo tlbios_reginfo[] = {
34
+ .fieldoffset = offsetoflow32(CPUARMState, cp15.vtcr_el2) },
20
- { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
35
{ .name = "VTCR_EL2", .state = ARM_CP_STATE_AA64,
21
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
36
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
22
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
37
.access = PL2_RW,
23
- .fgt = FGT_TLBIVMALLE1OS,
24
- .writefn = tlbi_aa64_vmalle1is_write },
25
- { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
26
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
27
- .fgt = FGT_TLBIVAE1OS,
28
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
29
- .writefn = tlbi_aa64_vae1is_write },
30
- { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
31
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
32
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
33
- .fgt = FGT_TLBIASIDE1OS,
34
- .writefn = tlbi_aa64_vmalle1is_write },
35
- { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
36
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
37
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
38
- .fgt = FGT_TLBIVAAE1OS,
39
- .writefn = tlbi_aa64_vae1is_write },
40
- { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
41
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
42
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
43
- .fgt = FGT_TLBIVALE1OS,
44
- .writefn = tlbi_aa64_vae1is_write },
45
- { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
46
- .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
47
- .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
48
- .fgt = FGT_TLBIVAALE1OS,
49
- .writefn = tlbi_aa64_vae1is_write },
50
- { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
51
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
52
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
53
- .writefn = tlbi_aa64_alle2is_write },
54
- { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
55
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
56
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
57
- .writefn = tlbi_aa64_vae2is_write },
58
- { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
59
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
60
- .access = PL2_W, .type = ARM_CP_NO_RAW,
61
- .writefn = tlbi_aa64_alle1is_write },
62
- { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
63
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
64
- .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
65
- .writefn = tlbi_aa64_vae2is_write },
66
- { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
67
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
68
- .access = PL2_W, .type = ARM_CP_NO_RAW,
69
- .writefn = tlbi_aa64_alle1is_write },
70
- { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
71
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
72
- .access = PL2_W, .type = ARM_CP_NOP },
73
- { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
74
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
75
- .access = PL2_W, .type = ARM_CP_NOP },
76
- { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
77
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
78
- .access = PL2_W, .type = ARM_CP_NOP },
79
- { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
80
- .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
81
- .access = PL2_W, .type = ARM_CP_NOP },
82
- { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
83
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
84
- .access = PL3_W, .type = ARM_CP_NO_RAW,
85
- .writefn = tlbi_aa64_alle3is_write },
86
- { .name = "TLBI_VAE3OS", .state = ARM_CP_STATE_AA64,
87
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 1,
88
- .access = PL3_W, .type = ARM_CP_NO_RAW,
89
- .writefn = tlbi_aa64_vae3is_write },
90
- { .name = "TLBI_VALE3OS", .state = ARM_CP_STATE_AA64,
91
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
92
- .access = PL3_W, .type = ARM_CP_NO_RAW,
93
- .writefn = tlbi_aa64_vae3is_write },
94
-};
95
-
96
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
97
{
98
Error *err = NULL;
99
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
100
if (cpu_isar_feature(aa64_rndr, cpu)) {
101
define_arm_cp_regs(cpu, rndr_reginfo);
102
}
103
- if (cpu_isar_feature(aa64_tlbios, cpu)) {
104
- define_arm_cp_regs(cpu, tlbios_reginfo);
105
- }
106
/* Data Cache clean instructions up to PoP */
107
if (cpu_isar_feature(aa64_dcpop, cpu)) {
108
define_one_arm_cp_reg(cpu, dcpop_reg);
109
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/tcg/tlb-insns.c
112
+++ b/target/arm/tcg/tlb-insns.c
113
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
114
.access = PL3_W, .type = ARM_CP_NO_RAW,
115
.writefn = tlbi_aa64_rvae3_write },
116
};
117
+
118
+static const ARMCPRegInfo tlbios_reginfo[] = {
119
+ { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
120
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
121
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
122
+ .fgt = FGT_TLBIVMALLE1OS,
123
+ .writefn = tlbi_aa64_vmalle1is_write },
124
+ { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
125
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
126
+ .fgt = FGT_TLBIVAE1OS,
127
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
128
+ .writefn = tlbi_aa64_vae1is_write },
129
+ { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
130
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
131
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
132
+ .fgt = FGT_TLBIASIDE1OS,
133
+ .writefn = tlbi_aa64_vmalle1is_write },
134
+ { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
135
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
136
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
137
+ .fgt = FGT_TLBIVAAE1OS,
138
+ .writefn = tlbi_aa64_vae1is_write },
139
+ { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
140
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
141
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
142
+ .fgt = FGT_TLBIVALE1OS,
143
+ .writefn = tlbi_aa64_vae1is_write },
144
+ { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
145
+ .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
146
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
147
+ .fgt = FGT_TLBIVAALE1OS,
148
+ .writefn = tlbi_aa64_vae1is_write },
149
+ { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
150
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
151
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
152
+ .writefn = tlbi_aa64_alle2is_write },
153
+ { .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
154
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
155
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
156
+ .writefn = tlbi_aa64_vae2is_write },
157
+ { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
158
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
159
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
160
+ .writefn = tlbi_aa64_alle1is_write },
161
+ { .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
162
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
163
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
164
+ .writefn = tlbi_aa64_vae2is_write },
165
+ { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
166
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
167
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
168
+ .writefn = tlbi_aa64_alle1is_write },
169
+ { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64,
170
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0,
171
+ .access = PL2_W, .type = ARM_CP_NOP },
172
+ { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64,
173
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3,
174
+ .access = PL2_W, .type = ARM_CP_NOP },
175
+ { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64,
176
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4,
177
+ .access = PL2_W, .type = ARM_CP_NOP },
178
+ { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64,
179
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7,
180
+ .access = PL2_W, .type = ARM_CP_NOP },
181
+ { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64,
182
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0,
183
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
184
+ .writefn = tlbi_aa64_alle3is_write },
185
+ { .name = "TLBI_VAE3OS", .state = ARM_CP_STATE_AA64,
186
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 1,
187
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
188
+ .writefn = tlbi_aa64_vae3is_write },
189
+ { .name = "TLBI_VALE3OS", .state = ARM_CP_STATE_AA64,
190
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
191
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
192
+ .writefn = tlbi_aa64_vae3is_write },
193
+};
194
#endif
195
196
void define_tlb_insn_regs(ARMCPU *cpu)
197
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
198
if (cpu_isar_feature(aa64_tlbirange, cpu)) {
199
define_arm_cp_regs(cpu, tlbirange_reginfo);
200
}
201
+ if (cpu_isar_feature(aa64_tlbios, cpu)) {
202
+ define_arm_cp_regs(cpu, tlbios_reginfo);
203
+ }
204
#endif
205
}
--
2.25.1

--
2.34.1
The remaining functions that we temporarily made global are now
used only from callsites in tlb-insns.c; move them across and
make them file-local again.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241210160452.2427965-9-peter.maydell@linaro.org
---
 target/arm/cpregs.h        |  34 ------
 target/arm/helper.c        | 220 -------------------------------------
 target/arm/tcg/tlb-insns.c | 220 +++++++++++++++++++++++++++++++++++++
 3 files changed, 220 insertions(+), 254 deletions(-)

The regime_tcr() function returns a pointer to a struct TCR
corresponding to the TCR controlling a translation regime. The
struct TCR has the raw value of the register, plus two fields mask
and base_mask which are used as a small optimization in the case of
32-bit short-descriptor lookups. Almost all callers of regime_tcr()
only want the raw register value. Define and use a new
regime_tcr_value() function which returns only the raw 64-bit
register value.

This is a preliminary to removing the 32-bit short descriptor
optimization -- it only saves a handful of bit operations, which is
tiny compared to the overhead of doing a page table walk at all, and
the TCR struct is awkward and makes fixing
https://gitlab.com/qemu-project/qemu/-/issues/1103 unnecessarily
difficult.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-2-peter.maydell@linaro.org
---
 target/arm/internals.h  | 6 ++++++
 target/arm/helper.c     | 6 +++---
 target/arm/ptw.c        | 8 ++++----
 target/arm/tlb_helper.c | 2 +-
 4 files changed, 14 insertions(+), 8 deletions(-)
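At the call sites the conversion is mechanical, as the hunks below show;
the pattern is simply:

    /* before: reach into the struct for the raw register value */
    uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;

    /* after: the new accessor returns the raw 64-bit value directly */
    uint64_t tcr = regime_tcr_value(env, mmu_idx);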
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
28
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/internals.h
16
--- a/target/arm/cpregs.h
30
+++ b/target/arm/internals.h
17
+++ b/target/arm/cpregs.h
31
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
32
return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
19
return ri->opc1 == 4 || ri->opc1 == 5;
33
}
20
}
34
21
35
+/* Return the raw value of the TCR controlling this translation regime */
22
-/*
36
+static inline uint64_t regime_tcr_value(CPUARMState *env, ARMMMUIdx mmu_idx)
23
- * Temporary declarations of functions until the move to tlb_insn_helper.c
37
+{
24
- * is complete and we can make the functions static again
38
+ return regime_tcr(env, mmu_idx)->raw_tcr;
25
- */
39
+}
26
-CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
40
+
27
- bool isread);
41
/**
28
-CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
42
* arm_num_brps: Return number of implemented breakpoints.
29
- bool isread);
43
* Note that the ID register BRPS field is "number of bps - 1",
30
-CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
31
- bool isread);
32
-bool tlb_force_broadcast(CPUARMState *env);
33
-int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
34
- uint64_t addr);
35
-int vae1_tlbbits(CPUARMState *env, uint64_t addr);
36
-int vae2_tlbbits(CPUARMState *env, uint64_t addr);
37
-int vae1_tlbmask(CPUARMState *env);
38
-int vae2_tlbmask(CPUARMState *env);
39
-int ipas2e1_tlbmask(CPUARMState *env, int64_t value);
40
-int e2_tlbmask(CPUARMState *env);
41
-void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
42
- uint64_t value);
43
-void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
44
- uint64_t value);
45
-void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
- uint64_t value);
47
-void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
- uint64_t value);
49
-void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
50
- uint64_t value);
51
-void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
52
- uint64_t value);
53
-void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
- uint64_t value);
55
-
56
#endif /* TARGET_ARM_CPREGS_H */
44
diff --git a/target/arm/helper.c b/target/arm/helper.c
57
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.c
59
--- a/target/arm/helper.c
47
+++ b/target/arm/helper.c
60
+++ b/target/arm/helper.c
48
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
61
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri,
49
static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
62
return CP_ACCESS_OK;
50
uint64_t addr)
63
}
51
{
64
52
- uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
65
-/* Check for traps from EL1 due to HCR_EL2.TTLB. */
53
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
66
-CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
54
int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
67
- bool isread)
55
int select = extract64(addr, 55, 1);
68
-{
56
69
- if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
57
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
70
- return CP_ACCESS_TRAP_EL2;
58
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
71
- }
59
ARMMMUIdx mmu_idx, bool data)
72
- return CP_ACCESS_OK;
60
{
73
-}
61
- uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
74
-
62
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
75
-/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
63
bool epd, hpd, using16k, using64k, tsz_oob, ds;
76
-CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
64
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
77
- bool isread)
78
-{
79
- if (arm_current_el(env) == 1 &&
80
- (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
81
- return CP_ACCESS_TRAP_EL2;
82
- }
83
- return CP_ACCESS_OK;
84
-}
85
-
86
-#ifdef TARGET_AARCH64
87
-/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
88
-CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
89
- bool isread)
90
-{
91
- if (arm_current_el(env) == 1 &&
92
- (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
93
- return CP_ACCESS_TRAP_EL2;
94
- }
95
- return CP_ACCESS_OK;
96
-}
97
-#endif
98
-
99
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
100
{
65
ARMCPU *cpu = env_archcpu(env);
101
ARMCPU *cpu = env_archcpu(env);
66
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
102
@@ -XXX,XX +XXX,XX @@ int alle1_tlbmask(CPUARMState *env)
67
{
103
ARMMMUIdxBit_Stage2_S);
68
CPUARMTBFlags flags = {};
104
}
69
ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
105
70
- uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
106
-/*
71
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
107
- * Non-IS variants of TLB operations are upgraded to
72
uint64_t sctlr;
108
- * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
73
int tbii, tbid;
109
- * force broadcast of these operations.
74
110
- */
75
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
111
-bool tlb_force_broadcast(CPUARMState *env)
112
-{
113
- return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
114
-}
115
-
116
static const ARMCPRegInfo cp_reginfo[] = {
117
/*
118
* Define the secure and non-secure FCSE identifier CP registers
119
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
120
return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
121
}
122
123
-/*
124
- * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
125
- * Page D4-1736 (DDI0487A.b)
126
- */
127
-
128
-int vae1_tlbmask(CPUARMState *env)
129
-{
130
- uint64_t hcr = arm_hcr_el2_eff(env);
131
- uint16_t mask;
132
-
133
- assert(arm_feature(env, ARM_FEATURE_AARCH64));
134
-
135
- if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
136
- mask = ARMMMUIdxBit_E20_2 |
137
- ARMMMUIdxBit_E20_2_PAN |
138
- ARMMMUIdxBit_E20_0;
139
- } else {
140
- /* This is AArch64 only, so we don't need to touch the EL30_x TLBs */
141
- mask = ARMMMUIdxBit_E10_1 |
142
- ARMMMUIdxBit_E10_1_PAN |
143
- ARMMMUIdxBit_E10_0;
144
- }
145
- return mask;
146
-}
147
-
148
-int vae2_tlbmask(CPUARMState *env)
149
-{
150
- uint64_t hcr = arm_hcr_el2_eff(env);
151
- uint16_t mask;
152
-
153
- if (hcr & HCR_E2H) {
154
- mask = ARMMMUIdxBit_E20_2 |
155
- ARMMMUIdxBit_E20_2_PAN |
156
- ARMMMUIdxBit_E20_0;
157
- } else {
158
- mask = ARMMMUIdxBit_E2;
159
- }
160
- return mask;
161
-}
162
-
163
-/* Return 56 if TBI is enabled, 64 otherwise. */
164
-int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
165
- uint64_t addr)
166
-{
167
- uint64_t tcr = regime_tcr(env, mmu_idx);
168
- int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
169
- int select = extract64(addr, 55, 1);
170
-
171
- return (tbi >> select) & 1 ? 56 : 64;
172
-}
173
-
174
-int vae1_tlbbits(CPUARMState *env, uint64_t addr)
175
-{
176
- uint64_t hcr = arm_hcr_el2_eff(env);
177
- ARMMMUIdx mmu_idx;
178
-
179
- assert(arm_feature(env, ARM_FEATURE_AARCH64));
180
-
181
- /* Only the regime of the mmu_idx below is significant. */
182
- if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
183
- mmu_idx = ARMMMUIdx_E20_0;
184
- } else {
185
- mmu_idx = ARMMMUIdx_E10_0;
186
- }
187
-
188
- return tlbbits_for_regime(env, mmu_idx, addr);
189
-}
190
-
191
-int vae2_tlbbits(CPUARMState *env, uint64_t addr)
192
-{
193
- uint64_t hcr = arm_hcr_el2_eff(env);
194
- ARMMMUIdx mmu_idx;
195
-
196
- /*
197
- * Only the regime of the mmu_idx below is significant.
198
- * Regime EL2&0 has two ranges with separate TBI configuration, while EL2
199
- * only has one.
200
- */
201
- if (hcr & HCR_E2H) {
202
- mmu_idx = ARMMMUIdx_E20_2;
203
- } else {
204
- mmu_idx = ARMMMUIdx_E2;
205
- }
206
-
207
- return tlbbits_for_regime(env, mmu_idx, addr);
208
-}
209
-
210
-void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
211
- uint64_t value)
212
-{
213
- CPUState *cs = env_cpu(env);
214
- int mask = vae1_tlbmask(env);
215
-
216
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
217
-}
218
-
219
-int e2_tlbmask(CPUARMState *env)
220
-{
221
- return (ARMMMUIdxBit_E20_0 |
222
- ARMMMUIdxBit_E20_2 |
223
- ARMMMUIdxBit_E20_2_PAN |
224
- ARMMMUIdxBit_E2);
225
-}
226
-
227
-void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
228
- uint64_t value)
229
-{
230
- CPUState *cs = env_cpu(env);
231
- int mask = alle1_tlbmask(env);
232
-
233
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
234
-}
235
-
236
-void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
237
- uint64_t value)
238
-{
239
- CPUState *cs = env_cpu(env);
240
- int mask = e2_tlbmask(env);
241
-
242
- tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
243
-}
244
-
245
-void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
246
- uint64_t value)
247
-{
248
- CPUState *cs = env_cpu(env);
249
-
250
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
251
-}
252
-
253
-void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
- uint64_t value)
255
-{
256
- CPUState *cs = env_cpu(env);
257
- int mask = vae1_tlbmask(env);
258
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
259
- int bits = vae1_tlbbits(env, pageaddr);
260
-
261
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
262
-}
263
-
264
-void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
265
- uint64_t value)
266
-{
267
- CPUState *cs = env_cpu(env);
268
- int mask = vae2_tlbmask(env);
269
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
270
- int bits = vae2_tlbbits(env, pageaddr);
271
-
272
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
273
-}
274
-
275
-void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
276
- uint64_t value)
277
-{
278
- CPUState *cs = env_cpu(env);
279
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
280
- int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
281
-
282
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
283
- ARMMMUIdxBit_E3, bits);
284
-}
285
-
286
-int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
287
-{
288
- /*
289
- * The MSB of value is the NS field, which only applies if SEL2
290
- * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
291
- */
292
- return (value >= 0
293
- && cpu_isar_feature(aa64_sel2, env_archcpu(env))
294
- && arm_is_secure_below_el3(env)
295
- ? ARMMMUIdxBit_Stage2_S
296
- : ARMMMUIdxBit_Stage2);
297
-}
298
-
299
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
300
bool isread)
301
{
302
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
76
index XXXXXXX..XXXXXXX 100644
303
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/ptw.c
304
--- a/target/arm/tcg/tlb-insns.c
78
+++ b/target/arm/ptw.c
305
+++ b/target/arm/tcg/tlb-insns.c
79
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
306
@@ -XXX,XX +XXX,XX @@
80
static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
307
#include "cpu-features.h"
81
ARMMMUIdx mmu_idx)
308
#include "cpregs.h"
82
{
309
83
- uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
310
+/* Check for traps from EL1 due to HCR_EL2.TTLB. */
84
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
311
+static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
85
uint32_t el = regime_el(env, mmu_idx);
312
+ bool isread)
86
int select, tsz;
313
+{
87
bool epd, hpd;
314
+ if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
315
+ return CP_ACCESS_TRAP_EL2;
89
uint32_t attrs;
316
+ }
90
int32_t stride;
317
+ return CP_ACCESS_OK;
91
int addrsize, inputsize, outputsize;
318
+}
92
- TCR *tcr = regime_tcr(env, mmu_idx);
319
+
93
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
320
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
94
int ap, ns, xn, pxn;
321
+static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
95
uint32_t el = regime_el(env, mmu_idx);
322
+ bool isread)
96
uint64_t descaddrmask;
323
+{
97
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
324
+ if (arm_current_el(env) == 1 &&
98
* For stage 2 translations the starting level is specified by the
325
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
99
* VTCR_EL2.SL0 field (whose interpretation depends on the page size)
326
+ return CP_ACCESS_TRAP_EL2;
100
*/
327
+ }
101
- uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2);
328
+ return CP_ACCESS_OK;
102
- uint32_t sl2 = extract64(tcr->raw_tcr, 33, 1);
329
+}
103
+ uint32_t sl0 = extract32(tcr, 6, 2);
330
+
104
+ uint32_t sl2 = extract64(tcr, 33, 1);
331
+#ifdef TARGET_AARCH64
105
uint32_t startlevel;
332
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
106
bool ok;
333
+static CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
107
334
+ bool isread)
108
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
335
+{
109
index XXXXXXX..XXXXXXX 100644
336
+ if (arm_current_el(env) == 1 &&
110
--- a/target/arm/tlb_helper.c
337
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
111
+++ b/target/arm/tlb_helper.c
338
+ return CP_ACCESS_TRAP_EL2;
112
@@ -XXX,XX +XXX,XX @@ bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
339
+ }
113
return true;
340
+ return CP_ACCESS_OK;
341
+}
342
+#endif
343
+
344
/* IS variants of TLB operations must affect all cores */
345
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
346
uint64_t value)
347
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
348
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
349
}
350
351
+/*
352
+ * Non-IS variants of TLB operations are upgraded to
353
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
354
+ * force broadcast of these operations.
355
+ */
356
+static bool tlb_force_broadcast(CPUARMState *env)
357
+{
358
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
359
+}
360
+
361
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
362
uint64_t value)
363
{
364
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
365
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
366
}
367
368
+/*
369
+ * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
370
+ * Page D4-1736 (DDI0487A.b)
371
+ */
372
+
373
+static int vae1_tlbmask(CPUARMState *env)
374
+{
375
+ uint64_t hcr = arm_hcr_el2_eff(env);
376
+ uint16_t mask;
377
+
378
+ assert(arm_feature(env, ARM_FEATURE_AARCH64));
379
+
380
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
381
+ mask = ARMMMUIdxBit_E20_2 |
382
+ ARMMMUIdxBit_E20_2_PAN |
383
+ ARMMMUIdxBit_E20_0;
384
+ } else {
385
+ /* This is AArch64 only, so we don't need to touch the EL30_x TLBs */
386
+ mask = ARMMMUIdxBit_E10_1 |
387
+ ARMMMUIdxBit_E10_1_PAN |
388
+ ARMMMUIdxBit_E10_0;
389
+ }
390
+ return mask;
391
+}
392
+
393
+static int vae2_tlbmask(CPUARMState *env)
394
+{
395
+ uint64_t hcr = arm_hcr_el2_eff(env);
396
+ uint16_t mask;
397
+
398
+ if (hcr & HCR_E2H) {
399
+ mask = ARMMMUIdxBit_E20_2 |
400
+ ARMMMUIdxBit_E20_2_PAN |
401
+ ARMMMUIdxBit_E20_0;
402
+ } else {
403
+ mask = ARMMMUIdxBit_E2;
404
+ }
405
+ return mask;
406
+}
407
+
408
+/* Return 56 if TBI is enabled, 64 otherwise. */
409
+static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
410
+ uint64_t addr)
411
+{
412
+ uint64_t tcr = regime_tcr(env, mmu_idx);
413
+ int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
414
+ int select = extract64(addr, 55, 1);
415
+
416
+ return (tbi >> select) & 1 ? 56 : 64;
417
+}
418
+
419
+static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
420
+{
421
+ uint64_t hcr = arm_hcr_el2_eff(env);
422
+ ARMMMUIdx mmu_idx;
423
+
424
+ assert(arm_feature(env, ARM_FEATURE_AARCH64));
425
+
426
+ /* Only the regime of the mmu_idx below is significant. */
427
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
428
+ mmu_idx = ARMMMUIdx_E20_0;
429
+ } else {
430
+ mmu_idx = ARMMMUIdx_E10_0;
431
+ }
432
+
433
+ return tlbbits_for_regime(env, mmu_idx, addr);
434
+}
435
+
436
+static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
437
+{
438
+ uint64_t hcr = arm_hcr_el2_eff(env);
439
+ ARMMMUIdx mmu_idx;
440
+
441
+ /*
442
+ * Only the regime of the mmu_idx below is significant.
443
+ * Regime EL2&0 has two ranges with separate TBI configuration, while EL2
444
+ * only has one.
445
+ */
446
+ if (hcr & HCR_E2H) {
447
+ mmu_idx = ARMMMUIdx_E20_2;
448
+ } else {
449
+ mmu_idx = ARMMMUIdx_E2;
450
+ }
451
+
452
+ return tlbbits_for_regime(env, mmu_idx, addr);
453
+}
454
+
455
+static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
456
+ uint64_t value)
457
+{
458
+ CPUState *cs = env_cpu(env);
459
+ int mask = vae1_tlbmask(env);
460
+
461
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
462
+}
463
+
464
static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
465
uint64_t value)
466
{
467
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
114
}
468
}
115
if (arm_feature(env, ARM_FEATURE_LPAE)
469
}
116
- && (regime_tcr(env, mmu_idx)->raw_tcr & TTBCR_EAE)) {
470
117
+ && (regime_tcr_value(env, mmu_idx) & TTBCR_EAE)) {
471
+static int e2_tlbmask(CPUARMState *env)
118
return true;
472
+{
473
+ return (ARMMMUIdxBit_E20_0 |
474
+ ARMMMUIdxBit_E20_2 |
475
+ ARMMMUIdxBit_E20_2_PAN |
476
+ ARMMMUIdxBit_E2);
477
+}
478
+
479
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
480
uint64_t value)
481
{
482
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
483
tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
484
}
485
486
+static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
487
+ uint64_t value)
488
+{
489
+ CPUState *cs = env_cpu(env);
490
+ int mask = alle1_tlbmask(env);
491
+
492
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
493
+}
494
+
495
+static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
496
+ uint64_t value)
497
+{
498
+ CPUState *cs = env_cpu(env);
499
+ int mask = e2_tlbmask(env);
500
+
501
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
502
+}
503
+
504
+static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
505
+ uint64_t value)
506
+{
507
+ CPUState *cs = env_cpu(env);
508
+
509
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
510
+}
511
+
512
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
513
uint64_t value)
514
{
515
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
516
tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
517
}
518
519
+static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
520
+ uint64_t value)
521
+{
522
+ CPUState *cs = env_cpu(env);
523
+ int mask = vae1_tlbmask(env);
524
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
525
+ int bits = vae1_tlbbits(env, pageaddr);
526
+
527
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
528
+}
529
+
530
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
531
uint64_t value)
532
{
533
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
119
}
534
}
120
return false;
535
}
536
537
+static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
538
+ uint64_t value)
539
+{
540
+ CPUState *cs = env_cpu(env);
541
+ int mask = vae2_tlbmask(env);
542
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
543
+ int bits = vae2_tlbbits(env, pageaddr);
544
+
545
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
546
+}
547
+
548
+static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
549
+ uint64_t value)
550
+{
551
+ CPUState *cs = env_cpu(env);
552
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
553
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
554
+
555
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
556
+ ARMMMUIdxBit_E3, bits);
557
+}
558
+
559
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
560
+{
561
+ /*
562
+ * The MSB of value is the NS field, which only applies if SEL2
563
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
564
+ */
565
+ return (value >= 0
566
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
567
+ && arm_is_secure_below_el3(env)
568
+ ? ARMMMUIdxBit_Stage2_S
569
+ : ARMMMUIdxBit_Stage2);
570
+}
571
+
572
static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
573
uint64_t value)
574
{
121
--
575
--
122
2.25.1
576
2.34.1
1
Change the representation of the TCR_EL* registers in the CPU state
1
Move the FEAT_RME-specific TLB insns across to tlb-insns.c.
2
struct from struct TCR to uint64_t. This allows us to drop the
3
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
4
checks to their more usual location in the writefn
5
vmsa_ttbcr_write(). We also don't need the resetfn any more.
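
As a rough illustration of the TCR_EL* representation change described
above (a sketch only; the struct fields and the replacement type are
taken from the diff below):

    /* Before: each TCR_EL* was a wrapper struct carrying cached masks */
    typedef struct {
        uint64_t raw_tcr;
        uint32_t mask;
        uint32_t base_mask;
    } TCR;
    TCR tcr_el[4];        /* in CPUARMState cp15 */

    /* After: only the raw register value is stored, and any masks are
     * recomputed at the point of use:
     */
    uint64_t tcr_el[4];   /* in CPUARMState cp15 */
    uint64_t tcr = env->cp15.tcr_el[regime_el(env, mmu_idx)];

This is also why the custom raw_writefn and resetfn can go away: with no
cached mask/base_mask to keep in sync, a plain raw_write() and a zero
resetvalue are sufficient.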
6
2
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org
5
Message-id: 20241210160452.2427965-10-peter.maydell@linaro.org
10
---
6
---
11
target/arm/cpu.h | 8 +----
7
target/arm/helper.c | 38 --------------------------------
12
target/arm/internals.h | 6 ++--
8
target/arm/tcg/tlb-insns.c | 45 ++++++++++++++++++++++++++++++++++++++
13
target/arm/cpu.c | 2 +-
9
2 files changed, 45 insertions(+), 38 deletions(-)
14
target/arm/debug_helper.c | 2 +-
15
target/arm/helper.c | 75 +++++++++++----------------------------
16
target/arm/ptw.c | 2 +-
17
6 files changed, 27 insertions(+), 68 deletions(-)
18
10
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGenericTimer {
24
#define GTIMER_HYPVIRT 4
25
#define NUM_GTIMERS 5
26
27
-typedef struct {
28
- uint64_t raw_tcr;
29
- uint32_t mask;
30
- uint32_t base_mask;
31
-} TCR;
32
-
33
#define VTCR_NSW (1u << 29)
34
#define VTCR_NSA (1u << 30)
35
#define VSTCR_SW VTCR_NSW
36
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
37
uint64_t vttbr_el2; /* Virtualization Translation Table Base. */
38
uint64_t vsttbr_el2; /* Secure Virtualization Translation Table. */
39
/* MMU translation table base control. */
40
- TCR tcr_el[4];
41
+ uint64_t tcr_el[4];
42
uint64_t vtcr_el2; /* Virtualization Translation Control. */
43
uint64_t vstcr_el2; /* Secure Virtualization Translation Control. */
44
uint32_t c2_data; /* MPU data cacheable bits. */
45
diff --git a/target/arm/internals.h b/target/arm/internals.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/internals.h
48
+++ b/target/arm/internals.h
49
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu);
50
*/
51
static inline bool extended_addresses_enabled(CPUARMState *env)
52
{
53
- TCR *tcr = &env->cp15.tcr_el[arm_is_secure(env) ? 3 : 1];
54
+ uint64_t tcr = env->cp15.tcr_el[arm_is_secure(env) ? 3 : 1];
55
return arm_el_is_aa64(env, 1) ||
56
- (arm_feature(env, ARM_FEATURE_LPAE) && (tcr->raw_tcr & TTBCR_EAE));
57
+ (arm_feature(env, ARM_FEATURE_LPAE) && (tcr & TTBCR_EAE));
58
}
59
60
/* Update a QEMU watchpoint based on the information the guest has set in the
61
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
62
*/
63
return env->cp15.vstcr_el2;
64
}
65
- return env->cp15.tcr_el[regime_el(env, mmu_idx)].raw_tcr;
66
+ return env->cp15.tcr_el[regime_el(env, mmu_idx)];
67
}
68
69
/**
70
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/cpu.c
73
+++ b/target/arm/cpu.c
74
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
75
* Enable TBI0 but not TBI1.
76
* Note that this must match useronly_clean_ptr.
77
*/
78
- env->cp15.tcr_el[1].raw_tcr = 5 | (1ULL << 37);
79
+ env->cp15.tcr_el[1] = 5 | (1ULL << 37);
80
81
/* Enable MTE */
82
if (cpu_isar_feature(aa64_mte, cpu)) {
83
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/debug_helper.c
86
+++ b/target/arm/debug_helper.c
87
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_debug_exception_fsr(CPUARMState *env)
88
using_lpae = true;
89
} else {
90
if (arm_feature(env, ARM_FEATURE_LPAE) &&
91
- (env->cp15.tcr_el[target_el].raw_tcr & TTBCR_EAE)) {
92
+ (env->cp15.tcr_el[target_el] & TTBCR_EAE)) {
93
using_lpae = true;
94
}
95
}
96
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
97
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
99
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
100
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
15
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
101
.fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
16
.type = ARM_CP_CONST, .resetvalue = 0 },
102
};
17
};
103
18
104
-static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
19
-static void tlbi_aa64_paall_write(CPUARMState *env, const ARMCPRegInfo *ri,
105
- uint64_t value)
20
- uint64_t value)
106
+static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
21
-{
107
+ uint64_t value)
22
- CPUState *cs = env_cpu(env);
108
{
23
-
109
- TCR *tcr = raw_ptr(env, ri);
24
- tlb_flush(cs);
110
- int maskshift = extract32(value, 0, 3);
111
+ ARMCPU *cpu = env_archcpu(env);
112
113
if (!arm_feature(env, ARM_FEATURE_V8)) {
114
if (arm_feature(env, ARM_FEATURE_LPAE) && (value & TTBCR_EAE)) {
115
- /* Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when
116
- * using Long-desciptor translation table format */
117
+ /*
118
+ * Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when
119
+ * using Long-descriptor translation table format
120
+ */
121
value &= ~((7 << 19) | (3 << 14) | (0xf << 3));
122
} else if (arm_feature(env, ARM_FEATURE_EL3)) {
123
- /* In an implementation that includes the Security Extensions
124
+ /*
125
+ * In an implementation that includes the Security Extensions
126
* TTBCR has additional fields PD0 [4] and PD1 [5] for
127
* Short-descriptor translation table format.
128
*/
129
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
130
}
131
}
132
133
- /* Update the masks corresponding to the TCR bank being written
134
- * Note that we always calculate mask and base_mask, but
135
- * they are only used for short-descriptor tables (ie if EAE is 0);
136
- * for long-descriptor tables the TCR fields are used differently
137
- * and the mask and base_mask values are meaningless.
138
- */
139
- tcr->raw_tcr = value;
140
- tcr->mask = ~(((uint32_t)0xffffffffu) >> maskshift);
141
- tcr->base_mask = ~((uint32_t)0x3fffu >> maskshift);
142
-}
25
-}
143
-
26
-
144
-static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
static void gpccr_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
- uint64_t value)
28
uint64_t value)
29
{
30
@@ -XXX,XX +XXX,XX @@ static void gpccr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
31
env_archcpu(env)->reset_l0gptsz);
32
}
33
34
-static void tlbi_aa64_paallos_write(CPUARMState *env, const ARMCPRegInfo *ri,
35
- uint64_t value)
146
-{
36
-{
147
- ARMCPU *cpu = env_archcpu(env);
37
- CPUState *cs = env_cpu(env);
148
- TCR *tcr = raw_ptr(env, ri);
149
-
38
-
150
if (arm_feature(env, ARM_FEATURE_LPAE)) {
39
- tlb_flush_all_cpus_synced(cs);
151
/* With LPAE the TTBCR could result in a change of ASID
152
* via the TTBCR.A1 bit, so do a TLB flush.
153
*/
154
tlb_flush(CPU(cpu));
155
}
156
- /* Preserve the high half of TCR_EL1, set via TTBCR2. */
157
- value = deposit64(tcr->raw_tcr, 0, 32, value);
158
- vmsa_ttbcr_raw_write(env, ri, value);
159
-}
40
-}
160
-
41
-
161
-static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
42
static const ARMCPRegInfo rme_reginfo[] = {
162
-{
43
{ .name = "GPCCR_EL3", .state = ARM_CP_STATE_AA64,
163
- TCR *tcr = raw_ptr(env, ri);
44
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 1, .opc2 = 6,
164
-
45
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rme_reginfo[] = {
165
- /* Reset both the TCR as well as the masks corresponding to the bank of
46
{ .name = "MFAR_EL3", .state = ARM_CP_STATE_AA64,
166
- * the TCR being reset.
47
.opc0 = 3, .opc1 = 6, .crn = 6, .crm = 0, .opc2 = 5,
48
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mfar_el3) },
49
- { .name = "TLBI_PAALL", .state = ARM_CP_STATE_AA64,
50
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 4,
51
- .access = PL3_W, .type = ARM_CP_NO_RAW,
52
- .writefn = tlbi_aa64_paall_write },
53
- { .name = "TLBI_PAALLOS", .state = ARM_CP_STATE_AA64,
54
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 4,
55
- .access = PL3_W, .type = ARM_CP_NO_RAW,
56
- .writefn = tlbi_aa64_paallos_write },
57
- /*
58
- * QEMU does not have a way to invalidate by physical address, thus
59
- * invalidating a range of physical addresses is accomplished by
60
- * flushing all tlb entries in the outer shareable domain,
61
- * just like PAALLOS.
167
- */
62
- */
168
- tcr->raw_tcr = 0;
63
- { .name = "TLBI_RPALOS", .state = ARM_CP_STATE_AA64,
169
- tcr->mask = 0;
64
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 7,
170
- tcr->base_mask = 0xffffc000u;
65
- .access = PL3_W, .type = ARM_CP_NO_RAW,
171
+ raw_write(env, ri, value);
66
- .writefn = tlbi_aa64_paallos_write },
67
- { .name = "TLBI_RPAOS", .state = ARM_CP_STATE_AA64,
68
- .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 3,
69
- .access = PL3_W, .type = ARM_CP_NO_RAW,
70
- .writefn = tlbi_aa64_paallos_write },
71
{ .name = "DC_CIPAPA", .state = ARM_CP_STATE_AA64,
72
.opc0 = 1, .opc1 = 6, .crn = 7, .crm = 14, .opc2 = 1,
73
.access = PL3_W, .type = ARM_CP_NOP },
74
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/tcg/tlb-insns.c
77
+++ b/target/arm/tcg/tlb-insns.c
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
79
.access = PL3_W, .type = ARM_CP_NO_RAW,
80
.writefn = tlbi_aa64_vae3is_write },
81
};
82
+
83
+static void tlbi_aa64_paall_write(CPUARMState *env, const ARMCPRegInfo *ri,
84
+ uint64_t value)
85
+{
86
+ CPUState *cs = env_cpu(env);
87
+
88
+ tlb_flush(cs);
89
+}
90
+
91
+static void tlbi_aa64_paallos_write(CPUARMState *env, const ARMCPRegInfo *ri,
92
+ uint64_t value)
93
+{
94
+ CPUState *cs = env_cpu(env);
95
+
96
+ tlb_flush_all_cpus_synced(cs);
97
+}
98
+
99
+static const ARMCPRegInfo tlbi_rme_reginfo[] = {
100
+ { .name = "TLBI_PAALL", .state = ARM_CP_STATE_AA64,
101
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 4,
102
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
103
+ .writefn = tlbi_aa64_paall_write },
104
+ { .name = "TLBI_PAALLOS", .state = ARM_CP_STATE_AA64,
105
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 4,
106
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
107
+ .writefn = tlbi_aa64_paallos_write },
108
+ /*
109
+ * QEMU does not have a way to invalidate by physical address, thus
110
+ * invalidating a range of physical addresses is accomplished by
111
+ * flushing all tlb entries in the outer shareable domain,
112
+ * just like PAALLOS.
113
+ */
114
+ { .name = "TLBI_RPALOS", .state = ARM_CP_STATE_AA64,
115
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 7,
116
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
117
+ .writefn = tlbi_aa64_paallos_write },
118
+ { .name = "TLBI_RPAOS", .state = ARM_CP_STATE_AA64,
119
+ .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 4, .opc2 = 3,
120
+ .access = PL3_W, .type = ARM_CP_NO_RAW,
121
+ .writefn = tlbi_aa64_paallos_write },
122
+};
123
+
124
#endif
125
126
void define_tlb_insn_regs(ARMCPU *cpu)
127
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
128
if (cpu_isar_feature(aa64_tlbios, cpu)) {
129
define_arm_cp_regs(cpu, tlbios_reginfo);
130
}
131
+ if (cpu_isar_feature(aa64_rme, cpu)) {
132
+ define_arm_cp_regs(cpu, tlbi_rme_reginfo);
133
+ }
134
#endif
172
}
135
}
173
174
static void vmsa_tcr_el12_write(CPUARMState *env, const ARMCPRegInfo *ri,
175
uint64_t value)
176
{
177
ARMCPU *cpu = env_archcpu(env);
178
- TCR *tcr = raw_ptr(env, ri);
179
180
/* For AArch64 the A1 bit could result in a change of ASID, so TLB flush. */
181
tlb_flush(CPU(cpu));
182
- tcr->raw_tcr = value;
183
+ raw_write(env, ri, value);
184
}
185
186
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
187
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
188
.opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
189
.access = PL1_RW, .accessfn = access_tvm_trvm,
190
.writefn = vmsa_tcr_el12_write,
191
- .resetfn = vmsa_ttbcr_reset, .raw_writefn = raw_write,
192
+ .raw_writefn = raw_write,
193
+ .resetvalue = 0,
194
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[1]) },
195
{ .name = "TTBCR", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
196
.access = PL1_RW, .accessfn = access_tvm_trvm,
197
.type = ARM_CP_ALIAS, .writefn = vmsa_ttbcr_write,
198
- .raw_writefn = vmsa_ttbcr_raw_write,
199
- /* No offsetoflow32 -- pass the entire TCR to writefn/raw_writefn. */
200
- .bank_fieldoffsets = { offsetof(CPUARMState, cp15.tcr_el[3]),
201
- offsetof(CPUARMState, cp15.tcr_el[1])} },
202
+ .raw_writefn = raw_write,
203
+ .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tcr_el[3]),
204
+ offsetoflow32(CPUARMState, cp15.tcr_el[1])} },
205
};
206
207
/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
208
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ttbcr2_reginfo = {
209
.access = PL1_RW, .accessfn = access_tvm_trvm,
210
.type = ARM_CP_ALIAS,
211
.bank_fieldoffsets = {
212
- offsetofhigh32(CPUARMState, cp15.tcr_el[3].raw_tcr),
213
- offsetofhigh32(CPUARMState, cp15.tcr_el[1].raw_tcr),
214
+ offsetofhigh32(CPUARMState, cp15.tcr_el[3]),
215
+ offsetofhigh32(CPUARMState, cp15.tcr_el[1]),
216
},
217
};
218
219
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
220
{ .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
221
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
222
.access = PL2_RW, .writefn = vmsa_tcr_el12_write,
223
- /* no .raw_writefn or .resetfn needed as we never use mask/base_mask */
224
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[2]) },
225
{ .name = "VTCR", .state = ARM_CP_STATE_AA32,
226
.cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
227
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
228
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
229
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
230
.access = PL3_RW,
231
- /* no .writefn needed as this can't cause an ASID change;
232
- * we must provide a .raw_writefn and .resetfn because we handle
233
- * reset and migration for the AArch32 TTBCR(S), which might be
234
- * using mask and base_mask.
235
- */
236
- .resetfn = vmsa_ttbcr_reset, .raw_writefn = vmsa_ttbcr_raw_write,
237
+ /* no .writefn needed as this can't cause an ASID change */
238
+ .resetvalue = 0,
239
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[3]) },
240
{ .name = "ELR_EL3", .state = ARM_CP_STATE_AA64,
241
.type = ARM_CP_ALIAS,
242
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
243
index XXXXXXX..XXXXXXX 100644
244
--- a/target/arm/ptw.c
245
+++ b/target/arm/ptw.c
246
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
247
int r_el = regime_el(env, mmu_idx);
248
if (arm_el_is_aa64(env, r_el)) {
249
int pamax = arm_pamax(env_archcpu(env));
250
- uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr;
251
+ uint64_t tcr = env->cp15.tcr_el[r_el];
252
int addrtop, tbi;
253
254
tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
255
--
136
--
256
2.25.1
137
2.34.1
1
In get_level1_table_address(), instead of using precalculated values
1
We currently register the tlbi_el2_cp_reginfo[] TLBI insns if EL2 is
2
of mask and base_mask from the TCR struct, calculate them directly
2
implemented, or if EL3 and v8 are implemented. This is a copy of the
3
(in the same way we currently do in vmsa_ttbcr_raw_write() to
3
logic used for el2_cp_reginfo[], but for the specific case of the
4
populate the TCR struct fields).
4
TLBI insns we can simplify it. This is because we do not need the
5
"if EL2 does not exist but EL3 does then EL2 registers should exist
6
and be RAZ/WI" handling here: all our cpregs are for instructions,
7
which UNDEF when EL3 exists and EL2 does not.
8
9
Simplify the condition down to just "if EL2 exists".
10
This is not a behaviour change because:
11
* for AArch64 insns we marked them with ARM_CP_EL3_NO_EL2_UNDEF,
12
which meant that define_arm_cp_regs() would ignore them if
13
EL2 wasn't present
14
* for AArch32 insns, the .access = PL2_W meant that if EL2
15
was not present the only way to get at them was from AArch32
16
EL3; but we have no CPUs which have ARM_FEATURE_V8 but
17
start in AArch32
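
To make the get_level1_table_address() change above concrete, here is
the direct calculation it introduces, copied from the expressions in
the diff below (env and mmu_idx are the parameters of the surrounding
function):

    uint64_t tcr = regime_tcr_value(env, mmu_idx);
    int maskshift = extract32(tcr, 0, 3);                     /* TTBCR.N */
    uint32_t mask = ~(((uint32_t)0xffffffffu) >> maskshift);  /* address & mask set => walk TTBR1 */
    uint32_t base_mask = ~((uint32_t)0x3fffu >> maskshift);   /* aligns the TTBR0 table base */

Since both masks are cheap to derive on demand from TTBCR.N, there is no
longer any need to cache them in the TCR struct. The tlbi_el2_cp_reginfo[]
change needs no such illustration: its registration condition simply
collapses to a single arm_feature(env, ARM_FEATURE_EL2) test, as the
diff shows.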
5
18
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220714132303.1287193-3-peter.maydell@linaro.org
21
Message-id: 20241210160452.2427965-11-peter.maydell@linaro.org
9
---
22
---
10
target/arm/ptw.c | 14 +++++++++-----
23
target/arm/tcg/tlb-insns.c | 4 +---
11
1 file changed, 9 insertions(+), 5 deletions(-)
24
1 file changed, 1 insertion(+), 3 deletions(-)
12
25
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
26
diff --git a/target/arm/tcg/tlb-insns.c b/target/arm/tcg/tlb-insns.c
14
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.c
28
--- a/target/arm/tcg/tlb-insns.c
16
+++ b/target/arm/ptw.c
29
+++ b/target/arm/tcg/tlb-insns.c
17
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
30
@@ -XXX,XX +XXX,XX @@ void define_tlb_insn_regs(ARMCPU *cpu)
18
uint32_t *table, uint32_t address)
31
* ops (i.e. matching the condition for el2_cp_reginfo[] in
19
{
32
* helper.c), but we will be able to simplify this later.
20
/* Note that we can only get here for an AArch32 PL0/PL1 lookup */
33
*/
21
- TCR *tcr = regime_tcr(env, mmu_idx);
34
- if (arm_feature(env, ARM_FEATURE_EL2)
22
+ uint64_t tcr = regime_tcr_value(env, mmu_idx);
35
- || (arm_feature(env, ARM_FEATURE_EL3)
23
+ int maskshift = extract32(tcr, 0, 3);
36
- && arm_feature(env, ARM_FEATURE_V8))) {
24
+ uint32_t mask = ~(((uint32_t)0xffffffffu) >> maskshift);
37
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
25
+ uint32_t base_mask;
38
define_arm_cp_regs(cpu, tlbi_el2_cp_reginfo);
26
27
- if (address & tcr->mask) {
28
- if (tcr->raw_tcr & TTBCR_PD1) {
29
+ if (address & mask) {
30
+ if (tcr & TTBCR_PD1) {
31
/* Translation table walk disabled for TTBR1 */
32
return false;
33
}
34
*table = regime_ttbr(env, mmu_idx, 1) & 0xffffc000;
35
} else {
36
- if (tcr->raw_tcr & TTBCR_PD0) {
37
+ if (tcr & TTBCR_PD0) {
38
/* Translation table walk disabled for TTBR0 */
39
return false;
40
}
41
- *table = regime_ttbr(env, mmu_idx, 0) & tcr->base_mask;
42
+ base_mask = ~((uint32_t)0x3fffu >> maskshift);
43
+ *table = regime_ttbr(env, mmu_idx, 0) & base_mask;
44
}
39
}
45
*table |= (address >> 18) & 0x3ffc;
40
if (arm_feature(env, ARM_FEATURE_EL3)) {
46
return true;
47
--
41
--
48
2.25.1
42
2.34.1