First arm pullreq of the cycle; this is mostly my softfloat NaN
handling series. (Lots more in my to-review queue, but I don't
like pullreqs growing too close to a hundred patches at a time :-))

thanks
-- PMM

The following changes since commit 97f2796a3736ed37a1b85dc1c76a6c45b829dd17:

  Open 10.0 development tree (2024-12-10 17:41:17 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241211

for you to fetch changes up to 1abe28d519239eea5cf9620bb13149423e5665f8:

  MAINTAINERS: Add correct email address for Vikram Garhwal (2024-12-11 15:31:09 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/net/lan9118: Extract PHY model, reuse with imx_fec, fix bugs
 * fpu: Make muladd NaN handling runtime-selected, not compile-time
 * fpu: Make default NaN pattern runtime-selected, not compile-time
 * fpu: Minor NaN-related cleanups
 * MAINTAINERS: email address updates
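As a taste of the new runtime-selection API (a sketch only: the helper
names are assumed from this series' additions to
include/fpu/softfloat-helpers.h, and the Arm-flavoured choices shown
are illustrative, not normative):

/* Called when a CPU resets its float_status; replaces the old
 * compile-time ifdef blocks in fpu/softfloat-specialize.c.inc.
 */
static void example_nan_behaviour_setup(float_status *s)
{
    /* Result of inf * 0 + NaN: return the default NaN only when
     * the NaN operand is quiet.
     */
    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
    /* Which NaN propagates out of a 3-operand muladd: signalling
     * NaNs win, then operands in c, a, b order.
     */
    set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
    /* Bit pattern for the default NaN: sign bit clear, top bit of
     * the fraction set.
     */
    set_float_default_nan_pattern(0b01000000, s);
}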
----------------------------------------------------------------
Bernhard Beschow (5):
      hw/net/lan9118: Extract lan9118_phy
      hw/net/lan9118_phy: Reuse in imx_fec and consolidate implementations
      hw/net/lan9118_phy: Fix off-by-one error in MII_ANLPAR register
      hw/net/lan9118_phy: Reuse MII constants
      hw/net/lan9118_phy: Add missing 100 mbps full duplex advertisement

Leif Lindholm (1):
      MAINTAINERS: update email address for Leif Lindholm

Peter Maydell (54):
      fpu: handle raising Invalid for infzero in pick_nan_muladd
      fpu: Check for default_nan_mode before calling pickNaNMulAdd
      softfloat: Allow runtime choice of inf * 0 + NaN result
      tests/fp: Explicitly set inf-zero-nan rule
      target/arm: Set FloatInfZeroNaNRule explicitly
      target/s390: Set FloatInfZeroNaNRule explicitly
      target/ppc: Set FloatInfZeroNaNRule explicitly
      target/mips: Set FloatInfZeroNaNRule explicitly
      target/sparc: Set FloatInfZeroNaNRule explicitly
      target/xtensa: Set FloatInfZeroNaNRule explicitly
      target/x86: Set FloatInfZeroNaNRule explicitly
      target/loongarch: Set FloatInfZeroNaNRule explicitly
      target/hppa: Set FloatInfZeroNaNRule explicitly
      softfloat: Pass have_snan to pickNaNMulAdd
      softfloat: Allow runtime choice of NaN propagation for muladd
      tests/fp: Explicitly set 3-NaN propagation rule
      target/arm: Set Float3NaNPropRule explicitly
      target/loongarch: Set Float3NaNPropRule explicitly
      target/ppc: Set Float3NaNPropRule explicitly
      target/s390x: Set Float3NaNPropRule explicitly
      target/sparc: Set Float3NaNPropRule explicitly
      target/mips: Set Float3NaNPropRule explicitly
      target/xtensa: Set Float3NaNPropRule explicitly
      target/i386: Set Float3NaNPropRule explicitly
      target/hppa: Set Float3NaNPropRule explicitly
      fpu: Remove use_first_nan field from float_status
      target/m68k: Don't pass NULL float_status to floatx80_default_nan()
      softfloat: Create floatx80 default NaN from parts64_default_nan
      target/loongarch: Use normal float_status in fclass_s and fclass_d helpers
      target/m68k: In frem helper, initialize local float_status from env->fp_status
      target/m68k: Init local float_status from env fp_status in gdb get/set reg
      target/sparc: Initialize local scratch float_status from env->fp_status
      target/ppc: Use env->fp_status in helper_compute_fprf functions
      fpu: Allow runtime choice of default NaN value
      tests/fp: Set default NaN pattern explicitly
      target/microblaze: Set default NaN pattern explicitly
      target/i386: Set default NaN pattern explicitly
      target/hppa: Set default NaN pattern explicitly
      target/alpha: Set default NaN pattern explicitly
      target/arm: Set default NaN pattern explicitly
      target/loongarch: Set default NaN pattern explicitly
      target/m68k: Set default NaN pattern explicitly
      target/mips: Set default NaN pattern explicitly
      target/openrisc: Set default NaN pattern explicitly
      target/ppc: Set default NaN pattern explicitly
      target/sh4: Set default NaN pattern explicitly
      target/rx: Set default NaN pattern explicitly
      target/s390x: Set default NaN pattern explicitly
      target/sparc: Set default NaN pattern explicitly
      target/xtensa: Set default NaN pattern explicitly
      target/hexagon: Set default NaN pattern explicitly
      target/riscv: Set default NaN pattern explicitly
      target/tricore: Set default NaN pattern explicitly
      fpu: Remove default handling for dnan_pattern

Richard Henderson (11):
      target/arm: Copy entire float_status in is_ebf
      softfloat: Inline pickNaNMulAdd
      softfloat: Use goto for default nan case in pick_nan_muladd
      softfloat: Remove which from parts_pick_nan_muladd
      softfloat: Pad array size in pick_nan_muladd
      softfloat: Move propagateFloatx80NaN to softfloat.c
      softfloat: Use parts_pick_nan in propagateFloatx80NaN
      softfloat: Inline pickNaN
      softfloat: Share code between parts_pick_nan cases
      softfloat: Sink frac_cmp in parts_pick_nan until needed
      softfloat: Replace WHICH with RET in parts_pick_nan

Vikram Garhwal (1):
      MAINTAINERS: Add correct email address for Vikram Garhwal

 MAINTAINERS                       |   4 +-
 include/fpu/softfloat-helpers.h   |  38 +++-
 include/fpu/softfloat-types.h     |  89 +++++++-
 include/hw/net/imx_fec.h          |   9 +-
 include/hw/net/lan9118_phy.h      |  37 ++++
 include/hw/net/mii.h              |   6 +
 target/mips/fpu_helper.h          |  20 ++
 target/sparc/helper.h             |   4 +-
 fpu/softfloat.c                   |  19 ++
 hw/net/imx_fec.c                  | 146 ++------------
 hw/net/lan9118.c                  | 137 ++-----------
 hw/net/lan9118_phy.c              | 222 ++++++++++++++++++++
 linux-user/arm/nwfpe/fpa11.c      |   5 +
 target/alpha/cpu.c                |   2 +
 target/arm/cpu.c                  |  10 +
 target/arm/tcg/vec_helper.c       |  20 +-
 target/hexagon/cpu.c              |   2 +
 target/hppa/fpu_helper.c          |  12 ++
 target/i386/tcg/fpu_helper.c      |  12 ++
 target/loongarch/tcg/fpu_helper.c |  14 +-
 target/m68k/cpu.c                 |  14 +-
 target/m68k/fpu_helper.c          |   6 +-
 target/m68k/helper.c              |   6 +-
 target/microblaze/cpu.c           |   2 +
 target/mips/msa.c                 |  10 +
 target/openrisc/cpu.c             |   2 +
 target/ppc/cpu_init.c             |  19 ++
 target/ppc/fpu_helper.c           |   3 +-
 target/riscv/cpu.c                |   2 +
 target/rx/cpu.c                   |   2 +
 target/s390x/cpu.c                |   5 +
 target/sh4/cpu.c                  |   2 +
 target/sparc/cpu.c                |   6 +
 target/sparc/fop_helper.c         |   8 +-
 target/sparc/translate.c          |   4 +-
 target/tricore/helper.c           |   2 +
 target/xtensa/cpu.c               |   4 +
 target/xtensa/fpu_helper.c        |   3 +-
 tests/fp/fp-bench.c               |   7 +
 tests/fp/fp-test-log2.c           |   1 +
 tests/fp/fp-test.c                |   7 +
 fpu/softfloat-parts.c.inc         | 152 +++++++++++---
 fpu/softfloat-specialize.c.inc    | 412 ++------------------------------------
 .mailmap                          |   5 +-
 hw/net/Kconfig                    |   5 +
 hw/net/meson.build                |   1 +
 hw/net/trace-events               |  10 +-
 47 files changed, 778 insertions(+), 730 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c

From: Bernhard Beschow <shentey@gmail.com>

A very similar implementation of the same device exists in imx_fec. Prepare for
a common implementation by extracting a device model into its own files.

Some migration state has been moved into the new device model which breaks
migration compatibility for the following machines:
* smdkc210
* realview-*
* vexpress-*
* kzm
* mps2-*

While breaking migration ABI, fix the size of the MII registers to be 16 bit,
as defined by IEEE 802.3u.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-2-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
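[A condensed sketch of the embedding pattern this patch introduces, for
orientation while reading the diff below. The wrapper name is made up;
the body mirrors the lan9118_realize() hunk:]

static void mac_embed_phy_sketch(lan9118_state *s, Error **errp)
{
    /* MAC-local handler that mirrors the PHY irq into int_sts. */
    qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
    /* The PHY is now a QOM child with its own reset and vmstate. */
    object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
    if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
        return;
    }
    /* Wire the PHY's single outbound GPIO to the MAC's handler. */
    qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
}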
 include/hw/net/lan9118_phy.h |  37 ++++++
 hw/net/lan9118.c             | 137 +++++-----------------------
 hw/net/lan9118_phy.c         | 169 +++++++++++++++++++++++++++++
 hw/net/Kconfig               |   4 +
 hw/net/meson.build           |   1 +
 5 files changed, 233 insertions(+), 115 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c

diff --git a/include/hw/net/lan9118_phy.h b/include/hw/net/lan9118_phy.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/lan9118_phy.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC LAN9118 PHY emulation
+ *
+ * Copyright (c) 2009 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_NET_LAN9118_PHY_H
+#define HW_NET_LAN9118_PHY_H
+
+#include "qom/object.h"
+#include "hw/sysbus.h"
+
+#define TYPE_LAN9118_PHY "lan9118-phy"
+OBJECT_DECLARE_SIMPLE_TYPE(Lan9118PhyState, LAN9118_PHY)
+
+typedef struct Lan9118PhyState {
+    SysBusDevice parent_obj;
+
+    uint16_t status;
+    uint16_t control;
+    uint16_t advertise;
+    uint16_t ints;
+    uint16_t int_mask;
+    qemu_irq irq;
+    bool link_down;
+} Lan9118PhyState;
+
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down);
+void lan9118_phy_reset(Lan9118PhyState *s);
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg);
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val);
+
+#endif
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -XXX,XX +XXX,XX @@
 #include "net/net.h"
 #include "net/eth.h"
 #include "hw/irq.h"
+#include "hw/net/lan9118_phy.h"
 #include "hw/net/lan9118.h"
 #include "hw/ptimer.h"
 #include "hw/qdev-properties.h"
@@ -XXX,XX +XXX,XX @@ do { printf("lan9118: " fmt , ## __VA_ARGS__); } while (0)
 #define MAC_CR_RXEN 0x00000004
 #define MAC_CR_RESERVED 0x7f404213
 
-#define PHY_INT_ENERGYON 0x80
-#define PHY_INT_AUTONEG_COMPLETE 0x40
-#define PHY_INT_FAULT 0x20
-#define PHY_INT_DOWN 0x10
-#define PHY_INT_AUTONEG_LP 0x08
-#define PHY_INT_PARFAULT 0x04
-#define PHY_INT_AUTONEG_PAGE 0x02
-
 #define GPT_TIMER_EN 0x20000000
 
 /*
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
     uint32_t mac_mii_data;
     uint32_t mac_flow;
 
-    uint32_t phy_status;
-    uint32_t phy_control;
-    uint32_t phy_advertise;
-    uint32_t phy_int;
-    uint32_t phy_int_mask;
+    Lan9118PhyState mii;
+    IRQState mii_irq;
 
     int32_t eeprom_writable;
     uint8_t eeprom[128];
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
 
 static const VMStateDescription vmstate_lan9118 = {
     .name = "lan9118",
-    .version_id = 2,
-    .minimum_version_id = 1,
+    .version_id = 3,
+    .minimum_version_id = 3,
     .fields = (const VMStateField[]) {
         VMSTATE_PTIMER(timer, lan9118_state),
         VMSTATE_UINT32(irq_cfg, lan9118_state),
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118 = {
         VMSTATE_UINT32(mac_mii_acc, lan9118_state),
         VMSTATE_UINT32(mac_mii_data, lan9118_state),
         VMSTATE_UINT32(mac_flow, lan9118_state),
-        VMSTATE_UINT32(phy_status, lan9118_state),
-        VMSTATE_UINT32(phy_control, lan9118_state),
-        VMSTATE_UINT32(phy_advertise, lan9118_state),
-        VMSTATE_UINT32(phy_int, lan9118_state),
-        VMSTATE_UINT32(phy_int_mask, lan9118_state),
         VMSTATE_INT32(eeprom_writable, lan9118_state),
         VMSTATE_UINT8_ARRAY(eeprom, lan9118_state, 128),
         VMSTATE_INT32(tx_fifo_size, lan9118_state),
@@ -XXX,XX +XXX,XX @@ static void lan9118_reload_eeprom(lan9118_state *s)
         lan9118_mac_changed(s);
     }
 }
 
-static void phy_update_irq(lan9118_state *s)
+static void lan9118_update_irq(void *opaque, int n, int level)
 {
-    if (s->phy_int & s->phy_int_mask) {
+    lan9118_state *s = opaque;
+
+    if (level) {
         s->int_sts |= PHY_INT;
     } else {
         s->int_sts &= ~PHY_INT;
@@ -XXX,XX +XXX,XX @@ static void phy_update_irq(lan9118_state *s)
     lan9118_update(s);
 }
 
-static void phy_update_link(lan9118_state *s)
-{
-    /* Autonegotiation status mirrors link status. */
-    if (qemu_get_queue(s->nic)->link_down) {
-        s->phy_status &= ~0x0024;
-        s->phy_int |= PHY_INT_DOWN;
-    } else {
-        s->phy_status |= 0x0024;
-        s->phy_int |= PHY_INT_ENERGYON;
-        s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
-    }
-    phy_update_irq(s);
-}
-
 static void lan9118_set_link(NetClientState *nc)
 {
-    phy_update_link(qemu_get_nic_opaque(nc));
-}
-
-static void phy_reset(lan9118_state *s)
-{
-    s->phy_status = 0x7809;
-    s->phy_control = 0x3000;
-    s->phy_advertise = 0x01e1;
-    s->phy_int_mask = 0;
-    s->phy_int = 0;
-    phy_update_link(s);
+    lan9118_phy_update_link(&LAN9118(qemu_get_nic_opaque(nc))->mii,
+                            nc->link_down);
 }
 
 static void lan9118_reset(DeviceState *d)
@@ -XXX,XX +XXX,XX @@ static void lan9118_reset(DeviceState *d)
     s->read_word_n = 0;
     s->write_word_n = 0;
 
-    phy_reset(s);
-
     s->eeprom_writable = 0;
     lan9118_reload_eeprom(s);
 }
@@ -XXX,XX +XXX,XX @@ static void do_tx_packet(lan9118_state *s)
     uint32_t status;
 
     /* FIXME: Honor TX disable, and allow queueing of packets. */
-    if (s->phy_control & 0x4000) {
+    if (s->mii.control & 0x4000) {
         /* This assumes the receive routine doesn't touch the VLANClient. */
         qemu_receive_packet(qemu_get_queue(s->nic), s->txp->data, s->txp->len);
     } else {
@@ -XXX,XX +XXX,XX @@ static void tx_fifo_push(lan9118_state *s, uint32_t val)
     }
 }
 
-static uint32_t do_phy_read(lan9118_state *s, int reg)
-{
-    uint32_t val;
-
-    switch (reg) {
-    case 0: /* Basic Control */
-        return s->phy_control;
-    case 1: /* Basic Status */
-        return s->phy_status;
-    case 2: /* ID1 */
-        return 0x0007;
-    case 3: /* ID2 */
-        return 0xc0d1;
-    case 4: /* Auto-neg advertisement */
-        return s->phy_advertise;
-    case 5: /* Auto-neg Link Partner Ability */
-        return 0x0f71;
-    case 6: /* Auto-neg Expansion */
-        return 1;
-        /* TODO 17, 18, 27, 29, 30, 31 */
-    case 29: /* Interrupt source. */
-        val = s->phy_int;
-        s->phy_int = 0;
-        phy_update_irq(s);
-        return val;
-    case 30: /* Interrupt mask */
-        return s->phy_int_mask;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "do_phy_read: PHY read reg %d\n", reg);
-        return 0;
-    }
-}
-
-static void do_phy_write(lan9118_state *s, int reg, uint32_t val)
-{
-    switch (reg) {
-    case 0: /* Basic Control */
-        if (val & 0x8000) {
-            phy_reset(s);
-            break;
-        }
-        s->phy_control = val & 0x7980;
-        /* Complete autonegotiation immediately. */
-        if (val & 0x1000) {
-            s->phy_status |= 0x0020;
-        }
-        break;
-    case 4: /* Auto-neg advertisement */
-        s->phy_advertise = (val & 0x2d7f) | 0x80;
-        break;
-        /* TODO 17, 18, 27, 31 */
-    case 30: /* Interrupt mask */
-        s->phy_int_mask = val & 0xff;
-        phy_update_irq(s);
-        break;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "do_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
-    }
-}
-
 static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
 {
     switch (reg) {
@@ -XXX,XX +XXX,XX @@ static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
         if (val & 2) {
             DPRINTF("PHY write %d = 0x%04x\n",
                     (val >> 6) & 0x1f, s->mac_mii_data);
-            do_phy_write(s, (val >> 6) & 0x1f, s->mac_mii_data);
+            lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
         } else {
-            s->mac_mii_data = do_phy_read(s, (val >> 6) & 0x1f);
+            s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
             DPRINTF("PHY read %d = 0x%04x\n",
                     (val >> 6) & 0x1f, s->mac_mii_data);
         }
@@ -XXX,XX +XXX,XX @@ static void lan9118_writel(void *opaque, hwaddr offset,
         break;
     case CSR_PMT_CTRL:
         if (val & 0x400) {
-            phy_reset(s);
+            lan9118_phy_reset(&s->mii);
         }
         s->pmt_ctrl &= ~0x34e;
         s->pmt_ctrl |= (val & 0x34e);
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
     const MemoryRegionOps *mem_ops =
         s->mode_16bit ? &lan9118_16bit_mem_ops : &lan9118_mem_ops;
 
+    qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
+    object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
+    if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
+        return;
+    }
+    qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
+
     memory_region_init_io(&s->mmio, OBJECT(dev), mem_ops, s,
                           "lan9118-mmio", 0x100);
     sysbus_init_mmio(sbd, &s->mmio);
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC LAN9118 PHY emulation
+ *
+ * Copyright (c) 2009 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This code is licensed under the GNU GPL v2
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/net/lan9118_phy.h"
+#include "hw/irq.h"
+#include "hw/resettable.h"
+#include "migration/vmstate.h"
+#include "qemu/log.h"
+
+#define PHY_INT_ENERGYON (1 << 7)
+#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
+#define PHY_INT_FAULT (1 << 5)
+#define PHY_INT_DOWN (1 << 4)
+#define PHY_INT_AUTONEG_LP (1 << 3)
+#define PHY_INT_PARFAULT (1 << 2)
+#define PHY_INT_AUTONEG_PAGE (1 << 1)
+
+static void lan9118_phy_update_irq(Lan9118PhyState *s)
+{
+    qemu_set_irq(s->irq, !!(s->ints & s->int_mask));
+}
+
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
+{
+    uint16_t val;
+
+    switch (reg) {
+    case 0: /* Basic Control */
+        return s->control;
+    case 1: /* Basic Status */
+        return s->status;
+    case 2: /* ID1 */
+        return 0x0007;
+    case 3: /* ID2 */
+        return 0xc0d1;
+    case 4: /* Auto-neg advertisement */
+        return s->advertise;
+    case 5: /* Auto-neg Link Partner Ability */
+        return 0x0f71;
+    case 6: /* Auto-neg Expansion */
+        return 1;
+        /* TODO 17, 18, 27, 29, 30, 31 */
+    case 29: /* Interrupt source. */
+        val = s->ints;
+        s->ints = 0;
+        lan9118_phy_update_irq(s);
+        return val;
+    case 30: /* Interrupt mask */
+        return s->int_mask;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "lan9118_phy_read: PHY read reg %d\n", reg);
+        return 0;
+    }
+}
+
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
+{
+    switch (reg) {
+    case 0: /* Basic Control */
+        if (val & 0x8000) {
+            lan9118_phy_reset(s);
+            break;
+        }
+        s->control = val & 0x7980;
+        /* Complete autonegotiation immediately. */
+        if (val & 0x1000) {
+            s->status |= 0x0020;
+        }
+        break;
+    case 4: /* Auto-neg advertisement */
+        s->advertise = (val & 0x2d7f) | 0x80;
+        break;
+        /* TODO 17, 18, 27, 31 */
+    case 30: /* Interrupt mask */
+        s->int_mask = val & 0xff;
+        lan9118_phy_update_irq(s);
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
+    }
+}
+
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
+{
+    s->link_down = link_down;
+
+    /* Autonegotiation status mirrors link status. */
+    if (link_down) {
+        s->status &= ~0x0024;
+        s->ints |= PHY_INT_DOWN;
+    } else {
+        s->status |= 0x0024;
+        s->ints |= PHY_INT_ENERGYON;
+        s->ints |= PHY_INT_AUTONEG_COMPLETE;
+    }
+    lan9118_phy_update_irq(s);
+}
+
+void lan9118_phy_reset(Lan9118PhyState *s)
+{
+    s->control = 0x3000;
+    s->status = 0x7809;
+    s->advertise = 0x01e1;
+    s->int_mask = 0;
+    s->ints = 0;
+    lan9118_phy_update_link(s, s->link_down);
+}
+
+static void lan9118_phy_reset_hold(Object *obj, ResetType type)
+{
+    Lan9118PhyState *s = LAN9118_PHY(obj);
+
+    lan9118_phy_reset(s);
+}
+
+static void lan9118_phy_init(Object *obj)
+{
+    Lan9118PhyState *s = LAN9118_PHY(obj);
+
+    qdev_init_gpio_out(DEVICE(s), &s->irq, 1);
+}
+
+static const VMStateDescription vmstate_lan9118_phy = {
+    .name = "lan9118-phy",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (const VMStateField[]) {
+        VMSTATE_UINT16(control, Lan9118PhyState),
+        VMSTATE_UINT16(status, Lan9118PhyState),
+        VMSTATE_UINT16(advertise, Lan9118PhyState),
+        VMSTATE_UINT16(ints, Lan9118PhyState),
+        VMSTATE_UINT16(int_mask, Lan9118PhyState),
+        VMSTATE_BOOL(link_down, Lan9118PhyState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void lan9118_phy_class_init(ObjectClass *klass, void *data)
+{
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    rc->phases.hold = lan9118_phy_reset_hold;
+    dc->vmsd = &vmstate_lan9118_phy;
+}
+
+static const TypeInfo types[] = {
+    {
+        .name = TYPE_LAN9118_PHY,
+        .parent = TYPE_SYS_BUS_DEVICE,
+        .instance_size = sizeof(Lan9118PhyState),
+        .instance_init = lan9118_phy_init,
+        .class_init = lan9118_phy_class_init,
+    }
+};
+
+DEFINE_TYPES(types)
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/Kconfig
+++ b/hw/net/Kconfig
@@ -XXX,XX +XXX,XX @@ config VMXNET3_PCI
 config SMC91C111
     bool
 
+config LAN9118_PHY
+    bool
+
 config LAN9118
     bool
+    select LAN9118_PHY
     select PTIMER
 
 config NE2000_ISA
diff --git a/hw/net/meson.build b/hw/net/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/meson.build
+++ b/hw/net/meson.build
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMXNET3_PCI', if_true: files('vmxnet3.c'))
 
 system_ss.add(when: 'CONFIG_SMC91C111', if_true: files('smc91c111.c'))
 system_ss.add(when: 'CONFIG_LAN9118', if_true: files('lan9118.c'))
+system_ss.add(when: 'CONFIG_LAN9118_PHY', if_true: files('lan9118_phy.c'))
 system_ss.add(when: 'CONFIG_NE2000_ISA', if_true: files('ne2000-isa.c'))
 system_ss.add(when: 'CONFIG_OPENCORES_ETH', if_true: files('opencores_eth.c'))
 system_ss.add(when: 'CONFIG_XGMAC', if_true: files('xgmac.c'))
--
2.34.1

From: Bernhard Beschow <shentey@gmail.com>

imx_fec models the same PHY as lan9118_phy. The code is almost the same with
imx_fec having more logging and tracing. Merge these improvements into
lan9118_phy and reuse in imx_fec to fix the code duplication.

Some migration state now resides in the new device model, which breaks
migration compatibility for the following machines:
* imx25-pdk
* sabrelite
* mcimx7d-sabre
* mcimx6ul-evk

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-3-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
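[For orientation: after this patch the imx_fec MDIO accessors reduce to
thin wrappers over the shared PHY model, roughly as sketched below. The
phy_connected / phy_num checks are elided here but unchanged:]

static uint32_t imx_phy_read(IMXFECState *s, int reg)
{
    /* ... unconfigured-PHY checks elided ... */
    reg %= 32;
    /* Delegate to the common lan9118_phy register model. */
    return lan9118_phy_read(&s->mii, reg);
}

static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
{
    /* ... unconfigured-PHY checks elided ... */
    reg %= 32;
    lan9118_phy_write(&s->mii, reg, val);
}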
8
target/arm/helper.h | 5 ++
20
include/hw/net/imx_fec.h | 9 ++-
9
target/arm/tcg/translate.h | 3 +
21
hw/net/imx_fec.c | 146 ++++-----------------------------------
10
target/arm/tcg/a64.decode | 6 ++
22
hw/net/lan9118_phy.c | 82 ++++++++++++++++------
11
target/arm/tcg/gengvec.c | 12 ++++
23
hw/net/Kconfig | 1 +
12
target/arm/tcg/translate-a64.c | 128 ++++++---------------------------
24
hw/net/trace-events | 10 +--
13
target/arm/tcg/vec_helper.c | 30 ++++++++
25
5 files changed, 85 insertions(+), 163 deletions(-)
14
6 files changed, 77 insertions(+), 107 deletions(-)
15
26
16
diff --git a/target/arm/helper.h b/target/arm/helper.h
27
diff --git a/include/hw/net/imx_fec.h b/include/hw/net/imx_fec.h
17
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.h
29
--- a/include/hw/net/imx_fec.h
19
+++ b/target/arm/helper.h
30
+++ b/include/hw/net/imx_fec.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i
31
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(IMXFECState, IMX_FEC)
21
DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
#define TYPE_IMX_ENET "imx.enet"
22
DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
23
34
#include "hw/sysbus.h"
24
+DEF_HELPER_FLAGS_4(gvec_addp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+#include "hw/net/lan9118_phy.h"
25
+DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+#include "hw/irq.h"
26
+DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
#include "net/net.h"
27
+DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
28
+
39
#define ENET_EIR 1
29
#ifdef TARGET_AARCH64
40
@@ -XXX,XX +XXX,XX @@ struct IMXFECState {
30
#include "tcg/helper-a64.h"
41
uint32_t tx_descriptor[ENET_TX_RING_NUM];
31
#include "tcg/helper-sve.h"
42
uint32_t tx_ring_num;
32
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
43
44
- uint32_t phy_status;
45
- uint32_t phy_control;
46
- uint32_t phy_advertise;
47
- uint32_t phy_int;
48
- uint32_t phy_int_mask;
49
+ Lan9118PhyState mii;
50
+ IRQState mii_irq;
51
uint32_t phy_num;
52
bool phy_connected;
53
struct IMXFECState *phy_consumer;
54
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
33
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/tcg/translate.h
56
--- a/hw/net/imx_fec.c
35
+++ b/target/arm/tcg/translate.h
57
+++ b/hw/net/imx_fec.c
36
@@ -XXX,XX +XXX,XX @@ void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth_txdescs = {
37
void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
59
38
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
60
static const VMStateDescription vmstate_imx_eth = {
39
61
.name = TYPE_IMX_FEC,
40
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
62
- .version_id = 2,
41
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
63
- .minimum_version_id = 2,
42
+
64
+ .version_id = 3,
65
+ .minimum_version_id = 3,
66
.fields = (const VMStateField[]) {
67
VMSTATE_UINT32_ARRAY(regs, IMXFECState, ENET_MAX),
68
VMSTATE_UINT32(rx_descriptor, IMXFECState),
69
VMSTATE_UINT32(tx_descriptor[0], IMXFECState),
70
- VMSTATE_UINT32(phy_status, IMXFECState),
71
- VMSTATE_UINT32(phy_control, IMXFECState),
72
- VMSTATE_UINT32(phy_advertise, IMXFECState),
73
- VMSTATE_UINT32(phy_int, IMXFECState),
74
- VMSTATE_UINT32(phy_int_mask, IMXFECState),
75
VMSTATE_END_OF_LIST()
76
},
77
.subsections = (const VMStateDescription * const []) {
78
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth = {
79
},
80
};
81
82
-#define PHY_INT_ENERGYON (1 << 7)
83
-#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
84
-#define PHY_INT_FAULT (1 << 5)
85
-#define PHY_INT_DOWN (1 << 4)
86
-#define PHY_INT_AUTONEG_LP (1 << 3)
87
-#define PHY_INT_PARFAULT (1 << 2)
88
-#define PHY_INT_AUTONEG_PAGE (1 << 1)
89
-
90
static void imx_eth_update(IMXFECState *s);
91
43
/*
92
/*
44
* Forward to the isar_feature_* tests given a DisasContext pointer.
93
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
94
* For now we don't handle any GPIO/interrupt line, so the OS will
95
* have to poll for the PHY status.
45
*/
96
*/
46
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
97
-static void imx_phy_update_irq(IMXFECState *s)
47
index XXXXXXX..XXXXXXX 100644
98
+static void imx_phy_update_irq(void *opaque, int n, int level)
48
--- a/target/arm/tcg/a64.decode
99
{
49
+++ b/target/arm/tcg/a64.decode
100
- imx_eth_update(s);
50
@@ -XXX,XX +XXX,XX @@
101
-}
51
&qrrrr_e q rd rn rm ra esz
102
-
52
103
-static void imx_phy_update_link(IMXFECState *s)
53
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
54
+@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
55
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
56
57
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
58
@@ -XXX,XX +XXX,XX @@
59
60
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
61
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
62
+@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
63
64
@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
65
&qrrx_e esz=1 idx=%hlm
66
@@ -XXX,XX +XXX,XX @@ FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
67
FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
68
FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
69
70
+ADDP_s 0101 1110 1111 0001 1011 10 ..... ..... @rr_d
71
+
72
### Advanced SIMD three same
73
74
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
75
@@ -XXX,XX +XXX,XX @@ FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
76
FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
77
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
78
79
+ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
80
+
81
### Advanced SIMD scalar x indexed element
82
83
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
84
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/target/arm/tcg/gengvec.c
87
+++ b/target/arm/tcg/gengvec.c
88
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
89
};
90
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
91
}
92
+
93
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
94
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
95
+{
96
+ static gen_helper_gvec_3 * const fns[4] = {
97
+ gen_helper_gvec_addp_b,
98
+ gen_helper_gvec_addp_h,
99
+ gen_helper_gvec_addp_s,
100
+ gen_helper_gvec_addp_d,
101
+ };
102
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
103
+}
104
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/arm/tcg/translate-a64.c
107
+++ b/target/arm/tcg/translate-a64.c
108
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
109
};
110
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
111
112
+TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
113
+
114
/*
115
* Advanced SIMD scalar/vector x indexed element
116
*/
117
@@ -XXX,XX +XXX,XX @@ TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
118
TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
119
TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
120
121
+static bool trans_ADDP_s(DisasContext *s, arg_rr_e *a)
122
+{
123
+ if (fp_access_check(s)) {
124
+ TCGv_i64 t0 = tcg_temp_new_i64();
125
+ TCGv_i64 t1 = tcg_temp_new_i64();
126
+
127
+ read_vec_element(s, t0, a->rn, 0, MO_64);
128
+ read_vec_element(s, t1, a->rn, 1, MO_64);
129
+ tcg_gen_add_i64(t0, t0, t1);
130
+ write_fp_dreg(s, a->rd, t0);
131
+ }
132
+ return true;
133
+}
134
+
135
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
136
* Note that it is the caller's responsibility to ensure that the
137
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
138
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
139
}
140
}
141
142
-/* AdvSIMD scalar pairwise
143
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
144
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
145
- * | 0 1 | U | 1 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
146
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
147
- */
148
-static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
149
-{
104
-{
150
- int u = extract32(insn, 29, 1);
105
- /* Autonegotiation status mirrors link status. */
151
- int size = extract32(insn, 22, 2);
106
- if (qemu_get_queue(s->nic)->link_down) {
152
- int opcode = extract32(insn, 12, 5);
107
- trace_imx_phy_update_link("down");
153
- int rn = extract32(insn, 5, 5);
108
- s->phy_status &= ~0x0024;
154
- int rd = extract32(insn, 0, 5);
109
- s->phy_int |= PHY_INT_DOWN;
155
-
110
- } else {
156
- /* For some ops (the FP ones), size[1] is part of the encoding.
111
- trace_imx_phy_update_link("up");
157
- * For ADDP strictly it is not but size[1] is always 1 for valid
112
- s->phy_status |= 0x0024;
158
- * encodings.
113
- s->phy_int |= PHY_INT_ENERGYON;
159
- */
114
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
160
- opcode |= (extract32(size, 1, 1) << 5);
115
- }
161
-
116
- imx_phy_update_irq(s);
162
- switch (opcode) {
117
+ imx_eth_update(opaque);
163
- case 0x3b: /* ADDP */
118
}
164
- if (u || size != 3) {
119
165
- unallocated_encoding(s);
120
static void imx_eth_set_link(NetClientState *nc)
166
- return;
121
{
167
- }
122
- imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
168
- if (!fp_access_check(s)) {
123
-}
169
- return;
124
-
170
- }
125
-static void imx_phy_reset(IMXFECState *s)
126
-{
127
- trace_imx_phy_reset();
128
-
129
- s->phy_status = 0x7809;
130
- s->phy_control = 0x3000;
131
- s->phy_advertise = 0x01e1;
132
- s->phy_int_mask = 0;
133
- s->phy_int = 0;
134
- imx_phy_update_link(s);
135
+ lan9118_phy_update_link(&IMX_FEC(qemu_get_nic_opaque(nc))->mii,
136
+ nc->link_down);
137
}
138
139
static uint32_t imx_phy_read(IMXFECState *s, int reg)
140
{
141
- uint32_t val;
142
uint32_t phy = reg / 32;
143
144
if (!s->phy_connected) {
145
@@ -XXX,XX +XXX,XX @@ static uint32_t imx_phy_read(IMXFECState *s, int reg)
146
147
reg %= 32;
148
149
- switch (reg) {
150
- case 0: /* Basic Control */
151
- val = s->phy_control;
152
- break;
153
- case 1: /* Basic Status */
154
- val = s->phy_status;
155
- break;
156
- case 2: /* ID1 */
157
- val = 0x0007;
158
- break;
159
- case 3: /* ID2 */
160
- val = 0xc0d1;
161
- break;
162
- case 4: /* Auto-neg advertisement */
163
- val = s->phy_advertise;
164
- break;
165
- case 5: /* Auto-neg Link Partner Ability */
166
- val = 0x0f71;
167
- break;
168
- case 6: /* Auto-neg Expansion */
169
- val = 1;
170
- break;
171
- case 29: /* Interrupt source. */
172
- val = s->phy_int;
173
- s->phy_int = 0;
174
- imx_phy_update_irq(s);
175
- break;
176
- case 30: /* Interrupt mask */
177
- val = s->phy_int_mask;
178
- break;
179
- case 17:
180
- case 18:
181
- case 27:
182
- case 31:
183
- qemu_log_mask(LOG_UNIMP, "[%s.phy]%s: reg %d not implemented\n",
184
- TYPE_IMX_FEC, __func__, reg);
185
- val = 0;
171
- break;
186
- break;
172
- default:
187
- default:
173
- case 0xc: /* FMAXNMP */
188
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
174
- case 0xd: /* FADDP */
189
- TYPE_IMX_FEC, __func__, reg);
175
- case 0xf: /* FMAXP */
190
- val = 0;
176
- case 0x2c: /* FMINNMP */
191
- break;
177
- case 0x2f: /* FMINP */
178
- unallocated_encoding(s);
179
- return;
180
- }
192
- }
181
-
193
-
182
- if (size == MO_64) {
194
- trace_imx_phy_read(val, phy, reg);
183
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
195
-
184
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
196
- return val;
185
- TCGv_i64 tcg_res = tcg_temp_new_i64();
197
+ return lan9118_phy_read(&s->mii, reg);
186
-
198
}
187
- read_vec_element(s, tcg_op1, rn, 0, MO_64);
199
188
- read_vec_element(s, tcg_op2, rn, 1, MO_64);
200
static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
189
-
201
@@ -XXX,XX +XXX,XX @@ static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
190
- switch (opcode) {
202
191
- case 0x3b: /* ADDP */
203
reg %= 32;
192
- tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
204
193
- break;
205
- trace_imx_phy_write(val, phy, reg);
194
- default:
206
-
195
- case 0xc: /* FMAXNMP */
207
- switch (reg) {
196
- case 0xd: /* FADDP */
208
- case 0: /* Basic Control */
197
- case 0xf: /* FMAXP */
209
- if (val & 0x8000) {
198
- case 0x2c: /* FMINNMP */
210
- imx_phy_reset(s);
199
- case 0x2f: /* FMINP */
211
- } else {
200
- g_assert_not_reached();
212
- s->phy_control = val & 0x7980;
201
- }
213
- /* Complete autonegotiation immediately. */
202
-
214
- if (val & 0x1000) {
203
- write_fp_dreg(s, rd, tcg_res);
215
- s->phy_status |= 0x0020;
204
- } else {
205
- g_assert_not_reached();
206
- }
207
-}
208
-
209
/*
210
* Common SSHR[RA]/USHR[RA] - Shift right (optional rounding/accumulate)
211
*
212
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
213
* adjacent elements being operated on to produce an element in the result.
214
*/
215
if (size == 3) {
216
- TCGv_i64 tcg_res[2];
217
-
218
- for (pass = 0; pass < 2; pass++) {
219
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
220
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
221
- int passreg = (pass == 0) ? rn : rm;
222
-
223
- read_vec_element(s, tcg_op1, passreg, 0, MO_64);
224
- read_vec_element(s, tcg_op2, passreg, 1, MO_64);
225
- tcg_res[pass] = tcg_temp_new_i64();
226
-
227
- switch (opcode) {
228
- case 0x17: /* ADDP */
229
- tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
230
- break;
231
- default:
232
- case 0x58: /* FMAXNMP */
233
- case 0x5a: /* FADDP */
234
- case 0x5e: /* FMAXP */
235
- case 0x78: /* FMINNMP */
236
- case 0x7e: /* FMINP */
237
- g_assert_not_reached();
238
- }
216
- }
239
- }
217
- }
240
-
218
- break;
241
- for (pass = 0; pass < 2; pass++) {
219
- case 4: /* Auto-neg advertisement */
242
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
220
- s->phy_advertise = (val & 0x2d7f) | 0x80;
221
- break;
222
- case 30: /* Interrupt mask */
223
- s->phy_int_mask = val & 0xff;
224
- imx_phy_update_irq(s);
225
- break;
226
- case 17:
227
- case 18:
228
- case 27:
229
- case 31:
230
- qemu_log_mask(LOG_UNIMP, "[%s.phy)%s: reg %d not implemented\n",
231
- TYPE_IMX_FEC, __func__, reg);
232
- break;
233
- default:
234
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
235
- TYPE_IMX_FEC, __func__, reg);
236
- break;
237
- }
238
+ lan9118_phy_write(&s->mii, reg, val);
239
}
240
241
static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
242
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
243
244
s->rx_descriptor = 0;
245
memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
246
-
247
- /* We also reset the PHY */
248
- imx_phy_reset(s);
249
}
250
251
static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
252
@@ -XXX,XX +XXX,XX @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
253
sysbus_init_irq(sbd, &s->irq[0]);
254
sysbus_init_irq(sbd, &s->irq[1]);
255
256
+ qemu_init_irq(&s->mii_irq, imx_phy_update_irq, s, 0);
257
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
258
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
259
+ return;
260
+ }
261
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
262
+
263
qemu_macaddr_default_if_unset(&s->conf.macaddr);
264
265
s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
266
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
267
index XXXXXXX..XXXXXXX 100644
268
--- a/hw/net/lan9118_phy.c
269
+++ b/hw/net/lan9118_phy.c
270
@@ -XXX,XX +XXX,XX @@
271
* Copyright (c) 2009 CodeSourcery, LLC.
272
* Written by Paul Brook
273
*
274
+ * Copyright (c) 2013 Jean-Christophe Dubois. <jcd@tribudubois.net>
275
+ *
276
* This code is licensed under the GNU GPL v2
277
*
278
* Contributions after 2012-01-13 are licensed under the terms of the
279
@@ -XXX,XX +XXX,XX @@
280
#include "hw/resettable.h"
281
#include "migration/vmstate.h"
282
#include "qemu/log.h"
283
+#include "trace.h"
284
285
#define PHY_INT_ENERGYON (1 << 7)
286
#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
287
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
288
289
switch (reg) {
290
case 0: /* Basic Control */
291
- return s->control;
292
+ val = s->control;
293
+ break;
294
case 1: /* Basic Status */
295
- return s->status;
296
+ val = s->status;
297
+ break;
298
case 2: /* ID1 */
299
- return 0x0007;
300
+ val = 0x0007;
301
+ break;
302
case 3: /* ID2 */
303
- return 0xc0d1;
304
+ val = 0xc0d1;
305
+ break;
306
case 4: /* Auto-neg advertisement */
307
- return s->advertise;
308
+ val = s->advertise;
309
+ break;
310
case 5: /* Auto-neg Link Partner Ability */
311
- return 0x0f71;
312
+ val = 0x0f71;
313
+ break;
314
case 6: /* Auto-neg Expansion */
315
- return 1;
316
- /* TODO 17, 18, 27, 29, 30, 31 */
317
+ val = 1;
318
+ break;
319
case 29: /* Interrupt source. */
320
val = s->ints;
321
s->ints = 0;
322
lan9118_phy_update_irq(s);
323
- return val;
324
+ break;
325
case 30: /* Interrupt mask */
326
- return s->int_mask;
327
+ val = s->int_mask;
328
+ break;
329
+ case 17:
330
+ case 18:
331
+ case 27:
332
+ case 31:
333
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
334
+ __func__, reg);
335
+ val = 0;
336
+ break;
337
default:
338
- qemu_log_mask(LOG_GUEST_ERROR,
339
- "lan9118_phy_read: PHY read reg %d\n", reg);
340
- return 0;
341
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
342
+ __func__, reg);
343
+ val = 0;
344
+ break;
345
}
346
+
347
+ trace_lan9118_phy_read(val, reg);
348
+
349
+ return val;
350
}
351
352
void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
353
{
354
+ trace_lan9118_phy_write(val, reg);
355
+
356
switch (reg) {
357
case 0: /* Basic Control */
358
if (val & 0x8000) {
359
lan9118_phy_reset(s);
360
- break;
243
- }
361
- }
244
+ g_assert_not_reached();
362
- s->control = val & 0x7980;
363
- /* Complete autonegotiation immediately. */
364
- if (val & 0x1000) {
365
- s->status |= 0x0020;
366
+ } else {
367
+ s->control = val & 0x7980;
368
+ /* Complete autonegotiation immediately. */
369
+ if (val & 0x1000) {
370
+ s->status |= 0x0020;
371
+ }
372
}
373
break;
374
case 4: /* Auto-neg advertisement */
375
s->advertise = (val & 0x2d7f) | 0x80;
376
break;
377
- /* TODO 17, 18, 27, 31 */
378
case 30: /* Interrupt mask */
379
s->int_mask = val & 0xff;
380
lan9118_phy_update_irq(s);
381
break;
382
+ case 17:
383
+ case 18:
384
+ case 27:
385
+ case 31:
386
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
387
+ __func__, reg);
388
+ break;
389
default:
390
- qemu_log_mask(LOG_GUEST_ERROR,
391
- "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
+                      __func__, reg);
+        break;
     }
 }

@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)

     /* Autonegotiation status mirrors link status. */
     if (link_down) {
+        trace_lan9118_phy_update_link("down");
         s->status &= ~0x0024;
         s->ints |= PHY_INT_DOWN;
     } else {
+        trace_lan9118_phy_update_link("up");
         s->status |= 0x0024;
         s->ints |= PHY_INT_ENERGYON;
         s->ints |= PHY_INT_AUTONEG_COMPLETE;
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)

 void lan9118_phy_reset(Lan9118PhyState *s)
 {
+    trace_lan9118_phy_reset();
+
     s->control = 0x3000;
     s->status = 0x7809;
     s->advertise = 0x01e1;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_phy = {
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (const VMStateField[]) {
-        VMSTATE_UINT16(control, Lan9118PhyState),
         VMSTATE_UINT16(status, Lan9118PhyState),
+        VMSTATE_UINT16(control, Lan9118PhyState),
         VMSTATE_UINT16(advertise, Lan9118PhyState),
         VMSTATE_UINT16(ints, Lan9118PhyState),
         VMSTATE_UINT16(int_mask, Lan9118PhyState),
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/Kconfig
+++ b/hw/net/Kconfig
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_SUN8I_EMAC

 config IMX_FEC
     bool
+    select LAN9118_PHY

 config CADENCE
     bool
diff --git a/hw/net/trace-events b/hw/net/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -XXX,XX +XXX,XX @@ allwinner_sun8i_emac_set_link(bool active) "Set link: active=%u"
 allwinner_sun8i_emac_read(uint64_t offset, uint64_t val) "MMIO read: offset=0x%" PRIx64 " value=0x%" PRIx64
 allwinner_sun8i_emac_write(uint64_t offset, uint64_t val) "MMIO write: offset=0x%" PRIx64 " value=0x%" PRIx64

+# lan9118_phy.c
+lan9118_phy_read(uint16_t val, int reg) "[0x%02x] -> 0x%04" PRIx16
+lan9118_phy_write(uint16_t val, int reg) "[0x%02x] <- 0x%04" PRIx16
+lan9118_phy_update_link(const char *s) "%s"
+lan9118_phy_reset(void) ""
+
 # lance.c
 lance_mem_readw(uint64_t addr, uint32_t ret) "addr=0x%"PRIx64"val=0x%04x"
 lance_mem_writew(uint64_t addr, uint32_t val) "addr=0x%"PRIx64"val=0x%04x"
@@ -XXX,XX +XXX,XX @@ i82596_set_multicast(uint16_t count) "Added %d multicast entries"
 i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"

 # imx_fec.c
-imx_phy_read(uint32_t val, int phy, int reg) "0x%04"PRIx32" <= phy[%d].reg[%d]"
 imx_phy_read_num(int phy, int configured) "read request from unconfigured phy %d (configured %d)"
-imx_phy_write(uint32_t val, int phy, int reg) "0x%04"PRIx32" => phy[%d].reg[%d]"
 imx_phy_write_num(int phy, int configured) "write request to unconfigured phy %d (configured %d)"
-imx_phy_update_link(const char *s) "%s"
-imx_phy_reset(void) ""
 imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
 imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
 imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
--
2.34.1
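As an illustration of the consolidated API, a minimal sketch of how a MAC
model can drive the shared PHY (the lan9118_phy_* entry points are the ones
appearing in the diff above; FooMacState, its "mii" member and
foo_mac_update_irq() are hypothetical placeholders, not code from this
series):

    static void foo_mac_set_link(FooMacState *s, bool link_down)
    {
        /* Delegate link-state bookkeeping to the common PHY model... */
        lan9118_phy_update_link(&s->mii, link_down);
        /* ...then recompute the MAC-side interrupt lines. */
        foo_mac_update_irq(s);
    }

Both lan9118 and imx_fec end up following this pattern once they embed a
Lan9118PhyState, as the "select LAN9118_PHY" Kconfig change suggests.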
From: Bernhard Beschow <shentey@gmail.com>

Turns 0x70 into 0xe0 (== 0x70 << 1), which adds the missing MII_ANLPAR_TX and
fixes the MSB of the selector field to be zero, as specified in the datasheet.

Fixes: 2a424990170b "LAN9118 emulation"
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-4-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
         val = s->advertise;
         break;
     case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0f71;
+        val = 0x0fe1;
         break;
     case 6: /* Auto-neg Expansion */
         val = 1;
--
2.34.1
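A worked decoding of the two constants (illustrative arithmetic, not part of
the patch):

    0x0f71 = 0000 1111 0111 0001   /* old: selector field (bits 4:0) is
                                      0b10001, i.e. MSB wrongly set, and
                                      bit 7 (MII_ANLPAR_TX) is clear */
    0x0fe1 = 0000 1111 1110 0001   /* new: low byte is (0x70 << 1) | 1,
                                      so TX is set and the selector is
                                      the standard IEEE 802.3 0b00001 */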
From: Bernhard Beschow <shentey@gmail.com>

Prefer named constants over magic values for better readability.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20241102125724.532843-5-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/mii.h | 6 +++++
 hw/net/lan9118_phy.c | 63 ++++++++++++++++++++++++++++----------------
 2 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/include/hw/net/mii.h b/include/hw/net/mii.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/net/mii.h
+++ b/include/hw/net/mii.h
@@ -XXX,XX +XXX,XX @@
 #define MII_BMSR_JABBER (1 << 1) /* Jabber detected */
 #define MII_BMSR_EXTCAP (1 << 0) /* Ext-reg capability */

+#define MII_ANAR_RFAULT (1 << 13) /* Say we can detect faults */
 #define MII_ANAR_PAUSE_ASYM (1 << 11) /* Try for asymmetric pause */
 #define MII_ANAR_PAUSE (1 << 10) /* Try for pause */
 #define MII_ANAR_TXFD (1 << 8)
@@ -XXX,XX +XXX,XX @@
 #define MII_ANAR_10FD (1 << 6)
 #define MII_ANAR_10 (1 << 5)
 #define MII_ANAR_CSMACD (1 << 0)
+#define MII_ANAR_SELECT (0x001f) /* Selector bits */

 #define MII_ANLPAR_ACK (1 << 14)
 #define MII_ANLPAR_PAUSEASY (1 << 11) /* can pause asymmetrically */
@@ -XXX,XX +XXX,XX @@
 #define RTL8201CP_PHYID1 0x0000
 #define RTL8201CP_PHYID2 0x8201

+/* SMSC LAN9118 */
+#define SMSCLAN9118_PHYID1 0x0007
+#define SMSCLAN9118_PHYID2 0xc0d1
+
 /* RealTek 8211E */
 #define RTL8211E_PHYID1 0x001c
 #define RTL8211E_PHYID2 0xc915
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@

 #include "qemu/osdep.h"
 #include "hw/net/lan9118_phy.h"
+#include "hw/net/mii.h"
 #include "hw/irq.h"
 #include "hw/resettable.h"
 #include "migration/vmstate.h"
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
     uint16_t val;

     switch (reg) {
-    case 0: /* Basic Control */
+    case MII_BMCR:
         val = s->control;
         break;
-    case 1: /* Basic Status */
+    case MII_BMSR:
         val = s->status;
         break;
-    case 2: /* ID1 */
-        val = 0x0007;
+    case MII_PHYID1:
+        val = SMSCLAN9118_PHYID1;
         break;
-    case 3: /* ID2 */
-        val = 0xc0d1;
+    case MII_PHYID2:
+        val = SMSCLAN9118_PHYID2;
         break;
-    case 4: /* Auto-neg advertisement */
+    case MII_ANAR:
         val = s->advertise;
         break;
-    case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0fe1;
+    case MII_ANLPAR:
+        val = MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
+              MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
+              MII_ANLPAR_10 | MII_ANLPAR_CSMACD;
         break;
-    case 6: /* Auto-neg Expansion */
-        val = 1;
+    case MII_ANER:
+        val = MII_ANER_NWAY;
         break;
     case 29: /* Interrupt source. */
         val = s->ints;
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
     trace_lan9118_phy_write(val, reg);

     switch (reg) {
-    case 0: /* Basic Control */
-        if (val & 0x8000) {
+    case MII_BMCR:
+        if (val & MII_BMCR_RESET) {
             lan9118_phy_reset(s);
         } else {
-            s->control = val & 0x7980;
+            s->control = val & (MII_BMCR_LOOPBACK | MII_BMCR_SPEED100 |
+                                MII_BMCR_AUTOEN | MII_BMCR_PDOWN | MII_BMCR_FD |
+                                MII_BMCR_CTST);
             /* Complete autonegotiation immediately. */
-            if (val & 0x1000) {
-                s->status |= 0x0020;
+            if (val & MII_BMCR_AUTOEN) {
+                s->status |= MII_BMSR_AN_COMP;
             }
         }
         break;
-    case 4: /* Auto-neg advertisement */
-        s->advertise = (val & 0x2d7f) | 0x80;
+    case MII_ANAR:
+        s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
+                               MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
+                               MII_ANAR_SELECT))
+                       | MII_ANAR_TX;
         break;
     case 30: /* Interrupt mask */
         s->int_mask = val & 0xff;
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
     /* Autonegotiation status mirrors link status. */
     if (link_down) {
         trace_lan9118_phy_update_link("down");
-        s->status &= ~0x0024;
+        s->status &= ~(MII_BMSR_AN_COMP | MII_BMSR_LINK_ST);
         s->ints |= PHY_INT_DOWN;
     } else {
         trace_lan9118_phy_update_link("up");
-        s->status |= 0x0024;
+        s->status |= MII_BMSR_AN_COMP | MII_BMSR_LINK_ST;
         s->ints |= PHY_INT_ENERGYON;
         s->ints |= PHY_INT_AUTONEG_COMPLETE;
     }
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_reset(Lan9118PhyState *s)
 {
     trace_lan9118_phy_reset();

-    s->control = 0x3000;
-    s->status = 0x7809;
-    s->advertise = 0x01e1;
+    s->control = MII_BMCR_AUTOEN | MII_BMCR_SPEED100;
+    s->status = MII_BMSR_100TX_FD
+              | MII_BMSR_100TX_HD
+              | MII_BMSR_10T_FD
+              | MII_BMSR_10T_HD
+              | MII_BMSR_AUTONEG
+              | MII_BMSR_EXTCAP;
+    s->advertise = MII_ANAR_TXFD
+                 | MII_ANAR_TX
+                 | MII_ANAR_10FD
+                 | MII_ANAR_10
+                 | MII_ANAR_CSMACD;
     s->int_mask = 0;
     s->ints = 0;
     lan9118_phy_update_link(s, s->link_down);
--
2.34.1
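For reference, the new ANAR write mask works out as follows (arithmetic shown
for illustration only, not part of the patch):

    MII_ANAR_RFAULT     (1 << 13) = 0x2000
    MII_ANAR_PAUSE_ASYM (1 << 11) = 0x0800
    MII_ANAR_PAUSE      (1 << 10) = 0x0400
    MII_ANAR_10FD       (1 << 6)  = 0x0040
    MII_ANAR_10         (1 << 5)  = 0x0020
    MII_ANAR_SELECT               = 0x001f
                 bitwise OR total = 0x2c7f

That is one bit short of the old magic number 0x2d7f: bit 8 (MII_ANAR_TXFD)
is no longer guest-writable here. The next patch in the series puts that bit
back.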
From: Bernhard Beschow <shentey@gmail.com>

The real device advertises this mode and the device model already advertises
100 mbps half duplex and 10 mbps full+half duplex. So advertise this mode to
make the model more realistic.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20241102125724.532843-6-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
         break;
     case MII_ANAR:
         s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
-                               MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
-                               MII_ANAR_SELECT))
+                               MII_ANAR_PAUSE | MII_ANAR_TXFD | MII_ANAR_10FD |
+                               MII_ANAR_10 | MII_ANAR_SELECT))
                        | MII_ANAR_TX;
         break;
     case 30: /* Interrupt mask */
--
2.34.1
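With MII_ANAR_TXFD restored, the writable set is again (illustrative
arithmetic, shown for comparison with the pre-series magic number):

    0x2000 | 0x0800 | 0x0400 | 0x0100 | 0x0040 | 0x0020 | 0x001f = 0x2d7f

i.e. exactly the 0x2d7f mask the original lan9118 code used, now spelled in
named MII constants.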
For IEEE fused multiply-add, the (0 * inf) + NaN case should raise
Invalid for the multiplication of 0 by infinity. Currently we handle
this in the per-architecture ifdef ladder in pickNaNMulAdd().
However, since this isn't really architecture specific we can hoist
it up to the generic code.

For the cases where the infzero test in pickNaNMulAdd was
returning 2, we can delete the check entirely and allow the
code to fall into the normal pick-a-NaN handling, because this
will return 2 anyway (input 'c' being the only NaN in this case).
For the cases where infzero was returning 3 to indicate "return
the default NaN", we must retain that "return 3".

For Arm, this looks like it might be a behaviour change because we
used to set float_flag_invalid | float_flag_invalid_imz only if C is
a quiet NaN. However, it is not, because Arm target code never looks
at float_flag_invalid_imz, and for the (0 * inf) + SNaN case we
already raised float_flag_invalid via the "abc_mask &
float_cmask_snan" check in pick_nan_muladd.

For any target architecture using the "default implementation" at the
bottom of the ifdef, this is a behaviour change, but it fixes a bug
(where we failed to raise the Invalid exception for (0 * inf) +
QNaN). The architectures using the default case are:
 * hppa
 * i386
 * sh4
 * tricore

The x86, Tricore and SH4 CPU architecture manuals are clear that this
should have raised Invalid; HPPA is a bit vaguer but still seems
clear enough.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-2-peter.maydell@linaro.org
---
 fpu/softfloat-parts.c.inc | 13 +++++++------
 fpu/softfloat-specialize.c.inc | 29 +----------------------------
 2 files changed, 8 insertions(+), 34 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
                                             int ab_mask, int abc_mask)
 {
     int which;
+    bool infzero = (ab_mask == float_cmask_infzero);

     if (unlikely(abc_mask & float_cmask_snan)) {
         float_raise(float_flag_invalid | float_flag_invalid_snan, s);
     }

-    which = pickNaNMulAdd(a->cls, b->cls, c->cls,
-                          ab_mask == float_cmask_infzero, s);
+    if (infzero) {
+        /* This is (0 * inf) + NaN or (inf * 0) + NaN */
+        float_raise(float_flag_invalid | float_flag_invalid_imz, s);
+    }
+
+    which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);

     if (s->default_nan_mode || which == 3) {
-        /*
-         * Note that this check is after pickNaNMulAdd so that function
-         * has an opportunity to set the Invalid flag for infzero.
-         */
         parts_default_nan(a, s);
         return a;
     }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * the default NaN
      */
     if (infzero && is_qnan(c_cls)) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
         return 3;
     }

@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * case sets InvalidOp and returns the default NaN
      */
     if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
         return 3;
     }
     /* Prefer sNaN over qNaN, in the a, b, c order. */
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
      * case sets InvalidOp and returns the input value 'c'
      */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-        return 2;
-    }
     /* Prefer sNaN over qNaN, in the c, a, b order. */
     if (is_snan(c_cls)) {
         return 2;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
      * case sets InvalidOp and returns the input value 'c'
      */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-        return 2;
-    }
+
     /* Prefer sNaN over qNaN, in the c, a, b order. */
     if (is_snan(c_cls)) {
         return 2;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * to return an input NaN if we have one (ie c) rather than generating
      * a default NaN
      */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-        return 2;
-    }

     /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
      * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         return 1;
     }
 #elif defined(TARGET_RISCV)
-    /* For RISC-V, InvalidOp is set when multiplicands are Inf and zero */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-    }
     return 3; /* default NaN */
 #elif defined(TARGET_S390X)
     if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
         return 3;
     }

@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         return 2;
     }
 #elif defined(TARGET_SPARC)
-    /* For (inf,0,nan) return c. */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-        return 2;
-    }
     /* Prefer SNaN over QNaN, order C, B, A. */
     if (is_snan(c_cls)) {
         return 2;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
      * an input NaN if we have one (ie c).
      */
-    if (infzero) {
-        float_raise(float_flag_invalid | float_flag_invalid_imz, status);
-        return 2;
-    }
     if (status->use_first_nan) {
         if (is_nan(a_cls)) {
             return 0;
--
2.34.1
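To see the guaranteed behaviour concretely, a test-style sketch against the
public softfloat API (a sketch only, assuming a target built without
default_nan_mode; not part of the patch):

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static void check_infzero_raises_invalid(void)
    {
        float_status st = { 0 };
        float64 r;

        /* (0 * Inf) + QNaN: the multiplication alone is invalid, so the
         * generic pick_nan_muladd now raises Invalid before any
         * per-target NaN selection happens. */
        r = float64_muladd(float64_zero, float64_infinity,
                           float64_default_nan(&st), 0, &st);
        g_assert(get_float_exception_flags(&st) & float_flag_invalid);
        g_assert(float64_is_any_nan(r));
    }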
If the target sets default_nan_mode then we're always going to return
the default NaN, and pickNaNMulAdd() no longer has any side effects.
For consistency with pickNaN(), check for default_nan_mode before
calling pickNaNMulAdd().

When we convert pickNaNMulAdd() to allow runtime selection of the NaN
propagation rule, this means we won't have to make the targets which
use default_nan_mode also set a propagation rule.

Since RISC-V always uses default_nan_mode, this allows us to remove
its ifdef case from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-3-peter.maydell@linaro.org
---
 fpu/softfloat-parts.c.inc | 8 ++++++--
 fpu/softfloat-specialize.c.inc | 9 +++++++--
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         float_raise(float_flag_invalid | float_flag_invalid_imz, s);
     }

-    which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+    if (s->default_nan_mode) {
+        which = 3;
+    } else {
+        which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+    }

-    if (s->default_nan_mode || which == 3) {
+    if (which == 3) {
         parts_default_nan(a, s);
         return a;
     }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, float_status *status)
 {
+    /*
+     * We guarantee not to require the target to tell us how to
+     * pick a NaN if we're always returning the default NaN.
+     * But if we're not in default-NaN mode then the target must
+     * specify.
+     */
+    assert(!status->default_nan_mode);
 #if defined(TARGET_ARM)
     /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
      * the default NaN
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         return 1;
     }
-#elif defined(TARGET_RISCV)
-    return 3; /* default NaN */
 #elif defined(TARGET_S390X)
     if (infzero) {
         return 3;
--
2.34.1
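A one-line sketch of what this buys at target level ("env->fp_status" stands
in for whatever per-CPU float_status a target keeps; set_default_nan_mode()
is the existing helper in include/fpu/softfloat-helpers.h):

    /* A target in default-NaN mode never reaches pickNaNMulAdd(),
     * so it need not configure any muladd NaN-selection rule: */
    set_default_nan_mode(true, &env->fp_status);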
IEEE 754 does not define a fixed rule for what NaN to return in
the case of a fused multiply-add of inf * 0 + NaN. Different
architectures thus do different things:
 * some return the default NaN
 * some return the input NaN
 * Arm returns the default NaN if the input NaN is quiet,
   and the input NaN if it is signalling

We want to make this logic be runtime-selected rather than
hardcoded into the binary, because:
 * this will let us have multiple targets in one QEMU binary
 * the Arm FEAT_AFP architectural feature includes letting
   the guest select a NaN propagation rule at runtime

In this commit we add an enum for the propagation rule, the field in
float_status, and the corresponding getters and setters. We change
pickNaNMulAdd to honour this, but because all targets still leave
this field at its default 0 value, the fallback logic will pick the
rule type with the old ifdef ladder.

Note that four architectures both use the muladd softfloat functions
and did not have a branch of the ifdef ladder to specify their
behaviour (and so were ending up with the "default" case, probably
wrongly): i386, HPPA, SH4 and Tricore. SH4 and Tricore both set
default_nan_mode, and so will never get into pickNaNMulAdd(). For
HPPA and i386 we retain the same behaviour as the old default case,
which is to not ever return the default NaN. This might not be
correct but it is not a behaviour change.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-4-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h | 11 ++++
 include/fpu/softfloat-types.h | 23 +++++++++
 fpu/softfloat-specialize.c.inc | 91 ++++++++++++++++++++++-----------
 3 files changed, 95 insertions(+), 30 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
     status->float_2nan_prop_rule = rule;
 }

+static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
+                                             float_status *status)
+{
+    status->float_infzeronan_rule = rule;
+}
+
 static inline void set_flush_to_zero(bool val, float_status *status)
 {
     status->flush_to_zero = val;
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
     return status->float_2nan_prop_rule;
 }

+static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
+{
+    return status->float_infzeronan_rule;
+}
+
 static inline bool get_flush_to_zero(float_status *status)
 {
     return status->flush_to_zero;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
     float_2nan_prop_x87,
 } Float2NaNPropRule;

+/*
+ * Rule for result of fused multiply-add 0 * Inf + NaN.
+ * This must be a NaN, but implementations differ on whether this
+ * is the input NaN or the default NaN.
+ *
+ * You don't need to set this if default_nan_mode is enabled.
+ * When not in default-NaN mode, it is an error for the target
+ * not to set the rule in float_status if it uses muladd, and we
+ * will assert if we need to handle an input NaN and no rule was
+ * selected.
+ */
+typedef enum __attribute__((__packed__)) {
+    /* No propagation rule specified */
+    float_infzeronan_none = 0,
+    /* Result is never the default NaN (so always the input NaN) */
+    float_infzeronan_dnan_never,
+    /* Result is always the default NaN */
+    float_infzeronan_dnan_always,
+    /* Result is the default NaN if the input NaN is quiet */
+    float_infzeronan_dnan_if_qnan,
+} FloatInfZeroNaNRule;
+
 /*
  * Floating Point Status. Individual architectures may maintain
  * several versions of float_status for different functions. The
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
     FloatRoundMode float_rounding_mode;
     FloatX80RoundPrec floatx80_rounding_precision;
     Float2NaNPropRule float_2nan_prop_rule;
+    FloatInfZeroNaNRule float_infzeronan_rule;
     bool tininess_before_rounding;
     /* should denormalised results go to zero and set the inexact flag? */
     bool flush_to_zero;
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, float_status *status)
 {
+    FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
+
     /*
      * We guarantee not to require the target to tell us how to
      * pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * specify.
      */
     assert(!status->default_nan_mode);
+
+    if (rule == float_infzeronan_none) {
+        /*
+         * Temporarily fall back to ifdef ladder
+         */
 #if defined(TARGET_ARM)
-    /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
-     * the default NaN
-     */
-    if (infzero && is_qnan(c_cls)) {
-        return 3;
+        /*
+         * For ARM, the (inf,zero,qnan) case returns the default NaN,
+         * but (inf,zero,snan) returns the input NaN.
+         */
+        rule = float_infzeronan_dnan_if_qnan;
+#elif defined(TARGET_MIPS)
+        if (snan_bit_is_one(status)) {
+            /*
+             * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
+             * case sets InvalidOp and returns the default NaN
+             */
+            rule = float_infzeronan_dnan_always;
+        } else {
+            /*
+             * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
+             * case sets InvalidOp and returns the input value 'c'
+             */
+            rule = float_infzeronan_dnan_never;
+        }
+#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
+      defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+      defined(TARGET_I386) || defined(TARGET_LOONGARCH)
+        /*
+         * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
+         * case sets InvalidOp and returns the input value 'c'
+         */
+        /*
+         * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
+         * to return an input NaN if we have one (ie c) rather than generating
+         * a default NaN
+         */
+        rule = float_infzeronan_dnan_never;
+#elif defined(TARGET_S390X)
+        rule = float_infzeronan_dnan_always;
+#endif
     }

+    if (infzero) {
+        /*
+         * Inf * 0 + NaN -- some implementations return the default NaN here,
+         * and some return the input NaN.
+         */
+        switch (rule) {
+        case float_infzeronan_dnan_never:
+            return 2;
+        case float_infzeronan_dnan_always:
+            return 3;
+        case float_infzeronan_dnan_if_qnan:
+            return is_qnan(c_cls) ? 3 : 2;
+        default:
+            g_assert_not_reached();
+        }
+    }
+
+#if defined(TARGET_ARM)
+
     /* This looks different from the ARM ARM pseudocode, because the ARM ARM
      * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
      */
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 #elif defined(TARGET_MIPS)
     if (snan_bit_is_one(status)) {
-        /*
-         * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
-         * case sets InvalidOp and returns the default NaN
-         */
-        if (infzero) {
-            return 3;
-        }
         /* Prefer sNaN over qNaN, in the a, b, c order. */
         if (is_snan(a_cls)) {
             return 0;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
             return 2;
         }
     } else {
-        /*
-         * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
-         * case sets InvalidOp and returns the input value 'c'
-         */
         /* Prefer sNaN over qNaN, in the c, a, b order. */
         if (is_snan(c_cls)) {
             return 2;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 #elif defined(TARGET_LOONGARCH64)
-    /*
-     * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
-     * case sets InvalidOp and returns the input value 'c'
-     */
-
     /* Prefer sNaN over qNaN, in the c, a, b order. */
     if (is_snan(c_cls)) {
         return 2;
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         return 1;
     }
 #elif defined(TARGET_PPC)
-    /* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
-     * to return an input NaN if we have one (ie c) rather than generating
-     * a default NaN
-     */
-
     /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
      * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
      */
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         return 1;
     }
 #elif defined(TARGET_S390X)
-    if (infzero) {
-        return 3;
-    }
-
     if (is_snan(a_cls)) {
         return 0;
     } else if (is_snan(b_cls)) {
389
-
390
- tcg_rd = cpu_reg(s, rd);
391
- read_vec_element(s, tcg_rd, rn, element, size | (is_signed ? MO_SIGN : 0));
392
- if (is_signed && !is_q) {
393
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
394
- }
395
-}
396
-
397
-/* AdvSIMD copy
398
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
399
- * +---+---+----+-----------------+------+---+------+---+------+------+
400
- * | 0 | Q | op | 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
401
- * +---+---+----+-----------------+------+---+------+---+------+------+
402
- */
403
-static void disas_simd_copy(DisasContext *s, uint32_t insn)
404
-{
405
- int rd = extract32(insn, 0, 5);
406
- int rn = extract32(insn, 5, 5);
407
- int imm4 = extract32(insn, 11, 4);
408
- int op = extract32(insn, 29, 1);
409
- int is_q = extract32(insn, 30, 1);
410
- int imm5 = extract32(insn, 16, 5);
411
-
412
- if (op) {
413
- if (is_q) {
414
- /* INS (element) */
415
- handle_simd_inse(s, rd, rn, imm4, imm5);
416
- } else {
417
- unallocated_encoding(s);
418
- }
419
- } else {
420
- switch (imm4) {
421
- case 0:
422
- /* DUP (element - vector) */
423
- handle_simd_dupe(s, is_q, rd, rn, imm5);
424
- break;
425
- case 1:
426
- /* DUP (general) */
427
- handle_simd_dupg(s, is_q, rd, rn, imm5);
428
- break;
429
- case 3:
430
- if (is_q) {
431
- /* INS (general) */
432
- handle_simd_insg(s, rd, rn, imm5);
433
- } else {
434
- unallocated_encoding(s);
435
- }
436
- break;
437
- case 5:
438
- case 7:
439
- /* UMOV/SMOV (is_q indicates 32/64; imm4 indicates signedness) */
440
- handle_simd_umov_smov(s, is_q, (imm4 == 5), rn, rd, imm5);
441
- break;
442
- default:
443
- unallocated_encoding(s);
444
- break;
445
- }
446
- }
447
-}
448
-
449
/* AdvSIMD modified immediate
450
* 31 30 29 28 19 18 16 15 12 11 10 9 5 4 0
451
* +---+---+----+---------------------+-----+-------+----+---+-------+------+
452
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
453
}
454
}
455
456
-/* AdvSIMD scalar copy
457
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
458
- * +-----+----+-----------------+------+---+------+---+------+------+
459
- * | 0 1 | op | 1 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
460
- * +-----+----+-----------------+------+---+------+---+------+------+
461
- */
462
-static void disas_simd_scalar_copy(DisasContext *s, uint32_t insn)
463
-{
464
- int rd = extract32(insn, 0, 5);
465
- int rn = extract32(insn, 5, 5);
466
- int imm4 = extract32(insn, 11, 4);
467
- int imm5 = extract32(insn, 16, 5);
468
- int op = extract32(insn, 29, 1);
469
-
470
- if (op != 0 || imm4 != 0) {
471
- unallocated_encoding(s);
472
- return;
473
- }
474
-
475
- /* DUP (element, scalar) */
476
- handle_simd_dupes(s, rd, rn, imm5);
477
-}
478
-
479
/* AdvSIMD scalar pairwise
480
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
481
* +-----+---+-----------+------+-----------+--------+-----+------+------+
482
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
483
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
484
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
485
{ 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
486
- { 0x0e000400, 0x9fe08400, disas_simd_copy },
487
{ 0x0f000000, 0x9f000400, disas_simd_indexed }, /* vector indexed */
488
/* simd_mod_imm decode is a subset of simd_shift_imm, so must precede it */
489
{ 0x0f000400, 0x9ff80400, disas_simd_mod_imm },
490
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
491
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
492
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
493
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
494
- { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
495
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
496
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
497
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
498
--
256
--
499
2.34.1
257
2.34.1
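Both fragments above turn on the same idea: each target's NaN choice for a
fused multiply-add is just an operand ordering. As a minimal standalone
sketch (a hypothetical helper, not the QEMU implementation), the Arm-style
"prefer sNaN over qNaN, scanning operands in c, a, b order" preference that
keeps appearing in these ladders amounts to:

    /*
     * Sketch: choose which muladd operand's NaN to propagate, Arm-style.
     * Signalling NaNs win over quiet NaNs, and operands are scanned in
     * c, a, b order.  Returns 0 for a, 1 for b, 2 for c.  The boolean
     * parameters stand in for the real is_snan()/is_qnan() FloatClass
     * predicates.
     */
    static int pick_nan_cab(bool a_snan, bool b_snan, bool c_snan,
                            bool a_qnan, bool b_qnan, bool c_qnan)
    {
        if (c_snan) {
            return 2;
        } else if (a_snan) {
            return 0;
        } else if (b_snan) {
            return 1;
        } else if (c_qnan) {
            return 2;
        } else if (a_qnan) {
            return 0;
        } else {
            return 1;
        }
    }

Later patches in this series replace these hand-written ladders with a
single data-driven rule value carried in float_status.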
Explicitly set a rule in the softfloat tests for the inf-zero-nan
muladd special case.  In meson.build we put -DTARGET_ARM in fpcflags,
and so we should select here the Arm rule of
float_infzeronan_dnan_if_qnan.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241202131347.498124-5-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c | 5 +++++
 tests/fp/fp-test.c  | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
 {
     bench_func_t f;
 
+    /*
+     * These implementation-defined choices for various things IEEE
+     * doesn't specify match those used by the Arm architecture.
+     */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
 
     f = bench_funcs[operation][precision];
     g_assert(f);
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
 {
     unsigned int i;
 
+    /*
+     * These implementation-defined choices for various things IEEE
+     * doesn't specify match those used by the Arm architecture.
+     */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
 
     genCases_setLevel(test_level);
     verCases_maxErrorCount = n_max_errors;
--
2.34.1
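As a usage sketch (not part of the patch), any self-contained harness that
wants Arm-flavoured behaviour now has to make the same pair of calls the
tests make above; the helper name here is invented, but the two setters are
the real API:

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static float_status make_arm_like_status(void)
    {
        float_status st = { 0 };

        /* Match the Arm choices for IEEE-unspecified behaviour, as
         * fp-test and fp-bench now do for &qsf / &soft_status. */
        set_float_2nan_prop_rule(float_2nan_prop_s_ab, &st);
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);
        return st;
    }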
Set the FloatInfZeroNaNRule explicitly for the Arm target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
---
 target/arm/cpu.c               | 3 +++
 fpu/softfloat-specialize.c.inc | 8 +-------
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
  * * tininess-before-rounding
  * * 2-input NaN propagation prefers SNaN over QNaN, and then
  *   operand A over operand B (see FPProcessNaNs() pseudocode)
+ * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
+ *   and the input NaN if it is signalling
  */
 static void arm_set_default_fp_behaviours(float_status *s)
 {
     set_float_detect_tininess(float_tininess_before_rounding, s);
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
+    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
 }
 
 static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     /*
      * Temporarily fall back to ifdef ladder
      */
-#if defined(TARGET_ARM)
-    /*
-     * For ARM, the (inf,zero,qnan) case returns the default NaN,
-     * but (inf,zero,snan) returns the input NaN.
-     */
-    rule = float_infzeronan_dnan_if_qnan;
-#elif defined(TARGET_MIPS)
+#if defined(TARGET_MIPS)
     if (snan_bit_is_one(status)) {
         /*
          * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
--
2.34.1
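To make the dnan_if_qnan rule concrete, here is an illustrative fragment
(the function name is invented; the raw NaN bit patterns assume the usual
"quiet bit set means quiet" convention, which Arm follows):

    static void demo_arm_infzeronan(void)
    {
        float_status st = { 0 };
        float64 qnan = make_float64(0x7ff8000000000000ULL); /* quiet NaN */
        float64 snan = make_float64(0x7ff0000000000001ULL); /* signalling NaN */
        float64 r;

        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);

        /* inf * 0 + qNaN: the default NaN, plus Invalid Operation */
        r = float64_muladd(float64_infinity, float64_zero, qnan, 0, &st);

        /* inf * 0 + sNaN: the input NaN (quieted), plus Invalid Operation */
        r = float64_muladd(float64_infinity, float64_zero, snan, 0, &st);
        (void)r;
    }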
Set the FloatInfZeroNaNRule explicitly for s390, so we
can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-7-peter.maydell@linaro.org
---
 target/s390x/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
         set_float_detect_tininess(float_tininess_before_rounding,
                                   &env->fpu_status);
         set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
+        set_float_infzeronan_rule(float_infzeronan_dnan_always,
+                                  &env->fpu_status);
         /* fall through */
     case RESET_TYPE_S390_CPU_NORMAL:
         env->psw.mask &= ~PSW_MASK_RI;
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * a default NaN
      */
     rule = float_infzeronan_dnan_never;
-#elif defined(TARGET_S390X)
-    rule = float_infzeronan_dnan_always;
 #endif
 }
 
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the PPC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-8-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c          | 7 +++++++
 fpu/softfloat-specialize.c.inc | 7 +------
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+    /*
+     * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
+     * to return an input NaN if we have one (ie c) rather than generating
+     * a default NaN
+     */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);
 
     for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
         ppc_spr_t *spr = &env->spr_cb[i];
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
          */
         rule = float_infzeronan_dnan_never;
     }
-#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
+#elif defined(TARGET_SPARC) || \
     defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
     defined(TARGET_I386) || defined(TARGET_LOONGARCH)
     /*
      * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
      * case sets InvalidOp and returns the input value 'c'
      */
-    /*
-     * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
-     * to return an input NaN if we have one (ie c) rather than generating
-     * a default NaN
-     */
     rule = float_infzeronan_dnan_never;
 #endif
 }
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the MIPS target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-9-peter.maydell@linaro.org
---
 target/mips/fpu_helper.h       |  9 +++++++++
 target/mips/msa.c              |  4 ++++
 fpu/softfloat-specialize.c.inc | 16 +---------------
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_flush_mode(CPUMIPSState *env)
 static inline void restore_snan_bit_mode(CPUMIPSState *env)
 {
     bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
+    FloatInfZeroNaNRule izn_rule;
 
     /*
      * With nan2008, SNaNs are silenced in the usual way.
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
      */
     set_snan_bit_is_one(!nan2008, &env->active_fpu.fp_status);
     set_default_nan_mode(!nan2008, &env->active_fpu.fp_status);
+    /*
+     * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
+     * case sets InvalidOp and returns the default NaN.
+     * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
+     * case sets InvalidOp and returns the input value 'c'.
+     */
+    izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
+    set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
 }
 
 static inline void restore_fp_status(CPUMIPSState *env)
diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
 
     /* set proper signalling bit meaning ("1" means "quiet") */
     set_snan_bit_is_one(0, &env->active_tc.msa_fp_status);
+
+    /* Inf * 0 + NaN returns the input NaN */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never,
+                              &env->active_tc.msa_fp_status);
 }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     /*
      * Temporarily fall back to ifdef ladder
      */
-#if defined(TARGET_MIPS)
-    if (snan_bit_is_one(status)) {
-        /*
-         * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
-         * case sets InvalidOp and returns the default NaN
-         */
-        rule = float_infzeronan_dnan_always;
-    } else {
-        /*
-         * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
-         * case sets InvalidOp and returns the input value 'c'
-         */
-        rule = float_infzeronan_dnan_never;
-    }
-#elif defined(TARGET_SPARC) || \
+#if defined(TARGET_SPARC) || \
     defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
     defined(TARGET_I386) || defined(TARGET_LOONGARCH)
     /*
--
2.34.1
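The interesting property here is that the rule is now a function of
guest-visible state rather than of the QEMU build: any update of FCR31 can
flip NAN2008, so the rule must be re-derived at runtime. An illustrative
caller, not the actual QEMU code path, might look like:

    /* Hypothetical helper: the point is only that restore_snan_bit_mode()
     * is what re-derives the NaN behaviour after FCR31 changes. */
    static void store_fcr31_and_refresh(CPUMIPSState *env, uint32_t val)
    {
        env->active_fpu.fcr31 = val;
        /* Re-derives snan_bit_is_one, default_nan_mode and, with this
         * patch, the inf-zero-NaN rule from the FCR31.NAN2008 bit. */
        restore_snan_bit_mode(env);
    }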
Set the FloatInfZeroNaNRule explicitly for the SPARC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-10-peter.maydell@linaro.org
---
 target/sparc/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 3 +--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
      * the CPU state struct so it won't get zeroed on reset.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+    /* For inf * 0 + NaN, return the input NaN */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 
     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     /*
      * Temporarily fall back to ifdef ladder
      */
-#if defined(TARGET_SPARC) || \
-    defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
     defined(TARGET_I386) || defined(TARGET_LOONGARCH)
     /*
      * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the xtensa target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-11-peter.maydell@linaro.org
---
 target/xtensa/cpu.c            | 2 ++
 fpu/softfloat-specialize.c.inc | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
     reset_mmu(env);
     cs->halted = env->runstall;
 #endif
+    /* For inf * 0 + NaN, return the input NaN */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_no_signaling_nans(!dfpu, &env->fp_status);
     xtensa_use_first_nan(env, !dfpu);
 }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     /*
      * Temporarily fall back to ifdef ladder
      */
-#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_HPPA) || \
     defined(TARGET_I386) || defined(TARGET_LOONGARCH)
     /*
      * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the x86 target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-12-peter.maydell@linaro.org
---
 target/i386/tcg/fpu_helper.c   | 7 +++++++
 fpu/softfloat-specialize.c.inc | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->mmx_status);
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->sse_status);
+    /*
+     * Only SSE has multiply-add instructions. In the SDM Section 14.5.2
+     * "Fused-Multiply-ADD (FMA) Numeric Behavior" the NaN handling is
+     * specified -- for 0 * inf + NaN the input NaN is selected, and if
+     * there are multiple input NaNs they are selected in the order a, b, c.
+     */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
 }
 
 static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      * Temporarily fall back to ifdef ladder
      */
 #if defined(TARGET_HPPA) || \
-    defined(TARGET_I386) || defined(TARGET_LOONGARCH)
+    defined(TARGET_LOONGARCH)
     /*
      * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
      * case sets InvalidOp and returns the input value 'c'
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the loongarch target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-13-peter.maydell@linaro.org
---
 target/loongarch/tcg/fpu_helper.c | 5 +++++
 fpu/softfloat-specialize.c.inc    | 7 +------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
                               &env->fp_status);
     set_flush_to_zero(0, &env->fp_status);
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+    /*
+     * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
+     * case sets InvalidOp and returns the input value 'c'
+     */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 }
 
 int ieee_ex_to_loongarch(int xcpt)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     /*
      * Temporarily fall back to ifdef ladder
      */
-#if defined(TARGET_HPPA) || \
-    defined(TARGET_LOONGARCH)
-    /*
-     * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
-     * case sets InvalidOp and returns the input value 'c'
-     */
+#if defined(TARGET_HPPA)
     rule = float_infzeronan_dnan_never;
 #endif
 }
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the HPPA target,
so we can remove the ifdef from pickNaNMulAdd().

As this is the last target to be converted to explicitly setting
the rule, we can remove the fallback code in pickNaNMulAdd()
entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-14-peter.maydell@linaro.org
---
 target/hppa/fpu_helper.c       |  2 ++
 fpu/softfloat-specialize.c.inc | 13 +------------
 2 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
      * HPPA does not implement a CPU reset method at all...
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+    /* For inf * 0 + NaN, return the input NaN */
+    set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 }
 
 void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, float_status *status)
 {
-    FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
-
     /*
      * We guarantee not to require the target to tell us how to
      * pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
      */
     assert(!status->default_nan_mode);
 
-    if (rule == float_infzeronan_none) {
-        /*
-         * Temporarily fall back to ifdef ladder
-         */
-#if defined(TARGET_HPPA)
-        rule = float_infzeronan_dnan_never;
-#endif
-    }
-
     if (infzero) {
         /*
          * Inf * 0 + NaN -- some implementations return the default NaN here,
          * and some return the input NaN.
          */
-        switch (rule) {
+        switch (status->float_infzeronan_rule) {
         case float_infzeronan_dnan_never:
             return 2;
         case float_infzeronan_dnan_always:
--
2.34.1
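With the fallback gone, what the infzero path does is a pure function of
the runtime rule. Schematically (a sketch only, using pickNaNMulAdd()'s
return convention of 0/1/2 for operands a/b/c and 3 for the default NaN;
for inf * 0 + NaN the NaN can only be the addend c):

    static int infzero_result(FloatInfZeroNaNRule rule, bool c_is_qnan)
    {
        switch (rule) {
        case float_infzeronan_dnan_never:
            return 2;                  /* propagate the input NaN (c) */
        case float_infzeronan_dnan_always:
            return 3;                  /* use the default NaN */
        case float_infzeronan_dnan_if_qnan:
            return c_is_qnan ? 3 : 2;  /* Arm: default NaN only for a qNaN */
        default:
            g_assert_not_reached();    /* the target must set a rule */
        }
    }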
The new implementation of pickNaNMulAdd() will find it convenient
to know whether at least one of the three arguments to the muladd
was a signaling NaN.  We already calculate that in the caller,
so pass it in as a new bool have_snan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-15-peter.maydell@linaro.org
---
 fpu/softfloat-parts.c.inc      | 5 +++--
 fpu/softfloat-specialize.c.inc | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
 {
     int which;
     bool infzero = (ab_mask == float_cmask_infzero);
+    bool have_snan = (abc_mask & float_cmask_snan);
 
-    if (unlikely(abc_mask & float_cmask_snan)) {
+    if (unlikely(have_snan)) {
         float_raise(float_flag_invalid | float_flag_invalid_snan, s);
     }
 
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
     if (s->default_nan_mode) {
         which = 3;
     } else {
-        which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+        which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
     }
 
     if (which == 3) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 | Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
 *----------------------------------------------------------------------------*/
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
-                         bool infzero, float_status *status)
+                         bool infzero, bool have_snan, float_status *status)
 {
     /*
      * We guarantee not to require the target to tell us how to
--
2.34.1
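The caller gets this almost for free because the parts code classifies each
operand once and then works on bitmasks of those classes. A stripped-down
illustration of the idiom (the enum and macro below are invented stand-ins;
only the shape matches the float_cmask logic in softfloat-parts.c.inc):

    typedef enum { cls_normal, cls_zero, cls_inf, cls_qnan, cls_snan } Cls;
    #define CMASK(c) (1u << (c))

    /* "Is any of the three operands a signalling NaN?" is one AND
     * over the OR of the three per-operand class masks. */
    static bool any_snan(Cls a_cls, Cls b_cls, Cls c_cls)
    {
        unsigned abc_mask = CMASK(a_cls) | CMASK(b_cls) | CMASK(c_cls);
        return (abc_mask & CMASK(cls_snan)) != 0;
    }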
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h            |  6 ----
 target/arm/tcg/translate.h     | 30 +++++++++++++++++++
 target/arm/tcg/translate-a64.c | 44 +++++++++++++--------------
 target/arm/tcg/translate-vfp.c | 54 +++++++++++++++++-----------------
 target/arm/vfp_helper.c        | 30 -------------------
 5 files changed, 79 insertions(+), 85 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
 DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
 DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
 DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
-DEF_HELPER_1(vfp_negh, f16, f16)
-DEF_HELPER_1(vfp_negs, f32, f32)
-DEF_HELPER_1(vfp_negd, f64, f64)
-DEF_HELPER_1(vfp_absh, f16, f16)
-DEF_HELPER_1(vfp_abss, f32, f32)
-DEF_HELPER_1(vfp_absd, f64, f64)
 DEF_HELPER_2(vfp_sqrth, f16, f16, env)
 DEF_HELPER_2(vfp_sqrts, f32, f32, env)
 DEF_HELPER_2(vfp_sqrtd, f64, f64, env)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
  */
 uint64_t vfp_expand_imm(int size, uint8_t imm8);
 
+static inline void gen_vfp_absh(TCGv_i32 d, TCGv_i32 s)
+{
+    tcg_gen_andi_i32(d, s, INT16_MAX);
+}
+
+static inline void gen_vfp_abss(TCGv_i32 d, TCGv_i32 s)
+{
+    tcg_gen_andi_i32(d, s, INT32_MAX);
+}
+
+static inline void gen_vfp_absd(TCGv_i64 d, TCGv_i64 s)
+{
+    tcg_gen_andi_i64(d, s, INT64_MAX);
+}
+
+static inline void gen_vfp_negh(TCGv_i32 d, TCGv_i32 s)
+{
+    tcg_gen_xori_i32(d, s, 1u << 15);
+}
+
+static inline void gen_vfp_negs(TCGv_i32 d, TCGv_i32 s)
+{
+    tcg_gen_xori_i32(d, s, 1u << 31);
+}
+
+static inline void gen_vfp_negd(TCGv_i64 d, TCGv_i64 s)
+{
+    tcg_gen_xori_i64(d, s, 1ull << 63);
+}
+
 /* Vector operations shared between ARM and AArch64. */
 void gen_gvec_ceq0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
                    uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
         tcg_gen_mov_i32(tcg_res, tcg_op);
         break;
     case 0x1: /* FABS */
-        tcg_gen_andi_i32(tcg_res, tcg_op, 0x7fff);
+        gen_vfp_absh(tcg_res, tcg_op);
         break;
     case 0x2: /* FNEG */
-        tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
+        gen_vfp_negh(tcg_res, tcg_op);
         break;
     case 0x3: /* FSQRT */
         fpst = fpstatus_ptr(FPST_FPCR_F16);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
         tcg_gen_mov_i32(tcg_res, tcg_op);
         goto done;
     case 0x1: /* FABS */
-        gen_helper_vfp_abss(tcg_res, tcg_op);
+        gen_vfp_abss(tcg_res, tcg_op);
         goto done;
     case 0x2: /* FNEG */
-        gen_helper_vfp_negs(tcg_res, tcg_op);
+        gen_vfp_negs(tcg_res, tcg_op);
         goto done;
     case 0x3: /* FSQRT */
         gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
 
     switch (opcode) {
     case 0x1: /* FABS */
-        gen_helper_vfp_absd(tcg_res, tcg_op);
+        gen_vfp_absd(tcg_res, tcg_op);
         goto done;
     case 0x2: /* FNEG */
-        gen_helper_vfp_negd(tcg_res, tcg_op);
+        gen_vfp_negd(tcg_res, tcg_op);
         goto done;
     case 0x3: /* FSQRT */
         gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
     switch (opcode) {
     case 0x8: /* FNMUL */
         gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
-        gen_helper_vfp_negs(tcg_res, tcg_res);
+        gen_vfp_negs(tcg_res, tcg_res);
         break;
     default:
     case 0x0: /* FMUL */
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
     switch (opcode) {
     case 0x8: /* FNMUL */
         gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
-        gen_helper_vfp_negd(tcg_res, tcg_res);
+        gen_vfp_negd(tcg_res, tcg_res);
         break;
     default:
     case 0x0: /* FMUL */
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
     switch (opcode) {
     case 0x8: /* FNMUL */
         gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
-        tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
+        gen_vfp_negh(tcg_res, tcg_res);
         break;
     default:
     case 0x0: /* FMUL */
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
      * flipped if it is a negated-input.
      */
     if (o1 == true) {
-        gen_helper_vfp_negs(tcg_op3, tcg_op3);
+        gen_vfp_negs(tcg_op3, tcg_op3);
     }
 
     if (o0 != o1) {
-        gen_helper_vfp_negs(tcg_op1, tcg_op1);
+        gen_vfp_negs(tcg_op1, tcg_op1);
     }
 
     gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
      * flipped if it is a negated-input.
      */
     if (o1 == true) {
-        gen_helper_vfp_negd(tcg_op3, tcg_op3);
+        gen_vfp_negd(tcg_op3, tcg_op3);
     }
 
     if (o0 != o1) {
-        gen_helper_vfp_negd(tcg_op1, tcg_op1);
+        gen_vfp_negd(tcg_op1, tcg_op1);
     }
 
     gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
         switch (fpopcode) {
         case 0x39: /* FMLS */
             /* As usual for ARM, separate negation for fused multiply-add */
-            gen_helper_vfp_negd(tcg_op1, tcg_op1);
+            gen_vfp_negd(tcg_op1, tcg_op1);
             /* fall through */
         case 0x19: /* FMLA */
             read_vec_element(s, tcg_res, rd, pass, MO_64);
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             break;
         case 0x7a: /* FABD */
             gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
-            gen_helper_vfp_absd(tcg_res, tcg_res);
+            gen_vfp_absd(tcg_res, tcg_res);
             break;
         case 0x7c: /* FCMGT */
             gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
         switch (fpopcode) {
         case 0x39: /* FMLS */
             /* As usual for ARM, separate negation for fused multiply-add */
-            gen_helper_vfp_negs(tcg_op1, tcg_op1);
+            gen_vfp_negs(tcg_op1, tcg_op1);
             /* fall through */
         case 0x19: /* FMLA */
             read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             break;
         case 0x7a: /* FABD */
             gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
-            gen_helper_vfp_abss(tcg_res, tcg_res);
+            gen_vfp_abss(tcg_res, tcg_res);
             break;
         case 0x7c: /* FCMGT */
             gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
         }
         break;
     case 0x2f: /* FABS */
-        gen_helper_vfp_absd(tcg_rd, tcg_rn);
+        gen_vfp_absd(tcg_rd, tcg_rn);
         break;
     case 0x6f: /* FNEG */
-        gen_helper_vfp_negd(tcg_rd, tcg_rn);
+        gen_vfp_negd(tcg_rd, tcg_rn);
         break;
     case 0x7f: /* FSQRT */
         gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_env);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             }
             break;
         case 0x2f: /* FABS */
-            gen_helper_vfp_abss(tcg_res, tcg_op);
+            gen_vfp_abss(tcg_res, tcg_op);
             break;
         case 0x6f: /* FNEG */
-            gen_helper_vfp_negs(tcg_res, tcg_op);
+            gen_vfp_negs(tcg_res, tcg_op);
             break;
         case 0x7f: /* FSQRT */
             gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         switch (16 * u + opcode) {
         case 0x05: /* FMLS */
             /* As usual for ARM, separate negation for fused multiply-add */
-            gen_helper_vfp_negd(tcg_op, tcg_op);
+            gen_vfp_negd(tcg_op, tcg_op);
             /* fall through */
         case 0x01: /* FMLA */
             read_vec_element(s, tcg_res, rd, pass, MO_64);
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-vfp.c
+++ b/target/arm/tcg/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static void gen_VMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_mulh(tmp, vn, vm, fpst);
-    gen_helper_vfp_negh(tmp, tmp);
+    gen_vfp_negh(tmp, tmp);
     gen_helper_vfp_addh(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_muls(tmp, vn, vm, fpst);
-    gen_helper_vfp_negs(tmp, tmp);
+    gen_vfp_negs(tmp, tmp);
     gen_helper_vfp_adds(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
     TCGv_i64 tmp = tcg_temp_new_i64();
 
     gen_helper_vfp_muld(tmp, vn, vm, fpst);
-    gen_helper_vfp_negd(tmp, tmp);
+    gen_vfp_negd(tmp, tmp);
     gen_helper_vfp_addd(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_mulh(tmp, vn, vm, fpst);
-    gen_helper_vfp_negh(vd, vd);
+    gen_vfp_negh(vd, vd);
     gen_helper_vfp_addh(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_muls(tmp, vn, vm, fpst);
-    gen_helper_vfp_negs(vd, vd);
+    gen_vfp_negs(vd, vd);
     gen_helper_vfp_adds(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
     TCGv_i64 tmp = tcg_temp_new_i64();
 
     gen_helper_vfp_muld(tmp, vn, vm, fpst);
-    gen_helper_vfp_negd(vd, vd);
+    gen_vfp_negd(vd, vd);
     gen_helper_vfp_addd(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLA_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_mulh(tmp, vn, vm, fpst);
-    gen_helper_vfp_negh(tmp, tmp);
-    gen_helper_vfp_negh(vd, vd);
+    gen_vfp_negh(tmp, tmp);
+    gen_vfp_negh(vd, vd);
     gen_helper_vfp_addh(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLA_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
     TCGv_i32 tmp = tcg_temp_new_i32();
 
     gen_helper_vfp_muls(tmp, vn, vm, fpst);
-    gen_helper_vfp_negs(tmp, tmp);
-    gen_helper_vfp_negs(vd, vd);
+    gen_vfp_negs(tmp, tmp);
+    gen_vfp_negs(vd, vd);
     gen_helper_vfp_adds(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLA_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
     TCGv_i64 tmp = tcg_temp_new_i64();
 
     gen_helper_vfp_muld(tmp, vn, vm, fpst);
-    gen_helper_vfp_negd(tmp, tmp);
-    gen_helper_vfp_negd(vd, vd);
+    gen_vfp_negd(tmp, tmp);
+    gen_vfp_negd(vd, vd);
     gen_helper_vfp_addd(vd, vd, tmp, fpst);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gen_VNMUL_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
 {
     /* VNMUL: -(fn * fm) */
     gen_helper_vfp_mulh(vd, vn, vm, fpst);
-    gen_helper_vfp_negh(vd, vd);
+    gen_vfp_negh(vd, vd);
 }
 
 static bool trans_VNMUL_hp(DisasContext *s, arg_VNMUL_sp *a)
@@ -XXX,XX +XXX,XX @@ static void gen_VNMUL_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
 {
     /* VNMUL: -(fn * fm) */
     gen_helper_vfp_muls(vd, vn, vm, fpst);
-    gen_helper_vfp_negs(vd, vd);
+    gen_vfp_negs(vd, vd);
 }
 
 static bool trans_VNMUL_sp(DisasContext *s, arg_VNMUL_sp *a)
@@ -XXX,XX +XXX,XX @@ static void gen_VNMUL_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
 {
     /* VNMUL: -(fn * fm) */
     gen_helper_vfp_muld(vd, vn, vm, fpst);
-    gen_helper_vfp_negd(vd, vd);
+    gen_vfp_negd(vd, vd);
 }
 
 static bool trans_VNMUL_dp(DisasContext *s, arg_VNMUL_dp *a)
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
     vfp_load_reg32(vm, a->vm);
     if (neg_n) {
         /* VFNMS, VFMS */
-        gen_helper_vfp_negh(vn, vn);
+        gen_vfp_negh(vn, vn);
     }
     vfp_load_reg32(vd, a->vd);
     if (neg_d) {
         /* VFNMA, VFNMS */
-        gen_helper_vfp_negh(vd, vd);
+        gen_vfp_negh(vd, vd);
     }
     fpst = fpstatus_ptr(FPST_FPCR_F16);
     gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
     vfp_load_reg32(vm, a->vm);
     if (neg_n) {
         /* VFNMS, VFMS */
-        gen_helper_vfp_negs(vn, vn);
+        gen_vfp_negs(vn, vn);
     }
     vfp_load_reg32(vd, a->vd);
     if (neg_d) {
         /* VFNMA, VFNMS */
-        gen_helper_vfp_negs(vd, vd);
+        gen_vfp_negs(vd, vd);
     }
     fpst = fpstatus_ptr(FPST_FPCR);
     gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
     vfp_load_reg64(vm, a->vm);
     if (neg_n) {
         /* VFNMS, VFMS */
-        gen_helper_vfp_negd(vn, vn);
+        gen_vfp_negd(vn, vn);
     }
     vfp_load_reg64(vd, a->vd);
     if (neg_d) {
         /* VFNMA, VFNMS */
-        gen_helper_vfp_negd(vd, vd);
+        gen_vfp_negd(vd, vd);
     }
     fpst = fpstatus_ptr(FPST_FPCR);
     gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
 DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
 DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)
 
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
+DO_VFP_2OP(VABS, hp, gen_vfp_absh, aa32_fp16_arith)
+DO_VFP_2OP(VABS, sp, gen_vfp_abss, aa32_fpsp_v2)
+DO_VFP_2OP(VABS, dp, gen_vfp_absd, aa32_fpdp_v2)
 
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
+DO_VFP_2OP(VNEG, hp, gen_vfp_negh, aa32_fp16_arith)
+DO_VFP_2OP(VNEG, sp, gen_vfp_negs, aa32_fpsp_v2)
+DO_VFP_2OP(VNEG, dp, gen_vfp_negd, aa32_fpdp_v2)
 
 static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
 {
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ VFP_BINOP(minnum)
 VFP_BINOP(maxnum)
 #undef VFP_BINOP
 
-dh_ctype_f16 VFP_HELPER(neg, h)(dh_ctype_f16 a)
-{
-    return float16_chs(a);
-}
-
-float32 VFP_HELPER(neg, s)(float32 a)
-{
-    return float32_chs(a);
-}
-
-float64 VFP_HELPER(neg, d)(float64 a)
-{
-    return float64_chs(a);
-}
-
-dh_ctype_f16 VFP_HELPER(abs, h)(dh_ctype_f16 a)
-{
-    return float16_abs(a);
-}
-
-float32 VFP_HELPER(abs, s)(float32 a)
-{
-    return float32_abs(a);
-}
-
-float64 VFP_HELPER(abs, d)(float64 a)
-{
-    return float64_abs(a);
-}
-
 dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, CPUARMState *env)
 {
     return float16_sqrt(a, &env->vfp.fp_status_f16);
--
2.34.1

IEEE 754 does not define a fixed rule for which NaN to pick as the
result if more than one operand of a 3-operand fused multiply-add
operation is a NaN. As a result different architectures have ended up
with different rules for propagating NaNs.

QEMU currently hardcodes the NaN propagation logic into the binary
because pickNaNMulAdd() has an ifdef ladder for different targets.
We want to make the propagation rule selectable at runtime instead,
because:
 * this will let us have multiple targets in one QEMU binary
 * the Arm FEAT_AFP architectural feature includes letting
   the guest select a NaN propagation rule at runtime

In this commit we add an enum for the propagation rule, the field in
float_status, and the corresponding getters and setters. We change
pickNaNMulAdd to honour this, but because all targets still leave
this field at its default 0 value, the fallback logic will pick the
rule type with the old ifdef ladder.

It's valid not to set a propagation rule if default_nan_mode is
enabled, because in that case there's no need to pick a NaN; all the
callers of pickNaNMulAdd() catch this case and skip calling it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-16-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h |  11 +++
 include/fpu/softfloat-types.h   |  55 +++++++++++
 fpu/softfloat-specialize.c.inc  | 167 ++++++++------------------------
 3 files changed, 107 insertions(+), 126 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
     status->float_2nan_prop_rule = rule;
 }
 
+static inline void set_float_3nan_prop_rule(Float3NaNPropRule rule,
+                                            float_status *status)
+{
+    status->float_3nan_prop_rule = rule;
+}
+
 static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
                                              float_status *status)
 {
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
     return status->float_2nan_prop_rule;
 }
 
+static inline Float3NaNPropRule get_float_3nan_prop_rule(float_status *status)
+{
+    return status->float_3nan_prop_rule;
+}
+
 static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
 {
     return status->float_infzeronan_rule;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ this code that are retained.
 #ifndef SOFTFLOAT_TYPES_H
 #define SOFTFLOAT_TYPES_H
 
+#include "hw/registerfields.h"
+
 /*
  * Software IEC/IEEE floating-point types.
  */
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
     float_2nan_prop_x87,
 } Float2NaNPropRule;
 
+/*
+ * 3-input NaN propagation rule, for fused multiply-add. Individual
+ * architectures have different rules for which input NaN is
+ * propagated to the output when there is more than one NaN on the
+ * input.
+ *
+ * If default_nan_mode is enabled then it is valid not to set a NaN
+ * propagation rule, because the softfloat code guarantees not to try
+ * to pick a NaN to propagate in default NaN mode.  When not in
+ * default-NaN mode, it is an error for the target not to set the rule
+ * in float_status if it uses a muladd, and we will assert if we need
+ * to handle an input NaN and no rule was selected.
+ *
+ * The naming scheme for Float3NaNPropRule values is:
+ *  float_3nan_prop_s_abc:
+ *    = "Prefer SNaN over QNaN, then operand A over B over C"
+ *  float_3nan_prop_abc:
+ *    = "Prefer A over B over C regardless of SNaN vs QNaN"
+ *
+ * For QEMU, the multiply-add operation is A * B + C.
+ */
+
+/*
+ * We set the Float3NaNPropRule enum values up so we can select the
+ * right value in pickNaNMulAdd in a data driven way.
+ */
+FIELD(3NAN, 1ST, 0, 2)   /* which operand is most preferred ? */
+FIELD(3NAN, 2ND, 2, 2)   /* which operand is next most preferred ? */
+FIELD(3NAN, 3RD, 4, 2)   /* which operand is least preferred ? */
+FIELD(3NAN, SNAN, 6, 1)  /* do we prefer SNaN over QNaN ? */
+
+#define PROPRULE(X, Y, Z) \
+    ((X << R_3NAN_1ST_SHIFT) | (Y << R_3NAN_2ND_SHIFT) | (Z << R_3NAN_3RD_SHIFT))
+
+typedef enum __attribute__((__packed__)) {
+    float_3nan_prop_none = 0,     /* No propagation rule specified */
+    float_3nan_prop_abc = PROPRULE(0, 1, 2),
+    float_3nan_prop_acb = PROPRULE(0, 2, 1),
+    float_3nan_prop_bac = PROPRULE(1, 0, 2),
+    float_3nan_prop_bca = PROPRULE(1, 2, 0),
+    float_3nan_prop_cab = PROPRULE(2, 0, 1),
+    float_3nan_prop_cba = PROPRULE(2, 1, 0),
+    float_3nan_prop_s_abc = float_3nan_prop_abc | R_3NAN_SNAN_MASK,
+    float_3nan_prop_s_acb = float_3nan_prop_acb | R_3NAN_SNAN_MASK,
+    float_3nan_prop_s_bac = float_3nan_prop_bac | R_3NAN_SNAN_MASK,
+    float_3nan_prop_s_bca = float_3nan_prop_bca | R_3NAN_SNAN_MASK,
+    float_3nan_prop_s_cab = float_3nan_prop_cab | R_3NAN_SNAN_MASK,
+    float_3nan_prop_s_cba = float_3nan_prop_cba | R_3NAN_SNAN_MASK,
+} Float3NaNPropRule;
+
+#undef PROPRULE
+
 /*
  * Rule for result of fused multiply-add 0 * Inf + NaN.
  * This must be a NaN, but implementations differ on whether this
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
     FloatRoundMode float_rounding_mode;
     FloatX80RoundPrec floatx80_rounding_precision;
     Float2NaNPropRule float_2nan_prop_rule;
+    Float3NaNPropRule float_3nan_prop_rule;
     FloatInfZeroNaNRule float_infzeronan_rule;
     bool tininess_before_rounding;
     /* should denormalised results go to zero and set the inexact flag? */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, bool have_snan, float_status *status)
 {
+    FloatClass cls[3] = { a_cls, b_cls, c_cls };
+    Float3NaNPropRule rule = status->float_3nan_prop_rule;
+    int which;
+
     /*
      * We guarantee not to require the target to tell us how to
      * pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
+    if (rule == float_3nan_prop_none) {
 #if defined(TARGET_ARM)
-
-    /* This looks different from the ARM ARM pseudocode, because the ARM ARM
-     * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
-     */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else {
-        return 1;
-    }
+        /*
+         * This looks different from the ARM ARM pseudocode, because the ARM ARM
+         * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
+         */
+        rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_MIPS)
-    if (snan_bit_is_one(status)) {
-        /* Prefer sNaN over qNaN, in the a, b, c order. */
-        if (is_snan(a_cls)) {
-            return 0;
-        } else if (is_snan(b_cls)) {
-            return 1;
-        } else if (is_snan(c_cls)) {
-            return 2;
-        } else if (is_qnan(a_cls)) {
-            return 0;
-        } else if (is_qnan(b_cls)) {
-            return 1;
+        if (snan_bit_is_one(status)) {
+            rule = float_3nan_prop_s_abc;
         } else {
-            return 2;
+            rule = float_3nan_prop_s_cab;
         }
-    } else {
-        /* Prefer sNaN over qNaN, in the c, a, b order. */
-        if (is_snan(c_cls)) {
-            return 2;
-        } else if (is_snan(a_cls)) {
-            return 0;
-        } else if (is_snan(b_cls)) {
-            return 1;
-        } else if (is_qnan(c_cls)) {
-            return 2;
-        } else if (is_qnan(a_cls)) {
-            return 0;
-        } else {
-            return 1;
-        }
-    }
 #elif defined(TARGET_LOONGARCH64)
-    /* Prefer sNaN over qNaN, in the c, a, b order. */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else {
-        return 1;
-    }
+        rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_PPC)
-    /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
-     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
-     */
-    if (is_nan(a_cls)) {
-        return 0;
-    } else if (is_nan(c_cls)) {
-        return 2;
-    } else {
-        return 1;
-    }
+        /*
+         * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
+         * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
+         */
+        rule = float_3nan_prop_acb;
 #elif defined(TARGET_S390X)
-    if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else if (is_qnan(b_cls)) {
-        return 1;
-    } else {
-        return 2;
-    }
+        rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
-    /* Prefer SNaN over QNaN, order C, B, A. */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(b_cls)) {
-        return 1;
-    } else {
-        return 0;
-    }
+        rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
-    /*
-     * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
-     * an input NaN if we have one (ie c).
-     */
-    if (status->use_first_nan) {
-        if (is_nan(a_cls)) {
-            return 0;
-        } else if (is_nan(b_cls)) {
-            return 1;
+        if (status->use_first_nan) {
+            rule = float_3nan_prop_abc;
         } else {
-            return 2;
+            rule = float_3nan_prop_cba;
         }
-    } else {
-        if (is_nan(c_cls)) {
-            return 2;
-        } else if (is_nan(b_cls)) {
-            return 1;
-        } else {
-            return 0;
-        }
-    }
 #else
-    /* A default implementation: prefer a to b to c.
-     * This is unlikely to actually match any real implementation.
-     */
-    if (is_nan(a_cls)) {
-        return 0;
-    } else if (is_nan(b_cls)) {
-        return 1;
-    } else {
-        return 2;
-    }
+        rule = float_3nan_prop_abc;
 #endif
+    }
+
+    assert(rule != float_3nan_prop_none);
+    if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
+        /* We have at least one SNaN input and should prefer it */
+        do {
+            which = rule & R_3NAN_1ST_MASK;
+            rule >>= R_3NAN_1ST_LENGTH;
+        } while (!is_snan(cls[which]));
+    } else {
+        do {
+            which = rule & R_3NAN_1ST_MASK;
+            rule >>= R_3NAN_1ST_LENGTH;
+        } while (!is_nan(cls[which]));
+    }
+    return which;
 }
 
 /*----------------------------------------------------------------------------
--
2.34.1
diff view generated by jsdifflib
Explicitly set a rule in the softfloat tests for propagating NaNs in
the muladd case. In meson.build we put -DTARGET_ARM in fpcflags, and
so we should select here the Arm rule of float_3nan_prop_s_cab.
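
As a sketch of what this means for any standalone user of softfloat
(illustrative only, not part of this patch): a float_status must now
have all of the NaN-handling behaviours picked explicitly, e.g.

    /* Minimal sketch: configure Arm-like NaN handling on a float_status */
    float_status st = { 0 };
    set_float_2nan_prop_rule(float_2nan_prop_s_ab, &st);
    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &st);
    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);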

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-17-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c | 1 +
 tests/fp/fp-test.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
      * doesn't specify match those used by the Arm architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
 
     f = bench_funcs[operation][precision];
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
      * doesn't specify match those used by the Arm architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
 
     genCases_setLevel(test_level);
--
2.34.1

Set the Float3NaNPropRule explicitly for Arm, and remove the
ifdef from pickNaNMulAdd().
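
As a worked example (a sketch for illustration, not part of this
patch): with float_3nan_prop_s_cab, a signalling NaN wins over a
quiet one, and NaNs of equal class are picked in the order c, a, b
of QEMU's muladd(a, b, c):

    /* Sketch: all three inputs quiet NaNs -> the addend c is propagated */
    float_status st = { 0 };
    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &st);
    float32 a = make_float32(0x7fc00001);  /* QNaN 'a' */
    float32 b = make_float32(0x7fc00002);  /* QNaN 'b' */
    float32 c = make_float32(0x7fc00003);  /* QNaN 'c' */
    float32 r = float32_muladd(a, b, c, 0, &st);  /* r is c's NaN */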

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
---
 target/arm/cpu.c               | 5 +++++
 fpu/softfloat-specialize.c.inc | 8 +-------
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
  * * tininess-before-rounding
  * * 2-input NaN propagation prefers SNaN over QNaN, and then
  *   operand A over operand B (see FPProcessNaNs() pseudocode)
+ * * 3-input NaN propagation prefers SNaN over QNaN, and then
+ *   operand C over A over B (see FPProcessNaNs3() pseudocode,
+ *   but note that for QEMU muladd is a * b + c, whereas for
+ *   the pseudocode function the arguments are in the order c, a, b.
  * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
  *   and the input NaN if it is signalling
  */
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
 {
     set_float_detect_tininess(float_tininess_before_rounding, s);
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
 }
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_ARM)
-        /*
-         * This looks different from the ARM ARM pseudocode, because the ARM ARM
-         * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
-         */
-        rule = float_3nan_prop_s_cab;
-#elif defined(TARGET_MIPS)
+#if defined(TARGET_MIPS)
         if (snan_bit_is_one(status)) {
             rule = float_3nan_prop_s_abc;
         } else {
--
2.34.1

Set the Float3NaNPropRule explicitly for loongarch, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-19-peter.maydell@linaro.org
---
 target/loongarch/tcg/fpu_helper.c | 1 +
 fpu/softfloat-specialize.c.inc    | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
      * case sets InvalidOp and returns the input value 'c'
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
 }
 
 int ieee_ex_to_loongarch(int xcpt)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         } else {
             rule = float_3nan_prop_s_cab;
         }
-#elif defined(TARGET_LOONGARCH64)
-        rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_PPC)
     /*
      * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
--
2.34.1

Set the Float3NaNPropRule explicitly for PPC, and remove the
ifdef from pickNaNMulAdd().
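
To spell out the operand mapping (a sketch, not part of this patch):
PPC's fused ops compute (fRA * fRC) + fRB, so in terms of QEMU's
muladd(a, b, c) we have a = fRA, b = fRC, c = fRB, and the
architectural preference "fRA, then fRB, then fRC" is operand order
a, c, b:

    /* Sketch: frA/frB/frC are illustrative names for the source values */
    ret = float64_muladd(frA, frC, frB, 0, &env->fp_status);
    /* NaN preference fRA -> fRB -> fRC is a -> c -> b,
     * i.e. float_3nan_prop_acb */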

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-20-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c          | 8 ++++++++
 fpu/softfloat-specialize.c.inc | 6 ------
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+    /*
+     * NaN propagation for fused multiply-add:
+     * if fRA is a NaN return it; otherwise if fRB is a NaN return it;
+     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
+     * whereas QEMU labels the operands as (a * b) + c.
+     */
+    set_float_3nan_prop_rule(float_3nan_prop_acb, &env->fp_status);
+    set_float_3nan_prop_rule(float_3nan_prop_acb, &env->vec_status);
     /*
      * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
      * to return an input NaN if we have one (ie c) rather than generating
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         } else {
             rule = float_3nan_prop_s_cab;
         }
-#elif defined(TARGET_PPC)
-    /*
-     * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
-     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
-     */
-    rule = float_3nan_prop_acb;
 #elif defined(TARGET_S390X)
     rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
--
2.34.1

Set the Float3NaNPropRule explicitly for s390x, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-21-peter.maydell@linaro.org
---
 target/s390x/cpu.c             | 1 +
 fpu/softfloat-specialize.c.inc | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
         set_float_detect_tininess(float_tininess_before_rounding,
                                   &env->fpu_status);
         set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
+        set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
         set_float_infzeronan_rule(float_infzeronan_dnan_always,
                                   &env->fpu_status);
         /* fall through */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         } else {
             rule = float_3nan_prop_s_cab;
         }
-#elif defined(TARGET_S390X)
-    rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
     rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
--
2.34.1

Set the Float3NaNPropRule explicitly for SPARC, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-22-peter.maydell@linaro.org
---
 target/sparc/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
      * the CPU state struct so it won't get zeroed on reset.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+    /* For fused-multiply add, prefer SNaN over QNaN, then C->B->A */
+    set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         } else {
             rule = float_3nan_prop_s_cab;
         }
-#elif defined(TARGET_SPARC)
-    rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
     if (status->use_first_nan) {
         rule = float_3nan_prop_abc;
--
2.34.1

Set the Float3NaNPropRule explicitly for MIPS, and remove the
ifdef from pickNaNMulAdd().
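
For illustration (a sketch, not part of this patch): the rule now
follows the FCR31.NAN2008 mode at the point where the FPU state is
restored, instead of being baked in at build time:

    /* Sketch: legacy MIPS prefers a, b, c; nan2008 mode prefers c, a, b;
     * both try signalling NaNs before quiet ones ("s_" rules). */
    rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
    set_float_3nan_prop_rule(rule, &env->active_fpu.fp_status);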

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-23-peter.maydell@linaro.org
---
 target/mips/fpu_helper.h       | 4 ++++
 target/mips/msa.c              | 3 +++
 fpu/softfloat-specialize.c.inc | 8 +-------
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
 {
     bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
     FloatInfZeroNaNRule izn_rule;
+    Float3NaNPropRule nan3_rule;
 
     /*
      * With nan2008, SNaNs are silenced in the usual way.
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
      */
     izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
     set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
+    nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
+    set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
+
 }
 
 static inline void restore_fp_status(CPUMIPSState *env)
diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
     set_float_2nan_prop_rule(float_2nan_prop_s_ab,
                              &env->active_tc.msa_fp_status);
 
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab,
+                             &env->active_tc.msa_fp_status);
+
     /* clear float_status exception flags */
     set_float_exception_flags(0, &env->active_tc.msa_fp_status);
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_MIPS)
-        if (snan_bit_is_one(status)) {
-            rule = float_3nan_prop_s_abc;
-        } else {
-            rule = float_3nan_prop_s_cab;
-        }
-#elif defined(TARGET_XTENSA)
+#if defined(TARGET_XTENSA)
     if (status->use_first_nan) {
         rule = float_3nan_prop_abc;
     } else {
--
2.34.1

Set the Float3NaNPropRule explicitly for xtensa, and remove the
ifdef from pickNaNMulAdd().
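
One detail worth noting (sketch, not part of this patch): unlike the
"s_" rules used by Arm and MIPS, the xtensa rules pick purely by
operand order, with no preference for signalling NaNs:

    /* Sketch: use_first_nan set -> first NaN in a, b, c order wins;
     * clear -> last NaN wins (c, b, a order). */
    set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc
                                       : float_3nan_prop_cba,
                             &env->fp_status);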

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-24-peter.maydell@linaro.org
---
 target/xtensa/fpu_helper.c     | 2 ++
 fpu/softfloat-specialize.c.inc | 8 --------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
     set_use_first_nan(use_first, &env->fp_status);
     set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
                              &env->fp_status);
+    set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
+                             &env->fp_status);
 }
 
 void HELPER(wur_fpu2k_fcr)(CPUXtensaState *env, uint32_t v)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_XTENSA)
-        if (status->use_first_nan) {
-            rule = float_3nan_prop_abc;
-        } else {
-            rule = float_3nan_prop_cba;
-        }
-#else
         rule = float_3nan_prop_abc;
-#endif
     }
 
     assert(rule != float_3nan_prop_none);
--
2.34.1

Set the Float3NaNPropRule explicitly for i386. We had no
i386-specific behaviour in the old ifdef ladder, so we were using the
default "prefer a then b then c" fallback; this is actually the
correct per-the-spec handling for i386.
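
Concretely (a sketch, not part of this patch; qnan_a and qnan_c are
illustrative quiet-NaN values): with float_3nan_prop_abc the first
NaN operand of muladd(a, b, c) wins:

    /* Sketch: a and c are quiet NaNs, b is 1.0 -> a's NaN is propagated */
    float32 r = float32_muladd(qnan_a, float32_one, qnan_c, 0,
                               &env->sse_status);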
2
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20240506010403.6204-29-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-25-peter.maydell@linaro.org
7
---
9
---
8
target/arm/tcg/a64.decode | 10 +++
10
target/i386/tcg/fpu_helper.c | 1 +
9
target/arm/tcg/translate-a64.c | 144 ++++++++++-----------------------
11
1 file changed, 1 insertion(+)
10
2 files changed, 51 insertions(+), 103 deletions(-)
11
12
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/tcg/a64.decode
15
--- a/target/i386/tcg/fpu_helper.c
15
+++ b/target/arm/tcg/a64.decode
16
+++ b/target/i386/tcg/fpu_helper.c
16
@@ -XXX,XX +XXX,XX @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
17
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
17
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
18
* there are multiple input NaNs they are selected in the order a, b, c.
18
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
19
*/
19
20
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
20
+FMLAL_v 0.00 1110 001 ..... 11101 1 ..... ..... @qrrr_h
21
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
21
+FMLSL_v 0.00 1110 101 ..... 11101 1 ..... ..... @qrrr_h
22
+FMLAL2_v 0.10 1110 001 ..... 11001 1 ..... ..... @qrrr_h
23
+FMLSL2_v 0.10 1110 101 ..... 11001 1 ..... ..... @qrrr_h
24
+
25
FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
26
FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
27
28
@@ -XXX,XX +XXX,XX @@ FMLS_vi 0.00 1111 11 0 ..... 0101 . 0 ..... ..... @qrrx_d
29
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
30
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
31
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
32
+
33
+FMLAL_vi 0.00 1111 10 .. .... 0000 . 0 ..... ..... @qrrx_h
34
+FMLSL_vi 0.00 1111 10 .. .... 0100 . 0 ..... ..... @qrrx_h
35
+FMLAL2_vi 0.10 1111 10 .. .... 1000 . 0 ..... ..... @qrrx_h
36
+FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
42
};
43
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
44
45
+static bool do_fmlal(DisasContext *s, arg_qrrr_e *a, bool is_s, bool is_2)
46
+{
47
+ if (fp_access_check(s)) {
48
+ int data = (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_v, aa64_fhm, do_fmlal, a, false, false)
+TRANS_FEAT(FMLSL_v, aa64_fhm, do_fmlal, a, true, false)
+TRANS_FEAT(FMLAL2_v, aa64_fhm, do_fmlal, a, false, true)
+TRANS_FEAT(FMLSL2_v, aa64_fhm, do_fmlal, a, true, true)
+
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
@@ -XXX,XX +XXX,XX @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)

+static bool do_fmlal_idx(DisasContext *s, arg_qrrx_e *a, bool is_s, bool is_2)
+{
+ if (fp_access_check(s)) {
+ int data = (a->idx << 2) | (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_idx_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_vi, aa64_fhm, do_fmlal_idx, a, false, false)
+TRANS_FEAT(FMLSL_vi, aa64_fhm, do_fmlal_idx, a, true, false)
+TRANS_FEAT(FMLAL2_vi, aa64_fhm, do_fmlal_idx, a, false, true)
+TRANS_FEAT(FMLSL2_vi, aa64_fhm, do_fmlal_idx, a, true, true)
+
/*
 * Advanced SIMD scalar pairwise
 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 }
 }

-/* Floating point op subgroup of C3.6.16. */
-static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
-{
- /* For floating point ops, the U, size[1] and opcode bits
- * together indicate the operation. size[0] indicates single
- * or double.
- */
- int fpopcode = extract32(insn, 11, 5)
- | (extract32(insn, 23, 1) << 5)
- | (extract32(insn, 29, 1) << 6);
- int is_q = extract32(insn, 30, 1);
- int size = extract32(insn, 22, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (size == 1 && !is_q) {
- unallocated_encoding(s);
- return;
- }
-
- switch (fpopcode) {
- case 0x1d: /* FMLAL */
- case 0x3d: /* FMLSL */
- case 0x59: /* FMLAL2 */
- case 0x79: /* FMLSL2 */
- if (size & 1 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- if (fp_access_check(s)) {
- int is_s = extract32(insn, 23, 1);
- int is_2 = extract32(insn, 29, 1);
- int data = (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_a64);
- }
- return;
-
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x1f: /* FRECPS */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x3f: /* FRSQRTS */
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5e: /* FMAXP */
- case 0x5f: /* FDIV */
- case 0x78: /* FMINNMP */
- case 0x7a: /* FABD */
- case 0x7d: /* FACGT */
- case 0x7c: /* FCMGT */
- case 0x7e: /* FMINP */
- unallocated_encoding(s);
- return;
- }
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
 case 0x3: /* logic ops */
 disas_simd_3same_logic(s, insn);
 break;
- case 0x18 ... 0x31:
- /* floating point ops, sz[1] and U are part of opcode */
- disas_simd_3same_float(s, insn);
- break;
 default:
 disas_simd_3same_int(s, insn);
 break;
 case 0x14: /* SMAXP, UMAXP */
 case 0x15: /* SMINP, UMINP */
 case 0x17: /* ADDP */
+ case 0x18 ... 0x31: /* floating point ops */
 unallocated_encoding(s);
 break;
 }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 }
 is_fp = 2;
 break;
- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- size = MO_16;
- /* is_fp, but we pass tcg_env not fp_status. */
- break;
 default:
+ case 0x00: /* FMLAL */
 case 0x01: /* FMLA */
+ case 0x04: /* FMLSL */
 case 0x05: /* FMLS */
 case 0x09: /* FMUL */
+ case 0x18: /* FMLAL2 */
 case 0x19: /* FMULX */
+ case 0x1c: /* FMLSL2 */
 unallocated_encoding(s);
 return;
 }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 }
 return;

- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- {
- int is_s = extract32(opcode, 2, 1);
- int is_2 = u;
- int data = (index << 2) | (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_idx_a64);
- }
- return;
-
 case 0x08: /* MUL */
 if (!is_long && !is_scalar) {
 static gen_helper_gvec_3 * const fns[3] = {
--
2.34.1
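As a reader's aid for the conversion above: both do_fmlal() and do_fmlal_idx() pack the instruction variant into the gvec "data" immediate as (idx << 2) | (is_2 << 1) | is_s. A minimal sketch of how such an immediate decomposes again on the helper side; the function name here is hypothetical, the real consumers are gen_helper_gvec_fmlal_a64 and gen_helper_gvec_fmlal_idx_a64:

    /* Hypothetical sketch, not QEMU code: unpack the immediate built
     * above as (idx << 2) | (is_2 << 1) | is_s. */
    static void fmlal_unpack(uint32_t data, int *idx, bool *is_2, bool *is_s)
    {
        *is_s = data & 1;        /* 1: FMLSL* (subtract), 0: FMLAL* (add) */
        *is_2 = (data >> 1) & 1; /* "2" form: use the high half of each pair */
        *idx  = data >> 2;       /* element index, by-element forms only */
    }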
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-28-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-neon.c | 78 ++-------------------------------
1 file changed, 4 insertions(+), 74 deletions(-)

diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)
+DO_3SAME_NO_SZ_3(VPMAX_S, gen_gvec_smaxp)
+DO_3SAME_NO_SZ_3(VPMIN_S, gen_gvec_sminp)
+DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
+DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)

#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -XXX,XX +XXX,XX @@ DO_3SAME_32_ENV(VQSHL_U, qshl_u)
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)

-static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
-{
- /* Operations handled pairwise 32 bits at a time */
- TCGv_i32 tmp, tmp2, tmp3;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (a->size == 3) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
- /*
- * Note that we have to be careful not to clobber the source operands
- * in the "vm == vd" case by storing the result of the first pass too
- * early. Since Q is 0 there are always just two passes, so instead
- * of a complicated loop over each pass we just unroll.
- */
- tmp = tcg_temp_new_i32();
- tmp2 = tcg_temp_new_i32();
- tmp3 = tcg_temp_new_i32();
-
- read_neon_element32(tmp, a->vn, 0, MO_32);
- read_neon_element32(tmp2, a->vn, 1, MO_32);
- fn(tmp, tmp, tmp2);
-
- read_neon_element32(tmp3, a->vm, 0, MO_32);
- read_neon_element32(tmp2, a->vm, 1, MO_32);
- fn(tmp3, tmp3, tmp2);
-
- write_neon_element32(tmp, a->vd, 0, MO_32);
- write_neon_element32(tmp3, a->vd, 1, MO_32);
-
- return true;
-}
-
-#define DO_3SAME_PAIR(INSN, func) \
- static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- static NeonGenTwoOpFn * const fns[] = { \
- gen_helper_neon_##func##8, \
- gen_helper_neon_##func##16, \
- gen_helper_neon_##func##32, \
- }; \
- if (a->size > 2) { \
- return false; \
- } \
- return do_3same_pair(s, a, fns[a->size]); \
- }
-
-/* 32-bit pairwise ops end up the same as the elementwise versions. */
-#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
-#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
-#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
-#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-
-DO_3SAME_PAIR(VPMAX_S, pmax_s)
-DO_3SAME_PAIR(VPMIN_S, pmin_s)
-DO_3SAME_PAIR(VPMAX_U, pmax_u)
-DO_3SAME_PAIR(VPMIN_U, pmin_u)
-
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
--
2.34.1

Set the Float3NaNPropRule explicitly for HPPA, and remove the
ifdef from pickNaNMulAdd().

HPPA is the only target that was using the default branch of the
ifdef ladder (other targets either do not use muladd or set
default_nan_mode), so we can remove the ifdef fallback entirely now
(allowing the "rule not set" case to fall into the default of the
switch statement and assert).

We add a TODO note that the HPPA rule is probably wrong; this is
not a behavioural change for this refactoring.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-26-peter.maydell@linaro.org
---
target/hppa/fpu_helper.c | 8 ++++++++
fpu/softfloat-specialize.c.inc | 4 ----
2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
 * HPPA does note implement a CPU reset method at all...
 */
 set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+ /*
+ * TODO: The HPPA architecture reference only documents its NaN
+ * propagation rule for 2-operand operations. Testing on real hardware
+ * might be necessary to confirm whether this order for muladd is correct.
+ * Not preferring the SNaN is almost certainly incorrect as it diverges
+ * from the documented rules for 2-operand operations.
+ */
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
 /* For inf * 0 + NaN, return the input NaN */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
}
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
 }
 }

- if (rule == float_3nan_prop_none) {
- rule = float_3nan_prop_abc;
- }
-
 assert(rule != float_3nan_prop_none);
 if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
 /* We have at least one SNaN input and should prefer it */
--
2.34.1
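For readers new to the 3-NaN rules: float_3nan_prop_abc, as selected for HPPA above, means "propagate the first NaN in muladd operand order a, b, c" (with SNaN preference applied separately when the rule's SNaN bit is set). A minimal sketch of just the ordering part, under the assumption that at least one operand is a NaN; this is not the real pickNaNMulAdd() from softfloat-specialize.c.inc:

    /* Sketch of the "abc" ordering: return which operand's NaN wins. */
    static int pick_nan_abc(bool a_is_nan, bool b_is_nan, bool c_is_nan)
    {
        if (a_is_nan) {
            return 0;   /* propagate a */
        }
        if (b_is_nan) {
            return 1;   /* propagate b */
        }
        return 2;       /* propagate c */
    }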
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-24-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 7 -----
target/arm/tcg/translate-neon.c | 55 ++-------------------------------
target/arm/tcg/vec_helper.c | 45 ---------------------------
3 files changed, 3 insertions(+), 104 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_5(neon_paddh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pminh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_padds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmins, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-
DEF_HELPER_FLAGS_4(gvec_sstoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sitos, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ustoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -XXX,XX +XXX,XX @@ DO_3S_FP_GVEC(VFMA, gen_helper_gvec_vfma_s, gen_helper_gvec_vfma_h)
DO_3S_FP_GVEC(VFMS, gen_helper_gvec_vfms_s, gen_helper_gvec_vfms_h)
DO_3S_FP_GVEC(VRECPS, gen_helper_gvec_recps_nf_s, gen_helper_gvec_recps_nf_h)
DO_3S_FP_GVEC(VRSQRTS, gen_helper_gvec_rsqrts_nf_s, gen_helper_gvec_rsqrts_nf_h)
+DO_3S_FP_GVEC(VPADD, gen_helper_gvec_faddp_s, gen_helper_gvec_faddp_h)
+DO_3S_FP_GVEC(VPMAX, gen_helper_gvec_fmaxp_s, gen_helper_gvec_fmaxp_h)
+DO_3S_FP_GVEC(VPMIN, gen_helper_gvec_fminp_s, gen_helper_gvec_fminp_h)

WRAP_FP_GVEC(gen_VMAXNM_fp32_3s, FPST_STD, gen_helper_gvec_fmaxnum_s)
WRAP_FP_GVEC(gen_VMAXNM_fp16_3s, FPST_STD_F16, gen_helper_gvec_fmaxnum_h)
@@ -XXX,XX +XXX,XX @@ static bool trans_VMINNM_fp_3s(DisasContext *s, arg_3same *a)
 return do_3same(s, a, gen_VMINNM_fp32_3s);
}

-static bool do_3same_fp_pair(DisasContext *s, arg_3same *a,
- gen_helper_gvec_3_ptr *fn)
-{
- /* FP pairwise operations */
- TCGv_ptr fpstatus;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
-
- fpstatus = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
- vfp_reg_offset(1, a->vn),
- vfp_reg_offset(1, a->vm),
- fpstatus, 8, 8, 0, fn);
-
- return true;
-}
-
-/*
- * For all the functions using this macro, size == 1 means fp16,
- * which is an architecture extension we don't implement yet.
- */
-#define DO_3S_FP_PAIR(INSN,FUNC) \
- static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size == MO_16) { \
- if (!dc_isar_feature(aa32_fp16_arith, s)) { \
- return false; \
- } \
- return do_3same_fp_pair(s, a, FUNC##h); \
- } \
- return do_3same_fp_pair(s, a, FUNC##s); \
- }
-
-DO_3S_FP_PAIR(VPADD, gen_helper_neon_padd)
-DO_3S_FP_PAIR(VPMAX, gen_helper_neon_pmax)
-DO_3S_FP_PAIR(VPMIN, gen_helper_neon_pmin)
-
static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
{
 /* Handle a 2-reg-shift insn which can be vectorized. */
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)

#undef DO_ABA

-#define DO_NEON_PAIRWISE(NAME, OP) \
- void HELPER(NAME##s)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float32 *d = vd; \
- float32 *n = vn; \
- float32 *m = vm; \
- float32 r0, r1; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float32_##OP(n[H4(0)], n[H4(1)], fpst); \
- r1 = float32_##OP(m[H4(0)], m[H4(1)], fpst); \
- \
- d[H4(0)] = r0; \
- d[H4(1)] = r1; \
- } \
- \
- void HELPER(NAME##h)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float16 *d = vd; \
- float16 *n = vn; \
- float16 *m = vm; \
- float16 r0, r1, r2, r3; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float16_##OP(n[H2(0)], n[H2(1)], fpst); \
- r1 = float16_##OP(n[H2(2)], n[H2(3)], fpst); \
- r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
- r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
- \
- d[H2(0)] = r0; \
- d[H2(1)] = r1; \
- d[H2(2)] = r2; \
- d[H2(3)] = r3; \
- }
-
-DO_NEON_PAIRWISE(neon_padd, add)
-DO_NEON_PAIRWISE(neon_pmax, max)
-DO_NEON_PAIRWISE(neon_pmin, min)
-
-#undef DO_NEON_PAIRWISE
-
#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
--
2.34.1

The use_first_nan field in float_status was an xtensa-specific way to
select at runtime from two different NaN propagation rules. Now that
xtensa is using the target-agnostic NaN propagation rule selection
that we've just added, we can remove use_first_nan, because there is
no longer any code that reads it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-27-peter.maydell@linaro.org
---
include/fpu/softfloat-helpers.h | 5 -----
include/fpu/softfloat-types.h | 1 -
target/xtensa/fpu_helper.c | 1 -
3 files changed, 7 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_snan_bit_is_one(bool val, float_status *status)
 status->snan_bit_is_one = val;
}

-static inline void set_use_first_nan(bool val, float_status *status)
-{
- status->use_first_nan = val;
-}
-
static inline void set_no_signaling_nans(bool val, float_status *status)
{
 status->no_signaling_nans = val;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
 * softfloat-specialize.inc.c)
 */
 bool snan_bit_is_one;
- bool use_first_nan;
 bool no_signaling_nans;
 /* should overflowed results subtract re_bias to its exponent? */
 bool rebias_overflow;
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ static const struct {

void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
{
- set_use_first_nan(use_first, &env->fp_status);
 set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
 &env->fp_status);
 set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
--
2.34.1
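The mapping xtensa now uses is easy to state: use_first_nan == true becomes the "prefer the first operand" rules (ab/abc), false becomes the reversed ones (ba/cba). A sketch of what the two 2-NaN rules mean when both inputs are NaN; the real selection in pickNaN() also handles SNaN preference and quieting:

    /* Sketch only: float_2nan_prop_ab picks a, float_2nan_prop_ba picks b. */
    static float32 pick_2nan_sketch(float32 a, float32 b, bool prop_ab)
    {
        return prop_ab ? a : b;
    }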
Currently m68k_cpu_reset_hold() calls floatx80_default_nan(NULL)
to get the NaN bit pattern to reset the FPU registers. This
works because it happens that our implementation of
floatx80_default_nan() doesn't actually look at the float_status
pointer except for TARGET_MIPS. However, this isn't guaranteed,
and to be able to remove the ifdef in floatx80_default_nan()
we're going to need a real float_status here.

Rearrange m68k_cpu_reset_hold() so that we initialize env->fp_status
earlier, and thus can pass it to floatx80_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-28-peter.maydell@linaro.org
---
target/m68k/cpu.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
 CPUState *cs = CPU(obj);
 M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
 CPUM68KState *env = cpu_env(cs);
- floatx80 nan = floatx80_default_nan(NULL);
+ floatx80 nan;
 int i;

 if (mcc->parent_phases.hold) {
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
#else
 cpu_m68k_set_sr(env, SR_S | SR_I);
#endif
- for (i = 0; i < 8; i++) {
- env->fregs[i].d = nan;
- }
- cpu_m68k_set_fpcr(env, 0);
 /*
 * M68000 FAMILY PROGRAMMER'S REFERENCE MANUAL
 * 3.4 FLOATING-POINT INSTRUCTION DETAILS
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
 * preceding paragraph for nonsignaling NaNs.
 */
 set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+
+ nan = floatx80_default_nan(&env->fp_status);
+ for (i = 0; i < 8; i++) {
+ env->fregs[i].d = nan;
+ }
+ cpu_m68k_set_fpcr(env, 0);
 env->fpsr = 0;

 /* TODO: We should set PC from the interrupt vector. */
--
2.34.1
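The ordering constraint the patch above establishes is worth spelling out: once floatx80_default_nan() consults the float_status, the status must be fully configured before the default NaN is derived. Condensed from the patch itself, the reset code now reads:

    /* Configure fp_status first, then derive the default NaN from it. */
    set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
    nan = floatx80_default_nan(&env->fp_status);  /* safe: status is set up */
    for (i = 0; i < 8; i++) {
        env->fregs[i].d = nan;
    }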
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-26-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 --
target/arm/tcg/neon_helper.c | 5 -----
target/arm/tcg/translate-neon.c | 3 +--
3 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)

DEF_HELPER_2(neon_add_u8, i32, i32, i32)
DEF_HELPER_2(neon_add_u16, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u8, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u16, i32, i32, i32)
DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
DEF_HELPER_2(neon_sub_u16, i32, i32, i32)
DEF_HELPER_2(neon_mul_u8, i32, i32, i32)
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_add_u16)(uint32_t a, uint32_t b)
 return (a + b) ^ mask;
}

-#define NEON_FN(dest, src1, src2) dest = src1 + src2
-NEON_POP(padd_u8, neon_u8, 4)
-NEON_POP(padd_u16, neon_u16, 2)
-#undef NEON_FN
-
#define NEON_FN(dest, src1, src2) dest = src1 - src2
NEON_VOP(sub_u8, neon_u8, 4)
NEON_VOP(sub_u16, neon_u16, 2)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VABD_S, gen_gvec_sabd)
DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
+DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)

#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-#define gen_helper_neon_padd_u32 tcg_gen_add_i32

DO_3SAME_PAIR(VPMAX_S, pmax_s)
DO_3SAME_PAIR(VPMIN_S, pmin_s)
DO_3SAME_PAIR(VPMAX_U, pmax_u)
DO_3SAME_PAIR(VPMIN_U, pmin_u)
-DO_3SAME_PAIR(VPADD, padd_u)

#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
--
2.34.1

We create our 128-bit default NaN by calling parts64_default_nan()
and then adjusting the result. We can do the same trick for creating
the floatx80 default NaN, which lets us drop a target ifdef.

floatx80 is used only by:
 i386
 m68k
 arm nwfpe old floating-point emulation support
 (which is essentially dead, especially the parts involving floatx80)
 PPC (only in the xsrqpxp instruction, which just rounds an input
 value by converting to floatx80 and back, so will never generate
 the default NaN)

The floatx80 default NaN as currently implemented is:
 m68k: sign = 0, exp = 1...1, int = 1, frac = 1....1
 i386: sign = 1, exp = 1...1, int = 1, frac = 10...0

These are the same as the parts64_default_nan for these architectures.

This is technically a possible behaviour change for arm linux-user
nwfpe emulation, because the default NaN will now have the
sign bit clear. But we were already generating a different floatx80
default NaN from the real kernel emulation we are supposedly
following, which appears to use an all-bits-1 value:
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L267

This won't affect the only "real" use of the nwfpe emulation, which
is ancient binaries that used it as part of the old floating point
calling convention; that only uses loads and stores of 32 and 64 bit
floats, not any of the floatx80 behaviour the original hardware had.
We also get the nwfpe float64 default NaN value wrong:
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L166
so if we ever cared about this obscure corner the right fix would be
to correct that so nwfpe used its own default-NaN setting rather
than the Arm VFP one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-29-peter.maydell@linaro.org
---
fpu/softfloat-specialize.c.inc | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts128_silence_nan(FloatParts128 *p, float_status *status)
floatx80 floatx80_default_nan(float_status *status)
{
 floatx80 r;
+ /*
+ * Extrapolate from the choices made by parts64_default_nan to fill
+ * in the floatx80 format. We assume that floatx80's explicit
+ * integer bit is always set (this is true for i386 and m68k,
+ * which are the only real users of this format).
+ */
+ FloatParts64 p64;
+ parts64_default_nan(&p64, status);

- /* None of the targets that have snan_bit_is_one use floatx80. */
- assert(!snan_bit_is_one(status));
-#if defined(TARGET_M68K)
- r.low = UINT64_C(0xFFFFFFFFFFFFFFFF);
- r.high = 0x7FFF;
-#else
- /* X86 */
- r.low = UINT64_C(0xC000000000000000);
- r.high = 0xFFFF;
-#endif
+ r.high = 0x7FFF | (p64.sign << 15);
+ r.low = (1ULL << DECOMPOSED_BINARY_POINT) | p64.frac;
 return r;
}
--
2.34.1
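To check the claim that the new expressions reproduce the old constants, work the arithmetic through, assuming DECOMPOSED_BINARY_POINT is 63 as in QEMU's FloatParts64 layout (so the fraction occupies bits 62..0):

    /* m68k parts64 default NaN: sign = 0, frac = 0x7FFFFFFFFFFFFFFF
     *   high = 0x7FFF | (0 << 15)   = 0x7FFF
     *   low  = (1ULL << 63) | frac  = 0xFFFFFFFFFFFFFFFF
     * i386 parts64 default NaN: sign = 1, frac = 0x4000000000000000
     *   high = 0x7FFF | (1 << 15)   = 0xFFFF
     *   low  = (1ULL << 63) | frac  = 0xC000000000000000
     */

Both results match the values removed from the old ifdef ladder.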
In target/loongarch's helper_fclass_s() and helper_fclass_d() we pass
a zero-initialized float_status struct to float32_is_quiet_nan() and
float64_is_quiet_nan(), with the cryptic comment "for
snan_bit_is_one".

This pattern appears to have been copied from target/riscv, where it
is used because the functions there do not have ready access to the
CPU state struct. The comment presumably refers to the fact that the
main reason the is_quiet_nan() functions want the float_status is
because they want to know about the snan_bit_is_one config.

In the loongarch helpers, though, we have the CPU state struct
to hand. Use the usual env->fp_status here. This avoids our needing
to track that we need to update the initializer of the local
float_status structs when the core softfloat code adds new
options for targets to configure their behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-30-peter.maydell@linaro.org
---
target/loongarch/tcg/fpu_helper.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_s(CPULoongArchState *env, uint64_t fj)
 } else if (float32_is_zero_or_denormal(f)) {
 return sign ? 1 << 4 : 1 << 8;
 } else if (float32_is_any_nan(f)) {
- float_status s = { }; /* for snan_bit_is_one */
- return float32_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
+ return float32_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
 } else {
 return sign ? 1 << 3 : 1 << 7;
 }
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_d(CPULoongArchState *env, uint64_t fj)
 } else if (float64_is_zero_or_denormal(f)) {
 return sign ? 1 << 4 : 1 << 8;
 } else if (float64_is_any_nan(f)) {
- float_status s = { }; /* for snan_bit_is_one */
- return float64_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
+ return float64_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
 } else {
 return sign ? 1 << 3 : 1 << 7;
 }
--
2.34.1
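The general idiom this series is converging on, reduced to a two-line contrast; a sketch only, assuming a helper with an env pointer in scope:

    /* Fragile: a zero-initialized status must be kept in sync by hand
     * with every new float_status configuration knob softfloat grows. */
    float_status s = { };

    /* Robust: query against the CPU's own status, which already carries
     * snan_bit_is_one, no_signaling_nans and any future settings. */
    bool quiet = float32_is_quiet_nan(f, &env->fp_status);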
In the frem helper, we have a local float_status because we want to
execute the floatx80_div() with a custom rounding mode. Instead of
zero-initializing the local float_status and then having to set it up
with the m68k standard behaviour (including the NaN propagation rule
and copying the rounding precision from env->fp_status), initialize
it as a complete copy of env->fp_status. This will avoid our having
to add new code in this function for every new config knob we add
to fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-31-peter.maydell@linaro.org
---
target/m68k/fpu_helper.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/m68k/fpu_helper.c b/target/m68k/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/fpu_helper.c
+++ b/target/m68k/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(frem)(CPUM68KState *env, FPReg *res, FPReg *val0, FPReg *val1)

 fp_rem = floatx80_rem(val1->d, val0->d, &env->fp_status);
 if (!floatx80_is_any_nan(fp_rem)) {
- float_status fp_status = { };
+ /* Use local temporary fp_status to set different rounding mode */
+ float_status fp_status = env->fp_status;
 uint32_t quotient;
 int sign;

 /* Calculate quotient directly using round to nearest mode */
- set_float_2nan_prop_rule(float_2nan_prop_ab, &fp_status);
 set_float_rounding_mode(float_round_nearest_even, &fp_status);
- set_floatx80_rounding_precision(
- get_floatx80_rounding_precision(&env->fp_status), &fp_status);
 fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);

 sign = extractFloatx80Sign(fp_quot.d);
--
2.34.1
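The copy-then-override shape used here generalizes to any "same config, one different setting" situation. A sketch of the essentials, with frem's surrounding code elided:

    float_status fp_status = env->fp_status;   /* inherit all configuration */
    set_float_rounding_mode(float_round_nearest_even, &fp_status);
    fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);
    /* exception flags raised by the divide land in the local copy and
     * are simply discarded; env->fp_status is untouched */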
In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we do the conversion
from float64 to floatx80 using a scratch float_status, because we
don't want the conversion to affect the CPU's floating point exception
status. Currently we use a zero-initialized float_status. This will
get steadily more awkward as we add config knobs to float_status
that the target must initialize. Avoid having to add any of that
configuration here by instead initializing our local float_status
from the env->fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-32-peter.maydell@linaro.org
---
target/m68k/helper.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/m68k/helper.c b/target/m68k/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/helper.c
+++ b/target/m68k/helper.c
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_get_reg(CPUState *cs, GByteArray *mem_buf, int n)
 CPUM68KState *env = &cpu->env;

 if (n < 8) {
- float_status s = {};
+ /* Use scratch float_status so any exceptions don't change CPU state */
+ float_status s = env->fp_status;
 return gdb_get_reg64(mem_buf, floatx80_to_float64(env->fregs[n].d, &s));
 }
 switch (n) {
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_set_reg(CPUState *cs, uint8_t *mem_buf, int n)
 CPUM68KState *env = &cpu->env;

 if (n < 8) {
- float_status s = {};
+ /* Use scratch float_status so any exceptions don't change CPU state */
+ float_status s = env->fp_status;
 env->fregs[n].d = float64_to_floatx80(ldq_be_p(mem_buf), &s);
 return 8;
 }
--
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In the helper functions flcmps and flcmpd we use a scratch float_status
2
so that we don't change the CPU state if the comparison raises any
3
floating point exception flags. Instead of zero-initializing this
4
scratch float_status, initialize it as a copy of env->fp_status. This
5
avoids the need to explicitly initialize settings like the NaN
6
propagation rule or others we might add to softfloat in future.
2
7
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
To do this we need to pass the CPU env pointer in to the helper.
4
Message-id: 20240506010403.6204-14-richard.henderson@linaro.org
9
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241202131347.498124-33-peter.maydell@linaro.org
7
---
13
---
8
target/arm/tcg/helper-a64.h | 4 +
14
target/sparc/helper.h | 4 ++--
9
target/arm/tcg/translate.h | 5 +
15
target/sparc/fop_helper.c | 8 ++++----
10
target/arm/tcg/a64.decode | 27 +++++
16
target/sparc/translate.c | 4 ++--
11
target/arm/tcg/translate-a64.c | 205 +++++++++++++++++----------------
17
3 files changed, 8 insertions(+), 8 deletions(-)
12
target/arm/tcg/vec_helper.c | 4 +
13
5 files changed, 143 insertions(+), 102 deletions(-)
14
18
15
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
19
diff --git a/target/sparc/helper.h b/target/sparc/helper.h
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/helper-a64.h
21
--- a/target/sparc/helper.h
18
+++ b/target/arm/tcg/helper-a64.h
22
+++ b/target/sparc/helper.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(fcmpd, TCG_CALL_NO_WG, i32, env, f64, f64)
20
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
24
DEF_HELPER_FLAGS_3(fcmped, TCG_CALL_NO_WG, i32, env, f64, f64)
21
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
25
DEF_HELPER_FLAGS_3(fcmpq, TCG_CALL_NO_WG, i32, env, i128, i128)
22
26
DEF_HELPER_FLAGS_3(fcmpeq, TCG_CALL_NO_WG, i32, env, i128, i128)
23
+DEF_HELPER_FLAGS_5(gvec_fdiv_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
-DEF_HELPER_FLAGS_2(flcmps, TCG_CALL_NO_RWG_SE, i32, f32, f32)
24
+DEF_HELPER_FLAGS_5(gvec_fdiv_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
-DEF_HELPER_FLAGS_2(flcmpd, TCG_CALL_NO_RWG_SE, i32, f64, f64)
25
+DEF_HELPER_FLAGS_5(gvec_fdiv_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)
26
+
30
+DEF_HELPER_FLAGS_3(flcmpd, TCG_CALL_NO_RWG_SE, i32, env, f64, f64)
27
DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
DEF_HELPER_2(raise_exception, noreturn, env, int)
28
DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
29
DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_3(faddd, TCG_CALL_NO_WG, f64, env, f64, f64)
30
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
34
diff --git a/target/sparc/fop_helper.c b/target/sparc/fop_helper.c
31
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/translate.h
36
--- a/target/sparc/fop_helper.c
33
+++ b/target/arm/tcg/translate.h
37
+++ b/target/sparc/fop_helper.c
34
@@ -XXX,XX +XXX,XX @@ static inline int shl_12(DisasContext *s, int x)
38
@@ -XXX,XX +XXX,XX @@ uint32_t helper_fcmpeq(CPUSPARCState *env, Int128 src1, Int128 src2)
35
return x << 12;
39
return finish_fcmp(env, r, GETPC());
36
}
40
}
37
41
38
+static inline int xor_2(DisasContext *s, int x)
42
-uint32_t helper_flcmps(float32 src1, float32 src2)
39
+{
43
+uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2)
40
+ return x ^ 2;
41
+}
42
+
43
static inline int neon_3same_fp_size(DisasContext *s, int x)
44
{
44
{
45
/* Convert 0==fp32, 1==fp16 into a MO_* value */
45
/*
46
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
46
* FLCMP never raises an exception nor modifies any FSR fields.
47
* Perform the comparison with a dummy fp environment.
48
*/
49
- float_status discard = { };
50
+ float_status discard = env->fp_status;
51
FloatRelation r;
52
53
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
54
@@ -XXX,XX +XXX,XX @@ uint32_t helper_flcmps(float32 src1, float32 src2)
55
g_assert_not_reached();
56
}
57
58
-uint32_t helper_flcmpd(float64 src1, float64 src2)
59
+uint32_t helper_flcmpd(CPUSPARCState *env, float64 src1, float64 src2)
60
{
61
- float_status discard = { };
62
+ float_status discard = env->fp_status;
63
FloatRelation r;
64
65
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
66
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
47
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/tcg/a64.decode
68
--- a/target/sparc/translate.c
49
+++ b/target/arm/tcg/a64.decode
69
+++ b/target/sparc/translate.c
50
@@ -XXX,XX +XXX,XX @@
70
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPs(DisasContext *dc, arg_FLCMPs *a)
51
71
52
%rd 0:5
72
src1 = gen_load_fpr_F(dc, a->rs1);
53
%esz_sd 22:1 !function=plus_2
73
src2 = gen_load_fpr_F(dc, a->rs2);
54
+%esz_hsd 22:2 !function=xor_2
74
- gen_helper_flcmps(cpu_fcc[a->cc], src1, src2);
55
%hl 11:1 21:1
75
+ gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);
56
%hlm 11:1 20:2
76
return advance_pc(dc);
57
58
@@ -XXX,XX +XXX,XX @@
59
60
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
61
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
62
+@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
63
64
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
65
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
66
@@ -XXX,XX +XXX,XX @@ INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
67
68
### Advanced SIMD scalar three same
69
70
+FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
71
+FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
72
+FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
73
+FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
74
+
75
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
76
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
77
78
### Advanced SIMD three same
79
80
+FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
81
+FADD_v 0.00 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
82
+
83
+FSUB_v 0.00 1110 110 ..... 00010 1 ..... ..... @qrrr_h
84
+FSUB_v 0.00 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
85
+
86
+FDIV_v 0.10 1110 010 ..... 00111 1 ..... ..... @qrrr_h
87
+FDIV_v 0.10 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
88
+
89
+FMUL_v 0.10 1110 010 ..... 00011 1 ..... ..... @qrrr_h
90
+FMUL_v 0.10 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
91
+
92
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
93
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
94
95
### Advanced SIMD scalar x indexed element
96
97
+FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
98
+FMUL_si 0101 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
99
+FMUL_si 0101 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
100
+
101
FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
102
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
103
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
104
105
### Advanced SIMD vector x indexed element
106
107
+FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
108
+FMUL_vi 0.00 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
109
+FMUL_vi 0.00 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
110
+
111
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
112
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
113
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
114
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/tcg/translate-a64.c
117
+++ b/target/arm/tcg/translate-a64.c
118
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
119
return true;
120
}
77
}
121
78
122
+static const FPScalar f_scalar_fadd = {
79
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPd(DisasContext *dc, arg_FLCMPd *a)
123
+ gen_helper_vfp_addh,
80
124
+ gen_helper_vfp_adds,
81
src1 = gen_load_fpr_D(dc, a->rs1);
125
+ gen_helper_vfp_addd,
82
src2 = gen_load_fpr_D(dc, a->rs2);
126
+};
83
- gen_helper_flcmpd(cpu_fcc[a->cc], src1, src2);
127
+TRANS(FADD_s, do_fp3_scalar, a, &f_scalar_fadd)
84
+ gen_helper_flcmpd(cpu_fcc[a->cc], tcg_env, src1, src2);
128
+
85
return advance_pc(dc);
129
+static const FPScalar f_scalar_fsub = {
130
+ gen_helper_vfp_subh,
131
+ gen_helper_vfp_subs,
132
+ gen_helper_vfp_subd,
133
+};
134
+TRANS(FSUB_s, do_fp3_scalar, a, &f_scalar_fsub)
135
+
136
+static const FPScalar f_scalar_fdiv = {
137
+ gen_helper_vfp_divh,
138
+ gen_helper_vfp_divs,
139
+ gen_helper_vfp_divd,
140
+};
141
+TRANS(FDIV_s, do_fp3_scalar, a, &f_scalar_fdiv)
142
+
143
+static const FPScalar f_scalar_fmul = {
144
+ gen_helper_vfp_mulh,
145
+ gen_helper_vfp_muls,
146
+ gen_helper_vfp_muld,
147
+};
148
+TRANS(FMUL_s, do_fp3_scalar, a, &f_scalar_fmul)
149
+
150
static const FPScalar f_scalar_fmulx = {
151
gen_helper_advsimd_mulxh,
152
gen_helper_vfp_mulxs,
153
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
154
return true;
155
}
86
}
156
87
157
+static gen_helper_gvec_3_ptr * const f_vector_fadd[3] = {
158
+ gen_helper_gvec_fadd_h,
159
+ gen_helper_gvec_fadd_s,
160
+ gen_helper_gvec_fadd_d,
161
+};
162
+TRANS(FADD_v, do_fp3_vector, a, f_vector_fadd)
163
+
164
+static gen_helper_gvec_3_ptr * const f_vector_fsub[3] = {
165
+ gen_helper_gvec_fsub_h,
166
+ gen_helper_gvec_fsub_s,
167
+ gen_helper_gvec_fsub_d,
168
+};
169
+TRANS(FSUB_v, do_fp3_vector, a, f_vector_fsub)
170
+
171
+static gen_helper_gvec_3_ptr * const f_vector_fdiv[3] = {
172
+ gen_helper_gvec_fdiv_h,
173
+ gen_helper_gvec_fdiv_s,
174
+ gen_helper_gvec_fdiv_d,
175
+};
176
+TRANS(FDIV_v, do_fp3_vector, a, f_vector_fdiv)
177
+
178
+static gen_helper_gvec_3_ptr * const f_vector_fmul[3] = {
179
+ gen_helper_gvec_fmul_h,
180
+ gen_helper_gvec_fmul_s,
181
+ gen_helper_gvec_fmul_d,
182
+};
183
+TRANS(FMUL_v, do_fp3_vector, a, f_vector_fmul)
184
+
185
static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
186
gen_helper_gvec_fmulx_h,
187
gen_helper_gvec_fmulx_s,
188
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
189
return true;
190
}
191
192
+TRANS(FMUL_si, do_fp3_scalar_idx, a, &f_scalar_fmul)
193
TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
194
195
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
196
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
197
return true;
198
}
199
200
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmul[3] = {
201
+ gen_helper_gvec_fmul_idx_h,
202
+ gen_helper_gvec_fmul_idx_s,
203
+ gen_helper_gvec_fmul_idx_d,
204
+};
205
+TRANS(FMUL_vi, do_fp3_vector_idx, a, f_vector_idx_fmul)
206
+
207
static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
208
gen_helper_gvec_fmulx_idx_h,
209
gen_helper_gvec_fmulx_idx_s,
210
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
211
tcg_op2 = read_fp_sreg(s, rm);
212
213
switch (opcode) {
214
- case 0x0: /* FMUL */
215
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
216
- break;
217
- case 0x1: /* FDIV */
218
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
219
- break;
220
- case 0x2: /* FADD */
221
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
222
- break;
223
- case 0x3: /* FSUB */
224
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
225
- break;
226
case 0x4: /* FMAX */
227
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
228
break;
229
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
230
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
231
gen_helper_vfp_negs(tcg_res, tcg_res);
232
break;
233
+ default:
234
+ case 0x0: /* FMUL */
235
+ case 0x1: /* FDIV */
236
+ case 0x2: /* FADD */
237
+ case 0x3: /* FSUB */
238
+ g_assert_not_reached();
239
}
240
241
write_fp_sreg(s, rd, tcg_res);
242
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
243
tcg_op2 = read_fp_dreg(s, rm);
244
245
switch (opcode) {
246
- case 0x0: /* FMUL */
247
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
248
- break;
249
- case 0x1: /* FDIV */
250
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
251
- break;
252
- case 0x2: /* FADD */
253
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
254
- break;
255
- case 0x3: /* FSUB */
256
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
257
- break;
258
case 0x4: /* FMAX */
259
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
260
break;
261
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
262
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
263
gen_helper_vfp_negd(tcg_res, tcg_res);
264
break;
265
+ default:
266
+ case 0x0: /* FMUL */
267
+ case 0x1: /* FDIV */
268
+ case 0x2: /* FADD */
269
+ case 0x3: /* FSUB */
270
+ g_assert_not_reached();
271
}
272
273
write_fp_dreg(s, rd, tcg_res);
274
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
275
tcg_op2 = read_fp_hreg(s, rm);
276
277
switch (opcode) {
278
- case 0x0: /* FMUL */
279
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
280
- break;
281
- case 0x1: /* FDIV */
282
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
283
- break;
284
- case 0x2: /* FADD */
285
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
286
- break;
287
- case 0x3: /* FSUB */
288
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
289
- break;
290
case 0x4: /* FMAX */
291
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
292
break;
293
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
294
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
295
break;
296
default:
297
+ case 0x0: /* FMUL */
298
+ case 0x1: /* FDIV */
299
+ case 0x2: /* FADD */
300
+ case 0x3: /* FSUB */
301
g_assert_not_reached();
302
}
303
304
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
305
case 0x18: /* FMAXNM */
306
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
307
break;
308
- case 0x1a: /* FADD */
309
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
310
- break;
311
case 0x1c: /* FCMEQ */
312
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
313
break;
314
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
315
case 0x38: /* FMINNM */
316
gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
317
break;
318
- case 0x3a: /* FSUB */
319
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
320
- break;
321
case 0x3e: /* FMIN */
322
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
323
break;
324
case 0x3f: /* FRSQRTS */
325
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
326
break;
327
- case 0x5b: /* FMUL */
328
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
329
- break;
330
case 0x5c: /* FCMGE */
331
gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
332
break;
333
case 0x5d: /* FACGE */
334
gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
335
break;
336
- case 0x5f: /* FDIV */
337
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
338
- break;
339
case 0x7a: /* FABD */
340
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
341
gen_helper_vfp_absd(tcg_res, tcg_res);
342
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
343
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
344
break;
345
default:
346
+ case 0x1a: /* FADD */
347
case 0x1b: /* FMULX */
348
+ case 0x3a: /* FSUB */
349
+ case 0x5b: /* FMUL */
350
+ case 0x5f: /* FDIV */
351
g_assert_not_reached();
352
}
353
354
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
355
gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
356
tcg_res, fpst);
357
break;
358
- case 0x1a: /* FADD */
359
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
360
- break;
361
case 0x1c: /* FCMEQ */
362
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
363
break;
364
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
365
case 0x38: /* FMINNM */
366
gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
367
break;
368
- case 0x3a: /* FSUB */
369
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
370
- break;
371
case 0x3e: /* FMIN */
372
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
373
break;
374
case 0x3f: /* FRSQRTS */
375
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
376
break;
377
- case 0x5b: /* FMUL */
378
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
379
- break;
380
case 0x5c: /* FCMGE */
381
gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
382
break;
383
case 0x5d: /* FACGE */
384
gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
385
break;
386
- case 0x5f: /* FDIV */
387
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
388
- break;
389
case 0x7a: /* FABD */
390
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
391
gen_helper_vfp_abss(tcg_res, tcg_res);
392
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
393
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
394
break;
395
default:
396
+ case 0x1a: /* FADD */
397
case 0x1b: /* FMULX */
398
+ case 0x3a: /* FSUB */
399
+ case 0x5b: /* FMUL */
400
+ case 0x5f: /* FDIV */
401
g_assert_not_reached();
402
}
403
404
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
405
case 0x19: /* FMLA */
406
case 0x39: /* FMLS */
407
case 0x18: /* FMAXNM */
408
- case 0x1a: /* FADD */
409
case 0x1c: /* FCMEQ */
410
case 0x1e: /* FMAX */
411
case 0x38: /* FMINNM */
412
- case 0x3a: /* FSUB */
413
case 0x3e: /* FMIN */
414
- case 0x5b: /* FMUL */
415
case 0x5c: /* FCMGE */
416
- case 0x5f: /* FDIV */
417
case 0x7a: /* FABD */
418
case 0x7c: /* FCMGT */
419
if (!fp_access_check(s)) {
420
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
421
return;
422
423
default:
424
+ case 0x1a: /* FADD */
425
case 0x1b: /* FMULX */
426
+ case 0x3a: /* FSUB */
427
+ case 0x5b: /* FMUL */
428
+ case 0x5f: /* FDIV */
429
unallocated_encoding(s);
430
return;
431
}
432
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
433
switch (fpopcode) {
434
case 0x0: /* FMAXNM */
435
case 0x1: /* FMLA */
436
- case 0x2: /* FADD */
437
case 0x4: /* FCMEQ */
438
case 0x6: /* FMAX */
439
case 0x7: /* FRECPS */
440
case 0x8: /* FMINNM */
441
case 0x9: /* FMLS */
442
- case 0xa: /* FSUB */
443
case 0xe: /* FMIN */
444
case 0xf: /* FRSQRTS */
445
- case 0x13: /* FMUL */
446
case 0x14: /* FCMGE */
447
case 0x15: /* FACGE */
448
- case 0x17: /* FDIV */
449
case 0x1a: /* FABD */
450
case 0x1c: /* FCMGT */
451
case 0x1d: /* FACGT */
452
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
453
pairwise = true;
454
break;
455
default:
456
+ case 0x2: /* FADD */
457
case 0x3: /* FMULX */
458
+ case 0xa: /* FSUB */
459
+ case 0x13: /* FMUL */
460
+ case 0x17: /* FDIV */
461
unallocated_encoding(s);
462
return;
463
}
464
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
465
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
466
fpst);
467
break;
468
- case 0x2: /* FADD */
469
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
470
- break;
471
case 0x4: /* FCMEQ */
472
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
473
break;
474
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
475
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
476
fpst);
477
break;
478
- case 0xa: /* FSUB */
479
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
480
- break;
481
case 0xe: /* FMIN */
482
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
483
break;
484
case 0xf: /* FRSQRTS */
485
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
486
break;
487
- case 0x13: /* FMUL */
488
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
489
- break;
490
case 0x14: /* FCMGE */
491
gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
492
break;
493
case 0x15: /* FACGE */
494
gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
495
break;
496
- case 0x17: /* FDIV */
497
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
498
- break;
499
case 0x1a: /* FABD */
500
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
501
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
502
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
503
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
504
break;
505
default:
506
+ case 0x2: /* FADD */
507
case 0x3: /* FMULX */
508
+ case 0xa: /* FSUB */
509
+ case 0x13: /* FMUL */
510
+ case 0x17: /* FDIV */
511
g_assert_not_reached();
512
}
513
514
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
515
break;
516
case 0x01: /* FMLA */
517
case 0x05: /* FMLS */
518
- case 0x09: /* FMUL */
519
is_fp = 1;
520
break;
521
case 0x1d: /* SQRDMLAH */
522
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
523
/* is_fp, but we pass tcg_env not fp_status. */
524
break;
525
default:
526
+ case 0x09: /* FMUL */
527
case 0x19: /* FMULX */
528
unallocated_encoding(s);
529
return;
530
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
531
read_vec_element(s, tcg_res, rd, pass, MO_64);
532
gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
533
break;
534
- case 0x09: /* FMUL */
535
- gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
536
- break;
537
default:
538
+ case 0x09: /* FMUL */
539
case 0x19: /* FMULX */
540
g_assert_not_reached();
541
}
542
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
543
g_assert_not_reached();
544
}
545
break;
546
- case 0x09: /* FMUL */
547
- switch (size) {
548
- case 1:
549
- if (is_scalar) {
550
- gen_helper_advsimd_mulh(tcg_res, tcg_op,
551
- tcg_idx, fpst);
552
- } else {
553
- gen_helper_advsimd_mul2h(tcg_res, tcg_op,
554
- tcg_idx, fpst);
555
- }
556
- break;
557
- case 2:
558
- gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
559
- break;
560
- default:
561
- g_assert_not_reached();
562
- }
563
- break;
564
case 0x0c: /* SQDMULH */
565
if (size == 1) {
566
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
567
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
568
}
569
break;
570
default:
571
+ case 0x09: /* FMUL */
572
case 0x19: /* FMULX */
573
g_assert_not_reached();
574
}
575
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
576
index XXXXXXX..XXXXXXX 100644
577
--- a/target/arm/tcg/vec_helper.c
578
+++ b/target/arm/tcg/vec_helper.c
579
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
580
DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)
581
582
#ifdef TARGET_AARCH64
583
+DO_3OP(gvec_fdiv_h, float16_div, float16)
584
+DO_3OP(gvec_fdiv_s, float32_div, float32)
585
+DO_3OP(gvec_fdiv_d, float64_div, float64)
586
+
587
DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
588
DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
589
DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)
590
--
88
--
591
2.34.1
89
2.34.1
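For anyone making the same change to another helper, the sparc patch above shows the three pieces that must change together. Collected in one place, using the flcmps names from that patch:

    /* helper.h: grow the signature by one argument and add 'env' */
    DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)

    /* fop_helper.c: accept the CPU state pointer */
    uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2);

    /* translate.c: pass tcg_env at the call site */
    gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);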
In the helper_compute_fprf functions, we pass a dummy float_status
in to the is_signaling_nan() function. This is unnecessary, because
we have convenient access to the CPU env pointer here and that
is already set up with the correct values for the snan_bit_is_one
and no_signaling_nans config settings. is_signaling_nan() doesn't
ever update the fp_status with any exception flags, so there is
no reason not to use env->fp_status here.

Use env->fp_status instead of the dummy fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-34-peter.maydell@linaro.org
---
target/ppc/fpu_helper.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void helper_compute_fprf_##tp(CPUPPCState *env, tp arg) \
 } else if (tp##_is_infinity(arg)) { \
 fprf = neg ? 0x09 << FPSCR_FPRF : 0x05 << FPSCR_FPRF; \
 } else { \
- float_status dummy = { }; /* snan_bit_is_one = 0 */ \
- if (tp##_is_signaling_nan(arg, &dummy)) { \
+ if (tp##_is_signaling_nan(arg, &env->fp_status)) { \
 fprf = 0x00 << FPSCR_FPRF; \
 } else { \
 fprf = 0x11 << FPSCR_FPRF; \
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Now that float_status has a bunch of fp parameters,
it is easier to copy an existing structure than create
one from scratch. Begin by copying the structure that
corresponds to the FPSR and make only the adjustments
required for BFloat16 semantics.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/vec_helper.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-20-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 60 ++++++++++++++++++++++------------
target/arm/tcg/vec_helper.c | 6 ++++
4 files changed, 53 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd

+FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
+FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same

FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -XXX,XX +XXX,XX @@ FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd

+FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
+FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element

FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_facgt = {
};
TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)

+static void gen_fabd_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subh(d, n, m, s);
+ gen_vfp_absh(d, d);
+}
+
+static void gen_fabd_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subs(d, n, m, s);
+ gen_vfp_abss(d, d);
+}
+
+static void gen_fabd_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subd(d, n, m, s);
+ gen_vfp_absd(d, d);
+}
+
+static const FPScalar f_scalar_fabd = {
+ gen_fabd_h,
+ gen_fabd_s,
+ gen_fabd_d,
+};
+TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)
82
+
83
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
84
gen_helper_gvec_3_ptr * const fns[3])
85
{
86
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
87
};
88
TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
89
90
+static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
91
+ gen_helper_gvec_fabd_h,
92
+ gen_helper_gvec_fabd_s,
93
+ gen_helper_gvec_fabd_d,
94
+};
95
+TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)
96
+
97
/*
98
* Advanced SIMD scalar/vector x indexed element
99
*/
100
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
101
case 0x3f: /* FRSQRTS */
102
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
103
break;
104
- case 0x7a: /* FABD */
105
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
106
- gen_vfp_absd(tcg_res, tcg_res);
107
- break;
108
default:
109
case 0x18: /* FMAXNM */
110
case 0x19: /* FMLA */
111
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
112
case 0x5c: /* FCMGE */
113
case 0x5d: /* FACGE */
114
case 0x5f: /* FDIV */
115
+ case 0x7a: /* FABD */
116
case 0x7c: /* FCMGT */
117
case 0x7d: /* FACGT */
118
g_assert_not_reached();
119
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
120
case 0x3f: /* FRSQRTS */
121
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
122
break;
123
- case 0x7a: /* FABD */
124
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
125
- gen_vfp_abss(tcg_res, tcg_res);
126
- break;
127
default:
128
case 0x18: /* FMAXNM */
129
case 0x19: /* FMLA */
130
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
131
case 0x5c: /* FCMGE */
132
case 0x5d: /* FACGE */
133
case 0x5f: /* FDIV */
134
+ case 0x7a: /* FABD */
135
case 0x7c: /* FCMGT */
136
case 0x7d: /* FACGT */
137
g_assert_not_reached();
138
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
139
switch (fpopcode) {
140
case 0x1f: /* FRECPS */
141
case 0x3f: /* FRSQRTS */
142
- case 0x7a: /* FABD */
143
break;
144
default:
145
case 0x1b: /* FMULX */
146
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
147
case 0x7d: /* FACGT */
148
case 0x1c: /* FCMEQ */
149
case 0x5c: /* FCMGE */
150
+ case 0x7a: /* FABD */
151
case 0x7c: /* FCMGT */
152
unallocated_encoding(s);
153
return;
154
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
155
switch (fpopcode) {
156
case 0x07: /* FRECPS */
157
case 0x0f: /* FRSQRTS */
158
- case 0x1a: /* FABD */
159
break;
160
default:
161
case 0x03: /* FMULX */
162
case 0x04: /* FCMEQ (reg) */
163
case 0x14: /* FCMGE (reg) */
164
case 0x15: /* FACGE */
165
+ case 0x1a: /* FABD */
166
case 0x1c: /* FCMGT (reg) */
167
case 0x1d: /* FACGT */
168
unallocated_encoding(s);
169
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
170
case 0x0f: /* FRSQRTS */
171
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
172
break;
173
- case 0x1a: /* FABD */
174
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
175
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
176
- break;
177
default:
178
case 0x03: /* FMULX */
179
case 0x04: /* FCMEQ (reg) */
180
case 0x14: /* FCMGE (reg) */
181
case 0x15: /* FACGE */
182
+ case 0x1a: /* FABD */
183
case 0x1c: /* FCMGT (reg) */
184
case 0x1d: /* FACGT */
185
g_assert_not_reached();
186
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
187
return;
188
case 0x1f: /* FRECPS */
189
case 0x3f: /* FRSQRTS */
190
- case 0x7a: /* FABD */
191
if (!fp_access_check(s)) {
192
return;
193
}
194
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
195
case 0x5c: /* FCMGE */
196
case 0x5d: /* FACGE */
197
case 0x5f: /* FDIV */
198
+ case 0x7a: /* FABD */
199
case 0x7d: /* FACGT */
200
case 0x7c: /* FCMGT */
201
unallocated_encoding(s);
202
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
203
switch (fpopcode) {
204
case 0x7: /* FRECPS */
205
case 0xf: /* FRSQRTS */
206
- case 0x1a: /* FABD */
207
pairwise = false;
208
break;
209
case 0x10: /* FMAXNMP */
210
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
211
case 0x14: /* FCMGE */
212
case 0x15: /* FACGE */
213
case 0x17: /* FDIV */
214
+ case 0x1a: /* FABD */
215
case 0x1c: /* FCMGT */
216
case 0x1d: /* FACGT */
217
unallocated_encoding(s);
218
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
219
case 0xf: /* FRSQRTS */
220
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
221
break;
222
- case 0x1a: /* FABD */
223
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
224
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
225
- break;
226
default:
227
case 0x0: /* FMAXNM */
228
case 0x1: /* FMLA */
229
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
230
case 0x14: /* FCMGE */
231
case 0x15: /* FACGE */
232
case 0x17: /* FDIV */
233
+ case 0x1a: /* FABD */
234
case 0x1c: /* FCMGT */
235
case 0x1d: /* FACGT */
236
g_assert_not_reached();
237
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
18
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
238
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
239
--- a/target/arm/tcg/vec_helper.c
20
--- a/target/arm/tcg/vec_helper.c
240
+++ b/target/arm/tcg/vec_helper.c
21
+++ b/target/arm/tcg/vec_helper.c
241
@@ -XXX,XX +XXX,XX @@ static float32 float32_abd(float32 op1, float32 op2, float_status *stat)
22
@@ -XXX,XX +XXX,XX @@ bool is_ebf(CPUARMState *env, float_status *statusp, float_status *oddstatusp)
242
return float32_abs(float32_sub(op1, op2, stat));
23
* no effect on AArch32 instructions.
24
*/
25
bool ebf = is_a64(env) && env->vfp.fpcr & FPCR_EBF;
26
- *statusp = (float_status){
27
- .tininess_before_rounding = float_tininess_before_rounding,
28
- .float_rounding_mode = float_round_to_odd_inf,
29
- .flush_to_zero = true,
30
- .flush_inputs_to_zero = true,
31
- .default_nan_mode = true,
32
- };
33
+
34
+ *statusp = env->vfp.fp_status;
35
+ set_default_nan_mode(true, statusp);
36
37
if (ebf) {
38
- float_status *fpst = &env->vfp.fp_status;
39
- set_flush_to_zero(get_flush_to_zero(fpst), statusp);
40
- set_flush_inputs_to_zero(get_flush_inputs_to_zero(fpst), statusp);
41
- set_float_rounding_mode(get_float_rounding_mode(fpst), statusp);
42
-
43
/* EBF=1 needs to do a step with round-to-odd semantics */
44
*oddstatusp = *statusp;
45
set_float_rounding_mode(float_round_to_odd, oddstatusp);
46
+ } else {
47
+ set_flush_to_zero(true, statusp);
48
+ set_flush_inputs_to_zero(true, statusp);
49
+ set_float_rounding_mode(float_round_to_odd_inf, statusp);
50
}
51
-
52
return ebf;
243
}
53
}
244
54
245
+static float64 float64_abd(float64 op1, float64 op2, float_status *stat)
246
+{
247
+ return float64_abs(float64_sub(op1, op2, stat));
248
+}
249
+
250
/*
251
* Reciprocal step. These are the AArch32 version which uses a
252
* non-fused multiply-and-subtract.
253
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
254
255
DO_3OP(gvec_fabd_h, float16_abd, float16)
256
DO_3OP(gvec_fabd_s, float32_abd, float32)
257
+DO_3OP(gvec_fabd_d, float64_abd, float64)
258
259
DO_3OP(gvec_fceq_h, float16_ceq, float16)
260
DO_3OP(gvec_fceq_s, float32_ceq, float32)
261
--
55
--
262
2.34.1
56
2.34.1
57
58
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 5 +
target/arm/tcg/a64.decode | 30 ++++++
target/arm/tcg/translate-a64.c | 188 +++++++++++++++++++--------------
target/arm/tcg/vec_helper.c | 30 ++++++
4 files changed, 174 insertions(+), 79 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fceq_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fcge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fcgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_facge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_facgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ FMINNM_s 0001 1110 ..1 ..... 0111 10 ..... ..... @rrr_hsd
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd

+FCMEQ_s 0101 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMEQ_s 0101 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGE_s 0111 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMGE_s 0111 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGT_s 0111 1110 110 ..... 00100 1 ..... ..... @rrr_h
+FCMGT_s 0111 1110 1.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FACGE_s 0111 1110 010 ..... 00101 1 ..... ..... @rrr_h
+FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
+
+FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
+FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same

FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -XXX,XX +XXX,XX @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd

+FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGE_v 0.10 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMGE_v 0.10 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGT_v 0.10 1110 110 ..... 00100 1 ..... ..... @qrrr_h
+FCMGT_v 0.10 1110 1.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FACGE_v 0.10 1110 010 ..... 00101 1 ..... ..... @qrrr_h
+FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
+
+FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
+FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element

FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_fnmul = {
};
TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)

+static const FPScalar f_scalar_fcmeq = {
+ gen_helper_advsimd_ceq_f16,
+ gen_helper_neon_ceq_f32,
+ gen_helper_neon_ceq_f64,
+};
+TRANS(FCMEQ_s, do_fp3_scalar, a, &f_scalar_fcmeq)
+
+static const FPScalar f_scalar_fcmge = {
+ gen_helper_advsimd_cge_f16,
+ gen_helper_neon_cge_f32,
+ gen_helper_neon_cge_f64,
+};
+TRANS(FCMGE_s, do_fp3_scalar, a, &f_scalar_fcmge)
+
+static const FPScalar f_scalar_fcmgt = {
+ gen_helper_advsimd_cgt_f16,
+ gen_helper_neon_cgt_f32,
+ gen_helper_neon_cgt_f64,
+};
+TRANS(FCMGT_s, do_fp3_scalar, a, &f_scalar_fcmgt)
+
+static const FPScalar f_scalar_facge = {
+ gen_helper_advsimd_acge_f16,
+ gen_helper_neon_acge_f32,
+ gen_helper_neon_acge_f64,
+};
+TRANS(FACGE_s, do_fp3_scalar, a, &f_scalar_facge)
+
+static const FPScalar f_scalar_facgt = {
+ gen_helper_advsimd_acgt_f16,
+ gen_helper_neon_acgt_f32,
+ gen_helper_neon_acgt_f64,
+};
+TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_fmls[3] = {
};
TRANS(FMLS_v, do_fp3_vector, a, f_vector_fmls)

+static gen_helper_gvec_3_ptr * const f_vector_fcmeq[3] = {
+ gen_helper_gvec_fceq_h,
+ gen_helper_gvec_fceq_s,
+ gen_helper_gvec_fceq_d,
+};
+TRANS(FCMEQ_v, do_fp3_vector, a, f_vector_fcmeq)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmge[3] = {
+ gen_helper_gvec_fcge_h,
+ gen_helper_gvec_fcge_s,
+ gen_helper_gvec_fcge_d,
+};
+TRANS(FCMGE_v, do_fp3_vector, a, f_vector_fcmge)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmgt[3] = {
+ gen_helper_gvec_fcgt_h,
+ gen_helper_gvec_fcgt_s,
+ gen_helper_gvec_fcgt_d,
+};
+TRANS(FCMGT_v, do_fp3_vector, a, f_vector_fcmgt)
+
+static gen_helper_gvec_3_ptr * const f_vector_facge[3] = {
+ gen_helper_gvec_facge_h,
+ gen_helper_gvec_facge_s,
+ gen_helper_gvec_facge_d,
+};
+TRANS(FACGE_v, do_fp3_vector, a, f_vector_facge)
+
+static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
+ gen_helper_gvec_facgt_h,
+ gen_helper_gvec_facgt_s,
+ gen_helper_gvec_facgt_d,
+};
+TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);

switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_absd(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}

@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);

switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_abss(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}

@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
+ case 0x7a: /* FABD */
+ break;
+ default:
+ case 0x1b: /* FMULX */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7c: /* FCMGT */
- case 0x7a: /* FABD */
- break;
- default:
- case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
TCGv_i32 tcg_res;

switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
tcg_res = tcg_temp_new_i32();

switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x07: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x0f: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE (reg) */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT (reg) */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}

@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x5d: /* FACGE */
- case 0x7d: /* FACGT */
- case 0x1c: /* FCMEQ */
- case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
if (!fp_access_check(s)) {
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7d: /* FACGT */
+ case 0x7c: /* FCMGT */
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;

switch (fpopcode) {
- case 0x4: /* FCMEQ */
case 0x7: /* FRECPS */
case 0xf: /* FRSQRTS */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
pairwise = false;
break;
case 0x10: /* FMAXNMP */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);

switch (fpopcode) {
- case 0x4: /* FCMEQ */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}

diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t float32_ceq(float32 op1, float32 op2, float_status *stat)
return -float32_eq_quiet(op1, op2, stat);
}

+static uint64_t float64_ceq(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_eq_quiet(op1, op2, stat);
+}
+
static uint16_t float16_cge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(op2, op1, stat);
@@ -XXX,XX +XXX,XX @@ static uint32_t float32_cge(float32 op1, float32 op2, float_status *stat)
return -float32_le(op2, op1, stat);
}

+static uint64_t float64_cge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(op2, op1, stat);
+}
+
static uint16_t float16_cgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(op2, op1, stat);
@@ -XXX,XX +XXX,XX @@ static uint32_t float32_cgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(op2, op1, stat);
}

+static uint64_t float64_cgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(op2, op1, stat);
+}
+
static uint16_t float16_acge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(float16_abs(op2), float16_abs(op1), stat);
@@ -XXX,XX +XXX,XX @@ static uint32_t float32_acge(float32 op1, float32 op2, float_status *stat)
return -float32_le(float32_abs(op2), float32_abs(op1), stat);
}

+static uint64_t float64_acge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(float64_abs(op2), float64_abs(op1), stat);
+}
+
static uint16_t float16_acgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(float16_abs(op2), float16_abs(op1), stat);
@@ -XXX,XX +XXX,XX @@ static uint32_t float32_acgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(float32_abs(op2), float32_abs(op1), stat);
}

+static uint64_t float64_acgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(float64_abs(op2), float64_abs(op1), stat);
+}
+
static int16_t vfp_tosszh(float16 x, void *fpstp)
{
float_status *fpst = fpstp;
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_fabd_s, float32_abd, float32)

DO_3OP(gvec_fceq_h, float16_ceq, float16)
DO_3OP(gvec_fceq_s, float32_ceq, float32)
+DO_3OP(gvec_fceq_d, float64_ceq, float64)

DO_3OP(gvec_fcge_h, float16_cge, float16)
DO_3OP(gvec_fcge_s, float32_cge, float32)
+DO_3OP(gvec_fcge_d, float64_cge, float64)

DO_3OP(gvec_fcgt_h, float16_cgt, float16)
DO_3OP(gvec_fcgt_s, float32_cgt, float32)
+DO_3OP(gvec_fcgt_d, float64_cgt, float64)

DO_3OP(gvec_facge_h, float16_acge, float16)
DO_3OP(gvec_facge_s, float32_acge, float32)
+DO_3OP(gvec_facge_d, float64_acge, float64)

DO_3OP(gvec_facgt_h, float16_acgt, float16)
DO_3OP(gvec_facgt_s, float32_acgt, float32)
+DO_3OP(gvec_facgt_d, float64_acgt, float64)

DO_3OP(gvec_fmax_h, float16_max, float16)
DO_3OP(gvec_fmax_s, float32_max, float32)
--
2.34.1

Currently we hardcode the default NaN value in parts64_default_nan()
using a compile-time ifdef ladder. This is awkward for two cases:
* for single-QEMU-binary we can't hard-code target-specifics like this
* for Arm FEAT_AFP the default NaN value depends on FPCR.AH
(specifically the sign bit is different)

Add a field to float_status to specify the default NaN value; fall
back to the old ifdef behaviour if these are not set.

The default NaN value is specified by setting a uint8_t to a
pattern corresponding to the sign and upper fraction parts of
the NaN; the lower bits of the fraction are set from bit 0 of
the pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-35-peter.maydell@linaro.org
---
include/fpu/softfloat-helpers.h | 11 +++++++
include/fpu/softfloat-types.h | 10 ++++++
fpu/softfloat-specialize.c.inc | 55 ++++++++++++++++++++-------------
3 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
status->float_infzeronan_rule = rule;
}

+static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
+ float_status *status)
+{
+ status->default_nan_pattern = dnan_pattern;
+}
+
static inline void set_flush_to_zero(bool val, float_status *status)
{
status->flush_to_zero = val;
@@ -XXX,XX +XXX,XX @@ static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status
return status->float_infzeronan_rule;
}

+static inline uint8_t get_float_default_nan_pattern(float_status *status)
+{
+ return status->default_nan_pattern;
+}
+
static inline bool get_flush_to_zero(float_status *status)
{
return status->flush_to_zero;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
/* should denormalised inputs go to zero and set the input_denormal flag? */
bool flush_inputs_to_zero;
bool default_nan_mode;
+ /*
+ * The pattern to use for the default NaN. Here the high bit specifies
+ * the default NaN's sign bit, and bits 6..0 specify the high bits of the
+ * fractional part. The low bits of the fractional part are copies of bit 0.
+ * The exponent of the default NaN is (as for any NaN) always all 1s.
+ * Note that a value of 0 here is not a valid NaN. The target must set
+ * this to the correct non-zero value, or we will assert when trying to
+ * create a default NaN.
+ */
+ uint8_t default_nan_pattern;
/*
* The flags below are not used on all specializations and may
* constant fold away (see snan_bit_is_one()/no_signalling_nans() in
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
{
bool sign = 0;
uint64_t frac;
+ uint8_t dnan_pattern = status->default_nan_pattern;

+ if (dnan_pattern == 0) {
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
- /* !snan_bit_is_one, set all bits */
- frac = (1ULL << DECOMPOSED_BINARY_POINT) - 1;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
+ /* Sign bit clear, all frac bits set */
+ dnan_pattern = 0b01111111;
+#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
|| defined(TARGET_MICROBLAZE)
- /* !snan_bit_is_one, set sign and msb */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
- sign = 1;
+ /* Sign bit set, most significant frac bit set */
+ dnan_pattern = 0b11000000;
#elif defined(TARGET_HPPA)
- /* snan_bit_is_one, set msb-1. */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 2);
+ /* Sign bit clear, msb-1 frac bit set */
+ dnan_pattern = 0b00100000;
#elif defined(TARGET_HEXAGON)
- sign = 1;
- frac = ~0ULL;
+ /* Sign bit set, all frac bits set. */
+ dnan_pattern = 0b11111111;
#else
- /*
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
- * do not have floating-point.
- */
- if (snan_bit_is_one(status)) {
- /* set all bits other than msb */
- frac = (1ULL << (DECOMPOSED_BINARY_POINT - 1)) - 1;
- } else {
- /* set msb */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
- }
+ /*
+ * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
+ * S390, SH4, TriCore, and Xtensa. Our other supported targets
+ * do not have floating-point.
+ */
+ if (snan_bit_is_one(status)) {
+ /* sign bit clear, set all frac bits other than msb */
+ dnan_pattern = 0b00111111;
+ } else {
+ /* sign bit clear, set frac msb */
+ dnan_pattern = 0b01000000;
+ }
#endif
+ }
+ assert(dnan_pattern != 0);
+
+ sign = dnan_pattern >> 7;
+ /*
+ * Place default_nan_pattern [6:0] into bits [62:56],
+ * and replicate bit [0] down into [55:0]
+ */
+ frac = deposit64(0, DECOMPOSED_BINARY_POINT - 7, 7, dnan_pattern);
+ frac = deposit64(frac, 0, DECOMPOSED_BINARY_POINT - 7, -(dnan_pattern & 1));

*p = (FloatParts64) {
.cls = float_class_qnan,
--
2.34.1
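(To make the pattern encoding above concrete, here is a standalone
illustration; the helper name is invented for this note and is not part
of the patch:

    #include <stdint.h>

    /* Expand an 8-bit default-NaN pattern into a float32 bit image. */
    static uint32_t dnan_pattern_to_float32(uint8_t pattern)
    {
        uint32_t sign = pattern >> 7;               /* bit 7: sign bit */
        uint32_t top = pattern & 0x7f;              /* bits 6..0: frac[22:16] */
        uint32_t low = (pattern & 1) ? 0xffff : 0;  /* frac[15:0]: copies of bit 0 */

        return (sign << 31) | (0xffu << 23) | (top << 16) | low;
    }

For example, 0b01000000 gives 0x7fc00000 (the Arm default NaN),
0b11000000 gives 0xffc00000 (the x86 QNaN indefinite), and
0b00111111 gives 0x7fbfffff.)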
New patch
Set the default NaN pattern explicitly for the tests/fp code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-36-peter.maydell@linaro.org
---
tests/fp/fp-bench.c | 1 +
tests/fp/fp-test-log2.c | 1 +
tests/fp/fp-test.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
+ set_float_default_nan_pattern(0b01000000, &soft_status);

f = bench_funcs[operation][precision];
g_assert(f);
diff --git a/tests/fp/fp-test-log2.c b/tests/fp/fp-test-log2.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test-log2.c
+++ b/tests/fp/fp-test-log2.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
int i;

set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+ set_float_default_nan_pattern(0b01000000, &qsf);
set_float_rounding_mode(float_round_nearest_even, &qsf);

test.d = 0.0;
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
+ set_float_default_nan_pattern(0b01000000, &qsf);
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);

genCases_setLevel(test_level);
--
2.34.1
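(A minimal sketch of the same configuration for any standalone
float_status user, assuming only the softfloat API as modified by this
series:

    float_status st = { };

    /* Arm-style default NaN: sign bit clear, frac msb set */
    set_float_default_nan_pattern(0b01000000, &st);

    float32 dnan = float32_default_nan(&st);    /* bit pattern 0x7fc00000 */

Without the explicit set_float_default_nan_pattern() call, such a user
falls back to the per-target ifdef ladder in parts64_default_nan().)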
New patch
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-37-peter.maydell@linaro.org
---
target/microblaze/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 +--
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj, ResetType type)
* this architecture.
*/
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+ /* Default NaN: sign bit set, most significant frac bit set */
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);

#if defined(CONFIG_USER_ONLY)
/* start in user mode with interrupts enabled. */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
/* Sign bit clear, all frac bits set */
dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
- || defined(TARGET_MICROBLAZE)
+#elif defined(TARGET_I386) || defined(TARGET_X86_64)
/* Sign bit set, most significant frac bit set */
dnan_pattern = 0b11000000;
#elif defined(TARGET_HPPA)
--
2.34.1
New patch
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-38-peter.maydell@linaro.org
---
target/i386/tcg/fpu_helper.c | 4 ++++
fpu/softfloat-specialize.c.inc | 3 ---
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
*/
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
+ /* Default NaN: sign bit set, most significant frac bit set */
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);
+ set_float_default_nan_pattern(0b11000000, &env->mmx_status);
+ set_float_default_nan_pattern(0b11000000, &env->sse_status);
}

static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
/* Sign bit clear, all frac bits set */
dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64)
- /* Sign bit set, most significant frac bit set */
- dnan_pattern = 0b11000000;
#elif defined(TARGET_HPPA)
/* Sign bit clear, msb-1 frac bit set */
dnan_pattern = 0b00100000;
--
2.34.1
New patch
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-39-peter.maydell@linaro.org
---
target/hppa/fpu_helper.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 ---
2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
/* For inf * 0 + NaN, return the input NaN */
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ /* Default NaN: sign bit clear, msb-1 frac bit set */
+ set_float_default_nan_pattern(0b00100000, &env->fp_status);
}

void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
/* Sign bit clear, all frac bits set */
dnan_pattern = 0b01111111;
-#elif defined(TARGET_HPPA)
- /* Sign bit clear, msb-1 frac bit set */
- dnan_pattern = 0b00100000;
#elif defined(TARGET_HEXAGON)
/* Sign bit set, all frac bits set. */
dnan_pattern = 0b11111111;
--
2.34.1
New patch
Set the default NaN pattern explicitly for the alpha target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-40-peter.maydell@linaro.org
---
target/alpha/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_initfn(Object *obj)
* operand in Fa. That is float_2nan_prop_ba.
*/
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+ /* Default NaN: sign bit clear, msb frac bit set */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
#if defined(CONFIG_USER_ONLY)
env->flags = ENV_FLAG_PS_USER | ENV_FLAG_FEN;
cpu_alpha_store_fpcr(env, (uint64_t)(FPCR_INVD | FPCR_DZED | FPCR_OVFD
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 43 +++++++++++-----------------------
2 files changed, 18 insertions(+), 29 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
+
+### Cryptographic XAR
+
+XAR 1100 1110 100 rm:5 imm:6 rn:5 rd:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)

+static bool trans_XAR(DisasContext *s, arg_XAR *a)
+{
+ if (!dc_isar_feature(aa64_sha3, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_xar(MO_64, vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), a->imm, 16,
+ vec_full_reg_size(s));
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}

-/* Crypto XAR
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 0 0 | Rm | imm6 | Rn | Rd |
- * +-----------------------+------+--------+------+------+
- */
-static void disas_crypto_xar(DisasContext *s, uint32_t insn)
-{
- int rm = extract32(insn, 16, 5);
- int imm6 = extract32(insn, 10, 6);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sha3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_xar(MO_64, vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), imm6, 16,
- vec_full_reg_size(s));
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1

Set the default NaN pattern explicitly for the arm target.
This includes setting it for the old linux-user nwfpe emulation.
For nwfpe, our default doesn't match the real kernel, but we
avoid making a behaviour change in this commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
---
linux-user/arm/nwfpe/fpa11.c | 5 +++++
target/arm/cpu.c | 2 ++
2 files changed, 7 insertions(+)

diff --git a/linux-user/arm/nwfpe/fpa11.c b/linux-user/arm/nwfpe/fpa11.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/nwfpe/fpa11.c
+++ b/linux-user/arm/nwfpe/fpa11.c
@@ -XXX,XX +XXX,XX @@ void resetFPA11(void)
* this late date.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &fpa11->fp_status);
+ /*
+ * Use the same default NaN value as Arm VFP. This doesn't match
+ * the Linux kernel's nwfpe emulation, which uses an all-1s value.
+ */
+ set_float_default_nan_pattern(0b01000000, &fpa11->fp_status);
}

void SetRoundingMode(const unsigned int opcode)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
* the pseudocode function the arguments are in the order c, a, b.
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
* and the input NaN if it is signalling
+ * * Default NaN has sign bit clear, msb frac bit set
*/
static void arm_set_default_fp_behaviours(float_status *s)
{
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
+ set_float_default_nan_pattern(0b01000000, s);
}

static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
--
2.34.1
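(For reference, XAR operates independently on each 64-bit lane of the
vector; the assumed architectural effect, per the Arm ARM, is a
rotate-right of the XOR of the two source lanes. A scalar illustration
of one lane, not how gen_gvec_xar() actually generates code:

    static uint64_t xar64(uint64_t n, uint64_t m, unsigned imm6)
    {
        uint64_t x = n ^ m;      /* exclusive-OR of the two source lanes */
        imm6 &= 63;
        return imm6 ? (x >> imm6) | (x << (64 - imm6)) : x;  /* ROR by imm6 */
    }
)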
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 10 ++++++++
target/arm/tcg/translate-a64.c | 43 ++++++++++------------------------
2 files changed, 22 insertions(+), 31 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
+
+### Cryptographic three-register, imm2
+
+&crypto3i rd rn rm imm
+@crypto3i ........ ... rm:5 .. imm:2 .. rn:5 rd:5 &crypto3i
+
+SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
+SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
+SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
+SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
return true;
}

+static bool do_crypto3i(DisasContext *s, arg_crypto3i *a, gen_helper_gvec_3 *fn)
+{
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, true, a->rd, a->rn, a->rm, a->imm, fn);
+ }
+ return true;
+}
+TRANS_FEAT(SM3TT1A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1a)
+TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
+TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
+TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
vec_full_reg_size(s));
}

-/* Crypto three-reg imm2
- * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
- * +-----------------------+------+-----+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 0 | Rm | 1 0 | imm2 | opcode | Rn | Rd |
- * +-----------------------+------+-----+------+--------+------+------+
- */
-static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
-{
- static gen_helper_gvec_3 * const fns[4] = {
- gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
- gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
- };
- int opcode = extract32(insn, 10, 2);
- int imm2 = extract32(insn, 12, 2);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sm3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
- { 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1

Set the default NaN pattern explicitly for loongarch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-42-peter.maydell@linaro.org
---
target/loongarch/tcg/fpu_helper.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
*/
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
+ /* Default NaN: sign bit clear, msb frac bit set */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
}

int ieee_ex_to_loongarch(int xcpt)
--
2.34.1
New patch
Set the default NaN pattern explicitly for m68k.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-43-peter.maydell@linaro.org
---
target/m68k/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
* preceding paragraph for nonsignaling NaNs.
*/
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+ /* Default NaN: sign bit clear, all frac bits set */
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);

nan = floatx80_default_nan(&env->fp_status);
for (i = 0; i < 8; i++) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
uint8_t dnan_pattern = status->default_nan_pattern;

if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC) || defined(TARGET_M68K)
+#if defined(TARGET_SPARC)
/* Sign bit clear, all frac bits set */
dnan_pattern = 0b01111111;
#elif defined(TARGET_HEXAGON)
--
2.34.1
diff view generated by jsdifflib
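
The 8-bit argument to set_float_default_nan_pattern() packs the sign bit
into bit 7 and the leading fraction bits below it, which is how 0b01111111
above reads as "sign bit clear, all frac bits set". A rough illustration of
that encoding for binary32 (a sketch only, not QEMU's parts64_default_nan();
the helper name is invented, and the extension of pattern bit 0 into the low
fraction bits is inferred from how this series' comments describe patterns
such as 0b01111111 and 0b00111111):

    #include <stdint.h>

    /* Hypothetical: expand an 8-bit default-NaN pattern into a binary32
     * bit image. Bit 7 is the sign; bits 6:0 become the top seven
     * fraction bits; bit 0 is assumed to extend through the rest.
     */
    static uint32_t make_default_nan32(uint8_t pattern)
    {
        uint32_t sign = pattern >> 7;
        uint32_t frac = (uint32_t)(pattern & 0x7f) << 16;  /* frac[22:16] */

        if (pattern & 1) {
            frac |= 0xffff;             /* extend bit 0 into frac[15:0] */
        }
        return (sign << 31) | (0xffu << 23) | frac;
    }

Under this reading, 0b01000000 gives 0x7fc00000 (the common quiet NaN),
while m68k's 0b01111111 gives 0x7fffffff: sign clear, every fraction bit set.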
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Check the function index is in range and use an unsigned
variable to avoid the following warning with GCC 13.2.0:

[666/5358] Compiling C object libcommon.fa.p/hw_input_tsc2005.c.o
hw/input/tsc2005.c: In function 'tsc2005_timer_tick':
hw/input/tsc2005.c:416:26: warning: array subscript has type 'char' [-Wchar-subscripts]
  416 |     s->dav |= mode_regs[s->function];
      |                        ~^~~~~~~~~~~

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240508143513.44996-1-philmd@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed missing ')']
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/input/tsc2005.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -XXX,XX +XXX,XX @@ uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len)
 static void tsc2005_timer_tick(void *opaque)
 {
     TSC2005State *s = opaque;
+    unsigned int function = s->function;
+
+    assert(function < ARRAY_SIZE(mode_regs));

     /* Timer ticked -- a set of conversions has been finished. */

@@ -XXX,XX +XXX,XX @@ static void tsc2005_timer_tick(void *opaque)
         return;
     }
     s->busy = false;
-    s->dav |= mode_regs[s->function];
+    s->dav |= mode_regs[function];
     s->function = -1;
     tsc2005_pin_update(s);
--
2.34.1

Set the default NaN pattern explicitly for MIPS. Note that this
is our only target which currently changes the default NaN
at runtime (which it was previously doing indirectly when it
changed the snan_bit_is_one setting).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-44-peter.maydell@linaro.org
---
 target/mips/fpu_helper.h | 7 +++++++
 target/mips/msa.c        | 3 +++
 2 files changed, 10 insertions(+)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
     set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
     nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
     set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
+    /*
+     * With nan2008, the default NaN value has the sign bit clear and the
+     * frac msb set; with the older mode, the sign bit is clear, and all
+     * frac bits except the msb are set.
+     */
+    set_float_default_nan_pattern(nan2008 ? 0b01000000 : 0b00111111,
+                                  &env->active_fpu.fp_status);
 }

diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
     /* Inf * 0 + NaN returns the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never,
                               &env->active_tc.msa_fp_status);
+    /* Default NaN: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000,
+                                  &env->active_tc.msa_fp_status);
 }
--
2.34.1
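
For reference, -Wchar-subscripts fires because plain char has
implementation-defined signedness, so a char-typed index may be negative. A
self-contained sketch of the same fix shape (hypothetical names, loosely
modelled on the tsc2005 change above):

    #include <assert.h>
    #include <stdint.h>

    static const uint16_t mode_regs[16] = { 0 };

    struct dev {
        char function;              /* plain char: may be signed */
    };

    static uint16_t lookup(struct dev *d)
    {
        /* mode_regs[d->function] would warn: subscript has type 'char'.
         * Copy into an unsigned variable and range-check it instead; a
         * stale negative marker value now trips the assert rather than
         * indexing before the start of the array.
         */
        unsigned int function = d->function;

        assert(function < sizeof(mode_regs) / sizeof(mode_regs[0]));
        return mode_regs[function];
    }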
Set the default NaN pattern explicitly for openrisc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-45-peter.maydell@linaro.org
---
 target/openrisc/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &cpu->env.fp_status);

+    /* Default NaN: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &cpu->env.fp_status);

 #ifndef CONFIG_USER_ONLY
     cpu->env.picmr = 0x00000000;
--
2.34.1
Set the default NaN pattern explicitly for ppc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-46-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);

+    /* Default NaN: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
+    set_float_default_nan_pattern(0b01000000, &env->vec_status);
+
     for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
         ppc_spr_t *spr = &env->spr_cb[i];

--
2.34.1
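
Each float_status is an independent softfloat context, which is why the ppc
patch has to configure env->fp_status and env->vec_status separately. The
setters used throughout this series follow the existing pattern of trivial
inline accessors on float_status; roughly (a sketch of the shape, not the
exact QEMU declarations):

    #include <stdint.h>

    typedef struct float_status {
        uint8_t default_nan_pattern;    /* 0 means "not set yet" */
        /* ... the other per-context behaviour knobs ... */
    } float_status;

    static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
                                                     float_status *status)
    {
        status->default_nan_pattern = dnan_pattern;
    }

A target with several live contexts (scalar FP, vector, MSA) must make one
call per context, as the MIPS and ppc patches above do.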
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |   8 ++
 target/arm/tcg/translate-a64.c | 132 +++++++++++----------------------
 2 files changed, 51 insertions(+), 89 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
 &i imm
 &qrr_e q rd rn esz
 &qrrr_e q rd rn rm esz
+&qrrrr_e q rd rn rm ra esz

 @rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
 @r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
 @rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
 @rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
+@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3

 ### Data Processing - Immediate

@@ -XXX,XX +XXX,XX @@ SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0

 SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
 SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
+
+### Cryptographic four-register
+
+EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
+BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
+SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
     return true;
 }

+static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
+{
+    if (!a->q && a->esz == MO_64) {
+        return false;
+    }
+    if (fp_access_check(s)) {
+        gen_gvec_fn4(s, a->q, a->rd, a->rn, a->rm, a->ra, fn, a->esz);
+    }
+    return true;
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
 TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
 TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)

+TRANS_FEAT(EOR3, aa64_sha3, do_gvec_fn4, a, gen_gvec_eor3)
+TRANS_FEAT(BCAX, aa64_sha3, do_gvec_fn4, a, gen_gvec_bcax)
+
+static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
+{
+    if (!dc_isar_feature(aa64_sm3, s)) {
+        return false;
+    }
+    if (fp_access_check(s)) {
+        TCGv_i32 tcg_op1 = tcg_temp_new_i32();
+        TCGv_i32 tcg_op2 = tcg_temp_new_i32();
+        TCGv_i32 tcg_op3 = tcg_temp_new_i32();
+        TCGv_i32 tcg_res = tcg_temp_new_i32();
+        unsigned vsz, dofs;
+
+        read_vec_element_i32(s, tcg_op1, a->rn, 3, MO_32);
+        read_vec_element_i32(s, tcg_op2, a->rm, 3, MO_32);
+        read_vec_element_i32(s, tcg_op3, a->ra, 3, MO_32);
+
+        tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
+        tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
+        tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
+        tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
+
+        /* Clear the whole register first, then store bits [127:96]. */
+        vsz = vec_full_reg_size(s);
+        dofs = vec_full_reg_offset(s, a->rd);
+        tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
+        write_vec_element_i32(s, tcg_res, a->rd, 3, MO_32);
+    }
+    return true;
+}

 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     }
 }

-/* Crypto four-register
- *  31               23 22 21 20  16 15  14  10 9    5 4    0
- * +-------------------+-----+------+---+------+------+------+
- * | 1 1 0 0 1 1 1 0 0 | Op0 |  Rm  | 0 |  Ra  |  Rn  |  Rd  |
- * +-------------------+-----+------+---+------+------+------+
- */
-static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
-{
-    int op0 = extract32(insn, 21, 2);
-    int rm = extract32(insn, 16, 5);
-    int ra = extract32(insn, 10, 5);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    bool feature;
-
-    switch (op0) {
-    case 0: /* EOR3 */
-    case 1: /* BCAX */
-        feature = dc_isar_feature(aa64_sha3, s);
-        break;
-    case 2: /* SM3SS1 */
-        feature = dc_isar_feature(aa64_sm3, s);
-        break;
-    default:
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!feature) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-
-    if (op0 < 2) {
-        TCGv_i64 tcg_op1, tcg_op2, tcg_op3, tcg_res[2];
-        int pass;
-
-        tcg_op1 = tcg_temp_new_i64();
-        tcg_op2 = tcg_temp_new_i64();
-        tcg_op3 = tcg_temp_new_i64();
-        tcg_res[0] = tcg_temp_new_i64();
-        tcg_res[1] = tcg_temp_new_i64();
-
-        for (pass = 0; pass < 2; pass++) {
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
-            read_vec_element(s, tcg_op3, ra, pass, MO_64);
-
-            if (op0 == 0) {
-                /* EOR3 */
-                tcg_gen_xor_i64(tcg_res[pass], tcg_op2, tcg_op3);
-            } else {
-                /* BCAX */
-                tcg_gen_andc_i64(tcg_res[pass], tcg_op2, tcg_op3);
-            }
-            tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
-        }
-        write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        write_vec_element(s, tcg_res[1], rd, 1, MO_64);
-    } else {
-        TCGv_i32 tcg_op1, tcg_op2, tcg_op3, tcg_res, tcg_zero;
-
-        tcg_op1 = tcg_temp_new_i32();
-        tcg_op2 = tcg_temp_new_i32();
-        tcg_op3 = tcg_temp_new_i32();
-        tcg_res = tcg_temp_new_i32();
-        tcg_zero = tcg_constant_i32(0);
-
-        read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
-        read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
-
-        tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
-        tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
-        tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
-        tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
-
-        write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
-        write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
-    }
-}
-
 /* Crypto XAR
  *  31                   21 20  16 15    10 9    5 4    0
  * +-----------------------+------+--------+------+------+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0xce000000, 0xff808000, disas_crypto_four_reg },
     { 0xce800000, 0xffe00000, disas_crypto_xar },
     { 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
     { 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
--
2.34.1

Set the default NaN pattern explicitly for sh4. Note that sh4
is one of the only three targets (the others being HPPA and
sometimes MIPS) that has snan_bit_is_one set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-47-peter.maydell@linaro.org
---
 target/sh4/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_reset_hold(Object *obj, ResetType type)
     set_flush_to_zero(1, &env->fp_status);
 #endif
     set_default_nan_mode(1, &env->fp_status);
+    /* sign bit clear, set all frac bits other than msb */
+    set_float_default_nan_pattern(0b00111111, &env->fp_status);
 }

 static void superh_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
--
2.34.1
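
The two FEAT_SHA3 bitwise operations converted here have simple per-lane
semantics, visible in the removed pass loop: EOR3 is a three-way XOR and
BCAX is an XOR with a bit-clear term. A scalar model per 64-bit lane
(a sketch; the operand roles follow rn/rm/ra in the code above):

    #include <stdint.h>

    static uint64_t eor3(uint64_t rn, uint64_t rm, uint64_t ra)
    {
        return rn ^ rm ^ ra;        /* EOR3: rd = rn EOR rm EOR ra */
    }

    static uint64_t bcax(uint64_t rn, uint64_t rm, uint64_t ra)
    {
        return rn ^ (rm & ~ra);     /* BCAX: rd = rn EOR (rm AND NOT ra) */
    }

The vector forms apply this to both 64-bit halves of the 128-bit register,
which is what the two-pass loop in the deleted disas_crypto_four_reg()
spelled out and gen_gvec_eor3()/gen_gvec_bcax() now handle generically.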
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  5 ++++
 target/arm/tcg/translate-a64.c | 50 ++--------------------------------
 2 files changed, 8 insertions(+), 47 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
 SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
 SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
 SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA512
+
+SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
+SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3part
 TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
 TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)

+TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
+TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)
+

 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     }
 }

-/* Crypto two-reg SHA512
- *  31                                     12 11 10 9    5 4    0
- * +-----------------------------------------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 1 0 0 0 0 0 0 1 0 0 0 | opcode |  Rn  |  Rd  |
- * +-----------------------------------------+--------+------+------+
- */
-static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
-{
-    int opcode = extract32(insn, 10, 2);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    bool feature;
-
-    switch (opcode) {
-    case 0: /* SHA512SU0 */
-        feature = dc_isar_feature(aa64_sha512, s);
-        break;
-    case 1: /* SM4E */
-        feature = dc_isar_feature(aa64_sm4, s);
-        break;
-    default:
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!feature) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-
-    switch (opcode) {
-    case 0: /* SHA512SU0 */
-        gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
-        break;
-    case 1: /* SM4E */
-        gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
-        break;
-    default:
-        g_assert_not_reached();
-    }
-}
-
 /* Crypto four-register
  *  31               23 22 21 20  16 15  14  10 9    5 4    0
  * +-------------------+-----+------+---+------+------+------+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
     { 0xce000000, 0xff808000, disas_crypto_four_reg },
     { 0xce800000, 0xffe00000, disas_crypto_xar },
     { 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
--
2.34.1

Set the default NaN pattern explicitly for rx.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-48-peter.maydell@linaro.org
---
 target/rx/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj, ResetType type)
      * then prefer dest over source", which is float_2nan_prop_s_ab.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN value: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 static ObjectClass *rx_cpu_class_by_name(const char *cpu_model)
--
2.34.1
Set the default NaN pattern explicitly for s390x.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-49-peter.maydell@linaro.org
---
 target/s390x/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
         set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
         set_float_infzeronan_rule(float_infzeronan_dnan_always,
                                   &env->fpu_status);
+        /* Default NaN value: sign bit clear, frac msb set */
+        set_float_default_nan_pattern(0b01000000, &env->fpu_status);
         /* fall through */
     case RESET_TYPE_S390_CPU_NORMAL:
         env->psw.mask &= ~PSW_MASK_RI;
--
2.34.1
Set the default NaN pattern explicitly for SPARC, and remove
the ifdef from parts64_default_nan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-50-peter.maydell@linaro.org
---
 target/sparc/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 5 +----
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
     set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    /* Default NaN value: sign bit clear, all frac bits set */
+    set_float_default_nan_pattern(0b01111111, &env->fp_status);

     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint8_t dnan_pattern = status->default_nan_pattern;

     if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC)
-        /* Sign bit clear, all frac bits set */
-        dnan_pattern = 0b01111111;
-#elif defined(TARGET_HEXAGON)
+#if defined(TARGET_HEXAGON)
         /* Sign bit set, all frac bits set. */
         dnan_pattern = 0b11111111;
 #else
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      |  6 ++++
 target/arm/tcg/translate-a64.c | 54 +++-------------------------------
 2 files changed, 10 insertions(+), 50 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
 SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
 SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
 SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA
+
+SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
+SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
+SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256
 TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
 TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)

+TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
+TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
+TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)
+
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
  * shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     }
 }

-/* Crypto two-reg SHA
- *  31            24 23 22 21       17 16    12 11 10 9    5 4    0
- * +-----------------+------+-----------+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 |  Rn  |  Rd  |
- * +-----------------+------+-----------+--------+-----+------+------+
- */
-static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
-{
-    int size = extract32(insn, 22, 2);
-    int opcode = extract32(insn, 12, 5);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    gen_helper_gvec_2 *genfn;
-    bool feature;
-
-    if (size != 0) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    switch (opcode) {
-    case 0: /* SHA1H */
-        feature = dc_isar_feature(aa64_sha1, s);
-        genfn = gen_helper_crypto_sha1h;
-        break;
-    case 1: /* SHA1SU1 */
-        feature = dc_isar_feature(aa64_sha1, s);
-        genfn = gen_helper_crypto_sha1su1;
-        break;
-    case 2: /* SHA256SU0 */
-        feature = dc_isar_feature(aa64_sha256, s);
-        genfn = gen_helper_crypto_sha256su0;
-        break;
-    default:
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!feature) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-    gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
-}
-
 /* Crypto three-reg SHA512
  *  31                   21 20  16 15 14 13 12 11 10 9    5 4    0
  * +-----------------------+------+---+---+-----+--------+------+------+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
     { 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
     { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
     { 0xce000000, 0xff808000, disas_crypto_four_reg },
--
2.34.1

Set the default NaN pattern explicitly for xtensa.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-51-peter.maydell@linaro.org
---
 target/xtensa/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_no_signaling_nans(!dfpu, &env->fp_status);
+    /* Default NaN value: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
     xtensa_use_first_nan(env, !dfpu);
--
2.34.1
From: Alexandra Diupina <adiupina@astralinux.ru>

Add xlnx_dpdma_read_descriptor() and
xlnx_dpdma_write_descriptor() functions.
xlnx_dpdma_read_descriptor() combines reading a
descriptor from desc_addr by calling dma_memory_read()
and swapping the desc fields from guest memory order
to host memory order. xlnx_dpdma_write_descriptor()
performs similar actions when writing a descriptor.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Fixes: d3c6369a96 ("introduce xlnx-dpdma")
Signed-off-by: Alexandra Diupina <adiupina@astralinux.ru>
[PMM: tweaked indent, dropped behaviour change for write-failure case]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/xlnx_dpdma.c | 68 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 64 insertions(+), 4 deletions(-)

diff --git a/hw/dma/xlnx_dpdma.c b/hw/dma/xlnx_dpdma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/xlnx_dpdma.c
+++ b/hw/dma/xlnx_dpdma.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_dpdma_register_types(void)
     type_register_static(&xlnx_dpdma_info);
 }

+static MemTxResult xlnx_dpdma_read_descriptor(XlnxDPDMAState *s,
+                                              uint64_t desc_addr,
+                                              DPDMADescriptor *desc)
+{
+    MemTxResult res = dma_memory_read(&address_space_memory, desc_addr,
+                                      &desc, sizeof(DPDMADescriptor),
+                                      MEMTXATTRS_UNSPECIFIED);
+    if (res) {
+        return res;
+    }
+
+    /* Convert from LE into host endianness. */
+    desc->control = le32_to_cpu(desc->control);
+    desc->descriptor_id = le32_to_cpu(desc->descriptor_id);
+    desc->xfer_size = le32_to_cpu(desc->xfer_size);
+    desc->line_size_stride = le32_to_cpu(desc->line_size_stride);
+    desc->timestamp_lsb = le32_to_cpu(desc->timestamp_lsb);
+    desc->timestamp_msb = le32_to_cpu(desc->timestamp_msb);
+    desc->address_extension = le32_to_cpu(desc->address_extension);
+    desc->next_descriptor = le32_to_cpu(desc->next_descriptor);
+    desc->source_address = le32_to_cpu(desc->source_address);
+    desc->address_extension_23 = le32_to_cpu(desc->address_extension_23);
+    desc->address_extension_45 = le32_to_cpu(desc->address_extension_45);
+    desc->source_address2 = le32_to_cpu(desc->source_address2);
+    desc->source_address3 = le32_to_cpu(desc->source_address3);
+    desc->source_address4 = le32_to_cpu(desc->source_address4);
+    desc->source_address5 = le32_to_cpu(desc->source_address5);
+    desc->crc = le32_to_cpu(desc->crc);
+
+    return res;
+}
+
+static MemTxResult xlnx_dpdma_write_descriptor(uint64_t desc_addr,
+                                               DPDMADescriptor *desc)
+{
+    DPDMADescriptor tmp_desc = *desc;
+
+    /* Convert from host endianness into LE. */
+    tmp_desc.control = cpu_to_le32(tmp_desc.control);
+    tmp_desc.descriptor_id = cpu_to_le32(tmp_desc.descriptor_id);
+    tmp_desc.xfer_size = cpu_to_le32(tmp_desc.xfer_size);
+    tmp_desc.line_size_stride = cpu_to_le32(tmp_desc.line_size_stride);
+    tmp_desc.timestamp_lsb = cpu_to_le32(tmp_desc.timestamp_lsb);
+    tmp_desc.timestamp_msb = cpu_to_le32(tmp_desc.timestamp_msb);
+    tmp_desc.address_extension = cpu_to_le32(tmp_desc.address_extension);
+    tmp_desc.next_descriptor = cpu_to_le32(tmp_desc.next_descriptor);
+    tmp_desc.source_address = cpu_to_le32(tmp_desc.source_address);
+    tmp_desc.address_extension_23 = cpu_to_le32(tmp_desc.address_extension_23);
+    tmp_desc.address_extension_45 = cpu_to_le32(tmp_desc.address_extension_45);
+    tmp_desc.source_address2 = cpu_to_le32(tmp_desc.source_address2);
+    tmp_desc.source_address3 = cpu_to_le32(tmp_desc.source_address3);
+    tmp_desc.source_address4 = cpu_to_le32(tmp_desc.source_address4);
+    tmp_desc.source_address5 = cpu_to_le32(tmp_desc.source_address5);
+    tmp_desc.crc = cpu_to_le32(tmp_desc.crc);
+
+    return dma_memory_write(&address_space_memory, desc_addr, &tmp_desc,
+                            sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED);
+}
+
 size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
                                   bool one_desc)
 {
@@ -XXX,XX +XXX,XX @@ size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
             desc_addr = xlnx_dpdma_descriptor_next_address(s, channel);
         }

-        if (dma_memory_read(&address_space_memory, desc_addr, &desc,
-                            sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED)) {
+        if (xlnx_dpdma_read_descriptor(s, desc_addr, &desc)) {
             s->registers[DPDMA_EISR] |= ((1 << 1) << channel);
             xlnx_dpdma_update_irq(s);
             s->operation_finished[channel] = true;
@@ -XXX,XX +XXX,XX @@ size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
             /* The descriptor need to be updated when it's completed. */
             DPRINTF("update the descriptor with the done flag set.\n");
             xlnx_dpdma_desc_set_done(&desc);
-            dma_memory_write(&address_space_memory, desc_addr, &desc,
-                             sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED);
+            if (xlnx_dpdma_write_descriptor(desc_addr, &desc)) {
+                DPRINTF("Can't write the descriptor.\n");
+                /* TODO: check hardware behaviour for memory write failure */
+            }
         }

         if (xlnx_dpdma_desc_completion_interrupt(&desc)) {
--
2.34.1

Set the default NaN pattern explicitly for hexagon.
Remove the ifdef from parts64_default_nan(); the only
remaining unconverted targets all use the default case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-52-peter.maydell@linaro.org
---
 target/hexagon/cpu.c           | 2 ++
 fpu/softfloat-specialize.c.inc | 5 -----
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_reset_hold(Object *obj, ResetType type)

     set_default_nan_mode(1, &env->fp_status);
     set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
+    /* Default NaN value: sign bit set, all frac bits set */
+    set_float_default_nan_pattern(0b11111111, &env->fp_status);
 }

 static void hexagon_cpu_disas_set_info(CPUState *s, disassemble_info *info)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint8_t dnan_pattern = status->default_nan_pattern;

     if (dnan_pattern == 0) {
-#if defined(TARGET_HEXAGON)
-        /* Sign bit set, all frac bits set. */
-        dnan_pattern = 0b11111111;
-#else
         /*
          * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
          * S390, SH4, TriCore, and Xtensa. Our other supported targets
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
             /* sign bit clear, set frac msb */
             dnan_pattern = 0b01000000;
         }
-#endif
     }
     assert(dnan_pattern != 0);
--
2.34.1
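
The new descriptor helpers convert each 32-bit field individually. Since
every DPDMADescriptor field shown here is a 32-bit word, the same
bounce-buffer idea can be sketched generically; this is an illustration
only (it assumes the struct holds nothing but uint32_t fields, and the
word count of 16 matches the fields listed above):

    #include <stdint.h>

    #define DESC_WORDS 16

    /* Assemble a host-order word from four little-endian bytes. */
    static uint32_t get_le32(const uint8_t *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    /* Convert a raw little-endian descriptor image into host-order words. */
    static void desc_le_to_host(const uint8_t *raw, uint32_t *words)
    {
        for (unsigned int i = 0; i < DESC_WORDS; i++) {
            words[i] = get_le32(raw + 4 * i);
        }
    }

The per-field le32_to_cpu() calls in the patch are more verbose but keep
working if a field is ever reordered or widened, which is presumably why
that form was chosen.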
Set the default NaN pattern explicitly for riscv.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-53-peter.maydell@linaro.org
---
 target/riscv/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj, ResetType type)
     cs->exception_index = RISCV_EXCP_NONE;
     env->load_res = -1;
     set_default_nan_mode(1, &env->fp_status);
+    /* Default NaN value: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
     env->vill = true;

 #ifndef CONFIG_USER_ONLY
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      | 11 +++++
 target/arm/tcg/translate-a64.c | 78 +++++-----------------------------
 2 files changed, 21 insertions(+), 68 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
 @rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
 @r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
+@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0

 ### Data Processing - Immediate

@@ -XXX,XX +XXX,XX @@ AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
 AESD 01001110 00 10100 00101 10 ..... ..... @r2r_q1e0
 AESMC 01001110 00 10100 00110 10 ..... ..... @rr_q1e0
 AESIMC 01001110 00 10100 00111 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA
+
+SHA1C 0101 1110 000 ..... 000000 ..... ..... @rrr_q1e0
+SHA1P 0101 1110 000 ..... 000100 ..... ..... @rrr_q1e0
+SHA1M 0101 1110 000 ..... 001000 ..... ..... @rrr_q1e0
+SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
+SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
+SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
+SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
 }

 /*
- * Cryptographic AES
+ * Cryptographic AES, SHA
 */

 TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(AESD, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aesd)
 TRANS_FEAT(AESMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesmc)
 TRANS_FEAT(AESIMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesimc)

+TRANS_FEAT(SHA1C, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1c)
+TRANS_FEAT(SHA1P, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1p)
+TRANS_FEAT(SHA1M, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1m)
+TRANS_FEAT(SHA1SU0, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1su0)
+
+TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h)
+TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
+TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)
+
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
  * shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     }
 }

-/* Crypto three-reg SHA
- *  31            24 23 22 21 20  16 15 14    12 11 10 9    5 4    0
- * +-----------------+------+---+------+---+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 0 |  Rm  | 0 | opcode | 0 0 |  Rn  |  Rd  |
- * +-----------------+------+---+------+---+--------+-----+------+------+
- */
-static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
-{
-    int size = extract32(insn, 22, 2);
-    int opcode = extract32(insn, 12, 3);
-    int rm = extract32(insn, 16, 5);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    gen_helper_gvec_3 *genfn;
-    bool feature;
-
-    if (size != 0) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    switch (opcode) {
-    case 0: /* SHA1C */
-        genfn = gen_helper_crypto_sha1c;
-        feature = dc_isar_feature(aa64_sha1, s);
-        break;
-    case 1: /* SHA1P */
-        genfn = gen_helper_crypto_sha1p;
-        feature = dc_isar_feature(aa64_sha1, s);
-        break;
-    case 2: /* SHA1M */
-        genfn = gen_helper_crypto_sha1m;
-        feature = dc_isar_feature(aa64_sha1, s);
-        break;
-    case 3: /* SHA1SU0 */
-        genfn = gen_helper_crypto_sha1su0;
-        feature = dc_isar_feature(aa64_sha1, s);
-        break;
-    case 4: /* SHA256H */
-        genfn = gen_helper_crypto_sha256h;
-        feature = dc_isar_feature(aa64_sha256, s);
-        break;
-    case 5: /* SHA256H2 */
-        genfn = gen_helper_crypto_sha256h2;
-        feature = dc_isar_feature(aa64_sha256, s);
-        break;
-    case 6: /* SHA256SU1 */
-        genfn = gen_helper_crypto_sha256su1;
-        feature = dc_isar_feature(aa64_sha256, s);
-        break;
-    default:
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!feature) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-    gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
-}
-
 /* Crypto two-reg SHA
  *  31            24 23 22 21       17 16    12 11 10 9    5 4    0
  * +-----------------+------+-----------+--------+-----+------+------+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
     { 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
     { 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
     { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
--
2.34.1

Set the default NaN pattern explicitly for tricore.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-54-peter.maydell@linaro.org
---
 target/tricore/helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/tricore/helper.c b/target/tricore/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/helper.c
+++ b/target/tricore/helper.c
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env)
     set_flush_to_zero(1, &env->fp_status);
     set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
     set_default_nan_mode(1, &env->fp_status);
+    /* Default NaN pattern: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 uint32_t psw_read(CPUTriCoreState *env)
--
2.34.1
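
Each decodetree line above pairs fixed opcode bits with named fields;
conceptually the generated decoder tests a mask/value pair and then unpacks
rm/rn/rd with the same extract32() arithmetic the deleted
disas_crypto_three_reg_sha() used by hand. An illustrative sketch for the
SHA1C pattern (the mask and value are worked out by hand here, not taken
from generated code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        int q, rd, rn, rm, esz;
    } arg_qrrr_e;

    /* "SHA1C 0101 1110 000 ..... 000000 ..... ..... @rrr_q1e0":
     * fixed bits live in [31:21] and [15:10]; the @rrr_q1e0 format
     * supplies the rm/rn/rd field positions plus q=1, esz=0.
     */
    static bool try_decode_sha1c(uint32_t insn, arg_qrrr_e *a)
    {
        if ((insn & 0xffe0fc00) != 0x5e000000) {
            return false;
        }
        a->rm = (insn >> 16) & 0x1f;
        a->rn = (insn >> 5) & 0x1f;
        a->rd = insn & 0x1f;
        a->q = 1;
        a->esz = 0;
        return true;    /* the real decoder would now call trans_SHA1C() */
    }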
From: Andrey Shumilin <shum.sdl@nppct.ru>

In gic_cpu_read() and gic_cpu_write(), we delegate the handling of
reading and writing the Non-Secure view of the GICC_APR<n> registers
to functions gic_apr_ns_view() and gic_apr_write_ns_view().
Unfortunately we got the order of the arguments wrong, swapping the
CPU number and the register number (which the compiler doesn't catch
because they're both integers).

Most guests probably didn't notice this bug because directly
accessing the APR registers is typically something only done by
firmware when it is doing state save for going into a sleep mode.

Correct the mismatched call arguments.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Cc: qemu-stable@nongnu.org
Fixes: 51fd06e0ee ("hw/intc/arm_gic: Fix handling of GICC_APR<n>, GICC_NSAPR<n> registers")
Signed-off-by: Andrey Shumilin <shum.sdl@nppct.ru>
[PMM: Rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/intc/arm_gic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
             *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
         } else if (gic_cpu_ns_access(s, cpu, attrs)) {
             /* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
-            *data = gic_apr_ns_view(s, regno, cpu);
+            *data = gic_apr_ns_view(s, cpu, regno);
         } else {
             *data = s->apr[regno][cpu];
         }
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
             s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
         } else if (gic_cpu_ns_access(s, cpu, attrs)) {
             /* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
-            gic_apr_write_ns_view(s, regno, cpu, value);
+            gic_apr_write_ns_view(s, cpu, regno, value);
         } else {
             s->apr[regno][cpu] = value;
         }
--
2.34.1

Now that all our targets have been converted to explicitly specify
their pattern for the default NaN value we can remove the remaining
fallback code in parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-55-peter.maydell@linaro.org
---
 fpu/softfloat-specialize.c.inc | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint64_t frac;
     uint8_t dnan_pattern = status->default_nan_pattern;

-    if (dnan_pattern == 0) {
-        /*
-         * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
-         * S390, SH4, TriCore, and Xtensa. Our other supported targets
-         * do not have floating-point.
-         */
-        if (snan_bit_is_one(status)) {
-            /* sign bit clear, set all frac bits other than msb */
-            dnan_pattern = 0b00111111;
-        } else {
-            /* sign bit clear, set frac msb */
-            dnan_pattern = 0b01000000;
-        }
-    }
     assert(dnan_pattern != 0);

     sign = dnan_pattern >> 7;
--
2.34.1
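
The swapped (regno, cpu) arguments compiled cleanly because both are plain
int. Where this class of bug matters, one conventional guard is to give
each index a distinct wrapper type so that the compiler rejects a swap; a
sketch only (QEMU's gic_apr_ns_view() itself takes bare ints, so this is
illustration, not the fix that was applied):

    #include <stdint.h>

    typedef struct { int v; } CpuIndex;
    typedef struct { int v; } RegIndex;

    static uint32_t apr_ns_view(CpuIndex cpu, RegIndex regno)
    {
        return (uint32_t)(cpu.v * 100 + regno.v);   /* placeholder body */
    }

    static uint32_t read_apr(int cpu, int regno)
    {
        /* A bare apr_ns_view(regno, cpu) no longer compiles; the
         * wrapper types force each index into the right parameter.
         */
        return apr_ns_view((CpuIndex){cpu}, (RegIndex){regno});
    }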
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode      | 11 ++++
 target/arm/tcg/translate-a64.c | 97 ++++++++--------------------------
 2 files changed, 32 insertions(+), 76 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@
 @rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
 @r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
 @rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
+@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3

 ### Data Processing - Immediate

@@ -XXX,XX +XXX,XX @@ SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
 SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
 SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
 SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA512
+
+SHA512H 1100 1110 011 ..... 100000 ..... ..... @rrr_q1e0
+SHA512H2 1100 1110 011 ..... 100001 ..... ..... @rrr_q1e0
+SHA512SU1 1100 1110 011 ..... 100010 ..... ..... @rrr_q1e0
+RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
+SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
+SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
+SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_op3_ool(DisasContext *s, arg_qrrr_e *a, int data,
     return true;
 }

+static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+    if (!a->q && a->esz == MO_64) {
+        return false;
+    }
+    if (fp_access_check(s)) {
+        gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
+    }
+    return true;
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
@@ -XXX,XX +XXX,XX @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
 }

 /*
- * Cryptographic AES, SHA
+ * Cryptographic AES, SHA, SHA512
 */

 TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
 TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
 TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)

+TRANS_FEAT(SHA512H, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h)
+TRANS_FEAT(SHA512H2, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h2)
+TRANS_FEAT(SHA512SU1, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512su1)
+TRANS_FEAT(RAX1, aa64_sha3, do_gvec_fn3, a, gen_gvec_rax1)
+TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw1)
+TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
+TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
+
+
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
  * shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     }
 }

-/* Crypto three-reg SHA512
- *  31                   21 20  16 15 14 13 12 11 10 9    5 4    0
- * +-----------------------+------+---+---+-----+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 1 |  Rm  | 1 | O | 0 0 | opcode |  Rn  |  Rd  |
- * +-----------------------+------+---+---+-----+--------+------+------+
- */
-static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
-{
-    int opcode = extract32(insn, 10, 2);
-    int o = extract32(insn, 14, 1);
-    int rm = extract32(insn, 16, 5);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    bool feature;
-    gen_helper_gvec_3 *oolfn = NULL;
-    GVecGen3Fn *gvecfn = NULL;
-
-    if (o == 0) {
-        switch (opcode) {
-        case 0: /* SHA512H */
-            feature = dc_isar_feature(aa64_sha512, s);
-            oolfn = gen_helper_crypto_sha512h;
-            break;
-        case 1: /* SHA512H2 */
-            feature = dc_isar_feature(aa64_sha512, s);
-            oolfn = gen_helper_crypto_sha512h2;
-            break;
-        case 2: /* SHA512SU1 */
-            feature = dc_isar_feature(aa64_sha512, s);
-            oolfn = gen_helper_crypto_sha512su1;
-            break;
-        case 3: /* RAX1 */
-            feature = dc_isar_feature(aa64_sha3, s);
-            gvecfn = gen_gvec_rax1;
-            break;
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        switch (opcode) {
-        case 0: /* SM3PARTW1 */
-            feature = dc_isar_feature(aa64_sm3, s);
-            oolfn = gen_helper_crypto_sm3partw1;
-            break;
-        case 1: /* SM3PARTW2 */
-            feature = dc_isar_feature(aa64_sm3, s);
-            oolfn = gen_helper_crypto_sm3partw2;
-            break;
-        case 2: /* SM4EKEY */
-            feature = dc_isar_feature(aa64_sm4, s);
-            oolfn = gen_helper_crypto_sm4ekey;
-            break;
-        default:
-            unallocated_encoding(s);
-            return;
-        }
-    }
-
-    if (!feature) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-
-    if (oolfn) {
-        gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
-    } else {
-        gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
-    }
-}
-
 /* Crypto two-reg SHA512
  *  31                                     12 11 10 9    5 4    0
  * +-----------------------------------------+--------+------+------+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
     { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
     { 0xce000000, 0xff808000, disas_crypto_four_reg },
     { 0xce800000, 0xffe00000, disas_crypto_xar },
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Inline pickNaNMulAdd into its only caller. This makes
one assert redundant with the immediately preceding IF.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-3-richard.henderson@linaro.org
[PMM: keep comment from old code in new location]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc      | 41 +++++++++++++++++++++++++-
 fpu/softfloat-specialize.c.inc | 54 ----------------------------------
 2 files changed, 40 insertions(+), 55 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
     }

     if (s->default_nan_mode) {
+        /*
+         * We guarantee not to require the target to tell us how to
+         * pick a NaN if we're always returning the default NaN.
+         * But if we're not in default-NaN mode then the target must
+         * specify.
+         */
         which = 3;
+    } else if (infzero) {
+        /*
+         * Inf * 0 + NaN -- some implementations return the
+         * default NaN here, and some return the input NaN.
+         */
+        switch (s->float_infzeronan_rule) {
+        case float_infzeronan_dnan_never:
+            which = 2;
+            break;
+        case float_infzeronan_dnan_always:
+            which = 3;
+            break;
+        case float_infzeronan_dnan_if_qnan:
+            which = is_qnan(c->cls) ? 3 : 2;
+            break;
+        default:
+            g_assert_not_reached();
+        }
     } else {
-        which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
+        FloatClass cls[3] = { a->cls, b->cls, c->cls };
+        Float3NaNPropRule rule = s->float_3nan_prop_rule;
+
+        assert(rule != float_3nan_prop_none);
+        if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
+            /* We have at least one SNaN input and should prefer it */
+            do {
+                which = rule & R_3NAN_1ST_MASK;
+                rule >>= R_3NAN_1ST_LENGTH;
+            } while (!is_snan(cls[which]));
+        } else {
+            do {
+                which = rule & R_3NAN_1ST_MASK;
+                rule >>= R_3NAN_1ST_LENGTH;
+            } while (!is_nan(cls[which]));
+        }
     }

     if (which == 3) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
     }
 }

-/*----------------------------------------------------------------------------
-| Select which NaN to propagate for a three-input operation.
-| For the moment we assume that no CPU needs the 'larger significand'
-| information.
-| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
-*----------------------------------------------------------------------------*/
-static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
-                         bool infzero, bool have_snan, float_status *status)
-{
-    FloatClass cls[3] = { a_cls, b_cls, c_cls };
-    Float3NaNPropRule rule = status->float_3nan_prop_rule;
-    int which;
-
-    /*
-     * We guarantee not to require the target to tell us how to
-     * pick a NaN if we're always returning the default NaN.
-     * But if we're not in default-NaN mode then the target must
-     * specify.
-     */
-    assert(!status->default_nan_mode);
-
-    if (infzero) {
-        /*
-         * Inf * 0 + NaN -- some implementations return the default NaN here,
-         * and some return the input NaN.
-         */
-        switch (status->float_infzeronan_rule) {
-        case float_infzeronan_dnan_never:
-            return 2;
-        case float_infzeronan_dnan_always:
-            return 3;
-        case float_infzeronan_dnan_if_qnan:
-            return is_qnan(c_cls) ? 3 : 2;
-        default:
-            g_assert_not_reached();
-        }
-    }
-
-    assert(rule != float_3nan_prop_none);
-    if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
-        /* We have at least one SNaN input and should prefer it */
-        do {
-            which = rule & R_3NAN_1ST_MASK;
-            rule >>= R_3NAN_1ST_LENGTH;
-        } while (!is_snan(cls[which]));
-    } else {
-        do {
-            which = rule & R_3NAN_1ST_MASK;
-            rule >>= R_3NAN_1ST_LENGTH;
-        } while (!is_nan(cls[which]));
-    }
-
-    return which;
-}
-
 /*----------------------------------------------------------------------------
 | Returns 1 if the double-precision floating-point value `a' is a quiet
 | NaN; otherwise returns 0.
--
2.34.1
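
The do/while in the inlined code walks a packed priority list: each
low-order field of rule names one operand, most-preferred first, and the
loop returns the first listed operand that really is a NaN. A standalone
model of that walk (the field width and mask mirror what R_3NAN_1ST_MASK
and R_3NAN_1ST_LENGTH suggest; the exact constants are assumptions here):

    #include <stdbool.h>
    #include <stdint.h>

    #define NAN_FIELD_MASK   0x3   /* 2-bit operand selector: 0=a, 1=b, 2=c */
    #define NAN_FIELD_LENGTH 2

    /* Caller guarantees at least one of the three operands is a NaN. */
    static int pick_first_nan(uint32_t rule, const bool is_nan_op[3])
    {
        int which;

        do {
            which = rule & NAN_FIELD_MASK;
            rule >>= NAN_FIELD_LENGTH;
        } while (!is_nan_op[which]);
        return which;
    }

    /* Example: rule = 0 | (1 << 2) | (2 << 4) encodes "prefer a, then b,
     * then c", matching the float_3nan_prop_abc style of rule name.
     */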
From: Richard Henderson <richard.henderson@linaro.org>

These are the last instructions within disas_simd_three_reg_same_fp16,
so remove it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-23-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h            |  16 ++
 target/arm/tcg/a64.decode      |  24 +++
 target/arm/tcg/translate-a64.c | 296 ++++++---------------------------
 target/arm/tcg/vec_helper.c    |  16 ++
 4 files changed, 107 insertions(+), 245 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_fmaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "tcg/helper-a64.h"
 #include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
 FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
 FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
 
+FMAXP_s 0101 1110 0011 0000 1111 10 ..... ..... @rr_h
+FMAXP_s 0111 1110 0.11 0000 1111 10 ..... ..... @rr_sd
+
+FMINP_s 0101 1110 1011 0000 1111 10 ..... ..... @rr_h
+FMINP_s 0111 1110 1.11 0000 1111 10 ..... ..... @rr_sd
+
+FMAXNMP_s 0101 1110 0011 0000 1100 10 ..... ..... @rr_h
+FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
+
+FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
+FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
+
 ### Advanced SIMD three same
 
 FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -XXX,XX +XXX,XX @@ FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
 FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
 FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
 
+FMAXP_v 0.10 1110 010 ..... 00110 1 ..... ..... @qrrr_h
+FMAXP_v 0.10 1110 0.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMINP_v 0.10 1110 110 ..... 00110 1 ..... ..... @qrrr_h
+FMINP_v 0.10 1110 1.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMAXNMP_v 0.10 1110 010 ..... 00000 1 ..... ..... @qrrr_h
+FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
+
+FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
+FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+
 ### Advanced SIMD scalar x indexed element
 
 FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
 };
 TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
 
+static gen_helper_gvec_3_ptr * const f_vector_fmaxp[3] = {
+    gen_helper_gvec_fmaxp_h,
+    gen_helper_gvec_fmaxp_s,
+    gen_helper_gvec_fmaxp_d,
+};
+TRANS(FMAXP_v, do_fp3_vector, a, f_vector_fmaxp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminp[3] = {
+    gen_helper_gvec_fminp_h,
+    gen_helper_gvec_fminp_s,
+    gen_helper_gvec_fminp_d,
+};
+TRANS(FMINP_v, do_fp3_vector, a, f_vector_fminp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmaxnmp[3] = {
+    gen_helper_gvec_fmaxnump_h,
+    gen_helper_gvec_fmaxnump_s,
+    gen_helper_gvec_fmaxnump_d,
+};
+TRANS(FMAXNMP_v, do_fp3_vector, a, f_vector_fmaxnmp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
+    gen_helper_gvec_fminnump_h,
+    gen_helper_gvec_fminnump_s,
+    gen_helper_gvec_fminnump_d,
+};
+TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+
 /*
  * Advanced SIMD scalar/vector x indexed element
  */
@@ -XXX,XX +XXX,XX @@ static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
 }
 
 TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
+TRANS(FMAXP_s, do_fp3_scalar_pair, a, &f_scalar_fmax)
+TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
+TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
+TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
 
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
     int opcode = extract32(insn, 12, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    TCGv_ptr fpst;
 
     /* For some ops (the FP ones), size[1] is part of the encoding.
      * For ADDP strictly it is not but size[1] is always 1 for valid
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         if (!fp_access_check(s)) {
             return;
         }
-
-        fpst = NULL;
         break;
+    default:
     case 0xc: /* FMAXNMP */
+    case 0xd: /* FADDP */
     case 0xf: /* FMAXP */
     case 0x2c: /* FMINNMP */
     case 0x2f: /* FMINP */
-        /* FP op, size[0] is 32 or 64 bit*/
-        if (!u) {
-            if (!dc_isar_feature(aa64_fp16, s)) {
-                unallocated_encoding(s);
-                return;
-            } else {
-                size = MO_16;
-            }
-        } else {
-            size = extract32(size, 0, 1) ? MO_64 : MO_32;
-        }
-
-        if (!fp_access_check(s)) {
-            return;
-        }
-
-        fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
-        break;
-    default:
-    case 0xd: /* FADDP */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         case 0x3b: /* ADDP */
             tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
             break;
-        case 0xc: /* FMAXNMP */
-            gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
-        case 0xf: /* FMAXP */
-            gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x2c: /* FMINNMP */
-            gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x2f: /* FMINP */
-            gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
         default:
+        case 0xc: /* FMAXNMP */
         case 0xd: /* FADDP */
+        case 0xf: /* FMAXP */
+        case 0x2c: /* FMINNMP */
+        case 0x2f: /* FMINP */
             g_assert_not_reached();
         }
 
         write_fp_dreg(s, rd, tcg_res);
     } else {
-        TCGv_i32 tcg_op1 = tcg_temp_new_i32();
-        TCGv_i32 tcg_op2 = tcg_temp_new_i32();
-        TCGv_i32 tcg_res = tcg_temp_new_i32();
-
-        read_vec_element_i32(s, tcg_op1, rn, 0, size);
-        read_vec_element_i32(s, tcg_op2, rn, 1, size);
-
-        if (size == MO_16) {
-            switch (opcode) {
-            case 0xc: /* FMAXNMP */
-                gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0xf: /* FMAXP */
-                gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x2c: /* FMINNMP */
-                gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x2f: /* FMINP */
-                gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0xd: /* FADDP */
-                g_assert_not_reached();
-            }
-        } else {
-            switch (opcode) {
-            case 0xc: /* FMAXNMP */
-                gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0xf: /* FMAXP */
-                gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x2c: /* FMINNMP */
-                gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x2f: /* FMINP */
-                gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0xd: /* FADDP */
-                g_assert_not_reached();
-            }
-        }
-
-        write_fp_sreg(s, rd, tcg_res);
+        g_assert_not_reached();
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
                                    int size, int rn, int rm, int rd)
 {
-    TCGv_ptr fpst;
     int pass;
 
-    /* Floating point operations need fpst */
-    if (opcode >= 0x58) {
-        fpst = fpstatus_ptr(FPST_FPCR);
-    } else {
-        fpst = NULL;
-    }
-
     if (!fp_access_check(s)) {
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         case 0x17: /* ADDP */
             tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
             break;
-        case 0x58: /* FMAXNMP */
-            gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x5e: /* FMAXP */
-            gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x78: /* FMINNMP */
-            gen_helper_vfp_minnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x7e: /* FMINP */
-            gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
         default:
+        case 0x58: /* FMAXNMP */
         case 0x5a: /* FADDP */
+        case 0x5e: /* FMAXP */
+        case 0x78: /* FMINNMP */
+        case 0x7e: /* FMINP */
             g_assert_not_reached();
         }
     }
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
             genfn = fns[size][u];
             break;
         }
-        /* The FP operations are all on single floats (32 bit) */
-        case 0x58: /* FMAXNMP */
-            gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x5e: /* FMAXP */
-            gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x78: /* FMINNMP */
-            gen_helper_vfp_minnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
-        case 0x7e: /* FMINP */
-            gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-            break;
         default:
+        case 0x58: /* FMAXNMP */
         case 0x5a: /* FADDP */
+        case 0x5e: /* FMAXP */
+        case 0x78: /* FMINNMP */
+        case 0x7e: /* FMINP */
             g_assert_not_reached();
         }
 
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
     }
 
     switch (fpopcode) {
-    case 0x58: /* FMAXNMP */
-    case 0x5e: /* FMAXP */
-    case 0x78: /* FMINNMP */
-    case 0x7e: /* FMINP */
-        if (size && !is_q) {
-            unallocated_encoding(s);
-            return;
-        }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
-                               rn, rm, rd);
-        return;
-
     case 0x1d: /* FMLAL */
     case 0x3d: /* FMLSL */
     case 0x59: /* FMLAL2 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
     case 0x3a: /* FSUB */
     case 0x3e: /* FMIN */
     case 0x3f: /* FRSQRTS */
+    case 0x58: /* FMAXNMP */
     case 0x5a: /* FADDP */
     case 0x5b: /* FMUL */
     case 0x5c: /* FCMGE */
     case 0x5d: /* FACGE */
+    case 0x5e: /* FMAXP */
     case 0x5f: /* FDIV */
+    case 0x78: /* FMINNMP */
     case 0x7a: /* FABD */
     case 0x7d: /* FACGT */
     case 0x7c: /* FCMGT */
+    case 0x7e: /* FMINP */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
     }
 }
 
-/*
- * Advanced SIMD three same (ARMv8.2 FP16 variants)
- *
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 11 10 9 5 4 0
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- * | 0 | Q | U | 0 1 1 1 0 | a | 1 0 | Rm | 0 0 | opcode | 1 | Rn | Rd |
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- *
- * This includes FMULX, FCMEQ (register), FRECPS, FRSQRTS, FCMGE
- * (register), FACGE, FABD, FCMGT (register) and FACGT.
- *
- */
-static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
-{
-    int opcode = extract32(insn, 11, 3);
-    int u = extract32(insn, 29, 1);
-    int a = extract32(insn, 23, 1);
-    int is_q = extract32(insn, 30, 1);
-    int rm = extract32(insn, 16, 5);
-    int rn = extract32(insn, 5, 5);
-    int rd = extract32(insn, 0, 5);
-    /*
-     * For these floating point ops, the U, a and opcode bits
-     * together indicate the operation.
-     */
-    int fpopcode = opcode | (a << 3) | (u << 4);
-    bool pairwise;
-    TCGv_ptr fpst;
-    int pass;
-
-    switch (fpopcode) {
-    case 0x10: /* FMAXNMP */
-    case 0x16: /* FMAXP */
-    case 0x18: /* FMINNMP */
-    case 0x1e: /* FMINP */
-        pairwise = true;
-        break;
-    default:
-    case 0x0: /* FMAXNM */
-    case 0x1: /* FMLA */
-    case 0x2: /* FADD */
-    case 0x3: /* FMULX */
-    case 0x4: /* FCMEQ */
-    case 0x6: /* FMAX */
-    case 0x7: /* FRECPS */
-    case 0x8: /* FMINNM */
-    case 0x9: /* FMLS */
-    case 0xa: /* FSUB */
-    case 0xe: /* FMIN */
-    case 0xf: /* FRSQRTS */
-    case 0x12: /* FADDP */
-    case 0x13: /* FMUL */
-    case 0x14: /* FCMGE */
-    case 0x15: /* FACGE */
-    case 0x17: /* FDIV */
-    case 0x1a: /* FABD */
-    case 0x1c: /* FCMGT */
-    case 0x1d: /* FACGT */
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!dc_isar_feature(aa64_fp16, s)) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-
-    fpst = fpstatus_ptr(FPST_FPCR_F16);
-
-    if (pairwise) {
-        int maxpass = is_q ? 8 : 4;
-        TCGv_i32 tcg_op1 = tcg_temp_new_i32();
-        TCGv_i32 tcg_op2 = tcg_temp_new_i32();
-        TCGv_i32 tcg_res[8];
-
-        for (pass = 0; pass < maxpass; pass++) {
-            int passreg = pass < (maxpass / 2) ? rn : rm;
-            int passelt = (pass << 1) & (maxpass - 1);
-
-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
-            tcg_res[pass] = tcg_temp_new_i32();
-
-            switch (fpopcode) {
-            case 0x10: /* FMAXNMP */
-                gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
-                                           fpst);
-                break;
-            case 0x16: /* FMAXP */
-                gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x18: /* FMINNMP */
-                gen_helper_advsimd_minnumh(tcg_res[pass], tcg_op1, tcg_op2,
-                                           fpst);
-                break;
-            case 0x1e: /* FMINP */
-                gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0x12: /* FADDP */
-                g_assert_not_reached();
-            }
-        }
-
-        for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
-        }
-    } else {
-        g_assert_not_reached();
-    }
-
-    clear_vec_high(s, is_q, rd);
-}
-
 /* AdvSIMD three same extra
  * 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
  * +---+---+---+-----------+------+---+------+---+--------+---+----+----+
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
     { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
-    { 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
     { 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
     { 0x00000000, 0x00000000, NULL }
 };
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
 DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
 DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
 
+DO_3OP_PAIR(gvec_fmaxp_h, float16_max, float16, H2)
+DO_3OP_PAIR(gvec_fmaxp_s, float32_max, float32, H4)
+DO_3OP_PAIR(gvec_fmaxp_d, float64_max, float64, )
+
+DO_3OP_PAIR(gvec_fminp_h, float16_min, float16, H2)
+DO_3OP_PAIR(gvec_fminp_s, float32_min, float32, H4)
+DO_3OP_PAIR(gvec_fminp_d, float64_min, float64, )
+
+DO_3OP_PAIR(gvec_fmaxnump_h, float16_maxnum, float16, H2)
+DO_3OP_PAIR(gvec_fmaxnump_s, float32_maxnum, float32, H4)
+DO_3OP_PAIR(gvec_fmaxnump_d, float64_maxnum, float64, )
+
+DO_3OP_PAIR(gvec_fminnump_h, float16_minnum, float16, H2)
+DO_3OP_PAIR(gvec_fminnump_s, float32_minnum, float32, H4)
+DO_3OP_PAIR(gvec_fminnump_d, float64_minnum, float64, )
+
 #define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
 void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
 { \
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove "3" as a special case for which and simply
branch to return the desired value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
          * But if we're not in default-NaN mode then the target must
          * specify.
          */
-        which = 3;
+        goto default_nan;
     } else if (infzero) {
         /*
          * Inf * 0 + NaN -- some implementations return the
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
          */
         switch (s->float_infzeronan_rule) {
        case float_infzeronan_dnan_never:
-            which = 2;
             break;
         case float_infzeronan_dnan_always:
-            which = 3;
-            break;
+            goto default_nan;
         case float_infzeronan_dnan_if_qnan:
-            which = is_qnan(c->cls) ? 3 : 2;
+            if (is_qnan(c->cls)) {
+                goto default_nan;
+            }
             break;
         default:
             g_assert_not_reached();
         }
+        which = 2;
     } else {
         FloatClass cls[3] = { a->cls, b->cls, c->cls };
         Float3NaNPropRule rule = s->float_3nan_prop_rule;
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         }
     }
 
-    if (which == 3) {
-        parts_default_nan(a, s);
-        return a;
-    }
-
     switch (which) {
     case 0:
         break;
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         parts_silence_nan(a, s);
     }
     return a;
+
+ default_nan:
+    parts_default_nan(a, s);
+    return a;
 }
 
 /*
--
2.34.1
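A reduced sketch of the control-flow change in the softfloat patch above: the sentinel value which == 3 plus a post-switch test is replaced by branching directly to the default-NaN path. The names and rule values below are illustrative stand-ins, not the QEMU originals:

```c
/* Illustrative stand-in; QEMU's real code builds a default NaN here. */
static int make_default_nan(void) { return -1; }

/* Before: 3 is a magic "use the default NaN" index, tested after the fact. */
static int pick_with_sentinel(int rule, int addend_is_qnan)
{
    int which;

    switch (rule) {
    case 0:
        which = 2;
        break;
    case 1:
        which = 3;                          /* sentinel */
        break;
    default:
        which = addend_is_qnan ? 3 : 2;
        break;
    }
    if (which == 3) {
        return make_default_nan();
    }
    return which;
}

/* After: no sentinel; branch straight to the default-NaN return. */
static int pick_with_goto(int rule, int addend_is_qnan)
{
    switch (rule) {
    case 0:
        break;
    case 1:
        goto default_nan;
    default:
        if (addend_is_qnan) {
            goto default_nan;
        }
        break;
    }
    return 2;

 default_nan:
    return make_default_nan();
}
```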
From: Richard Henderson <richard.henderson@linaro.org>

Split some routines out of translate-a64.c and translate-sve.c
that are used by both.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240506010403.6204-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.h |   4 +
 target/arm/tcg/gengvec64.c     | 190 +++++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  26 -----
 target/arm/tcg/translate-sve.c | 145 +------------------------
 target/arm/tcg/meson.build     |   1 +
 5 files changed, 197 insertions(+), 169 deletions(-)
 create mode 100644 target/arm/tcg/gengvec64.c

diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -XXX,XX +XXX,XX @@ void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
 void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
                   uint32_t rm_ofs, int64_t shift,
                   uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                   uint32_t a, uint32_t oprsz, uint32_t maxsz);
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                   uint32_t a, uint32_t oprsz, uint32_t maxsz);
 
 void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
 void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/tcg/gengvec64.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * AArch64 generic vector expansion
+ *
+ * Copyright (c) 2013 Alexander Graf <agraf@suse.de>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "translate.h"
+#include "translate-a64.h"
+
+
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
+{
+    tcg_gen_rotli_i64(d, m, 1);
+    tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
+{
+    tcg_gen_rotli_vec(vece, d, m, 1);
+    tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+    static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
+    static const GVecGen3 op = {
+        .fni8 = gen_rax1_i64,
+        .fniv = gen_rax1_vec,
+        .opt_opc = vecop_list,
+        .fno = gen_helper_crypto_rax1,
+        .vece = MO_64,
+    };
+    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
+}
+
+static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+    TCGv_i64 t = tcg_temp_new_i64();
+    uint64_t mask = dup_const(MO_8, 0xff >> sh);
+
+    tcg_gen_xor_i64(t, n, m);
+    tcg_gen_shri_i64(d, t, sh);
+    tcg_gen_shli_i64(t, t, 8 - sh);
+    tcg_gen_andi_i64(d, d, mask);
+    tcg_gen_andi_i64(t, t, ~mask);
+    tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+    TCGv_i64 t = tcg_temp_new_i64();
+    uint64_t mask = dup_const(MO_16, 0xffff >> sh);
+
+    tcg_gen_xor_i64(t, n, m);
+    tcg_gen_shri_i64(d, t, sh);
+    tcg_gen_shli_i64(t, t, 16 - sh);
+    tcg_gen_andi_i64(d, d, mask);
+    tcg_gen_andi_i64(t, t, ~mask);
+    tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
+{
+    tcg_gen_xor_i32(d, n, m);
+    tcg_gen_rotri_i32(d, d, sh);
+}
+
+static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+    tcg_gen_xor_i64(d, n, m);
+    tcg_gen_rotri_i64(d, d, sh);
+}
+
+static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+                        TCGv_vec m, int64_t sh)
+{
+    tcg_gen_xor_vec(vece, d, n, m);
+    tcg_gen_rotri_vec(vece, d, d, sh);
+}
+
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+                  uint32_t rm_ofs, int64_t shift,
+                  uint32_t opr_sz, uint32_t max_sz)
+{
+    static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
+    static const GVecGen3i ops[4] = {
+        { .fni8 = gen_xar8_i64,
+          .fniv = gen_xar_vec,
+          .fno = gen_helper_sve2_xar_b,
+          .opt_opc = vecop,
+          .vece = MO_8 },
+        { .fni8 = gen_xar16_i64,
+          .fniv = gen_xar_vec,
+          .fno = gen_helper_sve2_xar_h,
+          .opt_opc = vecop,
+          .vece = MO_16 },
+        { .fni4 = gen_xar_i32,
+          .fniv = gen_xar_vec,
+          .fno = gen_helper_sve2_xar_s,
+          .opt_opc = vecop,
+          .vece = MO_32 },
+        { .fni8 = gen_xar_i64,
+          .fniv = gen_xar_vec,
+          .fno = gen_helper_gvec_xar_d,
+          .opt_opc = vecop,
+          .vece = MO_64 }
+    };
+    int esize = 8 << vece;
+
+    /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
+    tcg_debug_assert(shift >= 0);
+    tcg_debug_assert(shift <= esize);
+    shift &= esize - 1;
+
+    if (shift == 0) {
+        /* xar with no rotate devolves to xor. */
+        tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
+    } else {
+        tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
+                        shift, &ops[vece]);
+    }
+}
+
+static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+    tcg_gen_xor_i64(d, n, m);
+    tcg_gen_xor_i64(d, d, k);
+}
+
+static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+                         TCGv_vec m, TCGv_vec k)
+{
+    tcg_gen_xor_vec(vece, d, n, m);
+    tcg_gen_xor_vec(vece, d, d, k);
+}
+
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                   uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_eor3_i64,
+        .fniv = gen_eor3_vec,
+        .fno = gen_helper_sve2_eor3,
+        .vece = MO_64,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
+static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+    tcg_gen_andc_i64(d, m, k);
+    tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+                         TCGv_vec m, TCGv_vec k)
+{
+    tcg_gen_andc_vec(vece, d, m, k);
+    tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                   uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+    static const GVecGen4 op = {
+        .fni8 = gen_bcax_i64,
+        .fniv = gen_bcax_vec,
+        .fno = gen_helper_sve2_bcax,
+        .vece = MO_64,
+        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    };
+    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
     gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
 }
 
-static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
-{
-    tcg_gen_rotli_i64(d, m, 1);
-    tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
-{
-    tcg_gen_rotli_vec(vece, d, m, 1);
-    tcg_gen_xor_vec(vece, d, d, n);
-}
-
-void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
-    static const GVecGen3 op = {
-        .fni8 = gen_rax1_i64,
-        .fniv = gen_rax1_vec,
-        .opt_opc = vecop_list,
-        .fno = gen_helper_crypto_rax1,
-        .vece = MO_64,
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
-}
-
 /* Crypto three-reg SHA512
  * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
  * +-----------------------+------+---+---+-----+--------+------+------+
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-sve.c
+++ b/target/arm/tcg/translate-sve.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(ORR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_or, a)
 TRANS_FEAT(EOR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_xor, a)
 TRANS_FEAT(BIC_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_andc, a)
 
-static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-    uint64_t mask = dup_const(MO_8, 0xff >> sh);
-
-    tcg_gen_xor_i64(t, n, m);
-    tcg_gen_shri_i64(d, t, sh);
-    tcg_gen_shli_i64(t, t, 8 - sh);
-    tcg_gen_andi_i64(d, d, mask);
-    tcg_gen_andi_i64(t, t, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-    uint64_t mask = dup_const(MO_16, 0xffff >> sh);
-
-    tcg_gen_xor_i64(t, n, m);
-    tcg_gen_shri_i64(d, t, sh);
-    tcg_gen_shli_i64(t, t, 16 - sh);
-    tcg_gen_andi_i64(d, d, mask);
-    tcg_gen_andi_i64(t, t, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
-{
-    tcg_gen_xor_i32(d, n, m);
-    tcg_gen_rotri_i32(d, d, sh);
-}
-
-static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
-    tcg_gen_xor_i64(d, n, m);
-    tcg_gen_rotri_i64(d, d, sh);
-}
-
-static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
-                        TCGv_vec m, int64_t sh)
-{
-    tcg_gen_xor_vec(vece, d, n, m);
-    tcg_gen_rotri_vec(vece, d, d, sh);
-}
-
-void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                  uint32_t rm_ofs, int64_t shift,
-                  uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
-    static const GVecGen3i ops[4] = {
-        { .fni8 = gen_xar8_i64,
-          .fniv = gen_xar_vec,
-          .fno = gen_helper_sve2_xar_b,
-          .opt_opc = vecop,
-          .vece = MO_8 },
-        { .fni8 = gen_xar16_i64,
-          .fniv = gen_xar_vec,
-          .fno = gen_helper_sve2_xar_h,
-          .opt_opc = vecop,
-          .vece = MO_16 },
-        { .fni4 = gen_xar_i32,
-          .fniv = gen_xar_vec,
-          .fno = gen_helper_sve2_xar_s,
-          .opt_opc = vecop,
-          .vece = MO_32 },
-        { .fni8 = gen_xar_i64,
-          .fniv = gen_xar_vec,
-          .fno = gen_helper_gvec_xar_d,
-          .opt_opc = vecop,
-          .vece = MO_64 }
-    };
-    int esize = 8 << vece;
-
-    /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
-    tcg_debug_assert(shift >= 0);
-    tcg_debug_assert(shift <= esize);
-    shift &= esize - 1;
-
-    if (shift == 0) {
-        /* xar with no rotate devolves to xor. */
-        tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
-    } else {
-        tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
-                        shift, &ops[vece]);
-    }
-}
-
 static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
 {
     if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
     return true;
 }
 
-static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
-    tcg_gen_xor_i64(d, n, m);
-    tcg_gen_xor_i64(d, d, k);
-}
-
-static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
-                         TCGv_vec m, TCGv_vec k)
-{
-    tcg_gen_xor_vec(vece, d, n, m);
-    tcg_gen_xor_vec(vece, d, d, k);
-}
-
-static void gen_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
-                     uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
-    static const GVecGen4 op = {
-        .fni8 = gen_eor3_i64,
-        .fniv = gen_eor3_vec,
-        .fno = gen_helper_sve2_eor3,
-        .vece = MO_64,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    };
-    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_eor3, a)
-
-static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
-    tcg_gen_andc_i64(d, m, k);
-    tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
-                         TCGv_vec m, TCGv_vec k)
-{
-    tcg_gen_andc_vec(vece, d, m, k);
-    tcg_gen_xor_vec(vece, d, d, n);
-}
-
-static void gen_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
-                     uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
-    static const GVecGen4 op = {
-        .fni8 = gen_bcax_i64,
-        .fniv = gen_bcax_vec,
-        .fno = gen_helper_sve2_bcax,
-        .vece = MO_64,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    };
-    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_bcax, a)
+TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_eor3, a)
+TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_bcax, a)
 
 static void gen_bsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
                     uint32_t a, uint32_t oprsz, uint32_t maxsz)
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
 
 arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
   'cpu64.c',
+  'gengvec64.c',
   'translate-a64.c',
   'translate-sve.c',
   'translate-sme.c',
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Assign the pointer return value to 'a' directly,
rather than going through an intermediary index.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 32 ++++++++++----------------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
                                             FloatPartsN *c, float_status *s,
                                             int ab_mask, int abc_mask)
 {
-    int which;
     bool infzero = (ab_mask == float_cmask_infzero);
     bool have_snan = (abc_mask & float_cmask_snan);
+    FloatPartsN *ret;
 
     if (unlikely(have_snan)) {
         float_raise(float_flag_invalid | float_flag_invalid_snan, s);
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         default:
             g_assert_not_reached();
         }
-        which = 2;
+        ret = c;
     } else {
-        FloatClass cls[3] = { a->cls, b->cls, c->cls };
+        FloatPartsN *val[3] = { a, b, c };
         Float3NaNPropRule rule = s->float_3nan_prop_rule;
 
         assert(rule != float_3nan_prop_none);
         if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
             /* We have at least one SNaN input and should prefer it */
             do {
-                which = rule & R_3NAN_1ST_MASK;
+                ret = val[rule & R_3NAN_1ST_MASK];
                 rule >>= R_3NAN_1ST_LENGTH;
-            } while (!is_snan(cls[which]));
+            } while (!is_snan(ret->cls));
         } else {
             do {
-                which = rule & R_3NAN_1ST_MASK;
+                ret = val[rule & R_3NAN_1ST_MASK];
                 rule >>= R_3NAN_1ST_LENGTH;
-            } while (!is_nan(cls[which]));
+            } while (!is_nan(ret->cls));
         }
     }
 
-    switch (which) {
-    case 0:
-        break;
-    case 1:
-        a = b;
-        break;
-    case 2:
-        a = c;
-        break;
-    default:
-        g_assert_not_reached();
+    if (is_snan(ret->cls)) {
+        parts_silence_nan(ret, s);
     }
-    if (is_snan(a->cls)) {
-        parts_silence_nan(a, s);
-    }
-    return a;
+    return ret;
 
 default_nan:
     parts_default_nan(a, s);
--
2.34.1
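The second patch above swaps an integer index for a direct pointer assignment. A minimal sketch of the before/after shape, with illustrative stand-in types rather than QEMU's FloatPartsN machinery; the real code walks a packed rule word exactly like this, two bits per operand:

```c
#include <stdbool.h>

enum { PROP_MASK = 3, PROP_LEN = 2 };   /* 2-bit fields, as in R_3NAN_1ST_* */

typedef struct { bool is_nan; } PartsSketch;

/* Before: carry an index and look everything up through parallel arrays.
 * Assumes every 2-bit field in 'rule' encodes 0..2 (the next patch in the
 * series shows why that assumption deserves some padding). */
static PartsSketch *pick_indexed(PartsSketch *a, PartsSketch *b,
                                 PartsSketch *c, unsigned rule)
{
    bool cls[3] = { a->is_nan, b->is_nan, c->is_nan };
    PartsSketch *val[3] = { a, b, c };
    int which;

    do {
        which = rule & PROP_MASK;
        rule >>= PROP_LEN;
    } while (!cls[which]);
    return val[which];
}

/* After: assign the pointer directly and test through it; the index and
 * the parallel cls[] array disappear. */
static PartsSketch *pick_direct(PartsSketch *a, PartsSketch *b,
                                PartsSketch *c, unsigned rule)
{
    PartsSketch *val[3] = { a, b, c };
    PartsSketch *ret;

    do {
        ret = val[rule & PROP_MASK];
        rule >>= PROP_LEN;
    } while (!ret->is_nan);
    return ret;
}
```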
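The softfloat patch in the next pair closes the loop on the sketch above: because the mask is two bits wide, rule & MASK can evaluate to 3 even though well-formed rules only encode 0..2, so the lookup table is padded to four entries to make the access provably in bounds. In the same illustrative terms (hypothetical names, not the QEMU originals):

```c
#include <stddef.h>

enum { PROP_MASK = 3 };                 /* two bits -> indices 0..3 */

typedef struct { int cls; } PartsSketch;

/* Padding the table to PROP_MASK + 1 entries (the 4th implicitly NULL)
 * lets static analyzers prove val[rule & PROP_MASK] never reads past
 * the end, without changing behaviour for well-formed rules. */
static PartsSketch *select_padded(PartsSketch *a, PartsSketch *b,
                                  PartsSketch *c, unsigned rule)
{
    PartsSketch *val[PROP_MASK + 1] = { a, b, c };
    return val[rule & PROP_MASK];
}
```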
From: Richard Henderson <richard.henderson@linaro.org>

While all indices into val[] should be in [0-2], the mask
applied is two bits.  To help static analysis see there is
no possibility of read beyond the end of the array, pad the
array to 4 entries, with the final being (implicitly) NULL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         }
         ret = c;
     } else {
-        FloatPartsN *val[3] = { a, b, c };
+        FloatPartsN *val[R_3NAN_1ST_MASK + 1] = { a, b, c };
         Float3NaNPropRule rule = s->float_3nan_prop_rule;
 
         assert(rule != float_3nan_prop_none);

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240506010403.6204-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate.h |    5 +
 target/arm/tcg/gengvec.c   | 1612 ++++++++++++++++++++++++++++++++++++
 target/arm/tcg/translate.c | 1588 -----------------------------------
 target/arm/tcg/meson.build |    1 +
 4 files changed, 1618 insertions(+), 1588 deletions(-)
 create mode 100644 target/arm/tcg/gengvec.c

diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
 void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
                    int64_t shift, uint32_t opr_sz, uint32_t max_sz);
 
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+
 void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
                     int64_t shift, uint32_t opr_sz, uint32_t max_sz);
 void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
32
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/target/arm/tcg/gengvec.c
37
@@ -XXX,XX +XXX,XX @@
38
+/*
39
+ * ARM generic vector expansion
40
+ *
41
+ * Copyright (c) 2003 Fabrice Bellard
42
+ * Copyright (c) 2005-2007 CodeSourcery
43
+ * Copyright (c) 2007 OpenedHand, Ltd.
44
+ *
45
+ * This library is free software; you can redistribute it and/or
46
+ * modify it under the terms of the GNU Lesser General Public
47
+ * License as published by the Free Software Foundation; either
48
+ * version 2.1 of the License, or (at your option) any later version.
49
+ *
50
+ * This library is distributed in the hope that it will be useful,
51
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
52
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
53
+ * Lesser General Public License for more details.
54
+ *
55
+ * You should have received a copy of the GNU Lesser General Public
56
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
57
+ */
58
+
59
+#include "qemu/osdep.h"
60
+#include "translate.h"
61
+
62
+
63
+static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
64
+ uint32_t opr_sz, uint32_t max_sz,
65
+ gen_helper_gvec_3_ptr *fn)
66
+{
67
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
68
+
69
+ tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
70
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
71
+ opr_sz, max_sz, 0, fn);
72
+}
73
+
74
+void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
75
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
76
+{
77
+ static gen_helper_gvec_3_ptr * const fns[2] = {
78
+ gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
79
+ };
80
+ tcg_debug_assert(vece >= 1 && vece <= 2);
81
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
82
+}
83
+
84
+void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
85
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
86
+{
87
+ static gen_helper_gvec_3_ptr * const fns[2] = {
88
+ gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
89
+ };
90
+ tcg_debug_assert(vece >= 1 && vece <= 2);
91
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
92
+}
93
+
94
+#define GEN_CMP0(NAME, COND) \
95
+ void NAME(unsigned vece, uint32_t d, uint32_t m, \
96
+ uint32_t opr_sz, uint32_t max_sz) \
97
+ { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
98
+
99
+GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
100
+GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
101
+GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
102
+GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
103
+GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
104
+
105
+#undef GEN_CMP0
106
+
107
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
108
+{
109
+ tcg_gen_vec_sar8i_i64(a, a, shift);
110
+ tcg_gen_vec_add8_i64(d, d, a);
111
+}
112
+
113
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
114
+{
115
+ tcg_gen_vec_sar16i_i64(a, a, shift);
116
+ tcg_gen_vec_add16_i64(d, d, a);
117
+}
118
+
119
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
120
+{
121
+ tcg_gen_sari_i32(a, a, shift);
122
+ tcg_gen_add_i32(d, d, a);
123
+}
124
+
125
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
126
+{
127
+ tcg_gen_sari_i64(a, a, shift);
128
+ tcg_gen_add_i64(d, d, a);
129
+}
130
+
131
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
132
+{
133
+ tcg_gen_sari_vec(vece, a, a, sh);
134
+ tcg_gen_add_vec(vece, d, d, a);
135
+}
136
+
137
+void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
138
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
139
+{
140
+ static const TCGOpcode vecop_list[] = {
141
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
142
+ };
143
+ static const GVecGen2i ops[4] = {
144
+ { .fni8 = gen_ssra8_i64,
145
+ .fniv = gen_ssra_vec,
146
+ .fno = gen_helper_gvec_ssra_b,
147
+ .load_dest = true,
148
+ .opt_opc = vecop_list,
149
+ .vece = MO_8 },
150
+ { .fni8 = gen_ssra16_i64,
151
+ .fniv = gen_ssra_vec,
152
+ .fno = gen_helper_gvec_ssra_h,
153
+ .load_dest = true,
154
+ .opt_opc = vecop_list,
155
+ .vece = MO_16 },
156
+ { .fni4 = gen_ssra32_i32,
157
+ .fniv = gen_ssra_vec,
158
+ .fno = gen_helper_gvec_ssra_s,
159
+ .load_dest = true,
160
+ .opt_opc = vecop_list,
161
+ .vece = MO_32 },
162
+ { .fni8 = gen_ssra64_i64,
163
+ .fniv = gen_ssra_vec,
164
+ .fno = gen_helper_gvec_ssra_d,
165
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
166
+ .opt_opc = vecop_list,
167
+ .load_dest = true,
168
+ .vece = MO_64 },
169
+ };
170
+
171
+ /* tszimm encoding produces immediates in the range [1..esize]. */
172
+ tcg_debug_assert(shift > 0);
173
+ tcg_debug_assert(shift <= (8 << vece));
174
+
175
+ /*
176
+ * Shifts larger than the element size are architecturally valid.
177
+ * Signed results in all sign bits.
178
+ */
179
+ shift = MIN(shift, (8 << vece) - 1);
180
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
181
+}
182
+
183
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
184
+{
185
+ tcg_gen_vec_shr8i_i64(a, a, shift);
186
+ tcg_gen_vec_add8_i64(d, d, a);
187
+}
188
+
189
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
190
+{
191
+ tcg_gen_vec_shr16i_i64(a, a, shift);
192
+ tcg_gen_vec_add16_i64(d, d, a);
193
+}
194
+
195
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
196
+{
197
+ tcg_gen_shri_i32(a, a, shift);
198
+ tcg_gen_add_i32(d, d, a);
199
+}
200
+
201
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
202
+{
203
+ tcg_gen_shri_i64(a, a, shift);
204
+ tcg_gen_add_i64(d, d, a);
205
+}
206
+
207
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
208
+{
209
+ tcg_gen_shri_vec(vece, a, a, sh);
210
+ tcg_gen_add_vec(vece, d, d, a);
211
+}
212
+
213
+void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
214
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
215
+{
216
+ static const TCGOpcode vecop_list[] = {
217
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
218
+ };
219
+ static const GVecGen2i ops[4] = {
220
+ { .fni8 = gen_usra8_i64,
221
+ .fniv = gen_usra_vec,
222
+ .fno = gen_helper_gvec_usra_b,
223
+ .load_dest = true,
224
+ .opt_opc = vecop_list,
225
+ .vece = MO_8, },
226
+ { .fni8 = gen_usra16_i64,
227
+ .fniv = gen_usra_vec,
228
+ .fno = gen_helper_gvec_usra_h,
229
+ .load_dest = true,
230
+ .opt_opc = vecop_list,
231
+ .vece = MO_16, },
232
+ { .fni4 = gen_usra32_i32,
233
+ .fniv = gen_usra_vec,
234
+ .fno = gen_helper_gvec_usra_s,
235
+ .load_dest = true,
236
+ .opt_opc = vecop_list,
237
+ .vece = MO_32, },
238
+ { .fni8 = gen_usra64_i64,
239
+ .fniv = gen_usra_vec,
240
+ .fno = gen_helper_gvec_usra_d,
241
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
242
+ .load_dest = true,
243
+ .opt_opc = vecop_list,
244
+ .vece = MO_64, },
245
+ };
246
+
247
+ /* tszimm encoding produces immediates in the range [1..esize]. */
248
+ tcg_debug_assert(shift > 0);
249
+ tcg_debug_assert(shift <= (8 << vece));
250
+
251
+ /*
252
+ * Shifts larger than the element size are architecturally valid.
253
+ * Unsigned results in all zeros as input to accumulate: nop.
254
+ */
255
+ if (shift < (8 << vece)) {
256
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
257
+ } else {
258
+ /* Nop, but we do need to clear the tail. */
259
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
260
+ }
261
+}
262
+
263
+/*
264
+ * Shift one less than the requested amount, and the low bit is
265
+ * the rounding bit. For the 8 and 16-bit operations, because we
266
+ * mask the low bit, we can perform a normal integer shift instead
267
+ * of a vector shift.
268
+ */
269
+static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
270
+{
271
+ TCGv_i64 t = tcg_temp_new_i64();
272
+
273
+ tcg_gen_shri_i64(t, a, sh - 1);
274
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
275
+ tcg_gen_vec_sar8i_i64(d, a, sh);
276
+ tcg_gen_vec_add8_i64(d, d, t);
277
+}
278
+
279
+static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
280
+{
281
+ TCGv_i64 t = tcg_temp_new_i64();
282
+
283
+ tcg_gen_shri_i64(t, a, sh - 1);
284
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
285
+ tcg_gen_vec_sar16i_i64(d, a, sh);
286
+ tcg_gen_vec_add16_i64(d, d, t);
287
+}
288
+
289
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
290
+{
291
+ TCGv_i32 t;
292
+
293
+ /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
294
+ if (sh == 32) {
295
+ tcg_gen_movi_i32(d, 0);
296
+ return;
297
+ }
298
+ t = tcg_temp_new_i32();
299
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
300
+ tcg_gen_sari_i32(d, a, sh);
301
+ tcg_gen_add_i32(d, d, t);
302
+}
303
+
304
+ void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
305
+{
306
+ TCGv_i64 t = tcg_temp_new_i64();
307
+
308
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
309
+ tcg_gen_sari_i64(d, a, sh);
310
+ tcg_gen_add_i64(d, d, t);
311
+}
312
+
313
+static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
314
+{
315
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
316
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
317
+
318
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
319
+ tcg_gen_dupi_vec(vece, ones, 1);
320
+ tcg_gen_and_vec(vece, t, t, ones);
321
+ tcg_gen_sari_vec(vece, d, a, sh);
322
+ tcg_gen_add_vec(vece, d, d, t);
323
+}
324
+
325
+void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
326
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
327
+{
328
+ static const TCGOpcode vecop_list[] = {
329
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
330
+ };
331
+ static const GVecGen2i ops[4] = {
332
+ { .fni8 = gen_srshr8_i64,
333
+ .fniv = gen_srshr_vec,
334
+ .fno = gen_helper_gvec_srshr_b,
335
+ .opt_opc = vecop_list,
336
+ .vece = MO_8 },
337
+ { .fni8 = gen_srshr16_i64,
338
+ .fniv = gen_srshr_vec,
339
+ .fno = gen_helper_gvec_srshr_h,
340
+ .opt_opc = vecop_list,
341
+ .vece = MO_16 },
342
+ { .fni4 = gen_srshr32_i32,
343
+ .fniv = gen_srshr_vec,
344
+ .fno = gen_helper_gvec_srshr_s,
345
+ .opt_opc = vecop_list,
346
+ .vece = MO_32 },
347
+ { .fni8 = gen_srshr64_i64,
348
+ .fniv = gen_srshr_vec,
349
+ .fno = gen_helper_gvec_srshr_d,
350
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
351
+ .opt_opc = vecop_list,
352
+ .vece = MO_64 },
353
+ };
354
+
355
+ /* tszimm encoding produces immediates in the range [1..esize] */
356
+ tcg_debug_assert(shift > 0);
357
+ tcg_debug_assert(shift <= (8 << vece));
358
+
359
+ if (shift == (8 << vece)) {
360
+ /*
361
+ * Shifts larger than the element size are architecturally valid.
362
+ * Signed results in all sign bits. With rounding, this produces
363
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
364
+ * I.e. always zero.
365
+ */
366
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
367
+ } else {
368
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
369
+ }
370
+}
371
+
372
+static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
373
+{
374
+ TCGv_i64 t = tcg_temp_new_i64();
375
+
376
+ gen_srshr8_i64(t, a, sh);
377
+ tcg_gen_vec_add8_i64(d, d, t);
378
+}
379
+
380
+static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
381
+{
382
+ TCGv_i64 t = tcg_temp_new_i64();
383
+
384
+ gen_srshr16_i64(t, a, sh);
385
+ tcg_gen_vec_add16_i64(d, d, t);
386
+}
387
+
388
+static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
389
+{
390
+ TCGv_i32 t = tcg_temp_new_i32();
391
+
392
+ gen_srshr32_i32(t, a, sh);
393
+ tcg_gen_add_i32(d, d, t);
394
+}
395
+
396
+static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
397
+{
398
+ TCGv_i64 t = tcg_temp_new_i64();
399
+
400
+ gen_srshr64_i64(t, a, sh);
401
+ tcg_gen_add_i64(d, d, t);
402
+}
403
+
404
+static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
405
+{
406
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
407
+
408
+ gen_srshr_vec(vece, t, a, sh);
409
+ tcg_gen_add_vec(vece, d, d, t);
410
+}
411
+
412
+void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
413
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
414
+{
415
+ static const TCGOpcode vecop_list[] = {
416
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
417
+ };
418
+ static const GVecGen2i ops[4] = {
419
+ { .fni8 = gen_srsra8_i64,
420
+ .fniv = gen_srsra_vec,
421
+ .fno = gen_helper_gvec_srsra_b,
422
+ .opt_opc = vecop_list,
423
+ .load_dest = true,
424
+ .vece = MO_8 },
425
+ { .fni8 = gen_srsra16_i64,
426
+ .fniv = gen_srsra_vec,
427
+ .fno = gen_helper_gvec_srsra_h,
428
+ .opt_opc = vecop_list,
429
+ .load_dest = true,
430
+ .vece = MO_16 },
431
+ { .fni4 = gen_srsra32_i32,
432
+ .fniv = gen_srsra_vec,
433
+ .fno = gen_helper_gvec_srsra_s,
434
+ .opt_opc = vecop_list,
435
+ .load_dest = true,
436
+ .vece = MO_32 },
437
+ { .fni8 = gen_srsra64_i64,
438
+ .fniv = gen_srsra_vec,
439
+ .fno = gen_helper_gvec_srsra_d,
440
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
441
+ .opt_opc = vecop_list,
442
+ .load_dest = true,
443
+ .vece = MO_64 },
444
+ };
445
+
446
+ /* tszimm encoding produces immediates in the range [1..esize] */
447
+ tcg_debug_assert(shift > 0);
448
+ tcg_debug_assert(shift <= (8 << vece));
449
+
450
+ /*
451
+ * Shifts larger than the element size are architecturally valid.
452
+ * Signed results in all sign bits. With rounding, this produces
453
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
454
+ * I.e. always zero. With accumulation, this leaves D unchanged.
455
+ */
456
+ if (shift == (8 << vece)) {
457
+ /* Nop, but we do need to clear the tail. */
458
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
459
+ } else {
460
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
461
+ }
462
+}
463
+
464
+static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
465
+{
466
+ TCGv_i64 t = tcg_temp_new_i64();
467
+
468
+ tcg_gen_shri_i64(t, a, sh - 1);
469
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
470
+ tcg_gen_vec_shr8i_i64(d, a, sh);
471
+ tcg_gen_vec_add8_i64(d, d, t);
472
+}
473
+
474
+static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
475
+{
476
+ TCGv_i64 t = tcg_temp_new_i64();
477
+
478
+ tcg_gen_shri_i64(t, a, sh - 1);
479
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
480
+ tcg_gen_vec_shr16i_i64(d, a, sh);
481
+ tcg_gen_vec_add16_i64(d, d, t);
482
+}
483
+
484
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
485
+{
486
+ TCGv_i32 t;
487
+
488
+ /* Handle shift by the input size for the benefit of trans_URSHR_ri */
489
+ if (sh == 32) {
490
+ tcg_gen_extract_i32(d, a, sh - 1, 1);
491
+ return;
492
+ }
493
+ t = tcg_temp_new_i32();
494
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
495
+ tcg_gen_shri_i32(d, a, sh);
496
+ tcg_gen_add_i32(d, d, t);
497
+}
498
+
499
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
500
+{
501
+ TCGv_i64 t = tcg_temp_new_i64();
502
+
503
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
504
+ tcg_gen_shri_i64(d, a, sh);
505
+ tcg_gen_add_i64(d, d, t);
506
+}
507
+
508
+static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
509
+{
510
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
511
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
512
+
513
+ tcg_gen_shri_vec(vece, t, a, shift - 1);
514
+ tcg_gen_dupi_vec(vece, ones, 1);
515
+ tcg_gen_and_vec(vece, t, t, ones);
516
+ tcg_gen_shri_vec(vece, d, a, shift);
517
+ tcg_gen_add_vec(vece, d, d, t);
518
+}
519
+
520
+void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
521
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
522
+{
523
+ static const TCGOpcode vecop_list[] = {
524
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
525
+ };
526
+ static const GVecGen2i ops[4] = {
527
+ { .fni8 = gen_urshr8_i64,
528
+ .fniv = gen_urshr_vec,
529
+ .fno = gen_helper_gvec_urshr_b,
530
+ .opt_opc = vecop_list,
531
+ .vece = MO_8 },
532
+ { .fni8 = gen_urshr16_i64,
533
+ .fniv = gen_urshr_vec,
534
+ .fno = gen_helper_gvec_urshr_h,
535
+ .opt_opc = vecop_list,
536
+ .vece = MO_16 },
537
+ { .fni4 = gen_urshr32_i32,
538
+ .fniv = gen_urshr_vec,
539
+ .fno = gen_helper_gvec_urshr_s,
540
+ .opt_opc = vecop_list,
541
+ .vece = MO_32 },
542
+ { .fni8 = gen_urshr64_i64,
543
+ .fniv = gen_urshr_vec,
544
+ .fno = gen_helper_gvec_urshr_d,
545
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
546
+ .opt_opc = vecop_list,
547
+ .vece = MO_64 },
548
+ };
549
+
550
+ /* tszimm encoding produces immediates in the range [1..esize] */
551
+ tcg_debug_assert(shift > 0);
552
+ tcg_debug_assert(shift <= (8 << vece));
553
+
554
+ if (shift == (8 << vece)) {
555
+ /*
556
+ * Shifts larger than the element size are architecturally valid.
557
+ * Unsigned results in zero. With rounding, this produces a
558
+ * copy of the most significant bit.
559
+ */
560
+ tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
561
+ } else {
562
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
563
+ }
564
+}
565
+
566
+static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
567
+{
568
+ TCGv_i64 t = tcg_temp_new_i64();
569
+
570
+ if (sh == 8) {
571
+ tcg_gen_vec_shr8i_i64(t, a, 7);
572
+ } else {
573
+ gen_urshr8_i64(t, a, sh);
574
+ }
575
+ tcg_gen_vec_add8_i64(d, d, t);
576
+}
577
+
578
+static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
579
+{
580
+ TCGv_i64 t = tcg_temp_new_i64();
581
+
582
+ if (sh == 16) {
583
+ tcg_gen_vec_shr16i_i64(t, a, 15);
584
+ } else {
585
+ gen_urshr16_i64(t, a, sh);
586
+ }
587
+ tcg_gen_vec_add16_i64(d, d, t);
588
+}
589
+
590
+static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
591
+{
592
+ TCGv_i32 t = tcg_temp_new_i32();
593
+
594
+ if (sh == 32) {
595
+ tcg_gen_shri_i32(t, a, 31);
596
+ } else {
597
+ gen_urshr32_i32(t, a, sh);
598
+ }
599
+ tcg_gen_add_i32(d, d, t);
600
+}
601
+
602
+static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
603
+{
604
+ TCGv_i64 t = tcg_temp_new_i64();
605
+
606
+ if (sh == 64) {
607
+ tcg_gen_shri_i64(t, a, 63);
608
+ } else {
609
+ gen_urshr64_i64(t, a, sh);
610
+ }
611
+ tcg_gen_add_i64(d, d, t);
612
+}
613
+
614
+static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
615
+{
616
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
617
+
618
+ if (sh == (8 << vece)) {
619
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
620
+ } else {
621
+ gen_urshr_vec(vece, t, a, sh);
622
+ }
623
+ tcg_gen_add_vec(vece, d, d, t);
624
+}
625
+
626
+void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
627
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
628
+{
629
+ static const TCGOpcode vecop_list[] = {
630
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
631
+ };
632
+ static const GVecGen2i ops[4] = {
633
+ { .fni8 = gen_ursra8_i64,
634
+ .fniv = gen_ursra_vec,
635
+ .fno = gen_helper_gvec_ursra_b,
636
+ .opt_opc = vecop_list,
637
+ .load_dest = true,
638
+ .vece = MO_8 },
639
+ { .fni8 = gen_ursra16_i64,
640
+ .fniv = gen_ursra_vec,
641
+ .fno = gen_helper_gvec_ursra_h,
642
+ .opt_opc = vecop_list,
643
+ .load_dest = true,
644
+ .vece = MO_16 },
645
+ { .fni4 = gen_ursra32_i32,
646
+ .fniv = gen_ursra_vec,
647
+ .fno = gen_helper_gvec_ursra_s,
648
+ .opt_opc = vecop_list,
649
+ .load_dest = true,
650
+ .vece = MO_32 },
651
+ { .fni8 = gen_ursra64_i64,
652
+ .fniv = gen_ursra_vec,
653
+ .fno = gen_helper_gvec_ursra_d,
654
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
655
+ .opt_opc = vecop_list,
656
+ .load_dest = true,
657
+ .vece = MO_64 },
658
+ };
659
+
660
+ /* tszimm encoding produces immediates in the range [1..esize] */
661
+ tcg_debug_assert(shift > 0);
662
+ tcg_debug_assert(shift <= (8 << vece));
663
+
664
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
665
+}
666
+
667
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
668
+{
669
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
670
+ TCGv_i64 t = tcg_temp_new_i64();
671
+
672
+ tcg_gen_shri_i64(t, a, shift);
673
+ tcg_gen_andi_i64(t, t, mask);
674
+ tcg_gen_andi_i64(d, d, ~mask);
675
+ tcg_gen_or_i64(d, d, t);
676
+}
677
+
678
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
679
+{
680
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
681
+ TCGv_i64 t = tcg_temp_new_i64();
682
+
683
+ tcg_gen_shri_i64(t, a, shift);
684
+ tcg_gen_andi_i64(t, t, mask);
685
+ tcg_gen_andi_i64(d, d, ~mask);
686
+ tcg_gen_or_i64(d, d, t);
687
+}
688
+
689
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
690
+{
691
+ tcg_gen_shri_i32(a, a, shift);
692
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
693
+}
694
+
695
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
696
+{
697
+ tcg_gen_shri_i64(a, a, shift);
698
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
699
+}
700
+
701
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
702
+{
703
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
704
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
705
+
706
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
707
+ tcg_gen_shri_vec(vece, t, a, sh);
708
+ tcg_gen_and_vec(vece, d, d, m);
709
+ tcg_gen_or_vec(vece, d, d, t);
710
+}
711
+
712
+void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
713
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
714
+{
715
+ static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
716
+ const GVecGen2i ops[4] = {
717
+ { .fni8 = gen_shr8_ins_i64,
718
+ .fniv = gen_shr_ins_vec,
719
+ .fno = gen_helper_gvec_sri_b,
720
+ .load_dest = true,
721
+ .opt_opc = vecop_list,
722
+ .vece = MO_8 },
723
+ { .fni8 = gen_shr16_ins_i64,
724
+ .fniv = gen_shr_ins_vec,
725
+ .fno = gen_helper_gvec_sri_h,
726
+ .load_dest = true,
727
+ .opt_opc = vecop_list,
728
+ .vece = MO_16 },
729
+ { .fni4 = gen_shr32_ins_i32,
730
+ .fniv = gen_shr_ins_vec,
731
+ .fno = gen_helper_gvec_sri_s,
732
+ .load_dest = true,
733
+ .opt_opc = vecop_list,
734
+ .vece = MO_32 },
735
+ { .fni8 = gen_shr64_ins_i64,
736
+ .fniv = gen_shr_ins_vec,
737
+ .fno = gen_helper_gvec_sri_d,
738
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
739
+ .load_dest = true,
740
+ .opt_opc = vecop_list,
741
+ .vece = MO_64 },
742
+ };
743
+
744
+ /* tszimm encoding produces immediates in the range [1..esize]. */
745
+ tcg_debug_assert(shift > 0);
746
+ tcg_debug_assert(shift <= (8 << vece));
747
+
748
+ /* Shift of esize leaves destination unchanged. */
749
+ if (shift < (8 << vece)) {
750
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
751
+ } else {
752
+ /* Nop, but we do need to clear the tail. */
753
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
754
+ }
755
+}
756
+
757
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
758
+{
759
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
760
+ TCGv_i64 t = tcg_temp_new_i64();
761
+
762
+ tcg_gen_shli_i64(t, a, shift);
763
+ tcg_gen_andi_i64(t, t, mask);
764
+ tcg_gen_andi_i64(d, d, ~mask);
765
+ tcg_gen_or_i64(d, d, t);
766
+}
767
+
768
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
769
+{
770
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
771
+ TCGv_i64 t = tcg_temp_new_i64();
772
+
773
+ tcg_gen_shli_i64(t, a, shift);
774
+ tcg_gen_andi_i64(t, t, mask);
775
+ tcg_gen_andi_i64(d, d, ~mask);
776
+ tcg_gen_or_i64(d, d, t);
777
+}
778
+
779
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
780
+{
781
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
782
+}
783
+
784
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
785
+{
786
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
787
+}
788
+
789
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
790
+{
791
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
792
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
793
+
794
+ tcg_gen_shli_vec(vece, t, a, sh);
795
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
796
+ tcg_gen_and_vec(vece, d, d, m);
797
+ tcg_gen_or_vec(vece, d, d, t);
798
+}
799
+
800
+void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
801
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
802
+{
803
+ static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
804
+ const GVecGen2i ops[4] = {
805
+ { .fni8 = gen_shl8_ins_i64,
806
+ .fniv = gen_shl_ins_vec,
807
+ .fno = gen_helper_gvec_sli_b,
808
+ .load_dest = true,
809
+ .opt_opc = vecop_list,
810
+ .vece = MO_8 },
811
+ { .fni8 = gen_shl16_ins_i64,
812
+ .fniv = gen_shl_ins_vec,
813
+ .fno = gen_helper_gvec_sli_h,
814
+ .load_dest = true,
815
+ .opt_opc = vecop_list,
816
+ .vece = MO_16 },
817
+ { .fni4 = gen_shl32_ins_i32,
818
+ .fniv = gen_shl_ins_vec,
819
+ .fno = gen_helper_gvec_sli_s,
820
+ .load_dest = true,
821
+ .opt_opc = vecop_list,
822
+ .vece = MO_32 },
823
+ { .fni8 = gen_shl64_ins_i64,
824
+ .fniv = gen_shl_ins_vec,
825
+ .fno = gen_helper_gvec_sli_d,
826
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
827
+ .load_dest = true,
828
+ .opt_opc = vecop_list,
829
+ .vece = MO_64 },
830
+ };
831
+
832
+ /* tszimm encoding produces immediates in the range [0..esize-1]. */
833
+ tcg_debug_assert(shift >= 0);
834
+ tcg_debug_assert(shift < (8 << vece));
835
+
836
+ if (shift == 0) {
837
+ tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
838
+ } else {
839
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
840
+ }
841
+}
842
+
843
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
844
+{
845
+ gen_helper_neon_mul_u8(a, a, b);
846
+ gen_helper_neon_add_u8(d, d, a);
847
+}
848
+
849
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
850
+{
851
+ gen_helper_neon_mul_u8(a, a, b);
852
+ gen_helper_neon_sub_u8(d, d, a);
853
+}
854
+
855
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
856
+{
857
+ gen_helper_neon_mul_u16(a, a, b);
858
+ gen_helper_neon_add_u16(d, d, a);
859
+}
860
+
861
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
862
+{
863
+ gen_helper_neon_mul_u16(a, a, b);
864
+ gen_helper_neon_sub_u16(d, d, a);
865
+}
866
+
867
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
868
+{
869
+ tcg_gen_mul_i32(a, a, b);
870
+ tcg_gen_add_i32(d, d, a);
871
+}
872
+
873
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
874
+{
875
+ tcg_gen_mul_i32(a, a, b);
876
+ tcg_gen_sub_i32(d, d, a);
877
+}
878
+
879
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
880
+{
881
+ tcg_gen_mul_i64(a, a, b);
882
+ tcg_gen_add_i64(d, d, a);
883
+}
884
+
885
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
886
+{
887
+ tcg_gen_mul_i64(a, a, b);
888
+ tcg_gen_sub_i64(d, d, a);
889
+}
890
+
891
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
892
+{
893
+ tcg_gen_mul_vec(vece, a, a, b);
894
+ tcg_gen_add_vec(vece, d, d, a);
895
+}
896
+
897
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
898
+{
899
+ tcg_gen_mul_vec(vece, a, a, b);
900
+ tcg_gen_sub_vec(vece, d, d, a);
901
+}
902
+
903
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
904
+ * these tables are shared with AArch64 which does support them.
905
+ */
906
+void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
907
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
908
+{
909
+ static const TCGOpcode vecop_list[] = {
910
+ INDEX_op_mul_vec, INDEX_op_add_vec, 0
911
+ };
912
+ static const GVecGen3 ops[4] = {
913
+ { .fni4 = gen_mla8_i32,
914
+ .fniv = gen_mla_vec,
915
+ .load_dest = true,
916
+ .opt_opc = vecop_list,
917
+ .vece = MO_8 },
918
+ { .fni4 = gen_mla16_i32,
919
+ .fniv = gen_mla_vec,
920
+ .load_dest = true,
921
+ .opt_opc = vecop_list,
922
+ .vece = MO_16 },
923
+ { .fni4 = gen_mla32_i32,
924
+ .fniv = gen_mla_vec,
925
+ .load_dest = true,
926
+ .opt_opc = vecop_list,
927
+ .vece = MO_32 },
928
+ { .fni8 = gen_mla64_i64,
929
+ .fniv = gen_mla_vec,
930
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
931
+ .load_dest = true,
932
+ .opt_opc = vecop_list,
933
+ .vece = MO_64 },
934
+ };
935
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
936
+}
937
+
938
+void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
939
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
940
+{
941
+ static const TCGOpcode vecop_list[] = {
942
+ INDEX_op_mul_vec, INDEX_op_sub_vec, 0
943
+ };
944
+ static const GVecGen3 ops[4] = {
945
+ { .fni4 = gen_mls8_i32,
946
+ .fniv = gen_mls_vec,
947
+ .load_dest = true,
948
+ .opt_opc = vecop_list,
949
+ .vece = MO_8 },
950
+ { .fni4 = gen_mls16_i32,
951
+ .fniv = gen_mls_vec,
952
+ .load_dest = true,
953
+ .opt_opc = vecop_list,
954
+ .vece = MO_16 },
955
+ { .fni4 = gen_mls32_i32,
956
+ .fniv = gen_mls_vec,
957
+ .load_dest = true,
958
+ .opt_opc = vecop_list,
959
+ .vece = MO_32 },
960
+ { .fni8 = gen_mls64_i64,
961
+ .fniv = gen_mls_vec,
962
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
963
+ .load_dest = true,
964
+ .opt_opc = vecop_list,
965
+ .vece = MO_64 },
966
+ };
967
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
968
+}
969
+
970
+/* CMTST : test is "if (X & Y != 0)". */
971
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
972
+{
973
+ tcg_gen_and_i32(d, a, b);
974
+ tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
975
+}
976
+
977
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
978
+{
979
+ tcg_gen_and_i64(d, a, b);
980
+ tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
981
+}
982
+
983
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
984
+{
985
+ tcg_gen_and_vec(vece, d, a, b);
986
+ tcg_gen_dupi_vec(vece, a, 0);
987
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
988
+}
989
+
990
+void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
991
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
992
+{
993
+ static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
994
+ static const GVecGen3 ops[4] = {
995
+ { .fni4 = gen_helper_neon_tst_u8,
996
+ .fniv = gen_cmtst_vec,
997
+ .opt_opc = vecop_list,
998
+ .vece = MO_8 },
999
+ { .fni4 = gen_helper_neon_tst_u16,
1000
+ .fniv = gen_cmtst_vec,
1001
+ .opt_opc = vecop_list,
1002
+ .vece = MO_16 },
1003
+ { .fni4 = gen_cmtst_i32,
1004
+ .fniv = gen_cmtst_vec,
1005
+ .opt_opc = vecop_list,
1006
+ .vece = MO_32 },
1007
+ { .fni8 = gen_cmtst_i64,
1008
+ .fniv = gen_cmtst_vec,
1009
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1010
+ .opt_opc = vecop_list,
1011
+ .vece = MO_64 },
1012
+ };
1013
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1014
+}
1015
+
1016
+void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
1017
+{
1018
+ TCGv_i32 lval = tcg_temp_new_i32();
1019
+ TCGv_i32 rval = tcg_temp_new_i32();
1020
+ TCGv_i32 lsh = tcg_temp_new_i32();
1021
+ TCGv_i32 rsh = tcg_temp_new_i32();
1022
+ TCGv_i32 zero = tcg_constant_i32(0);
1023
+ TCGv_i32 max = tcg_constant_i32(32);
1024
+
1025
+ /*
1026
+ * Rely on the TCG guarantee that out of range shifts produce
1027
+ * unspecified results, not undefined behaviour (i.e. no trap).
1028
+ * Discard out-of-range results after the fact.
1029
+ */
1030
+ tcg_gen_ext8s_i32(lsh, shift);
1031
+ tcg_gen_neg_i32(rsh, lsh);
1032
+ tcg_gen_shl_i32(lval, src, lsh);
1033
+ tcg_gen_shr_i32(rval, src, rsh);
1034
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
1035
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
1036
+}
1037
+
1038
+void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
1039
+{
1040
+ TCGv_i64 lval = tcg_temp_new_i64();
1041
+ TCGv_i64 rval = tcg_temp_new_i64();
1042
+ TCGv_i64 lsh = tcg_temp_new_i64();
1043
+ TCGv_i64 rsh = tcg_temp_new_i64();
1044
+ TCGv_i64 zero = tcg_constant_i64(0);
1045
+ TCGv_i64 max = tcg_constant_i64(64);
1046
+
1047
+ /*
1048
+ * Rely on the TCG guarantee that out of range shifts produce
1049
+ * unspecified results, not undefined behaviour (i.e. no trap).
1050
+ * Discard out-of-range results after the fact.
1051
+ */
1052
+ tcg_gen_ext8s_i64(lsh, shift);
1053
+ tcg_gen_neg_i64(rsh, lsh);
1054
+ tcg_gen_shl_i64(lval, src, lsh);
1055
+ tcg_gen_shr_i64(rval, src, rsh);
1056
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
1057
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
1058
+}
1059
+
1060
+static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
1061
+ TCGv_vec src, TCGv_vec shift)
1062
+{
1063
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
1064
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
1065
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
1066
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
1067
+ TCGv_vec msk, max;
1068
+
1069
+ tcg_gen_neg_vec(vece, rsh, shift);
1070
+ if (vece == MO_8) {
1071
+ tcg_gen_mov_vec(lsh, shift);
1072
+ } else {
1073
+ msk = tcg_temp_new_vec_matching(dst);
1074
+ tcg_gen_dupi_vec(vece, msk, 0xff);
1075
+ tcg_gen_and_vec(vece, lsh, shift, msk);
1076
+ tcg_gen_and_vec(vece, rsh, rsh, msk);
1077
+ }
1078
+
1079
+ /*
1080
+ * Rely on the TCG guarantee that out of range shifts produce
1081
+ * unspecified results, not undefined behaviour (i.e. no trap).
1082
+ * Discard out-of-range results after the fact.
1083
+ */
1084
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
1085
+ tcg_gen_shrv_vec(vece, rval, src, rsh);
1086
+
1087
+ max = tcg_temp_new_vec_matching(dst);
1088
+ tcg_gen_dupi_vec(vece, max, 8 << vece);
1089
+
1090
+ /*
1091
+ * The choice of LT (signed) and GEU (unsigned) are biased toward
1092
+ * the instructions of the x86_64 host. For MO_8, the whole byte
1093
+ * is significant so we must use an unsigned compare; otherwise we
1094
+ * have already masked to a byte and so a signed compare works.
1095
+ * Other tcg hosts have a full set of comparisons and do not care.
1096
+ */
1097
+ if (vece == MO_8) {
1098
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
1099
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
1100
+ tcg_gen_andc_vec(vece, lval, lval, lsh);
1101
+ tcg_gen_andc_vec(vece, rval, rval, rsh);
1102
+ } else {
1103
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
1104
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
1105
+ tcg_gen_and_vec(vece, lval, lval, lsh);
1106
+ tcg_gen_and_vec(vece, rval, rval, rsh);
1107
+ }
1108
+ tcg_gen_or_vec(vece, dst, lval, rval);
1109
+}
1110
+
1111
+void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1112
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1113
+{
1114
+ static const TCGOpcode vecop_list[] = {
1115
+ INDEX_op_neg_vec, INDEX_op_shlv_vec,
1116
+ INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
1117
+ };
1118
+ static const GVecGen3 ops[4] = {
1119
+ { .fniv = gen_ushl_vec,
1120
+ .fno = gen_helper_gvec_ushl_b,
1121
+ .opt_opc = vecop_list,
1122
+ .vece = MO_8 },
1123
+ { .fniv = gen_ushl_vec,
1124
+ .fno = gen_helper_gvec_ushl_h,
1125
+ .opt_opc = vecop_list,
1126
+ .vece = MO_16 },
1127
+ { .fni4 = gen_ushl_i32,
1128
+ .fniv = gen_ushl_vec,
1129
+ .opt_opc = vecop_list,
1130
+ .vece = MO_32 },
1131
+ { .fni8 = gen_ushl_i64,
1132
+ .fniv = gen_ushl_vec,
1133
+ .opt_opc = vecop_list,
1134
+ .vece = MO_64 },
1135
+ };
1136
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1137
+}
1138
+
1139
+void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
1140
+{
1141
+ TCGv_i32 lval = tcg_temp_new_i32();
1142
+ TCGv_i32 rval = tcg_temp_new_i32();
1143
+ TCGv_i32 lsh = tcg_temp_new_i32();
1144
+ TCGv_i32 rsh = tcg_temp_new_i32();
1145
+ TCGv_i32 zero = tcg_constant_i32(0);
1146
+ TCGv_i32 max = tcg_constant_i32(31);
1147
+
1148
+ /*
1149
+ * Rely on the TCG guarantee that out of range shifts produce
1150
+ * unspecified results, not undefined behaviour (i.e. no trap).
1151
+ * Discard out-of-range results after the fact.
1152
+ */
1153
+ tcg_gen_ext8s_i32(lsh, shift);
1154
+ tcg_gen_neg_i32(rsh, lsh);
1155
+ tcg_gen_shl_i32(lval, src, lsh);
1156
+ tcg_gen_umin_i32(rsh, rsh, max);
1157
+ tcg_gen_sar_i32(rval, src, rsh);
1158
+ tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
1159
+ tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
1160
+}
1161
+
1162
+void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
1163
+{
1164
+ TCGv_i64 lval = tcg_temp_new_i64();
1165
+ TCGv_i64 rval = tcg_temp_new_i64();
1166
+ TCGv_i64 lsh = tcg_temp_new_i64();
1167
+ TCGv_i64 rsh = tcg_temp_new_i64();
1168
+ TCGv_i64 zero = tcg_constant_i64(0);
1169
+ TCGv_i64 max = tcg_constant_i64(63);
1170
+
1171
+ /*
1172
+ * Rely on the TCG guarantee that out of range shifts produce
1173
+ * unspecified results, not undefined behaviour (i.e. no trap).
1174
+ * Discard out-of-range results after the fact.
1175
+ */
1176
+ tcg_gen_ext8s_i64(lsh, shift);
1177
+ tcg_gen_neg_i64(rsh, lsh);
1178
+ tcg_gen_shl_i64(lval, src, lsh);
1179
+ tcg_gen_umin_i64(rsh, rsh, max);
1180
+ tcg_gen_sar_i64(rval, src, rsh);
1181
+ tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
1182
+ tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
1183
+}
1184
+
1185
+static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
1186
+ TCGv_vec src, TCGv_vec shift)
1187
+{
1188
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
1189
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
1190
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
1191
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
1192
+ TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
1193
+
1194
+ /*
1195
+ * Rely on the TCG guarantee that out of range shifts produce
1196
+ * unspecified results, not undefined behaviour (i.e. no trap).
1197
+ * Discard out-of-range results after the fact.
1198
+ */
1199
+ tcg_gen_neg_vec(vece, rsh, shift);
1200
+ if (vece == MO_8) {
1201
+ tcg_gen_mov_vec(lsh, shift);
1202
+ } else {
1203
+ tcg_gen_dupi_vec(vece, tmp, 0xff);
1204
+ tcg_gen_and_vec(vece, lsh, shift, tmp);
1205
+ tcg_gen_and_vec(vece, rsh, rsh, tmp);
1206
+ }
1207
+
1208
+ /* Bound rsh so out of bound right shift gets -1. */
1209
+ tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
1210
+ tcg_gen_umin_vec(vece, rsh, rsh, tmp);
1211
+ tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
1212
+
1213
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
1214
+ tcg_gen_sarv_vec(vece, rval, src, rsh);
1215
+
1216
+ /* Select in-bound left shift. */
1217
+ tcg_gen_andc_vec(vece, lval, lval, tmp);
1218
+
1219
+ /* Select between left and right shift. */
1220
+ if (vece == MO_8) {
1221
+ tcg_gen_dupi_vec(vece, tmp, 0);
1222
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
1223
+ } else {
1224
+ tcg_gen_dupi_vec(vece, tmp, 0x80);
1225
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
1226
+ }
1227
+}
1228
+
1229
+void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1230
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1231
+{
1232
+ static const TCGOpcode vecop_list[] = {
1233
+ INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
1234
+ INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
1235
+ };
1236
+ static const GVecGen3 ops[4] = {
1237
+ { .fniv = gen_sshl_vec,
1238
+ .fno = gen_helper_gvec_sshl_b,
1239
+ .opt_opc = vecop_list,
1240
+ .vece = MO_8 },
1241
+ { .fniv = gen_sshl_vec,
1242
+ .fno = gen_helper_gvec_sshl_h,
1243
+ .opt_opc = vecop_list,
1244
+ .vece = MO_16 },
1245
+ { .fni4 = gen_sshl_i32,
1246
+ .fniv = gen_sshl_vec,
1247
+ .opt_opc = vecop_list,
1248
+ .vece = MO_32 },
1249
+ { .fni8 = gen_sshl_i64,
1250
+ .fniv = gen_sshl_vec,
1251
+ .opt_opc = vecop_list,
1252
+ .vece = MO_64 },
1253
+ };
1254
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1255
+}
1256
+
1257
+static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
1258
+ TCGv_vec a, TCGv_vec b)
1259
+{
1260
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
1261
+ tcg_gen_add_vec(vece, x, a, b);
1262
+ tcg_gen_usadd_vec(vece, t, a, b);
1263
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
1264
+ tcg_gen_or_vec(vece, sat, sat, x);
1265
+}
1266
+
1267
+void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1268
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1269
+{
1270
+ static const TCGOpcode vecop_list[] = {
1271
+ INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
1272
+ };
1273
+ static const GVecGen4 ops[4] = {
1274
+ { .fniv = gen_uqadd_vec,
1275
+ .fno = gen_helper_gvec_uqadd_b,
1276
+ .write_aofs = true,
1277
+ .opt_opc = vecop_list,
1278
+ .vece = MO_8 },
1279
+ { .fniv = gen_uqadd_vec,
1280
+ .fno = gen_helper_gvec_uqadd_h,
1281
+ .write_aofs = true,
1282
+ .opt_opc = vecop_list,
1283
+ .vece = MO_16 },
1284
+ { .fniv = gen_uqadd_vec,
1285
+ .fno = gen_helper_gvec_uqadd_s,
1286
+ .write_aofs = true,
1287
+ .opt_opc = vecop_list,
1288
+ .vece = MO_32 },
1289
+ { .fniv = gen_uqadd_vec,
1290
+ .fno = gen_helper_gvec_uqadd_d,
1291
+ .write_aofs = true,
1292
+ .opt_opc = vecop_list,
1293
+ .vece = MO_64 },
1294
+ };
1295
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
1296
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1297
+}
1298
+
1299
+static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
1300
+ TCGv_vec a, TCGv_vec b)
1301
+{
1302
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
1303
+ tcg_gen_add_vec(vece, x, a, b);
1304
+ tcg_gen_ssadd_vec(vece, t, a, b);
1305
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
1306
+ tcg_gen_or_vec(vece, sat, sat, x);
1307
+}
1308
+
1309
+void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1310
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1311
+{
1312
+ static const TCGOpcode vecop_list[] = {
1313
+ INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
1314
+ };
1315
+ static const GVecGen4 ops[4] = {
1316
+ { .fniv = gen_sqadd_vec,
1317
+ .fno = gen_helper_gvec_sqadd_b,
1318
+ .opt_opc = vecop_list,
1319
+ .write_aofs = true,
1320
+ .vece = MO_8 },
1321
+ { .fniv = gen_sqadd_vec,
1322
+ .fno = gen_helper_gvec_sqadd_h,
1323
+ .opt_opc = vecop_list,
1324
+ .write_aofs = true,
1325
+ .vece = MO_16 },
1326
+ { .fniv = gen_sqadd_vec,
1327
+ .fno = gen_helper_gvec_sqadd_s,
1328
+ .opt_opc = vecop_list,
1329
+ .write_aofs = true,
1330
+ .vece = MO_32 },
1331
+ { .fniv = gen_sqadd_vec,
1332
+ .fno = gen_helper_gvec_sqadd_d,
1333
+ .opt_opc = vecop_list,
1334
+ .write_aofs = true,
1335
+ .vece = MO_64 },
1336
+ };
1337
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
1338
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1339
+}
1340
+
1341
+static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
1342
+ TCGv_vec a, TCGv_vec b)
1343
+{
1344
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
1345
+ tcg_gen_sub_vec(vece, x, a, b);
1346
+ tcg_gen_ussub_vec(vece, t, a, b);
1347
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
1348
+ tcg_gen_or_vec(vece, sat, sat, x);
1349
+}
1350
+
1351
+void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1352
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1353
+{
1354
+ static const TCGOpcode vecop_list[] = {
1355
+ INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
1356
+ };
1357
+ static const GVecGen4 ops[4] = {
1358
+ { .fniv = gen_uqsub_vec,
1359
+ .fno = gen_helper_gvec_uqsub_b,
1360
+ .opt_opc = vecop_list,
1361
+ .write_aofs = true,
1362
+ .vece = MO_8 },
1363
+ { .fniv = gen_uqsub_vec,
1364
+ .fno = gen_helper_gvec_uqsub_h,
1365
+ .opt_opc = vecop_list,
1366
+ .write_aofs = true,
1367
+ .vece = MO_16 },
1368
+ { .fniv = gen_uqsub_vec,
1369
+ .fno = gen_helper_gvec_uqsub_s,
1370
+ .opt_opc = vecop_list,
1371
+ .write_aofs = true,
1372
+ .vece = MO_32 },
1373
+ { .fniv = gen_uqsub_vec,
1374
+ .fno = gen_helper_gvec_uqsub_d,
1375
+ .opt_opc = vecop_list,
1376
+ .write_aofs = true,
1377
+ .vece = MO_64 },
1378
+ };
1379
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
1380
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1381
+}
1382
+
1383
+static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
1384
+ TCGv_vec a, TCGv_vec b)
1385
+{
1386
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
1387
+ tcg_gen_sub_vec(vece, x, a, b);
1388
+ tcg_gen_sssub_vec(vece, t, a, b);
1389
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
1390
+ tcg_gen_or_vec(vece, sat, sat, x);
1391
+}
1392
+
1393
+void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1394
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1395
+{
1396
+ static const TCGOpcode vecop_list[] = {
1397
+ INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
1398
+ };
1399
+ static const GVecGen4 ops[4] = {
1400
+ { .fniv = gen_sqsub_vec,
1401
+ .fno = gen_helper_gvec_sqsub_b,
1402
+ .opt_opc = vecop_list,
1403
+ .write_aofs = true,
1404
+ .vece = MO_8 },
1405
+ { .fniv = gen_sqsub_vec,
1406
+ .fno = gen_helper_gvec_sqsub_h,
1407
+ .opt_opc = vecop_list,
1408
+ .write_aofs = true,
1409
+ .vece = MO_16 },
1410
+ { .fniv = gen_sqsub_vec,
1411
+ .fno = gen_helper_gvec_sqsub_s,
1412
+ .opt_opc = vecop_list,
1413
+ .write_aofs = true,
1414
+ .vece = MO_32 },
1415
+ { .fniv = gen_sqsub_vec,
1416
+ .fno = gen_helper_gvec_sqsub_d,
1417
+ .opt_opc = vecop_list,
1418
+ .write_aofs = true,
1419
+ .vece = MO_64 },
1420
+ };
1421
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
1422
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1423
+}
1424
+
1425
+static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
1426
+{
1427
+ TCGv_i32 t = tcg_temp_new_i32();
1428
+
1429
+ tcg_gen_sub_i32(t, a, b);
1430
+ tcg_gen_sub_i32(d, b, a);
1431
+ tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
1432
+}
1433
+
1434
+static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
1435
+{
1436
+ TCGv_i64 t = tcg_temp_new_i64();
1437
+
1438
+ tcg_gen_sub_i64(t, a, b);
1439
+ tcg_gen_sub_i64(d, b, a);
1440
+ tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
1441
+}
1442
+
1443
+static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
1444
+{
1445
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
1446
+
1447
+ tcg_gen_smin_vec(vece, t, a, b);
1448
+ tcg_gen_smax_vec(vece, d, a, b);
1449
+ tcg_gen_sub_vec(vece, d, d, t);
1450
+}
1451
+
1452
+void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1453
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1454
+{
1455
+ static const TCGOpcode vecop_list[] = {
1456
+ INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
1457
+ };
1458
+ static const GVecGen3 ops[4] = {
1459
+ { .fniv = gen_sabd_vec,
1460
+ .fno = gen_helper_gvec_sabd_b,
1461
+ .opt_opc = vecop_list,
1462
+ .vece = MO_8 },
1463
+ { .fniv = gen_sabd_vec,
1464
+ .fno = gen_helper_gvec_sabd_h,
1465
+ .opt_opc = vecop_list,
1466
+ .vece = MO_16 },
1467
+ { .fni4 = gen_sabd_i32,
1468
+ .fniv = gen_sabd_vec,
1469
+ .fno = gen_helper_gvec_sabd_s,
1470
+ .opt_opc = vecop_list,
1471
+ .vece = MO_32 },
1472
+ { .fni8 = gen_sabd_i64,
1473
+ .fniv = gen_sabd_vec,
1474
+ .fno = gen_helper_gvec_sabd_d,
1475
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1476
+ .opt_opc = vecop_list,
1477
+ .vece = MO_64 },
1478
+ };
1479
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1480
+}
1481
+
1482
+static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
1483
+{
1484
+ TCGv_i32 t = tcg_temp_new_i32();
1485
+
1486
+ tcg_gen_sub_i32(t, a, b);
1487
+ tcg_gen_sub_i32(d, b, a);
1488
+ tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
1489
+}
1490
+
1491
+static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
1492
+{
1493
+ TCGv_i64 t = tcg_temp_new_i64();
1494
+
1495
+ tcg_gen_sub_i64(t, a, b);
1496
+ tcg_gen_sub_i64(d, b, a);
1497
+ tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
1498
+}
1499
+
1500
+static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
1501
+{
1502
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
1503
+
1504
+ tcg_gen_umin_vec(vece, t, a, b);
1505
+ tcg_gen_umax_vec(vece, d, a, b);
1506
+ tcg_gen_sub_vec(vece, d, d, t);
1507
+}
1508
+
1509
+void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1510
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1511
+{
1512
+ static const TCGOpcode vecop_list[] = {
1513
+ INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
1514
+ };
1515
+ static const GVecGen3 ops[4] = {
1516
+ { .fniv = gen_uabd_vec,
1517
+ .fno = gen_helper_gvec_uabd_b,
1518
+ .opt_opc = vecop_list,
1519
+ .vece = MO_8 },
1520
+ { .fniv = gen_uabd_vec,
1521
+ .fno = gen_helper_gvec_uabd_h,
1522
+ .opt_opc = vecop_list,
1523
+ .vece = MO_16 },
1524
+ { .fni4 = gen_uabd_i32,
1525
+ .fniv = gen_uabd_vec,
1526
+ .fno = gen_helper_gvec_uabd_s,
1527
+ .opt_opc = vecop_list,
1528
+ .vece = MO_32 },
1529
+ { .fni8 = gen_uabd_i64,
1530
+ .fniv = gen_uabd_vec,
1531
+ .fno = gen_helper_gvec_uabd_d,
1532
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1533
+ .opt_opc = vecop_list,
1534
+ .vece = MO_64 },
1535
+ };
1536
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1537
+}
1538
+
1539
+static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
1540
+{
1541
+ TCGv_i32 t = tcg_temp_new_i32();
1542
+ gen_sabd_i32(t, a, b);
1543
+ tcg_gen_add_i32(d, d, t);
1544
+}
1545
+
1546
+static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
1547
+{
1548
+ TCGv_i64 t = tcg_temp_new_i64();
1549
+ gen_sabd_i64(t, a, b);
1550
+ tcg_gen_add_i64(d, d, t);
1551
+}
1552
+
1553
+static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
1554
+{
1555
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
1556
+ gen_sabd_vec(vece, t, a, b);
1557
+ tcg_gen_add_vec(vece, d, d, t);
1558
+}
1559
+
1560
+void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1561
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1562
+{
1563
+ static const TCGOpcode vecop_list[] = {
1564
+ INDEX_op_sub_vec, INDEX_op_add_vec,
1565
+ INDEX_op_smin_vec, INDEX_op_smax_vec, 0
1566
+ };
1567
+ static const GVecGen3 ops[4] = {
1568
+ { .fniv = gen_saba_vec,
1569
+ .fno = gen_helper_gvec_saba_b,
1570
+ .opt_opc = vecop_list,
1571
+ .load_dest = true,
1572
+ .vece = MO_8 },
1573
+ { .fniv = gen_saba_vec,
1574
+ .fno = gen_helper_gvec_saba_h,
1575
+ .opt_opc = vecop_list,
1576
+ .load_dest = true,
1577
+ .vece = MO_16 },
1578
+ { .fni4 = gen_saba_i32,
1579
+ .fniv = gen_saba_vec,
1580
+ .fno = gen_helper_gvec_saba_s,
1581
+ .opt_opc = vecop_list,
1582
+ .load_dest = true,
1583
+ .vece = MO_32 },
1584
+ { .fni8 = gen_saba_i64,
1585
+ .fniv = gen_saba_vec,
1586
+ .fno = gen_helper_gvec_saba_d,
1587
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1588
+ .opt_opc = vecop_list,
1589
+ .load_dest = true,
1590
+ .vece = MO_64 },
1591
+ };
1592
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1593
+}
1594
+
1595
+static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
1596
+{
1597
+ TCGv_i32 t = tcg_temp_new_i32();
1598
+ gen_uabd_i32(t, a, b);
1599
+ tcg_gen_add_i32(d, d, t);
1600
+}
1601
+
1602
+static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
1603
+{
1604
+ TCGv_i64 t = tcg_temp_new_i64();
1605
+ gen_uabd_i64(t, a, b);
1606
+ tcg_gen_add_i64(d, d, t);
1607
+}
1608
+
1609
+static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
1610
+{
1611
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
1612
+ gen_uabd_vec(vece, t, a, b);
1613
+ tcg_gen_add_vec(vece, d, d, t);
1614
+}
1615
+
1616
+void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1617
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1618
+{
1619
+ static const TCGOpcode vecop_list[] = {
1620
+ INDEX_op_sub_vec, INDEX_op_add_vec,
1621
+ INDEX_op_umin_vec, INDEX_op_umax_vec, 0
1622
+ };
1623
+ static const GVecGen3 ops[4] = {
1624
+ { .fniv = gen_uaba_vec,
1625
+ .fno = gen_helper_gvec_uaba_b,
1626
+ .opt_opc = vecop_list,
1627
+ .load_dest = true,
1628
+ .vece = MO_8 },
1629
+ { .fniv = gen_uaba_vec,
1630
+ .fno = gen_helper_gvec_uaba_h,
1631
+ .opt_opc = vecop_list,
1632
+ .load_dest = true,
1633
+ .vece = MO_16 },
1634
+ { .fni4 = gen_uaba_i32,
1635
+ .fniv = gen_uaba_vec,
1636
+ .fno = gen_helper_gvec_uaba_s,
1637
+ .opt_opc = vecop_list,
1638
+ .load_dest = true,
1639
+ .vece = MO_32 },
1640
+ { .fni8 = gen_uaba_i64,
1641
+ .fniv = gen_uaba_vec,
1642
+ .fno = gen_helper_gvec_uaba_d,
1643
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1644
+ .opt_opc = vecop_list,
1645
+ .load_dest = true,
1646
+ .vece = MO_64 },
1647
+ };
1648
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
1649
+}
1650
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
1651
index XXXXXXX..XXXXXXX 100644
1652
--- a/target/arm/tcg/translate.c
1653
+++ b/target/arm/tcg/translate.c
1654
@@ -XXX,XX +XXX,XX @@ static void gen_exception_return(DisasContext *s, TCGv_i32 pc)
1655
gen_rfe(s, pc, load_cpu_field(spsr));
1656
}
1657
1658
-static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
1659
- uint32_t opr_sz, uint32_t max_sz,
1660
- gen_helper_gvec_3_ptr *fn)
1661
-{
1662
- TCGv_ptr qc_ptr = tcg_temp_new_ptr();
1663
-
1664
- tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
1665
- tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
1666
- opr_sz, max_sz, 0, fn);
1667
-}
1668
-
1669
-void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1670
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1671
-{
1672
- static gen_helper_gvec_3_ptr * const fns[2] = {
1673
- gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
1674
- };
1675
- tcg_debug_assert(vece >= 1 && vece <= 2);
1676
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
1677
-}
1678
-
1679
-void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
1680
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
1681
-{
1682
- static gen_helper_gvec_3_ptr * const fns[2] = {
1683
- gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
1684
- };
1685
- tcg_debug_assert(vece >= 1 && vece <= 2);
1686
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
1687
-}
1688
-
1689
-#define GEN_CMP0(NAME, COND) \
1690
- void NAME(unsigned vece, uint32_t d, uint32_t m, \
1691
- uint32_t opr_sz, uint32_t max_sz) \
1692
- { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
1693
-
1694
-GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
1695
-GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
1696
-GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
1697
-GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
1698
-GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
1699
-
1700
-#undef GEN_CMP0
1701
-
1702
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1703
-{
1704
- tcg_gen_vec_sar8i_i64(a, a, shift);
1705
- tcg_gen_vec_add8_i64(d, d, a);
1706
-}
1707
-
1708
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1709
-{
1710
- tcg_gen_vec_sar16i_i64(a, a, shift);
1711
- tcg_gen_vec_add16_i64(d, d, a);
1712
-}
1713
-
1714
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
1715
-{
1716
- tcg_gen_sari_i32(a, a, shift);
1717
- tcg_gen_add_i32(d, d, a);
1718
-}
1719
-
1720
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1721
-{
1722
- tcg_gen_sari_i64(a, a, shift);
1723
- tcg_gen_add_i64(d, d, a);
1724
-}
1725
-
1726
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
1727
-{
1728
- tcg_gen_sari_vec(vece, a, a, sh);
1729
- tcg_gen_add_vec(vece, d, d, a);
1730
-}
1731
-
1732
-void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
1733
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
1734
-{
1735
- static const TCGOpcode vecop_list[] = {
1736
- INDEX_op_sari_vec, INDEX_op_add_vec, 0
1737
- };
1738
- static const GVecGen2i ops[4] = {
1739
- { .fni8 = gen_ssra8_i64,
1740
- .fniv = gen_ssra_vec,
1741
- .fno = gen_helper_gvec_ssra_b,
1742
- .load_dest = true,
1743
- .opt_opc = vecop_list,
1744
- .vece = MO_8 },
1745
- { .fni8 = gen_ssra16_i64,
1746
- .fniv = gen_ssra_vec,
1747
- .fno = gen_helper_gvec_ssra_h,
1748
- .load_dest = true,
1749
- .opt_opc = vecop_list,
1750
- .vece = MO_16 },
1751
- { .fni4 = gen_ssra32_i32,
1752
- .fniv = gen_ssra_vec,
1753
- .fno = gen_helper_gvec_ssra_s,
1754
- .load_dest = true,
1755
- .opt_opc = vecop_list,
1756
- .vece = MO_32 },
1757
- { .fni8 = gen_ssra64_i64,
1758
- .fniv = gen_ssra_vec,
1759
- .fno = gen_helper_gvec_ssra_d,
1760
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1761
- .opt_opc = vecop_list,
1762
- .load_dest = true,
1763
- .vece = MO_64 },
1764
- };
1765
-
1766
- /* tszimm encoding produces immediates in the range [1..esize]. */
1767
- tcg_debug_assert(shift > 0);
1768
- tcg_debug_assert(shift <= (8 << vece));
1769
-
1770
- /*
1771
- * Shifts larger than the element size are architecturally valid.
1772
- * Signed results in all sign bits.
1773
- */
1774
- shift = MIN(shift, (8 << vece) - 1);
1775
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
1776
-}
1777
-
1778
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1779
-{
1780
- tcg_gen_vec_shr8i_i64(a, a, shift);
1781
- tcg_gen_vec_add8_i64(d, d, a);
1782
-}
1783
-
1784
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1785
-{
1786
- tcg_gen_vec_shr16i_i64(a, a, shift);
1787
- tcg_gen_vec_add16_i64(d, d, a);
1788
-}
1789
-
1790
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
1791
-{
1792
- tcg_gen_shri_i32(a, a, shift);
1793
- tcg_gen_add_i32(d, d, a);
1794
-}
1795
-
1796
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
1797
-{
1798
- tcg_gen_shri_i64(a, a, shift);
1799
- tcg_gen_add_i64(d, d, a);
1800
-}
1801
-
1802
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
1803
-{
1804
- tcg_gen_shri_vec(vece, a, a, sh);
1805
- tcg_gen_add_vec(vece, d, d, a);
1806
-}
1807
-
1808
-void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
1809
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
1810
-{
1811
- static const TCGOpcode vecop_list[] = {
1812
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
1813
- };
1814
- static const GVecGen2i ops[4] = {
1815
- { .fni8 = gen_usra8_i64,
1816
- .fniv = gen_usra_vec,
1817
- .fno = gen_helper_gvec_usra_b,
1818
- .load_dest = true,
1819
- .opt_opc = vecop_list,
1820
- .vece = MO_8, },
1821
- { .fni8 = gen_usra16_i64,
1822
- .fniv = gen_usra_vec,
1823
- .fno = gen_helper_gvec_usra_h,
1824
- .load_dest = true,
1825
- .opt_opc = vecop_list,
1826
- .vece = MO_16, },
1827
- { .fni4 = gen_usra32_i32,
1828
- .fniv = gen_usra_vec,
1829
- .fno = gen_helper_gvec_usra_s,
1830
- .load_dest = true,
1831
- .opt_opc = vecop_list,
1832
- .vece = MO_32, },
1833
- { .fni8 = gen_usra64_i64,
1834
- .fniv = gen_usra_vec,
1835
- .fno = gen_helper_gvec_usra_d,
1836
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1837
- .load_dest = true,
1838
- .opt_opc = vecop_list,
1839
- .vece = MO_64, },
1840
- };
1841
-
1842
- /* tszimm encoding produces immediates in the range [1..esize]. */
1843
- tcg_debug_assert(shift > 0);
1844
- tcg_debug_assert(shift <= (8 << vece));
1845
-
1846
- /*
1847
- * Shifts larger than the element size are architecturally valid.
1848
- * Unsigned results in all zeros as input to accumulate: nop.
1849
- */
1850
- if (shift < (8 << vece)) {
1851
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
1852
- } else {
1853
- /* Nop, but we do need to clear the tail. */
1854
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
1855
- }
1856
-}
1857
-
1858
-/*
1859
- * Shift one less than the requested amount, and the low bit is
1860
- * the rounding bit. For the 8 and 16-bit operations, because we
1861
- * mask the low bit, we can perform a normal integer shift instead
1862
- * of a vector shift.
1863
- */
1864
-static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1865
-{
1866
- TCGv_i64 t = tcg_temp_new_i64();
1867
-
1868
- tcg_gen_shri_i64(t, a, sh - 1);
1869
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
1870
- tcg_gen_vec_sar8i_i64(d, a, sh);
1871
- tcg_gen_vec_add8_i64(d, d, t);
1872
-}
1873
-
1874
-static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1875
-{
1876
- TCGv_i64 t = tcg_temp_new_i64();
1877
-
1878
- tcg_gen_shri_i64(t, a, sh - 1);
1879
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
1880
- tcg_gen_vec_sar16i_i64(d, a, sh);
1881
- tcg_gen_vec_add16_i64(d, d, t);
1882
-}
1883
-
1884
-static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
1885
-{
1886
- TCGv_i32 t;
1887
-
1888
- /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
1889
- if (sh == 32) {
1890
- tcg_gen_movi_i32(d, 0);
1891
- return;
1892
- }
1893
- t = tcg_temp_new_i32();
1894
- tcg_gen_extract_i32(t, a, sh - 1, 1);
1895
- tcg_gen_sari_i32(d, a, sh);
1896
- tcg_gen_add_i32(d, d, t);
1897
-}
1898
-
1899
-static void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1900
-{
1901
- TCGv_i64 t = tcg_temp_new_i64();
1902
-
1903
- tcg_gen_extract_i64(t, a, sh - 1, 1);
1904
- tcg_gen_sari_i64(d, a, sh);
1905
- tcg_gen_add_i64(d, d, t);
1906
-}
1907
-
1908
-static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
1909
-{
1910
- TCGv_vec t = tcg_temp_new_vec_matching(d);
1911
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
1912
-
1913
- tcg_gen_shri_vec(vece, t, a, sh - 1);
1914
- tcg_gen_dupi_vec(vece, ones, 1);
1915
- tcg_gen_and_vec(vece, t, t, ones);
1916
- tcg_gen_sari_vec(vece, d, a, sh);
1917
- tcg_gen_add_vec(vece, d, d, t);
1918
-}
1919
-
1920
-void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
1921
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
1922
-{
1923
- static const TCGOpcode vecop_list[] = {
1924
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
1925
- };
1926
- static const GVecGen2i ops[4] = {
1927
- { .fni8 = gen_srshr8_i64,
1928
- .fniv = gen_srshr_vec,
1929
- .fno = gen_helper_gvec_srshr_b,
1930
- .opt_opc = vecop_list,
1931
- .vece = MO_8 },
1932
- { .fni8 = gen_srshr16_i64,
1933
- .fniv = gen_srshr_vec,
1934
- .fno = gen_helper_gvec_srshr_h,
1935
- .opt_opc = vecop_list,
1936
- .vece = MO_16 },
1937
- { .fni4 = gen_srshr32_i32,
1938
- .fniv = gen_srshr_vec,
1939
- .fno = gen_helper_gvec_srshr_s,
1940
- .opt_opc = vecop_list,
1941
- .vece = MO_32 },
1942
- { .fni8 = gen_srshr64_i64,
1943
- .fniv = gen_srshr_vec,
1944
- .fno = gen_helper_gvec_srshr_d,
1945
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1946
- .opt_opc = vecop_list,
1947
- .vece = MO_64 },
1948
- };
1949
-
1950
- /* tszimm encoding produces immediates in the range [1..esize] */
1951
- tcg_debug_assert(shift > 0);
1952
- tcg_debug_assert(shift <= (8 << vece));
1953
-
1954
- if (shift == (8 << vece)) {
1955
- /*
1956
- * Shifts larger than the element size are architecturally valid.
1957
- * Signed results in all sign bits. With rounding, this produces
1958
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
1959
- * I.e. always zero.
1960
- */
1961
- tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
1962
- } else {
1963
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
1964
- }
1965
-}
1966
-
1967
-static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1968
-{
1969
- TCGv_i64 t = tcg_temp_new_i64();
1970
-
1971
- gen_srshr8_i64(t, a, sh);
1972
- tcg_gen_vec_add8_i64(d, d, t);
1973
-}
1974
-
1975
-static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1976
-{
1977
- TCGv_i64 t = tcg_temp_new_i64();
1978
-
1979
- gen_srshr16_i64(t, a, sh);
1980
- tcg_gen_vec_add16_i64(d, d, t);
1981
-}
1982
-
1983
-static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
1984
-{
1985
- TCGv_i32 t = tcg_temp_new_i32();
1986
-
1987
- gen_srshr32_i32(t, a, sh);
1988
- tcg_gen_add_i32(d, d, t);
1989
-}
1990
-
1991
-static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
1992
-{
1993
- TCGv_i64 t = tcg_temp_new_i64();
1994
-
1995
- gen_srshr64_i64(t, a, sh);
1996
- tcg_gen_add_i64(d, d, t);
1997
-}
1998
-
1999
-static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
2000
-{
2001
- TCGv_vec t = tcg_temp_new_vec_matching(d);
2002
-
2003
- gen_srshr_vec(vece, t, a, sh);
2004
- tcg_gen_add_vec(vece, d, d, t);
2005
-}
2006
-
2007
-void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
2008
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
2009
-{
2010
- static const TCGOpcode vecop_list[] = {
2011
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
2012
- };
2013
- static const GVecGen2i ops[4] = {
2014
- { .fni8 = gen_srsra8_i64,
2015
- .fniv = gen_srsra_vec,
2016
- .fno = gen_helper_gvec_srsra_b,
2017
- .opt_opc = vecop_list,
2018
- .load_dest = true,
2019
- .vece = MO_8 },
2020
- { .fni8 = gen_srsra16_i64,
2021
- .fniv = gen_srsra_vec,
2022
- .fno = gen_helper_gvec_srsra_h,
2023
- .opt_opc = vecop_list,
2024
- .load_dest = true,
2025
- .vece = MO_16 },
2026
- { .fni4 = gen_srsra32_i32,
2027
- .fniv = gen_srsra_vec,
2028
- .fno = gen_helper_gvec_srsra_s,
2029
- .opt_opc = vecop_list,
2030
- .load_dest = true,
2031
- .vece = MO_32 },
2032
- { .fni8 = gen_srsra64_i64,
2033
- .fniv = gen_srsra_vec,
2034
- .fno = gen_helper_gvec_srsra_d,
2035
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
2036
- .opt_opc = vecop_list,
2037
- .load_dest = true,
2038
- .vece = MO_64 },
2039
- };
2040
-
2041
- /* tszimm encoding produces immediates in the range [1..esize] */
2042
- tcg_debug_assert(shift > 0);
2043
- tcg_debug_assert(shift <= (8 << vece));
2044
-
2045
- /*
2046
- * Shifts larger than the element size are architecturally valid.
2047
- * Signed results in all sign bits. With rounding, this produces
2048
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
2049
- * I.e. always zero. With accumulation, this leaves D unchanged.
2050
- */
2051
- if (shift == (8 << vece)) {
2052
- /* Nop, but we do need to clear the tail. */
2053
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
2054
- } else {
2055
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
2056
- }
2057
-}
2058
-
2059
-static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2060
-{
2061
- TCGv_i64 t = tcg_temp_new_i64();
2062
-
2063
- tcg_gen_shri_i64(t, a, sh - 1);
2064
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
2065
- tcg_gen_vec_shr8i_i64(d, a, sh);
2066
- tcg_gen_vec_add8_i64(d, d, t);
2067
-}
2068
-
2069
-static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2070
-{
2071
- TCGv_i64 t = tcg_temp_new_i64();
2072
-
2073
- tcg_gen_shri_i64(t, a, sh - 1);
2074
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
2075
- tcg_gen_vec_shr16i_i64(d, a, sh);
2076
- tcg_gen_vec_add16_i64(d, d, t);
2077
-}
2078
-
2079
-static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
2080
-{
2081
- TCGv_i32 t;
2082
-
2083
- /* Handle shift by the input size for the benefit of trans_URSHR_ri */
2084
- if (sh == 32) {
2085
- tcg_gen_extract_i32(d, a, sh - 1, 1);
2086
- return;
2087
- }
2088
- t = tcg_temp_new_i32();
2089
- tcg_gen_extract_i32(t, a, sh - 1, 1);
2090
- tcg_gen_shri_i32(d, a, sh);
2091
- tcg_gen_add_i32(d, d, t);
2092
-}
2093
-
2094
-static void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2095
-{
2096
- TCGv_i64 t = tcg_temp_new_i64();
2097
-
2098
- tcg_gen_extract_i64(t, a, sh - 1, 1);
2099
- tcg_gen_shri_i64(d, a, sh);
2100
- tcg_gen_add_i64(d, d, t);
2101
-}
2102
-
2103
-static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
2104
-{
2105
- TCGv_vec t = tcg_temp_new_vec_matching(d);
2106
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
2107
-
2108
- tcg_gen_shri_vec(vece, t, a, shift - 1);
2109
- tcg_gen_dupi_vec(vece, ones, 1);
2110
- tcg_gen_and_vec(vece, t, t, ones);
2111
- tcg_gen_shri_vec(vece, d, a, shift);
2112
- tcg_gen_add_vec(vece, d, d, t);
2113
-}
2114
-
2115
-void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
2116
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
2117
-{
2118
- static const TCGOpcode vecop_list[] = {
2119
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
2120
- };
2121
- static const GVecGen2i ops[4] = {
2122
- { .fni8 = gen_urshr8_i64,
2123
- .fniv = gen_urshr_vec,
2124
- .fno = gen_helper_gvec_urshr_b,
2125
- .opt_opc = vecop_list,
2126
- .vece = MO_8 },
2127
- { .fni8 = gen_urshr16_i64,
2128
- .fniv = gen_urshr_vec,
2129
- .fno = gen_helper_gvec_urshr_h,
2130
- .opt_opc = vecop_list,
2131
- .vece = MO_16 },
2132
- { .fni4 = gen_urshr32_i32,
2133
- .fniv = gen_urshr_vec,
2134
- .fno = gen_helper_gvec_urshr_s,
2135
- .opt_opc = vecop_list,
2136
- .vece = MO_32 },
2137
- { .fni8 = gen_urshr64_i64,
2138
- .fniv = gen_urshr_vec,
2139
- .fno = gen_helper_gvec_urshr_d,
2140
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
2141
- .opt_opc = vecop_list,
2142
- .vece = MO_64 },
2143
- };
2144
-
2145
- /* tszimm encoding produces immediates in the range [1..esize] */
2146
- tcg_debug_assert(shift > 0);
2147
- tcg_debug_assert(shift <= (8 << vece));
2148
-
2149
- if (shift == (8 << vece)) {
2150
- /*
2151
- * Shifts larger than the element size are architecturally valid.
2152
- * Unsigned results in zero. With rounding, this produces a
2153
- * copy of the most significant bit.
2154
- */
2155
- tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
2156
- } else {
2157
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
2158
- }
2159
-}
2160
-
2161
-static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2162
-{
2163
- TCGv_i64 t = tcg_temp_new_i64();
2164
-
2165
- if (sh == 8) {
2166
- tcg_gen_vec_shr8i_i64(t, a, 7);
2167
- } else {
2168
- gen_urshr8_i64(t, a, sh);
2169
- }
2170
- tcg_gen_vec_add8_i64(d, d, t);
2171
-}
2172
-
2173
-static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2174
-{
2175
- TCGv_i64 t = tcg_temp_new_i64();
2176
-
2177
- if (sh == 16) {
2178
- tcg_gen_vec_shr16i_i64(t, a, 15);
2179
- } else {
2180
- gen_urshr16_i64(t, a, sh);
2181
- }
2182
- tcg_gen_vec_add16_i64(d, d, t);
2183
-}
2184
-
2185
-static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
2186
-{
2187
- TCGv_i32 t = tcg_temp_new_i32();
2188
-
2189
- if (sh == 32) {
2190
- tcg_gen_shri_i32(t, a, 31);
2191
- } else {
2192
- gen_urshr32_i32(t, a, sh);
2193
- }
2194
- tcg_gen_add_i32(d, d, t);
2195
-}
2196
-
2197
-static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
2198
-{
2199
- TCGv_i64 t = tcg_temp_new_i64();
2200
-
2201
- if (sh == 64) {
2202
- tcg_gen_shri_i64(t, a, 63);
2203
- } else {
2204
- gen_urshr64_i64(t, a, sh);
2205
- }
2206
- tcg_gen_add_i64(d, d, t);
2207
-}
2208
-
2209
-static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
2210
-{
2211
- TCGv_vec t = tcg_temp_new_vec_matching(d);
2212
-
2213
- if (sh == (8 << vece)) {
2214
- tcg_gen_shri_vec(vece, t, a, sh - 1);
2215
- } else {
2216
- gen_urshr_vec(vece, t, a, sh);
2217
- }
2218
- tcg_gen_add_vec(vece, d, d, t);
2219
-}
2220
-
2221
-void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
2222
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
2223
-{
2224
- static const TCGOpcode vecop_list[] = {
2225
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
2226
- };
2227
- static const GVecGen2i ops[4] = {
2228
- { .fni8 = gen_ursra8_i64,
2229
- .fniv = gen_ursra_vec,
2230
- .fno = gen_helper_gvec_ursra_b,
2231
- .opt_opc = vecop_list,
2232
- .load_dest = true,
2233
- .vece = MO_8 },
2234
- { .fni8 = gen_ursra16_i64,
2235
-          .fniv = gen_ursra_vec,
-          .fno = gen_helper_gvec_ursra_h,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_16 },
-        { .fni4 = gen_ursra32_i32,
-          .fniv = gen_ursra_vec,
-          .fno = gen_helper_gvec_ursra_s,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_32 },
-        { .fni8 = gen_ursra64_i64,
-          .fniv = gen_ursra_vec,
-          .fno = gen_helper_gvec_ursra_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_64 },
-    };
-
-    /* tszimm encoding produces immediates in the range [1..esize] */
-    tcg_debug_assert(shift > 0);
-    tcg_debug_assert(shift <= (8 << vece));
-
-    tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-}
-
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shri_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_shri_i32(a, a, shift);
-    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
-}
-
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_shri_i64(a, a, shift);
-    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
-}
-
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
-    tcg_gen_shri_vec(vece, t, a, sh);
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
-                  int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
-    const GVecGen2i ops[4] = {
-        { .fni8 = gen_shr8_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .fno = gen_helper_gvec_sri_b,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fni8 = gen_shr16_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .fno = gen_helper_gvec_sri_h,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_shr32_ins_i32,
-          .fniv = gen_shr_ins_vec,
-          .fno = gen_helper_gvec_sri_s,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_shr64_ins_i64,
-          .fniv = gen_shr_ins_vec,
-          .fno = gen_helper_gvec_sri_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-
-    /* tszimm encoding produces immediates in the range [1..esize]. */
-    tcg_debug_assert(shift > 0);
-    tcg_debug_assert(shift <= (8 << vece));
-
-    /* Shift of esize leaves destination unchanged. */
-    if (shift < (8 << vece)) {
-        tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-    } else {
-        /* Nop, but we do need to clear the tail. */
-        tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
-    }
-}
-
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_shli_i64(t, a, shift);
-    tcg_gen_andi_i64(t, t, mask);
-    tcg_gen_andi_i64(d, d, ~mask);
-    tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
-}
-
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
-}
-
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    TCGv_vec m = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_shli_vec(vece, t, a, sh);
-    tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
-    tcg_gen_and_vec(vece, d, d, m);
-    tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
-                  int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
-    const GVecGen2i ops[4] = {
-        { .fni8 = gen_shl8_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .fno = gen_helper_gvec_sli_b,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fni8 = gen_shl16_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .fno = gen_helper_gvec_sli_h,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_shl32_ins_i32,
-          .fniv = gen_shl_ins_vec,
-          .fno = gen_helper_gvec_sli_s,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_shl64_ins_i64,
-          .fniv = gen_shl_ins_vec,
-          .fno = gen_helper_gvec_sli_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-
-    /* tszimm encoding produces immediates in the range [0..esize-1]. */
-    tcg_debug_assert(shift >= 0);
-    tcg_debug_assert(shift < (8 << vece));
-
-    if (shift == 0) {
-        tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
-    } else {
-        tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-    }
-}
-
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    gen_helper_neon_mul_u8(a, a, b);
-    gen_helper_neon_add_u8(d, d, a);
-}
-
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    gen_helper_neon_mul_u8(a, a, b);
-    gen_helper_neon_sub_u8(d, d, a);
-}
-
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    gen_helper_neon_mul_u16(a, a, b);
-    gen_helper_neon_add_u16(d, d, a);
-}
-
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    gen_helper_neon_mul_u16(a, a, b);
-    gen_helper_neon_sub_u16(d, d, a);
-}
-
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    tcg_gen_mul_i32(a, a, b);
-    tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    tcg_gen_mul_i32(a, a, b);
-    tcg_gen_sub_i32(d, d, a);
-}
-
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    tcg_gen_mul_i64(a, a, b);
-    tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    tcg_gen_mul_i64(a, a, b);
-    tcg_gen_sub_i64(d, d, a);
-}
-
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    tcg_gen_mul_vec(vece, a, a, b);
-    tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    tcg_gen_mul_vec(vece, a, a, b);
-    tcg_gen_sub_vec(vece, d, d, a);
-}
-
-/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
- * these tables are shared with AArch64 which does support them.
- */
-void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                  uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_mul_vec, INDEX_op_add_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fni4 = gen_mla8_i32,
-          .fniv = gen_mla_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fni4 = gen_mla16_i32,
-          .fniv = gen_mla_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_mla32_i32,
-          .fniv = gen_mla_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_mla64_i64,
-          .fniv = gen_mla_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                  uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_mul_vec, INDEX_op_sub_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fni4 = gen_mls8_i32,
-          .fniv = gen_mls_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fni4 = gen_mls16_i32,
-          .fniv = gen_mls_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_mls32_i32,
-          .fniv = gen_mls_vec,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_mls64_i64,
-          .fniv = gen_mls_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-/* CMTST : test is "if (X & Y != 0)". */
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    tcg_gen_and_i32(d, a, b);
-    tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
-}
-
-void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    tcg_gen_and_i64(d, a, b);
-    tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
-}
-
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    tcg_gen_and_vec(vece, d, a, b);
-    tcg_gen_dupi_vec(vece, a, 0);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
-}
-
-void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                    uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
-    static const GVecGen3 ops[4] = {
-        { .fni4 = gen_helper_neon_tst_u8,
-          .fniv = gen_cmtst_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fni4 = gen_helper_neon_tst_u16,
-          .fniv = gen_cmtst_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_cmtst_i32,
-          .fniv = gen_cmtst_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_cmtst_i64,
-          .fniv = gen_cmtst_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
-    TCGv_i32 lval = tcg_temp_new_i32();
-    TCGv_i32 rval = tcg_temp_new_i32();
-    TCGv_i32 lsh = tcg_temp_new_i32();
-    TCGv_i32 rsh = tcg_temp_new_i32();
-    TCGv_i32 zero = tcg_constant_i32(0);
-    TCGv_i32 max = tcg_constant_i32(32);
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_ext8s_i32(lsh, shift);
-    tcg_gen_neg_i32(rsh, lsh);
-    tcg_gen_shl_i32(lval, src, lsh);
-    tcg_gen_shr_i32(rval, src, rsh);
-    tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
-    tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
-    TCGv_i64 lval = tcg_temp_new_i64();
-    TCGv_i64 rval = tcg_temp_new_i64();
-    TCGv_i64 lsh = tcg_temp_new_i64();
-    TCGv_i64 rsh = tcg_temp_new_i64();
-    TCGv_i64 zero = tcg_constant_i64(0);
-    TCGv_i64 max = tcg_constant_i64(64);
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_ext8s_i64(lsh, shift);
-    tcg_gen_neg_i64(rsh, lsh);
-    tcg_gen_shl_i64(lval, src, lsh);
-    tcg_gen_shr_i64(rval, src, rsh);
-    tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
-    tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
-                         TCGv_vec src, TCGv_vec shift)
-{
-    TCGv_vec lval = tcg_temp_new_vec_matching(dst);
-    TCGv_vec rval = tcg_temp_new_vec_matching(dst);
-    TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
-    TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
-    TCGv_vec msk, max;
-
-    tcg_gen_neg_vec(vece, rsh, shift);
-    if (vece == MO_8) {
-        tcg_gen_mov_vec(lsh, shift);
-    } else {
-        msk = tcg_temp_new_vec_matching(dst);
-        tcg_gen_dupi_vec(vece, msk, 0xff);
-        tcg_gen_and_vec(vece, lsh, shift, msk);
-        tcg_gen_and_vec(vece, rsh, rsh, msk);
-    }
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_shlv_vec(vece, lval, src, lsh);
-    tcg_gen_shrv_vec(vece, rval, src, rsh);
-
-    max = tcg_temp_new_vec_matching(dst);
-    tcg_gen_dupi_vec(vece, max, 8 << vece);
-
-    /*
-     * The choice of LT (signed) and GEU (unsigned) are biased toward
-     * the instructions of the x86_64 host.  For MO_8, the whole byte
-     * is significant so we must use an unsigned compare; otherwise we
-     * have already masked to a byte and so a signed compare works.
-     * Other tcg hosts have a full set of comparisons and do not care.
-     */
-    if (vece == MO_8) {
-        tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
-        tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
-        tcg_gen_andc_vec(vece, lval, lval, lsh);
-        tcg_gen_andc_vec(vece, rval, rval, rsh);
-    } else {
-        tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
-        tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
-        tcg_gen_and_vec(vece, lval, lval, lsh);
-        tcg_gen_and_vec(vece, rval, rval, rsh);
-    }
-    tcg_gen_or_vec(vece, dst, lval, rval);
-}
-
-void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_neg_vec, INDEX_op_shlv_vec,
-        INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_ushl_vec,
-          .fno = gen_helper_gvec_ushl_b,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fniv = gen_ushl_vec,
-          .fno = gen_helper_gvec_ushl_h,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_ushl_i32,
-          .fniv = gen_ushl_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_ushl_i64,
-          .fniv = gen_ushl_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
-    TCGv_i32 lval = tcg_temp_new_i32();
-    TCGv_i32 rval = tcg_temp_new_i32();
-    TCGv_i32 lsh = tcg_temp_new_i32();
-    TCGv_i32 rsh = tcg_temp_new_i32();
-    TCGv_i32 zero = tcg_constant_i32(0);
-    TCGv_i32 max = tcg_constant_i32(31);
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_ext8s_i32(lsh, shift);
-    tcg_gen_neg_i32(rsh, lsh);
-    tcg_gen_shl_i32(lval, src, lsh);
-    tcg_gen_umin_i32(rsh, rsh, max);
-    tcg_gen_sar_i32(rval, src, rsh);
-    tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
-    tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
-    TCGv_i64 lval = tcg_temp_new_i64();
-    TCGv_i64 rval = tcg_temp_new_i64();
-    TCGv_i64 lsh = tcg_temp_new_i64();
-    TCGv_i64 rsh = tcg_temp_new_i64();
-    TCGv_i64 zero = tcg_constant_i64(0);
-    TCGv_i64 max = tcg_constant_i64(63);
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_ext8s_i64(lsh, shift);
-    tcg_gen_neg_i64(rsh, lsh);
-    tcg_gen_shl_i64(lval, src, lsh);
-    tcg_gen_umin_i64(rsh, rsh, max);
-    tcg_gen_sar_i64(rval, src, rsh);
-    tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
-    tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
-                         TCGv_vec src, TCGv_vec shift)
-{
-    TCGv_vec lval = tcg_temp_new_vec_matching(dst);
-    TCGv_vec rval = tcg_temp_new_vec_matching(dst);
-    TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
-    TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
-    TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
-
-    /*
-     * Rely on the TCG guarantee that out of range shifts produce
-     * unspecified results, not undefined behaviour (i.e. no trap).
-     * Discard out-of-range results after the fact.
-     */
-    tcg_gen_neg_vec(vece, rsh, shift);
-    if (vece == MO_8) {
-        tcg_gen_mov_vec(lsh, shift);
-    } else {
-        tcg_gen_dupi_vec(vece, tmp, 0xff);
-        tcg_gen_and_vec(vece, lsh, shift, tmp);
-        tcg_gen_and_vec(vece, rsh, rsh, tmp);
-    }
-
-    /* Bound rsh so out of bound right shift gets -1. */
-    tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
-    tcg_gen_umin_vec(vece, rsh, rsh, tmp);
-    tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
-
-    tcg_gen_shlv_vec(vece, lval, src, lsh);
-    tcg_gen_sarv_vec(vece, rval, src, rsh);
-
-    /* Select in-bound left shift. */
-    tcg_gen_andc_vec(vece, lval, lval, tmp);
-
-    /* Select between left and right shift. */
-    if (vece == MO_8) {
-        tcg_gen_dupi_vec(vece, tmp, 0);
-        tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
-    } else {
-        tcg_gen_dupi_vec(vece, tmp, 0x80);
-        tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
-    }
-}
-
-void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
-        INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_sshl_vec,
-          .fno = gen_helper_gvec_sshl_b,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fniv = gen_sshl_vec,
-          .fno = gen_helper_gvec_sshl_h,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_sshl_i32,
-          .fniv = gen_sshl_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_sshl_i64,
-          .fniv = gen_sshl_vec,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
-                          TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec x = tcg_temp_new_vec_matching(t);
-    tcg_gen_add_vec(vece, x, a, b);
-    tcg_gen_usadd_vec(vece, t, a, b);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
-    tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                       uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
-    };
-    static const GVecGen4 ops[4] = {
-        { .fniv = gen_uqadd_vec,
-          .fno = gen_helper_gvec_uqadd_b,
-          .write_aofs = true,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fniv = gen_uqadd_vec,
-          .fno = gen_helper_gvec_uqadd_h,
-          .write_aofs = true,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fniv = gen_uqadd_vec,
-          .fno = gen_helper_gvec_uqadd_s,
-          .write_aofs = true,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fniv = gen_uqadd_vec,
-          .fno = gen_helper_gvec_uqadd_d,
-          .write_aofs = true,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
-                   rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
-                          TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec x = tcg_temp_new_vec_matching(t);
-    tcg_gen_add_vec(vece, x, a, b);
-    tcg_gen_ssadd_vec(vece, t, a, b);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
-    tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                       uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
-    };
-    static const GVecGen4 ops[4] = {
-        { .fniv = gen_sqadd_vec,
-          .fno = gen_helper_gvec_sqadd_b,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_8 },
-        { .fniv = gen_sqadd_vec,
-          .fno = gen_helper_gvec_sqadd_h,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_16 },
-        { .fniv = gen_sqadd_vec,
-          .fno = gen_helper_gvec_sqadd_s,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_32 },
-        { .fniv = gen_sqadd_vec,
-          .fno = gen_helper_gvec_sqadd_d,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
-                   rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
-                          TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec x = tcg_temp_new_vec_matching(t);
-    tcg_gen_sub_vec(vece, x, a, b);
-    tcg_gen_ussub_vec(vece, t, a, b);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
-    tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                       uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
-    };
-    static const GVecGen4 ops[4] = {
-        { .fniv = gen_uqsub_vec,
-          .fno = gen_helper_gvec_uqsub_b,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_8 },
-        { .fniv = gen_uqsub_vec,
-          .fno = gen_helper_gvec_uqsub_h,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_16 },
-        { .fniv = gen_uqsub_vec,
-          .fno = gen_helper_gvec_uqsub_s,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_32 },
-        { .fniv = gen_uqsub_vec,
-          .fno = gen_helper_gvec_uqsub_d,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
-                   rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
-                          TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec x = tcg_temp_new_vec_matching(t);
-    tcg_gen_sub_vec(vece, x, a, b);
-    tcg_gen_sssub_vec(vece, t, a, b);
-    tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
-    tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                       uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
-    };
-    static const GVecGen4 ops[4] = {
-        { .fniv = gen_sqsub_vec,
-          .fno = gen_helper_gvec_sqsub_b,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_8 },
-        { .fniv = gen_sqsub_vec,
-          .fno = gen_helper_gvec_sqsub_h,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_16 },
-        { .fniv = gen_sqsub_vec,
-          .fno = gen_helper_gvec_sqsub_s,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_32 },
-        { .fniv = gen_sqsub_vec,
-          .fno = gen_helper_gvec_sqsub_d,
-          .opt_opc = vecop_list,
-          .write_aofs = true,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
-                   rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    TCGv_i32 t = tcg_temp_new_i32();
-
-    tcg_gen_sub_i32(t, a, b);
-    tcg_gen_sub_i32(d, b, a);
-    tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_sub_i64(t, a, b);
-    tcg_gen_sub_i64(d, b, a);
-    tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_smin_vec(vece, t, a, b);
-    tcg_gen_smax_vec(vece, d, a, b);
-    tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_sabd_vec,
-          .fno = gen_helper_gvec_sabd_b,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fniv = gen_sabd_vec,
-          .fno = gen_helper_gvec_sabd_h,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_sabd_i32,
-          .fniv = gen_sabd_vec,
-          .fno = gen_helper_gvec_sabd_s,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_sabd_i64,
-          .fniv = gen_sabd_vec,
-          .fno = gen_helper_gvec_sabd_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    TCGv_i32 t = tcg_temp_new_i32();
-
-    tcg_gen_sub_i32(t, a, b);
-    tcg_gen_sub_i32(d, b, a);
-    tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-
-    tcg_gen_sub_i64(t, a, b);
-    tcg_gen_sub_i64(d, b, a);
-    tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-
-    tcg_gen_umin_vec(vece, t, a, b);
-    tcg_gen_umax_vec(vece, d, a, b);
-    tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_uabd_vec,
-          .fno = gen_helper_gvec_uabd_b,
-          .opt_opc = vecop_list,
-          .vece = MO_8 },
-        { .fniv = gen_uabd_vec,
-          .fno = gen_helper_gvec_uabd_h,
-          .opt_opc = vecop_list,
-          .vece = MO_16 },
-        { .fni4 = gen_uabd_i32,
-          .fniv = gen_uabd_vec,
-          .fno = gen_helper_gvec_uabd_s,
-          .opt_opc = vecop_list,
-          .vece = MO_32 },
-        { .fni8 = gen_uabd_i64,
-          .fniv = gen_uabd_vec,
-          .fno = gen_helper_gvec_uabd_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    TCGv_i32 t = tcg_temp_new_i32();
-    gen_sabd_i32(t, a, b);
-    tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-    gen_sabd_i64(t, a, b);
-    tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    gen_sabd_vec(vece, t, a, b);
-    tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_sub_vec, INDEX_op_add_vec,
-        INDEX_op_smin_vec, INDEX_op_smax_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_saba_vec,
-          .fno = gen_helper_gvec_saba_b,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_8 },
-        { .fniv = gen_saba_vec,
-          .fno = gen_helper_gvec_saba_h,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_16 },
-        { .fni4 = gen_saba_i32,
-          .fniv = gen_saba_vec,
-          .fno = gen_helper_gvec_saba_s,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_32 },
-        { .fni8 = gen_saba_i64,
-          .fniv = gen_saba_vec,
-          .fno = gen_helper_gvec_saba_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
-    TCGv_i32 t = tcg_temp_new_i32();
-    gen_uabd_i32(t, a, b);
-    tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
-    TCGv_i64 t = tcg_temp_new_i64();
-    gen_uabd_i64(t, a, b);
-    tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
-    TCGv_vec t = tcg_temp_new_vec_matching(d);
-    gen_uabd_vec(vece, t, a, b);
-    tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
-                   uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
-    static const TCGOpcode vecop_list[] = {
-        INDEX_op_sub_vec, INDEX_op_add_vec,
-        INDEX_op_umin_vec, INDEX_op_umax_vec, 0
-    };
-    static const GVecGen3 ops[4] = {
-        { .fniv = gen_uaba_vec,
-          .fno = gen_helper_gvec_uaba_b,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_8 },
-        { .fniv = gen_uaba_vec,
-          .fno = gen_helper_gvec_uaba_h,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_16 },
-        { .fni4 = gen_uaba_i32,
-          .fniv = gen_uaba_vec,
-          .fno = gen_helper_gvec_uaba_s,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_32 },
-        { .fni8 = gen_uaba_i64,
-          .fniv = gen_uaba_vec,
-          .fno = gen_helper_gvec_uaba_d,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .opt_opc = vecop_list,
-          .load_dest = true,
-          .vece = MO_64 },
-    };
-    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
 static bool aa32_cpreg_encoding_in_impdef_space(uint8_t crn, uint8_t crm)
 {
     static const uint16_t mask[3] = {
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: gen_a64)

 arm_ss.add(files(
   'cpu32.c',
+  'gengvec.c',
   'translate.c',
   'translate-m-nocp.c',
   'translate-mve.c',
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This is the last instruction within disas_fp_2src,
so remove that and its subroutines.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 1 +
 target/arm/tcg/translate-a64.c | 177 +++++----------------------------
 2 files changed, 27 insertions(+), 151 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
 FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
 FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
 FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+FNMUL_s 0001 1110 ..1 ..... 1000 10 ..... ..... @rrr_hsd

 FMAX_s 0001 1110 ..1 ..... 0100 10 ..... ..... @rrr_hsd
 FMIN_s 0001 1110 ..1 ..... 0101 10 ..... ..... @rrr_hsd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_fmulx = {
 };
 TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)

+static void gen_fnmul_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+    gen_helper_vfp_mulh(d, n, m, s);
+    gen_vfp_negh(d, d);
+}
+
+static void gen_fnmul_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+    gen_helper_vfp_muls(d, n, m, s);
+    gen_vfp_negs(d, d);
+}
+
+static void gen_fnmul_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+    gen_helper_vfp_muld(d, n, m, s);
+    gen_vfp_negd(d, d);
+}
+
+static const FPScalar f_scalar_fnmul = {
+    gen_fnmul_h,
+    gen_fnmul_s,
+    gen_fnmul_d,
+};
+TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)
+
 static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
                           gen_helper_gvec_3_ptr * const fns[3])
 {
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
     }
 }

-/* Floating-point data-processing (2 source) - single precision */
-static void handle_fp_2src_single(DisasContext *s, int opcode,
-                                  int rd, int rn, int rm)
-{
-    TCGv_i32 tcg_op1;
-    TCGv_i32 tcg_op2;
-    TCGv_i32 tcg_res;
-    TCGv_ptr fpst;
-
-    tcg_res = tcg_temp_new_i32();
-    fpst = fpstatus_ptr(FPST_FPCR);
-    tcg_op1 = read_fp_sreg(s, rn);
-    tcg_op2 = read_fp_sreg(s, rm);
-
-    switch (opcode) {
-    case 0x8: /* FNMUL */
-        gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
-        gen_vfp_negs(tcg_res, tcg_res);
-        break;
-    default:
-    case 0x0: /* FMUL */
-    case 0x1: /* FDIV */
-    case 0x2: /* FADD */
-    case 0x3: /* FSUB */
-    case 0x4: /* FMAX */
-    case 0x5: /* FMIN */
-    case 0x6: /* FMAXNM */
-    case 0x7: /* FMINNM */
-        g_assert_not_reached();
-    }
-
-    write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - double precision */
-static void handle_fp_2src_double(DisasContext *s, int opcode,
-                                  int rd, int rn, int rm)
-{
-    TCGv_i64 tcg_op1;
-    TCGv_i64 tcg_op2;
-    TCGv_i64 tcg_res;
-    TCGv_ptr fpst;
-
-    tcg_res = tcg_temp_new_i64();
-    fpst = fpstatus_ptr(FPST_FPCR);
-    tcg_op1 = read_fp_dreg(s, rn);
-    tcg_op2 = read_fp_dreg(s, rm);
-
-    switch (opcode) {
-    case 0x8: /* FNMUL */
-        gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
-        gen_vfp_negd(tcg_res, tcg_res);
-        break;
-    default:
-    case 0x0: /* FMUL */
-    case 0x1: /* FDIV */
-    case 0x2: /* FADD */
-    case 0x3: /* FSUB */
-    case 0x4: /* FMAX */
-    case 0x5: /* FMIN */
-    case 0x6: /* FMAXNM */
-    case 0x7: /* FMINNM */
-        g_assert_not_reached();
-    }
-
-    write_fp_dreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - half precision */
-static void handle_fp_2src_half(DisasContext *s, int opcode,
-                                int rd, int rn, int rm)
-{
-    TCGv_i32 tcg_op1;
-    TCGv_i32 tcg_op2;
-    TCGv_i32 tcg_res;
-    TCGv_ptr fpst;
-
-    tcg_res = tcg_temp_new_i32();
-    fpst = fpstatus_ptr(FPST_FPCR_F16);
-    tcg_op1 = read_fp_hreg(s, rn);
-    tcg_op2 = read_fp_hreg(s, rm);
-
-    switch (opcode) {
-    case 0x8: /* FNMUL */
-        gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
-        gen_vfp_negh(tcg_res, tcg_res);
-        break;
-    default:
-    case 0x0: /* FMUL */
-    case 0x1: /* FDIV */
-    case 0x2: /* FADD */
-    case 0x3: /* FSUB */
-    case 0x4: /* FMAX */
-    case 0x5: /* FMIN */
-    case 0x6: /* FMAXNM */
-    case 0x7: /* FMINNM */
-        g_assert_not_reached();
-    }
-
-    write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating point data-processing (2 source)
- *   31  30  29 28       24 23  22  21 20  16 15     12 11 10 9    5 4    0
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- * | M | 0 | S | 1 1 1 1 0 | type | 1 |  Rm  | opcode | 1 0 |  Rn  |  Rd  |
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- */
-static void disas_fp_2src(DisasContext *s, uint32_t insn)
-{
-    int mos = extract32(insn, 29, 3);
-    int type = extract32(insn, 22, 2);
-    int rd = extract32(insn, 0, 5);
-    int rn = extract32(insn, 5, 5);
-    int rm = extract32(insn, 16, 5);
-    int opcode = extract32(insn, 12, 4);
-
-    if (opcode > 8 || mos) {
-        unallocated_encoding(s);
-        return;
-    }
-
-    switch (type) {
-    case 0:
-        if (!fp_access_check(s)) {
-            return;
-        }
-        handle_fp_2src_single(s, opcode, rd, rn, rm);
-        break;
-    case 1:
-        if (!fp_access_check(s)) {
-            return;
-        }
-        handle_fp_2src_double(s, opcode, rd, rn, rm);
-        break;
-    case 3:
-        if (!dc_isar_feature(aa64_fp16, s)) {
-            unallocated_encoding(s);
-            return;
-        }
-        if (!fp_access_check(s)) {
-            return;
-        }
-        handle_fp_2src_half(s, opcode, rd, rn, rm);
-        break;
-    default:
-        unallocated_encoding(s);
-    }
-}
-
 /* Floating-point data-processing (3 source) - single precision */
 static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
                                   int rd, int rn, int rm, int ra)
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
         break;
     case 2:
         /* Floating point data-processing (2 source) */
-        disas_fp_2src(s, insn);
+        unallocated_encoding(s); /* in decodetree */
         break;
     case 3:
         /* Floating point conditional select */
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

This function is part of the public interface and
is not "specialized" to any target in any way.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat.c | 52 ++++++++++++++++++++++++++++++++++
 fpu/softfloat-specialize.c.inc | 52 ----------------------------------
 2 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
     *zExpPtr = 1 - shiftCount;
 }

+/*----------------------------------------------------------------------------
+| Takes two extended double-precision floating-point values `a' and `b', one
+| of which is a NaN, and returns the appropriate NaN result.  If either `a' or
+| `b' is a signaling NaN, the invalid exception is raised.
+*----------------------------------------------------------------------------*/
+
+floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
+{
+    bool aIsLargerSignificand;
+    FloatClass a_cls, b_cls;
+
+    /* This is not complete, but is good enough for pickNaN. */
+    a_cls = (!floatx80_is_any_nan(a)
+             ? float_class_normal
+             : floatx80_is_signaling_nan(a, status)
+             ? float_class_snan
+             : float_class_qnan);
+    b_cls = (!floatx80_is_any_nan(b)
+             ? float_class_normal
+             : floatx80_is_signaling_nan(b, status)
+             ? float_class_snan
+             : float_class_qnan);
+
+    if (is_snan(a_cls) || is_snan(b_cls)) {
+        float_raise(float_flag_invalid, status);
+    }
+
+    if (status->default_nan_mode) {
+        return floatx80_default_nan(status);
+    }
+
+    if (a.low < b.low) {
+        aIsLargerSignificand = 0;
+    } else if (b.low < a.low) {
+        aIsLargerSignificand = 1;
+    } else {
+        aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
+    }
+
+    if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
+        if (is_snan(b_cls)) {
+            return floatx80_silence_nan(b, status);
+        }
+        return b;
+    } else {
+        if (is_snan(a_cls)) {
+            return floatx80_silence_nan(a, status);
+        }
+        return a;
+    }
+}
+
 /*----------------------------------------------------------------------------
 | Takes an abstract floating-point value having sign `zSign', exponent `zExp',
 | and extended significand formed by the concatenation of `zSig0' and `zSig1',
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ floatx80 floatx80_silence_nan(floatx80 a, float_status *status)
     return a;
 }

-/*----------------------------------------------------------------------------
-| Takes two extended double-precision floating-point values `a' and `b', one
-| of which is a NaN, and returns the appropriate NaN result.  If either `a' or
-| `b' is a signaling NaN, the invalid exception is raised.
-*----------------------------------------------------------------------------*/
-
-floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
-{
-    bool aIsLargerSignificand;
-    FloatClass a_cls, b_cls;
-
-    /* This is not complete, but is good enough for pickNaN. */
-    a_cls = (!floatx80_is_any_nan(a)
-             ? float_class_normal
-             : floatx80_is_signaling_nan(a, status)
-             ? float_class_snan
-             : float_class_qnan);
-    b_cls = (!floatx80_is_any_nan(b)
-             ? float_class_normal
-             : floatx80_is_signaling_nan(b, status)
-             ? float_class_snan
-             : float_class_qnan);
-
-    if (is_snan(a_cls) || is_snan(b_cls)) {
-        float_raise(float_flag_invalid, status);
-    }
-
-    if (status->default_nan_mode) {
-        return floatx80_default_nan(status);
-    }
-
-    if (a.low < b.low) {
-        aIsLargerSignificand = 0;
-    } else if (b.low < a.low) {
-        aIsLargerSignificand = 1;
-    } else {
-        aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
-    }
-
-    if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
-        if (is_snan(b_cls)) {
-            return floatx80_silence_nan(b, status);
-        }
-        return b;
-    } else {
-        if (is_snan(a_cls)) {
-            return floatx80_silence_nan(a, status);
-        }
-        return a;
-    }
-}
-
 /*----------------------------------------------------------------------------
 | Returns 1 if the quadruple-precision floating-point value `a' is a quiet
 | NaN; otherwise returns 0.
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

These are the last instructions within handle_3same_float
and disas_simd_scalar_three_reg_same_fp16 so remove them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240506010403.6204-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/a64.decode | 12 ++
 target/arm/tcg/translate-a64.c | 293 ++++-----------------------------
 2 files changed, 46 insertions(+), 259 deletions(-)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
 FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
 FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd

+FRECPS_s 0101 1110 010 ..... 00111 1 ..... ..... @rrr_h
+FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
+
+FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
+FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+
 ### Advanced SIMD three same

 FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -XXX,XX +XXX,XX @@ FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
 FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
 FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd

+FRECPS_v 0.00 1110 010 ..... 00111 1 ..... ..... @qrrr_h
+FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
+
+FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
+FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
+
 ### Advanced SIMD scalar x indexed element

 FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static const FPScalar f_scalar_fabd = {
 };
 TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)

+static const FPScalar f_scalar_frecps = {
+    gen_helper_recpsf_f16,
+    gen_helper_recpsf_f32,
+    gen_helper_recpsf_f64,
+};
+TRANS(FRECPS_s, do_fp3_scalar, a, &f_scalar_frecps)
+
+static const FPScalar f_scalar_frsqrts = {
+    gen_helper_rsqrtsf_f16,
+    gen_helper_rsqrtsf_f32,
+    gen_helper_rsqrtsf_f64,
+};
+TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
+
 static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
                           gen_helper_gvec_3_ptr * const fns[3])
 {
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
 };
 TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)

+static gen_helper_gvec_3_ptr * const f_vector_frecps[3] = {
+    gen_helper_gvec_recps_h,
+    gen_helper_gvec_recps_s,
+    gen_helper_gvec_recps_d,
+};
+TRANS(FRECPS_v, do_fp3_vector, a, f_vector_frecps)
+
+static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
+    gen_helper_gvec_rsqrts_h,
+    gen_helper_gvec_rsqrts_s,
+    gen_helper_gvec_rsqrts_d,
+};
+TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
+
 /*
  * Advanced SIMD scalar/vector x indexed element
  */
@@ -XXX,XX +XXX,XX @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
     }
 }

-/* Handle the 3-same-operands float operations; shared by the scalar
- * and vector encodings. The caller must filter out any encodings
- * not allocated for the encoding it is dealing with.
- */
-static void handle_3same_float(DisasContext *s, int size, int elements,
-                               int fpopcode, int rd, int rn, int rm)
-{
-    int pass;
-    TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
-
-    for (pass = 0; pass < elements; pass++) {
-        if (size) {
-            /* Double */
-            TCGv_i64 tcg_op1 = tcg_temp_new_i64();
-            TCGv_i64 tcg_op2 = tcg_temp_new_i64();
-            TCGv_i64 tcg_res = tcg_temp_new_i64();
-
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
-
-            switch (fpopcode) {
-            case 0x1f: /* FRECPS */
-                gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x3f: /* FRSQRTS */
-                gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0x18: /* FMAXNM */
-            case 0x19: /* FMLA */
-            case 0x1a: /* FADD */
-            case 0x1b: /* FMULX */
-            case 0x1c: /* FCMEQ */
-            case 0x1e: /* FMAX */
-            case 0x38: /* FMINNM */
-            case 0x39: /* FMLS */
-            case 0x3a: /* FSUB */
-            case 0x3e: /* FMIN */
-            case 0x5b: /* FMUL */
-            case 0x5c: /* FCMGE */
-            case 0x5d: /* FACGE */
-            case 0x5f: /* FDIV */
-            case 0x7a: /* FABD */
-            case 0x7c: /* FCMGT */
-            case 0x7d: /* FACGT */
-                g_assert_not_reached();
-            }
-
-            write_vec_element(s, tcg_res, rd, pass, MO_64);
-        } else {
-            /* Single */
-            TCGv_i32 tcg_op1 = tcg_temp_new_i32();
-            TCGv_i32 tcg_op2 = tcg_temp_new_i32();
-            TCGv_i32 tcg_res = tcg_temp_new_i32();
-
-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
-
-            switch (fpopcode) {
-            case 0x1f: /* FRECPS */
-                gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0x3f: /* FRSQRTS */
-                gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0x18: /* FMAXNM */
-            case 0x19: /* FMLA */
-            case 0x1a: /* FADD */
-            case 0x1b: /* FMULX */
-            case 0x1c: /* FCMEQ */
-            case 0x1e: /* FMAX */
-            case 0x38: /* FMINNM */
-            case 0x39: /* FMLS */
-            case 0x3a: /* FSUB */
-            case 0x3e: /* FMIN */
-            case 0x5b: /* FMUL */
-            case 0x5c: /* FCMGE */
-            case 0x5d: /* FACGE */
-            case 0x5f: /* FDIV */
-            case 0x7a: /* FABD */
-            case 0x7c: /* FCMGT */
-            case 0x7d: /* FACGT */
-                g_assert_not_reached();
-            }
-
-            if (elements == 1) {
-                /* scalar single so clear high part */
-                TCGv_i64 tcg_tmp = tcg_temp_new_i64();
-
-                tcg_gen_extu_i32_i64(tcg_tmp, tcg_res);
-                write_vec_element(s, tcg_tmp, rd, pass, MO_64);
-            } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
-            }
-        }
-    }
-
-    clear_vec_high(s, elements * (size ? 8 : 4) > 8, rd);
-}
-
 /* AdvSIMD scalar three same
  *  31 30 29 28       24 23  22  21 20  16 15    11  10 9    5 4    0
  * +-----+---+-----------+------+---+------+--------+---+------+------+
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
     bool u = extract32(insn, 29, 1);
     TCGv_i64 tcg_rd;

-    if (opcode >= 0x18) {
-        /* Floating point: U, size[1] and opcode indicate operation */
-        int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
-        switch (fpopcode) {
-        case 0x1f: /* FRECPS */
-        case 0x3f: /* FRSQRTS */
-            break;
-        default:
-        case 0x1b: /* FMULX */
-        case 0x5d: /* FACGE */
-        case 0x7d: /* FACGT */
-        case 0x1c: /* FCMEQ */
-        case 0x5c: /* FCMGE */
-        case 0x7a: /* FABD */
-        case 0x7c: /* FCMGT */
-            unallocated_encoding(s);
-            return;
-        }
-
-        if (!fp_access_check(s)) {
-            return;
-        }
-
-        handle_3same_float(s, extract32(size, 0, 1), 1, fpopcode, rd, rn, rm);
-        return;
-    }
-
     switch (opcode) {
     case 0x1: /* SQADD, UQADD */
     case 0x5: /* SQSUB, UQSUB */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
     write_fp_dreg(s, rd, tcg_rd);
 }

-/* AdvSIMD scalar three same FP16
- *  31 30  29 28       24 23 22 21 20  16 15 14 13    11 10  9    5 4    0
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * | 0 1 | U | 1 1 1 1 0 | a | 1 0 |  Rm  | 0 0 | opcode | 1 | Rn | Rd |
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * v: 0101 1110 0100 0000 0000 0100 0000 0000 => 5e400400
- * m: 1101 1111 0110 0000 1100 0100 0000 0000 => df60c400
- */
-static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
-                                                  uint32_t insn)
-{
-    int rd = extract32(insn, 0, 5);
-    int rn = extract32(insn, 5, 5);
-    int opcode = extract32(insn, 11, 3);
-    int rm = extract32(insn, 16, 5);
-    bool u = extract32(insn, 29, 1);
-    bool a = extract32(insn, 23, 1);
-    int fpopcode = opcode | (a << 3) | (u << 4);
-    TCGv_ptr fpst;
-    TCGv_i32 tcg_op1;
-    TCGv_i32 tcg_op2;
-    TCGv_i32 tcg_res;
-
-    switch (fpopcode) {
-    case 0x07: /* FRECPS */
-    case 0x0f: /* FRSQRTS */
-        break;
-    default:
-    case 0x03: /* FMULX */
-    case 0x04: /* FCMEQ (reg) */
-    case 0x14: /* FCMGE (reg) */
-    case 0x15: /* FACGE */
-    case 0x1a: /* FABD */
-    case 0x1c: /* FCMGT (reg) */
-    case 0x1d: /* FACGT */
-        unallocated_encoding(s);
-        return;
-    }
-
-    if (!dc_isar_feature(aa64_fp16, s)) {
-        unallocated_encoding(s);
-    }
-
-    if (!fp_access_check(s)) {
-        return;
-    }
-
-    fpst = fpstatus_ptr(FPST_FPCR_F16);
-
-    tcg_op1 = read_fp_hreg(s, rn);
-    tcg_op2 = read_fp_hreg(s, rm);
-    tcg_res = tcg_temp_new_i32();
-
-    switch (fpopcode) {
-    case 0x07: /* FRECPS */
-        gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
-        break;
-    case 0x0f: /* FRSQRTS */
-        gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
-        break;
-    default:
-    case 0x03: /* FMULX */
-    case 0x04: /* FCMEQ (reg) */
-    case 0x14: /* FCMGE (reg) */
-    case 0x15: /* FACGE */
-    case 0x1a: /* FABD */
-    case 0x1c: /* FCMGT (reg) */
-    case 0x1d: /* FACGT */
-        g_assert_not_reached();
-    }
-
-    write_fp_sreg(s, rd, tcg_res);
-}
-
 /* AdvSIMD scalar three same extra
  *  31 30  29 28       24 23  22  21 20  16 15 14    11 10 9  5 4  0
  * +-----+---+-----------+------+---+------+---+--------+---+----+----+
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)

 /* Pairwise op subgroup of C3.6.16.
  *
- * This is called directly or via the handle_3same_float for float pairwise
+ * This is called directly for float pairwise
  * operations where the opcode and size are calculated differently.
  */
 static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);

-    int datasize = is_q ? 128 : 64;
-    int esize = 32 << size;
-    int elements = datasize / esize;
-
     if (size == 1 && !is_q) {
         unallocated_encoding(s);
         return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
         handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
                                rn, rm, rd);
         return;
-    case 0x1f: /* FRECPS */
-    case 0x3f: /* FRSQRTS */
-        if (!fp_access_check(s)) {
-            return;
-        }
-        handle_3same_float(s, size, elements, fpopcode, rd, rn, rm);
-        return;

     case 0x1d: /* FMLAL */
     case 0x3d: /* FMLSL */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
     case 0x1b: /* FMULX */
     case 0x1c: /* FCMEQ */
     case 0x1e: /* FMAX */
+    case 0x1f: /* FRECPS */
     case 0x38: /* FMINNM */
    case 0x39: /* FMLS */
     case 0x3a: /* FSUB */
     case 0x3e: /* FMIN */
+    case 0x3f: /* FRSQRTS */
     case 0x5b: /* FMUL */
     case 0x5c: /* FCMGE */
     case 0x5d: /* FACGE */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
      * together indicate the operation.
      */
     int fpopcode = opcode | (a << 3) | (u << 4);
-    int datasize = is_q ? 128 : 64;
-    int elements = datasize / 16;
     bool pairwise;
     TCGv_ptr fpst;
     int pass;

     switch (fpopcode) {
-    case 0x7: /* FRECPS */
-    case 0xf: /* FRSQRTS */
-        pairwise = false;
-        break;
     case 0x10: /* FMAXNMP */
     case 0x12: /* FADDP */
     case 0x16: /* FMAXP */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
     case 0x3: /* FMULX */
     case 0x4: /* FCMEQ */
     case 0x6: /* FMAX */
+    case 0x7: /* FRECPS */
     case 0x8: /* FMINNM */
     case 0x9: /* FMLS */
     case 0xa: /* FSUB */
     case 0xe: /* FMIN */
+    case 0xf: /* FRSQRTS */
     case 0x13: /* FMUL */
     case 0x14: /* FCMGE */
     case 0x15: /* FACGE */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
         }
     } else {
-        for (pass = 0; pass < elements; pass++) {
-            TCGv_i32 tcg_op1 = tcg_temp_new_i32();
-            TCGv_i32 tcg_op2 = tcg_temp_new_i32();
-            TCGv_i32 tcg_res = tcg_temp_new_i32();
-
-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
-
-            switch (fpopcode) {
-            case 0x7: /* FRECPS */
-                gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            case 0xf: /* FRSQRTS */
-                gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
-                break;
-            default:
-            case 0x0: /* FMAXNM */
-            case 0x1: /* FMLA */
-            case 0x2: /* FADD */
-            case 0x3: /* FMULX */
-            case 0x4: /* FCMEQ */
-            case 0x6: /* FMAX */
-            case 0x8: /* FMINNM */
-            case 0x9: /* FMLS */
-            case 0xa: /* FSUB */
-            case 0xe: /* FMIN */
-            case 0x13: /* FMUL */
-            case 0x14: /* FCMGE */
-            case 0x15: /* FACGE */
-            case 0x17: /* FDIV */
-            case 0x1a: /* FABD */
-            case 0x1c: /* FCMGT */
-            case 0x1d: /* FACGT */
-                g_assert_not_reached();
-            }
-
-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
-        }
+        g_assert_not_reached();
     }

     clear_vec_high(s, is_q, rd);
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
     { 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
     { 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
-    { 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
     { 0x00000000, 0x00000000, NULL }
 };

--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Unpacking and repacking the parts may be slightly more work
than we did before, but we get to reuse more code. For a
code path handling exceptional values, this is an improvement.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241203203949.483774-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat.c | 43 +++++--------------------------------------
 1 file changed, 5 insertions(+), 38 deletions(-)

diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,

 floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
 {
-    bool aIsLargerSignificand;
-    FloatClass a_cls, b_cls;
+    FloatParts128 pa, pb, *pr;

-    /* This is not complete, but is good enough for pickNaN. */
-    a_cls = (!floatx80_is_any_nan(a)
-             ? float_class_normal
-             : floatx80_is_signaling_nan(a, status)
-             ? float_class_snan
-             : float_class_qnan);
-    b_cls = (!floatx80_is_any_nan(b)
-             ? float_class_normal
-             : floatx80_is_signaling_nan(b, status)
-             ? float_class_snan
-             : float_class_qnan);
-
-    if (is_snan(a_cls) || is_snan(b_cls)) {
-        float_raise(float_flag_invalid, status);
-    }
-
-    if (status->default_nan_mode) {
+    if (!floatx80_unpack_canonical(&pa, a, status) ||
+        !floatx80_unpack_canonical(&pb, b, status)) {
         return floatx80_default_nan(status);
     }

-    if (a.low < b.low) {
-        aIsLargerSignificand = 0;
-    } else if (b.low < a.low) {
-        aIsLargerSignificand = 1;
-    } else {
-        aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
-    }
-
-    if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
-        if (is_snan(b_cls)) {
-            return floatx80_silence_nan(b, status);
-        }
-        return b;
-    } else {
-        if (is_snan(a_cls)) {
-            return floatx80_silence_nan(a, status);
-        }
-        return a;
-    }
+    pr = parts_pick_nan(&pa, &pb, status);
+    return floatx80_round_pack_canonical(pr, status);
 }

 /*----------------------------------------------------------------------------
--
2.34.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This includes AND, ORR, EOR, BIC, ORN, BSL, BIT, BIF.

Inline pickNaN into its only caller. This makes one assert
redundant with the immediately preceding IF.
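
[Note: BSL, BIT and BIF are the same underlying bitwise select with
different operand orders. A minimal standalone sketch in plain C;
bitsel() and the register names here are illustrative, not the QEMU
gvec API:

#include <stdint.h>
#include <stdio.h>

/* bitsel(c, t, f): per bit, take t where c is 1 and f where c is 0. */
static uint64_t bitsel(uint64_t c, uint64_t t, uint64_t f)
{
    return (t & c) | (f & ~c);
}

int main(void)
{
    uint64_t rd = 0xff00ff00ff00ff00ull;
    uint64_t rn = 0x0123456789abcdefull;
    uint64_t rm = 0xfedcba9876543210ull;

    /* BSL: rd is the selector; take rn where set, rm where clear */
    printf("BSL %016llx\n", (unsigned long long)bitsel(rd, rn, rm));
    /* BIT: insert rn into rd where rm is 1 */
    printf("BIT %016llx\n", (unsigned long long)bitsel(rm, rn, rd));
    /* BIF: insert rn into rd where rm is 0 */
    printf("BIF %016llx\n", (unsigned long long)bitsel(rm, rd, rn));
    return 0;
}

The three calls mirror the operand orders used by the do_bitsel()
expander in the diff below.]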
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20240506010403.6204-30-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241203203949.483774-9-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/tcg/a64.decode | 10 +++++
11
fpu/softfloat-parts.c.inc | 82 +++++++++++++++++++++++++----
11
target/arm/tcg/translate-a64.c | 68 ++++++++++------------------------
12
fpu/softfloat-specialize.c.inc | 96 ----------------------------------
12
2 files changed, 29 insertions(+), 49 deletions(-)
13
2 files changed, 73 insertions(+), 105 deletions(-)
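
[Note: a miniature model of the propagation rules the inlined code
implements, in plain C and covering only three of the rules for
brevity; the enum and function names are ours, not softfloat's:

#include <stdbool.h>

typedef enum {
    PROP_S_AB,   /* prefer an SNaN, then operand a */
    PROP_AB,     /* prefer operand a */
    PROP_BA,     /* prefer operand b */
} Rule2NaN;

/* Return 0 to propagate operand a, 1 to propagate operand b. */
static int pick_2nan(bool a_snan, bool a_qnan,
                     bool b_snan, bool b_qnan, Rule2NaN rule)
{
    switch (rule) {
    case PROP_S_AB:
        if (a_snan) {
            return 0;
        } else if (b_snan) {
            return 1;
        }
        return a_qnan ? 0 : 1;
    case PROP_AB:
        return (a_snan || a_qnan) ? 0 : 1;
    case PROP_BA:
        return (b_snan || b_qnan) ? 1 : 0;
    }
    return 0;
}

int main(void)
{
    /* under s_ab, a's SNaN beats b's QNaN: picks operand a */
    return pick_2nan(true, false, false, true, PROP_S_AB);
}

The x87 rule additionally compares NaN payloads, as described in the
comment block added by this patch.]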
13
14
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/a64.decode
17
--- a/fpu/softfloat-parts.c.inc
17
+++ b/target/arm/tcg/a64.decode
18
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
19
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
20
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
20
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
21
float_status *s)
21
22
{
22
+@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
23
+ int cmp, which;
23
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
24
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
25
@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
26
@@ -XXX,XX +XXX,XX @@ SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
27
UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
28
UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
29
30
+AND_v 0.00 1110 001 ..... 00011 1 ..... ..... @qrrr_b
31
+BIC_v 0.00 1110 011 ..... 00011 1 ..... ..... @qrrr_b
32
+ORR_v 0.00 1110 101 ..... 00011 1 ..... ..... @qrrr_b
33
+ORN_v 0.00 1110 111 ..... 00011 1 ..... ..... @qrrr_b
34
+EOR_v 0.10 1110 001 ..... 00011 1 ..... ..... @qrrr_b
35
+BSL_v 0.10 1110 011 ..... 00011 1 ..... ..... @qrrr_b
36
+BIT_v 0.10 1110 101 ..... 00011 1 ..... ..... @qrrr_b
37
+BIF_v 0.10 1110 111 ..... 00011 1 ..... ..... @qrrr_b
38
+
24
+
39
### Advanced SIMD scalar x indexed element
25
if (is_snan(a->cls) || is_snan(b->cls)) {
40
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
41
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
27
}
42
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
29
if (s->default_nan_mode) {
30
parts_default_nan(a, s);
31
- } else {
32
- int cmp = frac_cmp(a, b);
33
- if (cmp == 0) {
34
- cmp = a->sign < b->sign;
35
- }
36
+ return a;
37
+ }
38
39
- if (pickNaN(a->cls, b->cls, cmp > 0, s)) {
40
- a = b;
41
- }
42
+ cmp = frac_cmp(a, b);
43
+ if (cmp == 0) {
44
+ cmp = a->sign < b->sign;
45
+ }
46
+
47
+ switch (s->float_2nan_prop_rule) {
48
+ case float_2nan_prop_s_ab:
49
if (is_snan(a->cls)) {
50
- parts_silence_nan(a, s);
51
+ which = 0;
52
+ } else if (is_snan(b->cls)) {
53
+ which = 1;
54
+ } else if (is_qnan(a->cls)) {
55
+ which = 0;
56
+ } else {
57
+ which = 1;
58
}
59
+ break;
60
+ case float_2nan_prop_s_ba:
61
+ if (is_snan(b->cls)) {
62
+ which = 1;
63
+ } else if (is_snan(a->cls)) {
64
+ which = 0;
65
+ } else if (is_qnan(b->cls)) {
66
+ which = 1;
67
+ } else {
68
+ which = 0;
69
+ }
70
+ break;
71
+ case float_2nan_prop_ab:
72
+ which = is_nan(a->cls) ? 0 : 1;
73
+ break;
74
+ case float_2nan_prop_ba:
75
+ which = is_nan(b->cls) ? 1 : 0;
76
+ break;
77
+ case float_2nan_prop_x87:
78
+ /*
79
+ * This implements x87 NaN propagation rules:
80
+ * SNaN + QNaN => return the QNaN
81
+ * two SNaNs => return the one with the larger significand, silenced
82
+ * two QNaNs => return the one with the larger significand
83
+ * SNaN and a non-NaN => return the SNaN, silenced
84
+ * QNaN and a non-NaN => return the QNaN
85
+ *
86
+ * If we get down to comparing significands and they are the same,
87
+ * return the NaN with the positive sign bit (if any).
88
+ */
89
+ if (is_snan(a->cls)) {
90
+ if (is_snan(b->cls)) {
91
+ which = cmp > 0 ? 0 : 1;
92
+ } else {
93
+ which = is_qnan(b->cls) ? 1 : 0;
94
+ }
95
+ } else if (is_qnan(a->cls)) {
96
+ if (is_snan(b->cls) || !is_qnan(b->cls)) {
97
+ which = 0;
98
+ } else {
99
+ which = cmp > 0 ? 0 : 1;
100
+ }
101
+ } else {
102
+ which = 1;
103
+ }
104
+ break;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
108
+
109
+ if (which) {
110
+ a = b;
111
+ }
112
+ if (is_snan(a->cls)) {
113
+ parts_silence_nan(a, s);
114
}
115
return a;
116
}
117
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
43
index XXXXXXX..XXXXXXX 100644
118
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-a64.c
119
--- a/fpu/softfloat-specialize.c.inc
45
+++ b/target/arm/tcg/translate-a64.c
120
+++ b/fpu/softfloat-specialize.c.inc
46
@@ -XXX,XX +XXX,XX @@ TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
121
@@ -XXX,XX +XXX,XX @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
47
TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
48
TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
49
50
+TRANS(AND_v, do_gvec_fn3, a, tcg_gen_gvec_and)
51
+TRANS(BIC_v, do_gvec_fn3, a, tcg_gen_gvec_andc)
52
+TRANS(ORR_v, do_gvec_fn3, a, tcg_gen_gvec_or)
53
+TRANS(ORN_v, do_gvec_fn3, a, tcg_gen_gvec_orc)
54
+TRANS(EOR_v, do_gvec_fn3, a, tcg_gen_gvec_xor)
55
+
56
+static bool do_bitsel(DisasContext *s, bool is_q, int d, int a, int b, int c)
57
+{
58
+ if (fp_access_check(s)) {
59
+ gen_gvec_fn4(s, is_q, d, a, b, c, tcg_gen_gvec_bitsel, 0);
60
+ }
61
+ return true;
62
+}
63
+
64
+TRANS(BSL_v, do_bitsel, a->q, a->rd, a->rd, a->rn, a->rm)
65
+TRANS(BIT_v, do_bitsel, a->q, a->rd, a->rm, a->rn, a->rd)
66
+TRANS(BIF_v, do_bitsel, a->q, a->rd, a->rm, a->rd, a->rn)
67
+
68
/*
69
* Advanced SIMD scalar/vector x indexed element
70
*/
71
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
72
}
122
}
73
}
123
}
74
124
75
-/* Logic op (opcode == 3) subgroup of C3.6.16. */
125
-/*----------------------------------------------------------------------------
76
-static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
126
-| Select which NaN to propagate for a two-input operation.
127
-| IEEE754 doesn't specify all the details of this, so the
128
-| algorithm is target-specific.
129
-| The routine is passed various bits of information about the
130
-| two NaNs and should return 0 to select NaN a and 1 for NaN b.
131
-| Note that signalling NaNs are always squashed to quiet NaNs
132
-| by the caller, by calling floatXX_silence_nan() before
133
-| returning them.
134
-|
135
-| aIsLargerSignificand is only valid if both a and b are NaNs
136
-| of some kind, and is true if a has the larger significand,
137
-| or if both a and b have the same significand but a is
138
-| positive but b is negative. It is only needed for the x87
139
-| tie-break rule.
140
-*----------------------------------------------------------------------------*/
141
-
142
-static int pickNaN(FloatClass a_cls, FloatClass b_cls,
143
- bool aIsLargerSignificand, float_status *status)
77
-{
144
-{
78
- int rd = extract32(insn, 0, 5);
145
- /*
79
- int rn = extract32(insn, 5, 5);
146
- * We guarantee not to require the target to tell us how to
80
- int rm = extract32(insn, 16, 5);
147
- * pick a NaN if we're always returning the default NaN.
81
- int size = extract32(insn, 22, 2);
148
- * But if we're not in default-NaN mode then the target must
82
- bool is_u = extract32(insn, 29, 1);
149
- * specify via set_float_2nan_prop_rule().
83
- bool is_q = extract32(insn, 30, 1);
150
- */
151
- assert(!status->default_nan_mode);
84
-
152
-
85
- if (!fp_access_check(s)) {
153
- switch (status->float_2nan_prop_rule) {
86
- return;
154
- case float_2nan_prop_s_ab:
87
- }
155
- if (is_snan(a_cls)) {
88
-
156
- return 0;
89
- switch (size + 4 * is_u) {
157
- } else if (is_snan(b_cls)) {
90
- case 0: /* AND */
158
- return 1;
91
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_and, 0);
159
- } else if (is_qnan(a_cls)) {
92
- return;
160
- return 0;
93
- case 1: /* BIC */
161
- } else {
94
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_andc, 0);
162
- return 1;
95
- return;
163
- }
96
- case 2: /* ORR */
164
- break;
97
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_or, 0);
165
- case float_2nan_prop_s_ba:
98
- return;
166
- if (is_snan(b_cls)) {
99
- case 3: /* ORN */
167
- return 1;
100
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_orc, 0);
168
- } else if (is_snan(a_cls)) {
101
- return;
169
- return 0;
102
- case 4: /* EOR */
170
- } else if (is_qnan(b_cls)) {
103
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_xor, 0);
171
- return 1;
104
- return;
172
- } else {
105
-
173
- return 0;
106
- case 5: /* BSL bitwise select */
174
- }
107
- gen_gvec_fn4(s, is_q, rd, rd, rn, rm, tcg_gen_gvec_bitsel, 0);
175
- break;
108
- return;
176
- case float_2nan_prop_ab:
109
- case 6: /* BIT, bitwise insert if true */
177
- if (is_nan(a_cls)) {
110
- gen_gvec_fn4(s, is_q, rd, rm, rn, rd, tcg_gen_gvec_bitsel, 0);
178
- return 0;
111
- return;
179
- } else {
112
- case 7: /* BIF, bitwise insert if false */
180
- return 1;
113
- gen_gvec_fn4(s, is_q, rd, rm, rd, rn, tcg_gen_gvec_bitsel, 0);
181
- }
114
- return;
182
- break;
115
-
183
- case float_2nan_prop_ba:
184
- if (is_nan(b_cls)) {
185
- return 1;
186
- } else {
187
- return 0;
188
- }
189
- break;
190
- case float_2nan_prop_x87:
191
- /*
192
- * This implements x87 NaN propagation rules:
193
- * SNaN + QNaN => return the QNaN
194
- * two SNaNs => return the one with the larger significand, silenced
195
- * two QNaNs => return the one with the larger significand
196
- * SNaN and a non-NaN => return the SNaN, silenced
197
- * QNaN and a non-NaN => return the QNaN
198
- *
199
- * If we get down to comparing significands and they are the same,
200
- * return the NaN with the positive sign bit (if any).
201
- */
202
- if (is_snan(a_cls)) {
203
- if (is_snan(b_cls)) {
204
- return aIsLargerSignificand ? 0 : 1;
205
- }
206
- return is_qnan(b_cls) ? 1 : 0;
207
- } else if (is_qnan(a_cls)) {
208
- if (is_snan(b_cls) || !is_qnan(b_cls)) {
209
- return 0;
210
- } else {
211
- return aIsLargerSignificand ? 0 : 1;
212
- }
213
- } else {
214
- return 1;
215
- }
116
- default:
216
- default:
117
- g_assert_not_reached();
217
- g_assert_not_reached();
118
- }
218
- }
119
-}
219
-}
120
-
220
-
121
/* Integer op subgroup of C3.6.16. */
221
/*----------------------------------------------------------------------------
122
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
222
| Returns 1 if the double-precision floating-point value `a' is a quiet
123
{
223
| NaN; otherwise returns 0.
124
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
125
int opcode = extract32(insn, 11, 5);
126
127
switch (opcode) {
128
- case 0x3: /* logic ops */
129
- disas_simd_3same_logic(s, insn);
130
- break;
131
default:
132
disas_simd_3same_int(s, insn);
133
break;
134
+ case 0x3: /* logic ops */
135
case 0x14: /* SMAXP, UMAXP */
136
case 0x15: /* SMINP, UMINP */
137
case 0x17: /* ADDP */
138
--
224
--
139
2.34.1
225
2.34.1
226
227
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This fixes a bug in which scalar half-precision did not
diagnose sz == 1 as UNDEFINED.

Remember if there was an SNaN, and use that to simplify
float_2nan_prop_s_{ab,ba} to only the SNaN component.
Then, fall through to the corresponding
float_2nan_prop_{ab,ba} case to handle any remaining
NaNs, which must be quiet.
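
[Note: two schematic asides in plain C. First, the shape of the SNaN
simplification; pick_s_ab() and its flags are ours, not the softfloat
code:

#include <stdbool.h>

/* Test for an SNaN once: the SNaN-sensitive part of the s_ab rule is
 * decided immediately, and the quiet-NaN-only remainder falls through
 * to the same choice the plain ab rule makes. */
static int pick_s_ab(bool a_snan, bool b_snan, bool a_nan)
{
    bool have_snan = a_snan || b_snan;

    if (have_snan) {
        return a_snan ? 0 : 1;
    }
    /* only quiet NaNs left: identical to float_2nan_prop_ab */
    return a_nan ? 0 : 1;
}

int main(void)
{
    return pick_s_ab(false, true, true);   /* b's SNaN wins: returns 1 */
}

Second, a behavioural sketch of the FADDP conversion this accompanies:
the pairwise op works on the concatenation m:n, adding adjacent
element pairs. Plain C, assuming at most 8 elements; the real helper
copies m to scratch when d aliases m:

#include <stdio.h>

static void faddp(float *d, const float *n, const float *m, int elems)
{
    int half = elems / 2;
    float tmp[8];

    for (int i = 0; i < half; i++) {
        tmp[i] = n[2 * i] + n[2 * i + 1];          /* low half from n */
        tmp[i + half] = m[2 * i] + m[2 * i + 1];   /* high half from m */
    }
    for (int i = 0; i < elems; i++) {
        d[i] = tmp[i];
    }
}

int main(void)
{
    float n[4] = {1, 2, 3, 4}, m[4] = {10, 20, 30, 40}, d[4];

    faddp(d, n, m, 4);
    printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);   /* 3 7 30 70 */
    return 0;
}
]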
5
8
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20240506010403.6204-22-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20241203203949.483774-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
target/arm/helper.h | 4 ++
14
fpu/softfloat-parts.c.inc | 32 ++++++++++++--------------------
12
target/arm/tcg/a64.decode | 12 +++++
15
1 file changed, 12 insertions(+), 20 deletions(-)
13
target/arm/tcg/translate-a64.c | 87 ++++++++++++++++++++++++++--------
14
target/arm/tcg/vec_helper.c | 23 +++++++++
15
4 files changed, 105 insertions(+), 21 deletions(-)
16
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
19
--- a/fpu/softfloat-parts.c.inc
20
+++ b/target/arm/helper.h
20
+++ b/fpu/softfloat-parts.c.inc
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
21
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
22
DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
22
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
23
void, ptr, ptr, ptr, ptr, i32)
23
float_status *s)
24
24
{
25
+DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
+ bool have_snan = false;
26
+DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
int cmp, which;
27
+DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
28
+
28
if (is_snan(a->cls) || is_snan(b->cls)) {
29
#ifdef TARGET_AARCH64
29
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
30
#include "tcg/helper-a64.h"
30
+ have_snan = true;
31
#include "tcg/helper-sve.h"
31
}
32
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
32
33
index XXXXXXX..XXXXXXX 100644
33
if (s->default_nan_mode) {
34
--- a/target/arm/tcg/a64.decode
34
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
35
+++ b/target/arm/tcg/a64.decode
35
36
@@ -XXX,XX +XXX,XX @@
36
switch (s->float_2nan_prop_rule) {
37
&ri rd imm
37
case float_2nan_prop_s_ab:
38
&rri_sf rd rn imm sf
38
- if (is_snan(a->cls)) {
39
&i imm
39
- which = 0;
40
+&rr_e rd rn esz
40
- } else if (is_snan(b->cls)) {
41
&rrr_e rd rn rm esz
41
- which = 1;
42
&rrx_e rd rn rm idx esz
42
- } else if (is_qnan(a->cls)) {
43
&qrr_e q rd rn esz
43
- which = 0;
44
@@ -XXX,XX +XXX,XX @@
44
- } else {
45
&qrrx_e q rd rn rm idx esz
45
- which = 1;
46
&qrrrr_e q rd rn rm ra esz
46
+ if (have_snan) {
47
47
+ which = is_snan(a->cls) ? 0 : 1;
48
+@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
48
+ break;
49
+@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
49
}
50
+
50
- break;
51
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
51
- case float_2nan_prop_s_ba:
52
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
52
- if (is_snan(b->cls)) {
53
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
53
- which = 1;
54
@@ -XXX,XX +XXX,XX @@ FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
54
- } else if (is_snan(a->cls)) {
55
FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
55
- which = 0;
56
FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
56
- } else if (is_qnan(b->cls)) {
57
57
- which = 1;
58
+### Advanced SIMD scalar pairwise
58
- } else {
59
+
59
- which = 0;
60
+FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
60
- }
61
+FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
61
- break;
62
+
62
+ /* fall through */
63
### Advanced SIMD three same
63
case float_2nan_prop_ab:
64
64
which = is_nan(a->cls) ? 0 : 1;
65
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
65
break;
66
@@ -XXX,XX +XXX,XX @@ FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
66
+ case float_2nan_prop_s_ba:
67
FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
67
+ if (have_snan) {
68
FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
68
+ which = is_snan(b->cls) ? 1 : 0;
69
69
+ break;
70
+FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
71
+FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
72
+
73
### Advanced SIMD scalar x indexed element
74
75
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
76
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/tcg/translate-a64.c
79
+++ b/target/arm/tcg/translate-a64.c
80
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
81
};
82
TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
83
84
+static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
85
+ gen_helper_gvec_faddp_h,
86
+ gen_helper_gvec_faddp_s,
87
+ gen_helper_gvec_faddp_d,
88
+};
89
+TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
90
+
91
/*
92
* Advanced SIMD scalar/vector x indexed element
93
*/
94
@@ -XXX,XX +XXX,XX @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
95
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
96
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
97
98
+/*
99
+ * Advanced SIMD scalar pairwise
100
+ */
101
+
102
+static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
103
+{
104
+ switch (a->esz) {
105
+ case MO_64:
106
+ if (fp_access_check(s)) {
107
+ TCGv_i64 t0 = tcg_temp_new_i64();
108
+ TCGv_i64 t1 = tcg_temp_new_i64();
109
+
110
+ read_vec_element(s, t0, a->rn, 0, MO_64);
111
+ read_vec_element(s, t1, a->rn, 1, MO_64);
112
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
113
+ write_fp_dreg(s, a->rd, t0);
114
+ }
70
+ }
115
+ break;
71
+ /* fall through */
116
+ case MO_32:
72
case float_2nan_prop_ba:
117
+ if (fp_access_check(s)) {
73
which = is_nan(b->cls) ? 1 : 0;
118
+ TCGv_i32 t0 = tcg_temp_new_i32();
119
+ TCGv_i32 t1 = tcg_temp_new_i32();
120
+
121
+ read_vec_element_i32(s, t0, a->rn, 0, MO_32);
122
+ read_vec_element_i32(s, t1, a->rn, 1, MO_32);
123
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
124
+ write_fp_sreg(s, a->rd, t0);
125
+ }
126
+ break;
127
+ case MO_16:
128
+ if (!dc_isar_feature(aa64_fp16, s)) {
129
+ return false;
130
+ }
131
+ if (fp_access_check(s)) {
132
+ TCGv_i32 t0 = tcg_temp_new_i32();
133
+ TCGv_i32 t1 = tcg_temp_new_i32();
134
+
135
+ read_vec_element_i32(s, t0, a->rn, 0, MO_16);
136
+ read_vec_element_i32(s, t1, a->rn, 1, MO_16);
137
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
138
+ write_fp_sreg(s, a->rd, t0);
139
+ }
140
+ break;
141
+ default:
142
+ g_assert_not_reached();
143
+ }
144
+ return true;
145
+}
146
+
147
+TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
148
149
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
150
* Note that it is the caller's responsibility to ensure that the
151
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
152
fpst = NULL;
153
break;
74
break;
154
case 0xc: /* FMAXNMP */
155
- case 0xd: /* FADDP */
156
case 0xf: /* FMAXP */
157
case 0x2c: /* FMINNMP */
158
case 0x2f: /* FMINP */
159
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
160
fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
161
break;
162
default:
163
+ case 0xd: /* FADDP */
164
unallocated_encoding(s);
165
return;
166
}
167
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
168
case 0xc: /* FMAXNMP */
169
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
170
break;
171
- case 0xd: /* FADDP */
172
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
173
- break;
174
case 0xf: /* FMAXP */
175
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
176
break;
177
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
178
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
179
break;
180
default:
181
+ case 0xd: /* FADDP */
182
g_assert_not_reached();
183
}
184
185
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
186
case 0xc: /* FMAXNMP */
187
gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
188
break;
189
- case 0xd: /* FADDP */
190
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
191
- break;
192
case 0xf: /* FMAXP */
193
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
194
break;
195
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
196
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
197
break;
198
default:
199
+ case 0xd: /* FADDP */
200
g_assert_not_reached();
201
}
202
} else {
203
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
204
case 0xc: /* FMAXNMP */
205
gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
206
break;
207
- case 0xd: /* FADDP */
208
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
209
- break;
210
case 0xf: /* FMAXP */
211
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
212
break;
213
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
214
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
215
break;
216
default:
217
+ case 0xd: /* FADDP */
218
g_assert_not_reached();
219
}
220
}
221
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
222
case 0x58: /* FMAXNMP */
223
gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
224
break;
225
- case 0x5a: /* FADDP */
226
- gen_helper_vfp_addd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
227
- break;
228
case 0x5e: /* FMAXP */
229
gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
230
break;
231
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
232
gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
233
break;
234
default:
235
+ case 0x5a: /* FADDP */
236
g_assert_not_reached();
237
}
238
}
239
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
240
case 0x58: /* FMAXNMP */
241
gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
242
break;
243
- case 0x5a: /* FADDP */
244
- gen_helper_vfp_adds(tcg_res[pass], tcg_op1, tcg_op2, fpst);
245
- break;
246
case 0x5e: /* FMAXP */
247
gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
248
break;
249
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
250
gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
251
break;
252
default:
253
+ case 0x5a: /* FADDP */
254
g_assert_not_reached();
255
}
256
257
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
258
259
switch (fpopcode) {
260
case 0x58: /* FMAXNMP */
261
- case 0x5a: /* FADDP */
262
case 0x5e: /* FMAXP */
263
case 0x78: /* FMINNMP */
264
case 0x7e: /* FMINP */
265
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
266
case 0x3a: /* FSUB */
267
case 0x3e: /* FMIN */
268
case 0x3f: /* FRSQRTS */
269
+ case 0x5a: /* FADDP */
270
case 0x5b: /* FMUL */
271
case 0x5c: /* FCMGE */
272
case 0x5d: /* FACGE */
273
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
274
275
switch (fpopcode) {
276
case 0x10: /* FMAXNMP */
277
- case 0x12: /* FADDP */
278
case 0x16: /* FMAXP */
279
case 0x18: /* FMINNMP */
280
case 0x1e: /* FMINP */
281
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
282
case 0xa: /* FSUB */
283
case 0xe: /* FMIN */
284
case 0xf: /* FRSQRTS */
285
+ case 0x12: /* FADDP */
286
case 0x13: /* FMUL */
287
case 0x14: /* FCMGE */
288
case 0x15: /* FACGE */
289
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
290
gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
291
fpst);
292
break;
293
- case 0x12: /* FADDP */
294
- gen_helper_advsimd_addh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
295
- break;
296
case 0x16: /* FMAXP */
297
gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
298
break;
299
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
300
gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
301
break;
302
default:
303
+ case 0x12: /* FADDP */
304
g_assert_not_reached();
305
}
306
}
307
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
308
index XXXXXXX..XXXXXXX 100644
309
--- a/target/arm/tcg/vec_helper.c
310
+++ b/target/arm/tcg/vec_helper.c
311
@@ -XXX,XX +XXX,XX @@ DO_NEON_PAIRWISE(neon_pmin, min)
312
313
#undef DO_NEON_PAIRWISE
314
315
+#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
316
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
317
+{ \
318
+ ARMVectorReg scratch; \
319
+ intptr_t oprsz = simd_oprsz(desc); \
320
+ intptr_t half = oprsz / sizeof(TYPE) / 2; \
321
+ TYPE *d = vd, *n = vn, *m = vm; \
322
+ if (unlikely(d == m)) { \
323
+ m = memcpy(&scratch, m, oprsz); \
324
+ } \
325
+ for (intptr_t i = 0; i < half; ++i) { \
326
+ d[H(i)] = FUNC(n[H(i * 2)], n[H(i * 2 + 1)], stat); \
327
+ } \
328
+ for (intptr_t i = 0; i < half; ++i) { \
329
+ d[H(i + half)] = FUNC(m[H(i * 2)], m[H(i * 2 + 1)], stat); \
330
+ } \
331
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
332
+}
333
+
334
+DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
335
+DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
336
+DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
337
+
338
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
339
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
340
{ \
341
--
75
--
342
2.34.1
76
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
These are the last instructions within handle_simd_3same_pair,
so remove that function.

Move the fractional comparison to the end of the
float_2nan_prop_x87 case. It is not required for
any other 2-NaN propagation rule. Reorganize the
x87 case itself to break out of the switch when the
fractional comparison is not required.
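
[Note: two schematic asides in plain C. First, the shape of the
reorganization: only the x87 rule ever needs the payload comparison,
so it is computed inside that case alone. Names and the stand-in
payload values are ours, not the softfloat code:

#include <stdbool.h>
#include <stdint.h>

typedef enum { PROP_AB, PROP_X87 } Rule;

/* Non-x87 rules decide without comparing NaN payloads; only the x87
 * rule pays for the comparison (frac_cmp() in the real code, modelled
 * here with plain integer compares). */
static int pick(Rule rule, bool a_is_nan,
                uint64_t frac_a, bool sign_a,
                uint64_t frac_b, bool sign_b)
{
    int cmp;

    switch (rule) {
    case PROP_AB:
        return a_is_nan ? 0 : 1;
    case PROP_X87:
        cmp = (frac_a > frac_b) - (frac_a < frac_b);
        if (cmp == 0) {
            cmp = sign_a < sign_b;   /* tie-break: prefer positive */
        }
        return cmp > 0 ? 0 : 1;
    }
    return 0;
}

int main(void)
{
    return pick(PROP_X87, true, 5, false, 3, false);  /* larger payload: 0 */
}

Second, what the new pairwise integer helpers compute, byte case only;
the prototype is illustrative, not the QEMU helper signature, and it
assumes d does not alias n or m (the real helpers copy m to a scratch
buffer when it does):

#include <stdint.h>
#include <stdio.h>

/* Pairwise signed max over the concatenation m:n, byte elements. */
static void smaxp_b(int8_t *d, const int8_t *n, const int8_t *m, int elems)
{
    int half = elems / 2;

    for (int i = 0; i < half; i++) {
        int8_t x = n[2 * i], y = n[2 * i + 1];
        d[i] = x > y ? x : y;
    }
    for (int i = 0; i < half; i++) {
        int8_t x = m[2 * i], y = m[2 * i + 1];
        d[i + half] = x > y ? x : y;
    }
}

int main(void)
{
    int8_t n[4] = {1, -2, 3, -4}, m[4] = {-5, 6, -7, 8}, d[4];

    smaxp_b(d, n, m, 4);
    printf("%d %d %d %d\n", d[0], d[1], d[2], d[3]);  /* 1 3 6 8 */
    return 0;
}
]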
5
8
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20240506010403.6204-27-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20241203203949.483774-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
target/arm/helper.h | 16 +++++
14
fpu/softfloat-parts.c.inc | 19 +++++++++----------
12
target/arm/tcg/translate.h | 8 +++
15
1 file changed, 9 insertions(+), 10 deletions(-)
13
target/arm/tcg/a64.decode | 4 ++
14
target/arm/tcg/gengvec.c | 48 +++++++++++++
15
target/arm/tcg/translate-a64.c | 119 +++++----------------------------
16
target/arm/tcg/vec_helper.c | 16 +++++
17
6 files changed, 109 insertions(+), 102 deletions(-)
18
16
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
19
--- a/fpu/softfloat-parts.c.inc
22
+++ b/target/arm/helper.h
20
+++ b/fpu/softfloat-parts.c.inc
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
24
DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
return a;
25
DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
27
+DEF_HELPER_FLAGS_4(gvec_smaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(gvec_smaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(gvec_smaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_4(gvec_sminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(gvec_sminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(gvec_sminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_4(gvec_umaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(gvec_umaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(gvec_umaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_4(gvec_uminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(gvec_uminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_4(gvec_uminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
+
43
#ifdef TARGET_AARCH64
44
#include "tcg/helper-a64.h"
45
#include "tcg/helper-sve.h"
46
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/tcg/translate.h
49
+++ b/target/arm/tcg/translate.h
50
@@ -XXX,XX +XXX,XX @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
51
52
void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
53
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
54
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
55
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
56
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
57
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
58
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
59
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
60
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
61
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
62
63
/*
64
* Forward to the isar_feature_* tests given a DisasContext pointer.
65
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/tcg/a64.decode
68
+++ b/target/arm/tcg/a64.decode
69
@@ -XXX,XX +XXX,XX @@ FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
70
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
71
72
ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
73
+SMAXP_v 0.00 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
74
+SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
75
+UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
76
+UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
77
78
### Advanced SIMD scalar x indexed element
79
80
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/tcg/gengvec.c
83
+++ b/target/arm/tcg/gengvec.c
84
@@ -XXX,XX +XXX,XX @@ void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
85
};
86
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
87
}
88
+
89
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
90
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
91
+{
92
+ static gen_helper_gvec_3 * const fns[4] = {
93
+ gen_helper_gvec_smaxp_b,
94
+ gen_helper_gvec_smaxp_h,
95
+ gen_helper_gvec_smaxp_s,
96
+ };
97
+ tcg_debug_assert(vece <= MO_32);
98
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
99
+}
100
+
101
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
102
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
103
+{
104
+ static gen_helper_gvec_3 * const fns[4] = {
105
+ gen_helper_gvec_sminp_b,
106
+ gen_helper_gvec_sminp_h,
107
+ gen_helper_gvec_sminp_s,
108
+ };
109
+ tcg_debug_assert(vece <= MO_32);
110
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
111
+}
112
+
113
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
114
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
115
+{
116
+ static gen_helper_gvec_3 * const fns[4] = {
117
+ gen_helper_gvec_umaxp_b,
118
+ gen_helper_gvec_umaxp_h,
119
+ gen_helper_gvec_umaxp_s,
120
+ };
121
+ tcg_debug_assert(vece <= MO_32);
122
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
123
+}
124
+
125
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
126
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
127
+{
128
+ static gen_helper_gvec_3 * const fns[4] = {
129
+ gen_helper_gvec_uminp_b,
130
+ gen_helper_gvec_uminp_h,
131
+ gen_helper_gvec_uminp_s,
132
+ };
133
+ tcg_debug_assert(vece <= MO_32);
134
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
135
+}
136
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
137
index XXXXXXX..XXXXXXX 100644
138
--- a/target/arm/tcg/translate-a64.c
139
+++ b/target/arm/tcg/translate-a64.c
140
@@ -XXX,XX +XXX,XX @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
141
return true;
142
}
143
144
+static bool do_gvec_fn3_no64(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
145
+{
146
+ if (a->esz == MO_64) {
147
+ return false;
148
+ }
149
+ if (fp_access_check(s)) {
150
+ gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
151
+ }
152
+ return true;
153
+}
154
+
155
static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
156
{
157
if (!a->q && a->esz == MO_64) {
158
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
159
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
160
161
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
162
+TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
163
+TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
164
+TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
165
+TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
166
167
/*
168
* Advanced SIMD scalar/vector x indexed element
169
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
170
}
23
}
171
}
24
172
25
- cmp = frac_cmp(a, b);
173
-/* Pairwise op subgroup of C3.6.16.
26
- if (cmp == 0) {
174
- *
27
- cmp = a->sign < b->sign;
175
- * This is called directly for float pairwise
176
- * operations where the opcode and size are calculated differently.
177
- */
178
-static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
179
- int size, int rn, int rm, int rd)
180
-{
181
- int pass;
182
-
183
- if (!fp_access_check(s)) {
184
- return;
185
- }
28
- }
186
-
29
-
187
- /* These operations work on the concatenated rm:rn, with each pair of
30
switch (s->float_2nan_prop_rule) {
188
- * adjacent elements being operated on to produce an element in the result.
31
case float_2nan_prop_s_ab:
189
- */
32
if (have_snan) {
190
- if (size == 3) {
33
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
191
- g_assert_not_reached();
34
* return the NaN with the positive sign bit (if any).
192
- } else {
35
*/
193
- int maxpass = is_q ? 4 : 2;
36
if (is_snan(a->cls)) {
194
- TCGv_i32 tcg_res[4];
37
- if (is_snan(b->cls)) {
195
-
38
- which = cmp > 0 ? 0 : 1;
196
- for (pass = 0; pass < maxpass; pass++) {
39
- } else {
197
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
40
+ if (!is_snan(b->cls)) {
198
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
41
which = is_qnan(b->cls) ? 1 : 0;
199
- NeonGenTwoOpFn *genfn = NULL;
42
+ break;
200
- int passreg = pass < (maxpass / 2) ? rn : rm;
43
}
201
- int passelt = (is_q && (pass & 1)) ? 2 : 0;
44
} else if (is_qnan(a->cls)) {
202
-
45
if (is_snan(b->cls) || !is_qnan(b->cls)) {
203
- read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
46
which = 0;
204
- read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
47
- } else {
205
- tcg_res[pass] = tcg_temp_new_i32();
48
- which = cmp > 0 ? 0 : 1;
206
-
49
+ break;
207
- switch (opcode) {
50
}
208
- case 0x14: /* SMAXP, UMAXP */
51
} else {
209
- {
52
which = 1;
210
- static NeonGenTwoOpFn * const fns[3][2] = {
53
+ break;
211
- { gen_helper_neon_pmax_s8, gen_helper_neon_pmax_u8 },
54
}
212
- { gen_helper_neon_pmax_s16, gen_helper_neon_pmax_u16 },
55
+ cmp = frac_cmp(a, b);
213
- { tcg_gen_smax_i32, tcg_gen_umax_i32 },
56
+ if (cmp == 0) {
214
- };
57
+ cmp = a->sign < b->sign;
215
- genfn = fns[size][u];
58
+ }
216
- break;
59
+ which = cmp > 0 ? 0 : 1;
217
- }
218
- case 0x15: /* SMINP, UMINP */
219
- {
220
- static NeonGenTwoOpFn * const fns[3][2] = {
221
- { gen_helper_neon_pmin_s8, gen_helper_neon_pmin_u8 },
222
- { gen_helper_neon_pmin_s16, gen_helper_neon_pmin_u16 },
223
- { tcg_gen_smin_i32, tcg_gen_umin_i32 },
224
- };
225
- genfn = fns[size][u];
226
- break;
227
- }
228
- default:
229
- case 0x17: /* ADDP */
230
- case 0x58: /* FMAXNMP */
231
- case 0x5a: /* FADDP */
232
- case 0x5e: /* FMAXP */
233
- case 0x78: /* FMINNMP */
234
- case 0x7e: /* FMINP */
235
- g_assert_not_reached();
236
- }
237
-
238
- /* FP ops called directly, otherwise call now */
239
- if (genfn) {
240
- genfn(tcg_res[pass], tcg_op1, tcg_op2);
241
- }
242
- }
243
-
244
- for (pass = 0; pass < maxpass; pass++) {
245
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
246
- }
247
- clear_vec_high(s, is_q, rd);
248
- }
249
-}
250
-
251
/* Floating point op subgroup of C3.6.16. */
252
static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
253
{
254
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
255
case 0x3: /* logic ops */
256
disas_simd_3same_logic(s, insn);
257
break;
60
break;
258
- case 0x14: /* SMAXP, UMAXP */
259
- case 0x15: /* SMINP, UMINP */
260
- {
261
- /* Pairwise operations */
262
- int is_q = extract32(insn, 30, 1);
263
- int u = extract32(insn, 29, 1);
264
- int size = extract32(insn, 22, 2);
265
- int rm = extract32(insn, 16, 5);
266
- int rn = extract32(insn, 5, 5);
267
- int rd = extract32(insn, 0, 5);
268
- if (opcode == 0x17) {
269
- if (u || (size == 3 && !is_q)) {
270
- unallocated_encoding(s);
271
- return;
272
- }
273
- } else {
274
- if (size == 3) {
275
- unallocated_encoding(s);
276
- return;
277
- }
278
- }
279
- handle_simd_3same_pair(s, is_q, u, opcode, size, rn, rm, rd);
280
- break;
281
- }
282
case 0x18 ... 0x31:
283
/* floating point ops, sz[1] and U are part of opcode */
284
disas_simd_3same_float(s, insn);
285
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
286
default:
61
default:
287
disas_simd_3same_int(s, insn);
62
g_assert_not_reached();
288
break;
289
+ case 0x14: /* SMAXP, UMAXP */
290
+ case 0x15: /* SMINP, UMINP */
291
case 0x17: /* ADDP */
292
unallocated_encoding(s);
293
break;
294
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
295
index XXXXXXX..XXXXXXX 100644
296
--- a/target/arm/tcg/vec_helper.c
297
+++ b/target/arm/tcg/vec_helper.c
298
@@ -XXX,XX +XXX,XX @@ DO_3OP_PAIR(gvec_addp_s, ADD, uint32_t, H4)
299
DO_3OP_PAIR(gvec_addp_d, ADD, uint64_t, )
300
#undef ADD
301
302
+DO_3OP_PAIR(gvec_smaxp_b, MAX, int8_t, H1)
303
+DO_3OP_PAIR(gvec_smaxp_h, MAX, int16_t, H2)
304
+DO_3OP_PAIR(gvec_smaxp_s, MAX, int32_t, H4)
305
+
306
+DO_3OP_PAIR(gvec_umaxp_b, MAX, uint8_t, H1)
307
+DO_3OP_PAIR(gvec_umaxp_h, MAX, uint16_t, H2)
308
+DO_3OP_PAIR(gvec_umaxp_s, MAX, uint32_t, H4)
309
+
310
+DO_3OP_PAIR(gvec_sminp_b, MIN, int8_t, H1)
311
+DO_3OP_PAIR(gvec_sminp_h, MIN, int16_t, H2)
312
+DO_3OP_PAIR(gvec_sminp_s, MIN, int32_t, H4)
313
+
314
+DO_3OP_PAIR(gvec_uminp_b, MIN, uint8_t, H1)
315
+DO_3OP_PAIR(gvec_uminp_h, MIN, uint16_t, H2)
316
+DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
317
+
318
#undef DO_3OP_PAIR
319
320
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
321
--
63
--
322
2.34.1
64
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Convert all forms (scalar, vector, scalar indexed, vector indexed),
which allows us to remove switch table entries elsewhere.

Replace the "index" selecting between A and B with a result variable
of the proper type. This improves clarity within the function.
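
[Note: two asides in plain C. First, a behavioural sketch of FMULX
for readers new to it: identical to an ordinary multiply except that
infinity times zero returns 2.0 with the usual sign-of-product rule,
instead of FMUL's Invalid/NaN result. Flags and NaN-operand handling
are omitted; link with -lm:

#include <math.h>
#include <stdio.h>

static double fmulx(double a, double b)
{
    if ((isinf(a) && b == 0.0) || (a == 0.0 && isinf(b))) {
        return copysign(2.0, copysign(1.0, a) * copysign(1.0, b));
    }
    return a * b;
}

int main(void)
{
    printf("%g\n", fmulx(INFINITY, 0.0));    /* 2 (plain fmul: nan) */
    printf("%g\n", fmulx(-INFINITY, 0.0));   /* -2 */
    printf("%g\n", fmulx(3.0, 4.0));         /* 12 */
    return 0;
}

Second, the refactor in miniature: returning the chosen operand
directly instead of an index that is dereferenced later. Types are
illustrative:

#include <stdio.h>

static int *pick_by_index(int *a, int *b, int use_b)
{
    int which = use_b ? 1 : 0;   /* extra state, dereferenced later */

    if (which) {
        a = b;
    }
    return a;
}

static int *pick_directly(int *a, int *b, int use_b)
{
    int *ret = use_b ? b : a;    /* the result, named once */

    return ret;
}

int main(void)
{
    int x = 1, y = 2;

    printf("%d %d\n", *pick_by_index(&x, &y, 1), *pick_directly(&x, &y, 1));
    return 0;
}
]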
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20240506010403.6204-13-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241203203949.483774-12-richard.henderson@linaro.org
9
[PMM: fixed decode line error for FMULX_v]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/tcg/helper-a64.h | 8 ++
11
fpu/softfloat-parts.c.inc | 28 +++++++++++++---------------
13
target/arm/tcg/a64.decode | 45 +++++++
12
1 file changed, 13 insertions(+), 15 deletions(-)
14
target/arm/tcg/translate-a64.c | 221 +++++++++++++++++++++++++++------
15
target/arm/tcg/vec_helper.c | 39 +++---
16
4 files changed, 259 insertions(+), 54 deletions(-)
17
13
18
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/tcg/helper-a64.h
16
--- a/fpu/softfloat-parts.c.inc
21
+++ b/target/arm/tcg/helper-a64.h
17
+++ b/fpu/softfloat-parts.c.inc
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(cpye, void, env, i32, i32, i32)
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
23
DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
19
float_status *s)
24
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
20
{
25
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
21
bool have_snan = false;
26
+
22
- int cmp, which;
27
+DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
23
+ FloatPartsN *ret;
28
+DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
24
+ int cmp;
29
+DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
25
30
+
26
if (is_snan(a->cls) || is_snan(b->cls)) {
31
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
32
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
33
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
switch (s->float_2nan_prop_rule) {
34
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
30
case float_2nan_prop_s_ab:
35
index XXXXXXX..XXXXXXX 100644
31
if (have_snan) {
36
--- a/target/arm/tcg/a64.decode
32
- which = is_snan(a->cls) ? 0 : 1;
37
+++ b/target/arm/tcg/a64.decode
33
+ ret = is_snan(a->cls) ? a : b;
38
@@ -XXX,XX +XXX,XX @@
34
break;
39
#
35
}
40
36
/* fall through */
41
%rd 0:5
37
case float_2nan_prop_ab:
42
+%esz_sd 22:1 !function=plus_2
38
- which = is_nan(a->cls) ? 0 : 1;
43
+%hl 11:1 21:1
39
+ ret = is_nan(a->cls) ? a : b;
44
+%hlm 11:1 20:2
40
break;
45
41
case float_2nan_prop_s_ba:
46
&r rn
42
if (have_snan) {
47
&ri rd imm
43
- which = is_snan(b->cls) ? 1 : 0;
48
&rri_sf rd rn imm sf
44
+ ret = is_snan(b->cls) ? b : a;
49
&i imm
45
break;
50
+&rrr_e rd rn rm esz
46
}
51
+&rrx_e rd rn rm idx esz
47
/* fall through */
52
&qrr_e q rd rn esz
48
case float_2nan_prop_ba:
53
&qrrr_e q rd rn rm esz
49
- which = is_nan(b->cls) ? 1 : 0;
54
+&qrrx_e q rd rn rm idx esz
50
+ ret = is_nan(b->cls) ? b : a;
55
&qrrrr_e q rd rn rm ra esz
51
break;
56
52
case float_2nan_prop_x87:
57
+@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
53
/*
58
+@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
54
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
59
+
55
*/
60
+@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
56
if (is_snan(a->cls)) {
61
+@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
57
if (!is_snan(b->cls)) {
62
+@rrx_d ........ .. . rm:5 .... idx:1 . rn:5 rd:5 &rrx_e esz=3
58
- which = is_qnan(b->cls) ? 1 : 0;
63
+
59
+ ret = is_qnan(b->cls) ? b : a;
64
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
65
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
66
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
67
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
68
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
69
70
+@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
71
+@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
72
+
73
+@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
74
+ &qrrx_e esz=1 idx=%hlm
75
+@qrrx_s . q:1 .. .... .. . rm:5 .... . . rn:5 rd:5 \
76
+ &qrrx_e esz=2 idx=%hl
77
+@qrrx_d . q:1 .. .... .. . rm:5 .... idx:1 . rn:5 rd:5 \
78
+ &qrrx_e esz=3
79
+
80
### Data Processing - Immediate
81
82
# PC-rel addressing
83
@@ -XXX,XX +XXX,XX @@ INS_general 0 1 00 1110 000 imm:5 0 0011 1 rn:5 rd:5
84
SMOV 0 q:1 00 1110 000 imm:5 0 0101 1 rn:5 rd:5
85
UMOV 0 q:1 00 1110 000 imm:5 0 0111 1 rn:5 rd:5
86
INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
87
+
88
+### Advanced SIMD scalar three same
89
+
90
+FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
91
+FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
92
+
93
+### Advanced SIMD three same
94
+
95
+FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
96
+FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
97
+
98
+### Advanced SIMD scalar x indexed element
99
+
100
+FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
101
+FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
102
+FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
103
+
104
+### Advanced SIMD vector x indexed element
105
+
106
+FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
107
+FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
108
+FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
109
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/tcg/translate-a64.c
112
+++ b/target/arm/tcg/translate-a64.c
113
@@ -XXX,XX +XXX,XX @@ static bool trans_INS_element(DisasContext *s, arg_INS_element *a)
114
return true;
115
}
116
117
+/*
118
+ * Advanced SIMD three same
119
+ */
120
+
121
+typedef struct FPScalar {
122
+ void (*gen_h)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
123
+ void (*gen_s)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
124
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_ptr);
125
+} FPScalar;
126
+
127
+static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
128
+{
129
+ switch (a->esz) {
130
+ case MO_64:
131
+ if (fp_access_check(s)) {
132
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
133
+ TCGv_i64 t1 = read_fp_dreg(s, a->rm);
134
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
135
+ write_fp_dreg(s, a->rd, t0);
136
+ }
137
+ break;
138
+ case MO_32:
139
+ if (fp_access_check(s)) {
140
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
141
+ TCGv_i32 t1 = read_fp_sreg(s, a->rm);
142
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
143
+ write_fp_sreg(s, a->rd, t0);
144
+ }
145
+ break;
146
+ case MO_16:
147
+ if (!dc_isar_feature(aa64_fp16, s)) {
148
+ return false;
149
+ }
150
+ if (fp_access_check(s)) {
151
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
152
+ TCGv_i32 t1 = read_fp_hreg(s, a->rm);
153
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
154
+ write_fp_sreg(s, a->rd, t0);
155
+ }
156
+ break;
157
+ default:
158
+ return false;
159
+ }
160
+ return true;
161
+}
162
+
163
+static const FPScalar f_scalar_fmulx = {
164
+ gen_helper_advsimd_mulxh,
165
+ gen_helper_vfp_mulxs,
166
+ gen_helper_vfp_mulxd,
167
+};
168
+TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)
169
+
170
+static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
171
+ gen_helper_gvec_3_ptr * const fns[3])
172
+{
173
+ MemOp esz = a->esz;
174
+
175
+ switch (esz) {
176
+ case MO_64:
177
+ if (!a->q) {
178
+ return false;
179
+ }
180
+ break;
181
+ case MO_32:
182
+ break;
183
+ case MO_16:
184
+ if (!dc_isar_feature(aa64_fp16, s)) {
185
+ return false;
186
+ }
187
+ break;
188
+ default:
189
+ return false;
190
+ }
191
+ if (fp_access_check(s)) {
192
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
193
+ esz == MO_16, 0, fns[esz - 1]);
194
+ }
195
+ return true;
196
+}
197
+
198
+static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
199
+ gen_helper_gvec_fmulx_h,
200
+ gen_helper_gvec_fmulx_s,
201
+ gen_helper_gvec_fmulx_d,
202
+};
203
+TRANS(FMULX_v, do_fp3_vector, a, f_vector_fmulx)
204
+
205
+/*
206
+ * Advanced SIMD scalar/vector x indexed element
207
+ */
208
+
209
+static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
210
+{
211
+ switch (a->esz) {
212
+ case MO_64:
213
+ if (fp_access_check(s)) {
214
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
215
+ TCGv_i64 t1 = tcg_temp_new_i64();
216
+
217
+ read_vec_element(s, t1, a->rm, a->idx, MO_64);
218
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
219
+ write_fp_dreg(s, a->rd, t0);
220
+ }
221
+ break;
222
+ case MO_32:
223
+ if (fp_access_check(s)) {
224
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
225
+ TCGv_i32 t1 = tcg_temp_new_i32();
226
+
227
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_32);
228
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
229
+ write_fp_sreg(s, a->rd, t0);
230
+ }
231
+ break;
232
+ case MO_16:
233
+ if (!dc_isar_feature(aa64_fp16, s)) {
234
+ return false;
235
+ }
236
+ if (fp_access_check(s)) {
237
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
238
+ TCGv_i32 t1 = tcg_temp_new_i32();
239
+
240
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_16);
241
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
242
+ write_fp_sreg(s, a->rd, t0);
243
+ }
244
+ break;
245
+ default:
246
+ g_assert_not_reached();
247
+ }
248
+ return true;
249
+}
250
+
251
+TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
+
+static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
+                              gen_helper_gvec_3_ptr * const fns[3])
+{
+    MemOp esz = a->esz;
+
+    switch (esz) {
+    case MO_64:
+        if (!a->q) {
+            return false;
+        }
+        break;
+    case MO_32:
+        break;
+    case MO_16:
+        if (!dc_isar_feature(aa64_fp16, s)) {
+            return false;
+        }
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    if (fp_access_check(s)) {
+        gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
+                          esz == MO_16, a->idx, fns[esz - 1]);
+    }
+    return true;
+}
+
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
+    gen_helper_gvec_fmulx_idx_h,
+    gen_helper_gvec_fmulx_idx_s,
+    gen_helper_gvec_fmulx_idx_d,
+};
+TRANS(FMULX_vi, do_fp3_vector_idx, a, f_vector_idx_fmulx)
+
+
 /* Shift a TCGv src by TCGv shift_amount, put result in dst.
  * Note that it is the caller's responsibility to ensure that the
  * shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
         case 0x1a: /* FADD */
             gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
-        case 0x1b: /* FMULX */
-            gen_helper_vfp_mulxd(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
         case 0x1c: /* FCMEQ */
             gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
         default:
+        case 0x1b: /* FMULX */
             g_assert_not_reached();
         }

@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
         case 0x1a: /* FADD */
             gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
-        case 0x1b: /* FMULX */
-            gen_helper_vfp_mulxs(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
         case 0x1c: /* FCMEQ */
             gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
         default:
+        case 0x1b: /* FMULX */
             g_assert_not_reached();
         }

@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
     /* Floating point: U, size[1] and opcode indicate operation */
     int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
     switch (fpopcode) {
-    case 0x1b: /* FMULX */
     case 0x1f: /* FRECPS */
     case 0x3f: /* FRSQRTS */
     case 0x5d: /* FACGE */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
     case 0x7a: /* FABD */
         break;
     default:
+    case 0x1b: /* FMULX */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
     TCGv_i32 tcg_res;

     switch (fpopcode) {
-    case 0x03: /* FMULX */
     case 0x04: /* FCMEQ (reg) */
     case 0x07: /* FRECPS */
     case 0x0f: /* FRSQRTS */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
     case 0x1d: /* FACGT */
         break;
     default:
+    case 0x03: /* FMULX */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
     tcg_res = tcg_temp_new_i32();

     switch (fpopcode) {
-    case 0x03: /* FMULX */
-        gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
-        break;
     case 0x04: /* FCMEQ (reg) */
         gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
         break;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
         gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
         break;
     default:
+    case 0x03: /* FMULX */
         g_assert_not_reached();
     }

@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
         handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
                                rn, rm, rd);
         return;
-    case 0x1b: /* FMULX */
     case 0x1f: /* FRECPS */
     case 0x3f: /* FRSQRTS */
     case 0x5d: /* FACGE */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
         return;

     default:
+    case 0x1b: /* FMULX */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
     case 0x0: /* FMAXNM */
     case 0x1: /* FMLA */
     case 0x2: /* FADD */
-    case 0x3: /* FMULX */
     case 0x4: /* FCMEQ */
     case 0x6: /* FMAX */
     case 0x7: /* FRECPS */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         pairwise = true;
         break;
     default:
+    case 0x3: /* FMULX */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         case 0x2: /* FADD */
             gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
-        case 0x3: /* FMULX */
-            gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
-            break;
         case 0x4: /* FCMEQ */
             gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
             break;
         default:
+        case 0x3: /* FMULX */
             g_assert_not_reached();
         }

@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x01: /* FMLA */
     case 0x05: /* FMLS */
     case 0x09: /* FMUL */
-    case 0x19: /* FMULX */
         is_fp = 1;
         break;
     case 0x1d: /* SQRDMLAH */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* is_fp, but we pass tcg_env not fp_status. */
         break;
     default:
+    case 0x19: /* FMULX */
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             case 0x09: /* FMUL */
                 gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
                 break;
-            case 0x19: /* FMULX */
-                gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
-                break;
             default:
+            case 0x19: /* FMULX */
                 g_assert_not_reached();
             }

@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }
             break;
-        case 0x19: /* FMULX */
-            switch (size) {
-            case 1:
-                if (is_scalar) {
-                    gen_helper_advsimd_mulxh(tcg_res, tcg_op,
-                                             tcg_idx, fpst);
-                } else {
-                    gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
-                                              tcg_idx, fpst);
-                }
-                break;
-            case 2:
-                gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
-                break;
-            default:
-                g_assert_not_reached();
-            }
-            break;
         case 0x0c: /* SQDMULH */
             if (size == 1) {
                 gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             }
             break;
         default:
+        case 0x19: /* FMULX */
             g_assert_not_reached();
         }

diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
 DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)

 #ifdef TARGET_AARCH64
+DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
+DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
+DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)

 DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
 DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
@@ -XXX,XX +XXX,XX @@ DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, H8)

 #undef DO_MLA_IDX

-#define DO_FMUL_IDX(NAME, ADD, TYPE, H) \
+#define DO_FMUL_IDX(NAME, ADD, MUL, TYPE, H) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
 { \
     intptr_t i, j, oprsz = simd_oprsz(desc); \
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
     for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
         TYPE mm = m[H(i + idx)]; \
         for (j = 0; j < segment; j++) { \
-            d[i + j] = TYPE##_##ADD(d[i + j], \
-                                    TYPE##_mul(n[i + j], mm, stat), stat); \
+            d[i + j] = ADD(d[i + j], MUL(n[i + j], mm, stat), stat); \
         } \
     } \
     clear_tail(d, oprsz, simd_maxsz(desc)); \
 }

-#define float16_nop(N, M, S) (M)
-#define float32_nop(N, M, S) (M)
-#define float64_nop(N, M, S) (M)
+#define nop(N, M, S) (M)

-DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16, H2)
-DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32, H4)
-DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64, H8)
+DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64_mul, float64, H8)
+
+#ifdef TARGET_AARCH64
+
+DO_FMUL_IDX(gvec_fmulx_idx_h, nop, helper_advsimd_mulxh, float16, H2)
+DO_FMUL_IDX(gvec_fmulx_idx_s, nop, helper_vfp_mulxs, float32, H4)
+DO_FMUL_IDX(gvec_fmulx_idx_d, nop, helper_vfp_mulxd, float64, H8)
+
+#endif
+
+#undef nop

 /*
  * Non-fused multiply-accumulate operations, for Neon. NB that unlike
  * the fused ops below they assume accumulate both from and into Vd.
  */
-DO_FMUL_IDX(gvec_fmla_nf_idx_h, add, float16, H2)
-DO_FMUL_IDX(gvec_fmla_nf_idx_s, add, float32, H4)
-DO_FMUL_IDX(gvec_fmls_nf_idx_h, sub, float16, H2)
-DO_FMUL_IDX(gvec_fmls_nf_idx_s, sub, float32, H4)
+DO_FMUL_IDX(gvec_fmla_nf_idx_h, float16_add, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmla_nf_idx_s, float32_add, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmls_nf_idx_h, float16_sub, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmls_nf_idx_s, float32_sub, float32_mul, float32, H4)

-#undef float16_nop
-#undef float32_nop
-#undef float64_nop
 #undef DO_FMUL_IDX

 #define DO_FMLA_IDX(NAME, TYPE, H) \
--
2.34.1
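The last hunk above reworks DO_FMUL_IDX so that the multiply operation, like
the accumulate operation, is a macro parameter; that is what lets the new
AArch64-only gvec_fmulx_idx_* helpers reuse the same loop with the mulx
helpers plugged in (FMULX differs from FMUL only in returning +/-2.0 instead
of the default NaN for 0 * Inf). A minimal stand-alone sketch of the same
trick, with invented names (my_mul, my_add, fmul_idx, fmla_idx) and none of
QEMU's real types:

#include <stdio.h>

typedef struct { int dummy; } stat_t;          /* stand-in for float_status */

static float my_mul(float n, float m, stat_t *s) { (void)s; return n * m; }
static float my_add(float d, float p, stat_t *s) { (void)s; return d + p; }

/* When no accumulation is wanted, ADD is a macro that discards the old
 * destination value and just returns the product. */
#define nop(N, M, S) (M)

#define DO_FMUL_IDX(NAME, ADD, MUL, TYPE)                           \
    static void NAME(TYPE *d, TYPE *n, TYPE m, int len, stat_t *s)  \
    {                                                               \
        for (int i = 0; i < len; i++) {                             \
            d[i] = ADD(d[i], MUL(n[i], m, s), s);                   \
        }                                                           \
    }

DO_FMUL_IDX(fmul_idx, nop, my_mul, float)      /* d[i] = n[i] * m  */
DO_FMUL_IDX(fmla_idx, my_add, my_mul, float)   /* d[i] += n[i] * m */

int main(void)
{
    float d[2] = { 1, 1 }, n[2] = { 2, 3 };
    fmla_idx(d, n, 10.0f, 2, NULL);
    printf("%g %g\n", d[0], d[1]);             /* prints: 21 31 */
    fmul_idx(d, n, 10.0f, 2, NULL);
    printf("%g %g\n", d[0], d[1]);             /* prints: 20 30 */
    return 0;
}

Swapping a different MUL in, exactly as the patch does with the mulx helpers,
costs nothing at runtime because everything is expanded at compile time.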
                 break;
             }
         } else if (is_qnan(a->cls)) {
             if (is_snan(b->cls) || !is_qnan(b->cls)) {
-                which = 0;
+                ret = a;
                 break;
             }
         } else {
-            which = 1;
+            ret = b;
             break;
         }
         cmp = frac_cmp(a, b);
         if (cmp == 0) {
             cmp = a->sign < b->sign;
         }
-        which = cmp > 0 ? 0 : 1;
+        ret = cmp > 0 ? a : b;
         break;
     default:
         g_assert_not_reached();
     }

-    if (which) {
-        a = b;
+    if (is_snan(ret->cls)) {
+        parts_silence_nan(ret, s);
     }
-    if (is_snan(a->cls)) {
-        parts_silence_nan(a, s);
-    }
-    return a;
+    return ret;
 }

 static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
--
2.34.1
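For reference, the rewritten tail above is the x87-style two-NaN propagation
rule, now returning a pointer to the winning operand instead of a 0/1 index:
a QNaN beats an SNaN, any NaN beats a non-NaN, and between two NaNs of the
same kind the larger significand wins, with ties going to the operand with
the positive sign. A rough stand-alone model of the rule (illustrative only;
the struct and the pick_x87 name are invented, the real code operates on
FloatPartsN, and the SNaN-first arm falls above this excerpt):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool snan, qnan;           /* at most one set; neither => not a NaN */
    bool sign;                 /* true for negative */
    uint64_t frac;
} P;

static const P *pick_x87(const P *a, const P *b)
{
    int cmp;

    if (a->snan) {
        if (!b->snan) {
            return b->qnan ? b : a;       /* QNaN beats SNaN */
        }
    } else if (a->qnan) {
        if (b->snan || !b->qnan) {
            return a;                     /* QNaN beats SNaN or non-NaN */
        }
    } else {
        return b;                         /* a is not a NaN at all */
    }
    /* same kind of NaN on both sides: compare significands */
    cmp = (a->frac > b->frac) - (a->frac < b->frac);
    if (cmp == 0) {
        cmp = a->sign < b->sign;          /* tie: prefer positive sign */
    }
    return cmp > 0 ? a : b;
}

int main(void)
{
    P q = { .qnan = true, .frac = 1 };
    P s = { .snan = true, .frac = 9 };
    printf("%s\n", pick_x87(&q, &s) == &q ? "QNaN wins" : "SNaN wins");
    return 0;
}

The diff then silences the result if it is still a signaling NaN, which is
what the parts_silence_nan(ret, s) call does.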
From: Dorjoy Chowdhury <dorjoychy111@gmail.com>

The value of the mp-affinity property being set in npcm7xx_realize is
always the same as the default value it would have when arm_cpu_realizefn
is called if the property is not set here, so there is no need to set
the property value in the npcm7xx_realize function.

Signed-off-by: Dorjoy Chowdhury <dorjoychy111@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240504141733.14813-1-dorjoychy111@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/npcm7xx.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)

     /* CPUs */
     for (i = 0; i < nc->num_cpus; i++) {
-        object_property_set_int(OBJECT(&s->cpu[i]), "mp-affinity",
-                                arm_build_mp_affinity(i, NPCM7XX_MAX_NUM_CPUS),
-                                &error_abort);
         object_property_set_int(OBJECT(&s->cpu[i]), "reset-cbar",
                                 NPCM7XX_GIC_CPU_IF_ADDR, &error_abort);
         object_property_set_bool(OBJECT(&s->cpu[i]), "reset-hivecs", true,
--
2.34.1
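The reasoning in the commit message can be made concrete with a little
arithmetic, under two assumptions I believe hold here (NPCM7xx has at most
two CPUs, i.e. NPCM7XX_MAX_NUM_CPUS == 2, and arm_cpu_realizefn defaults to
a cluster size of 8 when the property is left unset). QEMU packs
Aff1 = idx / clustersz into bits [15:8] and Aff0 = idx % clustersz into
bits [7:0], roughly:

#include <assert.h>
#include <stdint.h>

/* Simplified model of arm_build_mp_affinity() */
static uint64_t mp_aff(int idx, int clustersz)
{
    return (uint64_t)(idx / clustersz) << 8 | (uint64_t)(idx % clustersz);
}

int main(void)
{
    /* For the only two possible CPU indexes, the value the removed code
     * set (cluster size 2) and the realize-time default (cluster size 8)
     * agree: both are just the index itself. */
    for (int i = 0; i < 2; i++) {
        assert(mp_aff(i, 2) == mp_aff(i, 8));
        assert(mp_aff(i, 2) == (uint64_t)i);
    }
    return 0;
}

Both divisions truncate to zero for idx < clustersz, so Aff1 is 0 either way
and Aff0 is the raw index; the deleted property write was therefore a no-op.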
From: Leif Lindholm <quic_llindhol@quicinc.com>

I'm migrating to Qualcomm's new open source email infrastructure, so
update my email address, and update the mailmap to match.

Signed-off-by: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241205114047.1125842-1-leif.lindholm@oss.qualcomm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 +-
 .mailmap    | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
 SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <quic_llindhol@quicinc.com>
+R: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
 R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
 Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
 Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
 Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
 Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
--
2.34.1
From: Zenghui Yu <zenghui.yu@linux.dev>

We wrongly encoded ID_AA64PFR1_EL1 using {3,0,0,4,2} in hvf_sreg_match[] so
we fail to get the expected ARMCPRegInfo from the cp_regs hash table with the
wrong key.

Fix it with the correct encoding {3,0,0,4,1}. With that fixed, the Linux
guest can properly detect FEAT_SSBS2 on my M1 HW.

All DBG{B,W}{V,C}R_EL1 registers are also wrongly encoded with op0 == 14.
It happens to work because HVF_SYSREG(CRn, CRm, 14, op1, op2) is equal to
HVF_SYSREG(CRn, CRm, 2, op1, op2), by definition. But we shouldn't rely on
it.

Cc: qemu-stable@nongnu.org
Fixes: a1477da3ddeb ("hvf: Add Apple Silicon support")
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Message-id: 20240503153453.54389-1-zenghui.yu@linux.dev
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/hvf/hvf.c | 130 +++++++++++++++++++++----------------------
 1 file changed, 65 insertions(+), 65 deletions(-)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ struct hvf_sreg_match {
 };

 static struct hvf_sreg_match hvf_sreg_match[] = {
-    { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 2, 0, 7) },

-    { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
-    { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
-    { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
-    { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
+    { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 2, 0, 4) },
+    { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 2, 0, 5) },
+    { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 2, 0, 6) },
+    { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 2, 0, 7) },

 #ifdef SYNC_NO_RAW_REGS
 /*
@@ -XXX,XX +XXX,XX @@ static struct hvf_sreg_match hvf_sreg_match[] = {
     { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
     { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
 #endif
-    { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
+    { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 1) },
     { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
     { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
     { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
--
2.34.1


From: Vikram Garhwal <vikram.garhwal@bytedance.com>

Previously, the maintainer role was paused due to an inactive email
address; see commit c009d715721861984c4987bcc78b7ee183e86d75.

Signed-off-by: Vikram Garhwal <vikram.garhwal@bytedance.com>
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Message-id: 20241204184205.12952-1-vikram.garhwal@bytedance.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/fuzz-sb16-test.c

 Xilinx CAN
 M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
 S: Maintained
 F: hw/net/can/xlnx-*
 F: include/hw/net/xlnx-*
@@ -XXX,XX +XXX,XX @@ F: include/hw/rx/
 CAN bus subsystem and hardware
 M: Pavel Pisa <pisa@cmp.felk.cvut.cz>
 M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
 S: Maintained
 W: https://canbus.pages.fel.cvut.cz/
 F: net/can/*
--
2.34.1
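A note on the "by definition" remark in the hvf patch above: op0 in the
AArch64 system-register encoding is a two-bit field (valid values 0..3),
and 14 and 2 agree in their low two bits, so any key construction that
keeps only those two bits of op0 cannot distinguish the two spellings.
A sketch with a purely hypothetical MAKE_KEY packing (the real one is
ENCODE_AA64_CP_REG; its exact layout is not reproduced here):

#include <assert.h>

/* Hypothetical packing, for illustration only */
#define MAKE_KEY(crn, crm, op0, op1, op2) \
    ((((op0) & 3) << 14) | ((op1) << 11) | ((crn) << 7) | ((crm) << 3) | (op2))

int main(void)
{
    /* 14 is 0b1110 and 2 is 0b10: same low two bits, same key */
    assert(MAKE_KEY(0, 0, 14, 0, 4) == MAKE_KEY(0, 0, 2, 0, 4));
    return 0;
}

That collision is presumably why the wrong debug-register encodings kept
working, while the ID_AA64PFR1_EL1 typo, which changed op2 rather than op0,
produced a genuinely different key and a failed lookup.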