First arm pullreq of the cycle; this is mostly my softfloat NaN
handling series. (Lots more in my to-review queue, but I don't
like pullreqs growing too close to a hundred patches at a time :-))

thanks
-- PMM

The following changes since commit 97f2796a3736ed37a1b85dc1c76a6c45b829dd17:

  Open 10.0 development tree (2024-12-10 17:41:17 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241211

for you to fetch changes up to 1abe28d519239eea5cf9620bb13149423e5665f8:

  MAINTAINERS: Add correct email address for Vikram Garhwal (2024-12-11 15:31:09 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/net/lan9118: Extract PHY model, reuse with imx_fec, fix bugs
 * fpu: Make muladd NaN handling runtime-selected, not compile-time
 * fpu: Make default NaN pattern runtime-selected, not compile-time
 * fpu: Minor NaN-related cleanups
 * MAINTAINERS: email address updates

----------------------------------------------------------------
Bernhard Beschow (5):
      hw/net/lan9118: Extract lan9118_phy
      hw/net/lan9118_phy: Reuse in imx_fec and consolidate implementations
      hw/net/lan9118_phy: Fix off-by-one error in MII_ANLPAR register
      hw/net/lan9118_phy: Reuse MII constants
      hw/net/lan9118_phy: Add missing 100 mbps full duplex advertisement

Leif Lindholm (1):
      MAINTAINERS: update email address for Leif Lindholm

Peter Maydell (54):
      fpu: handle raising Invalid for infzero in pick_nan_muladd
      fpu: Check for default_nan_mode before calling pickNaNMulAdd
      softfloat: Allow runtime choice of inf * 0 + NaN result
      tests/fp: Explicitly set inf-zero-nan rule
      target/arm: Set FloatInfZeroNaNRule explicitly
      target/s390: Set FloatInfZeroNaNRule explicitly
      target/ppc: Set FloatInfZeroNaNRule explicitly
      target/mips: Set FloatInfZeroNaNRule explicitly
      target/sparc: Set FloatInfZeroNaNRule explicitly
      target/xtensa: Set FloatInfZeroNaNRule explicitly
      target/x86: Set FloatInfZeroNaNRule explicitly
      target/loongarch: Set FloatInfZeroNaNRule explicitly
      target/hppa: Set FloatInfZeroNaNRule explicitly
      softfloat: Pass have_snan to pickNaNMulAdd
      softfloat: Allow runtime choice of NaN propagation for muladd
      tests/fp: Explicitly set 3-NaN propagation rule
      target/arm: Set Float3NaNPropRule explicitly
      target/loongarch: Set Float3NaNPropRule explicitly
      target/ppc: Set Float3NaNPropRule explicitly
      target/s390x: Set Float3NaNPropRule explicitly
      target/sparc: Set Float3NaNPropRule explicitly
      target/mips: Set Float3NaNPropRule explicitly
      target/xtensa: Set Float3NaNPropRule explicitly
      target/i386: Set Float3NaNPropRule explicitly
      target/hppa: Set Float3NaNPropRule explicitly
      fpu: Remove use_first_nan field from float_status
      target/m68k: Don't pass NULL float_status to floatx80_default_nan()
      softfloat: Create floatx80 default NaN from parts64_default_nan
      target/loongarch: Use normal float_status in fclass_s and fclass_d helpers
      target/m68k: In frem helper, initialize local float_status from env->fp_status
      target/m68k: Init local float_status from env fp_status in gdb get/set reg
      target/sparc: Initialize local scratch float_status from env->fp_status
      target/ppc: Use env->fp_status in helper_compute_fprf functions
      fpu: Allow runtime choice of default NaN value
      tests/fp: Set default NaN pattern explicitly
      target/microblaze: Set default NaN pattern explicitly
      target/i386: Set default NaN pattern explicitly
      target/hppa: Set default NaN pattern explicitly
      target/alpha: Set default NaN pattern explicitly
      target/arm: Set default NaN pattern explicitly
      target/loongarch: Set default NaN pattern explicitly
      target/m68k: Set default NaN pattern explicitly
      target/mips: Set default NaN pattern explicitly
      target/openrisc: Set default NaN pattern explicitly
      target/ppc: Set default NaN pattern explicitly
      target/sh4: Set default NaN pattern explicitly
      target/rx: Set default NaN pattern explicitly
      target/s390x: Set default NaN pattern explicitly
      target/sparc: Set default NaN pattern explicitly
      target/xtensa: Set default NaN pattern explicitly
      target/hexagon: Set default NaN pattern explicitly
      target/riscv: Set default NaN pattern explicitly
      target/tricore: Set default NaN pattern explicitly
      fpu: Remove default handling for dnan_pattern

Richard Henderson (11):
      target/arm: Copy entire float_status in is_ebf
      softfloat: Inline pickNaNMulAdd
      softfloat: Use goto for default nan case in pick_nan_muladd
      softfloat: Remove which from parts_pick_nan_muladd
      softfloat: Pad array size in pick_nan_muladd
      softfloat: Move propagateFloatx80NaN to softfloat.c
      softfloat: Use parts_pick_nan in propagateFloatx80NaN
      softfloat: Inline pickNaN
      softfloat: Share code between parts_pick_nan cases
      softfloat: Sink frac_cmp in parts_pick_nan until needed
      softfloat: Replace WHICH with RET in parts_pick_nan

Vikram Garhwal (1):
      MAINTAINERS: Add correct email address for Vikram Garhwal

 MAINTAINERS | 4 +-
 include/fpu/softfloat-helpers.h | 38 +++-
 include/fpu/softfloat-types.h | 89 +++++++-
 include/hw/net/imx_fec.h | 9 +-
 include/hw/net/lan9118_phy.h | 37 ++++
 include/hw/net/mii.h | 6 +
 target/mips/fpu_helper.h | 20 ++
 target/sparc/helper.h | 4 +-
 fpu/softfloat.c | 19 ++
 hw/net/imx_fec.c | 146 ++------------
 hw/net/lan9118.c | 137 ++-----------
 hw/net/lan9118_phy.c | 222 ++++++++++++++++++++
 linux-user/arm/nwfpe/fpa11.c | 5 +
 target/alpha/cpu.c | 2 +
 target/arm/cpu.c | 10 +
 target/arm/tcg/vec_helper.c | 20 +-
 target/hexagon/cpu.c | 2 +
 target/hppa/fpu_helper.c | 12 ++
 target/i386/tcg/fpu_helper.c | 12 ++
 target/loongarch/tcg/fpu_helper.c | 14 +-
 target/m68k/cpu.c | 14 +-
 target/m68k/fpu_helper.c | 6 +-
 target/m68k/helper.c | 6 +-
 target/microblaze/cpu.c | 2 +
 target/mips/msa.c | 10 +
 target/openrisc/cpu.c | 2 +
 target/ppc/cpu_init.c | 19 ++
 target/ppc/fpu_helper.c | 3 +-
 target/riscv/cpu.c | 2 +
 target/rx/cpu.c | 2 +
 target/s390x/cpu.c | 5 +
 target/sh4/cpu.c | 2 +
 target/sparc/cpu.c | 6 +
 target/sparc/fop_helper.c | 8 +-
 target/sparc/translate.c | 4 +-
 target/tricore/helper.c | 2 +
 target/xtensa/cpu.c | 4 +
 target/xtensa/fpu_helper.c | 3 +-
 tests/fp/fp-bench.c | 7 +
 tests/fp/fp-test-log2.c | 1 +
 tests/fp/fp-test.c | 7 +
 fpu/softfloat-parts.c.inc | 152 +++++++++++---
 fpu/softfloat-specialize.c.inc | 412 ++------------------------------------
 .mailmap | 5 +-
 hw/net/Kconfig | 5 +
 hw/net/meson.build | 1 +
 hw/net/trace-events | 10 +-
 47 files changed, 778 insertions(+), 730 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c
From: Bernhard Beschow <shentey@gmail.com>

A very similar implementation of the same device exists in imx_fec. Prepare for
a common implementation by extracting a device model into its own files.

Some migration state has been moved into the new device model which breaks
migration compatibility for the following machines:
* smdkc210
* realview-*
* vexpress-*
* kzm
* mps2-*

While breaking migration ABI, fix the size of the MII registers to be 16 bit,
as defined by IEEE 802.3u.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-2-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/lan9118_phy.h | 37 ++++++++
 hw/net/lan9118.c | 137 +++++-----------------------
 hw/net/lan9118_phy.c | 169 +++++++++++++++++++++++++++++++++++
 hw/net/Kconfig | 4 +
 hw/net/meson.build | 1 +
 5 files changed, 233 insertions(+), 115 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c

14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
diff --git a/include/hw/net/lan9118_phy.h b/include/hw/net/lan9118_phy.h
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/include/hw/net/lan9118_phy.h
37
@@ -XXX,XX +XXX,XX @@
38
+/*
39
+ * SMSC LAN9118 PHY emulation
40
+ *
41
+ * Copyright (c) 2009 CodeSourcery, LLC.
42
+ * Written by Paul Brook
43
+ *
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
45
+ * See the COPYING file in the top-level directory.
46
+ */
47
+
48
+#ifndef HW_NET_LAN9118_PHY_H
49
+#define HW_NET_LAN9118_PHY_H
50
+
51
+#include "qom/object.h"
52
+#include "hw/sysbus.h"
53
+
54
+#define TYPE_LAN9118_PHY "lan9118-phy"
55
+OBJECT_DECLARE_SIMPLE_TYPE(Lan9118PhyState, LAN9118_PHY)
56
+
57
+typedef struct Lan9118PhyState {
58
+ SysBusDevice parent_obj;
59
+
60
+ uint16_t status;
61
+ uint16_t control;
62
+ uint16_t advertise;
63
+ uint16_t ints;
64
+ uint16_t int_mask;
65
+ qemu_irq irq;
66
+ bool link_down;
67
+} Lan9118PhyState;
68
+
69
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down);
70
+void lan9118_phy_reset(Lan9118PhyState *s);
71
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg);
72
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val);
73
+
74
+#endif
75
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
15
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-a64.c
77
--- a/hw/net/lan9118.c
17
+++ b/target/arm/translate-a64.c
78
+++ b/hw/net/lan9118.c
18
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
79
@@ -XXX,XX +XXX,XX @@
19
return translator_use_goto_tb(&s->base, dest);
80
#include "net/net.h"
81
#include "net/eth.h"
82
#include "hw/irq.h"
83
+#include "hw/net/lan9118_phy.h"
84
#include "hw/net/lan9118.h"
85
#include "hw/ptimer.h"
86
#include "hw/qdev-properties.h"
87
@@ -XXX,XX +XXX,XX @@ do { printf("lan9118: " fmt , ## __VA_ARGS__); } while (0)
88
#define MAC_CR_RXEN 0x00000004
89
#define MAC_CR_RESERVED 0x7f404213
90
91
-#define PHY_INT_ENERGYON 0x80
92
-#define PHY_INT_AUTONEG_COMPLETE 0x40
93
-#define PHY_INT_FAULT 0x20
94
-#define PHY_INT_DOWN 0x10
95
-#define PHY_INT_AUTONEG_LP 0x08
96
-#define PHY_INT_PARFAULT 0x04
97
-#define PHY_INT_AUTONEG_PAGE 0x02
98
-
99
#define GPT_TIMER_EN 0x20000000
100
101
/*
102
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
103
uint32_t mac_mii_data;
104
uint32_t mac_flow;
105
106
- uint32_t phy_status;
107
- uint32_t phy_control;
108
- uint32_t phy_advertise;
109
- uint32_t phy_int;
110
- uint32_t phy_int_mask;
111
+ Lan9118PhyState mii;
112
+ IRQState mii_irq;
113
114
int32_t eeprom_writable;
115
uint8_t eeprom[128];
116
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
117
118
static const VMStateDescription vmstate_lan9118 = {
119
.name = "lan9118",
120
- .version_id = 2,
121
- .minimum_version_id = 1,
122
+ .version_id = 3,
123
+ .minimum_version_id = 3,
124
.fields = (const VMStateField[]) {
125
VMSTATE_PTIMER(timer, lan9118_state),
126
VMSTATE_UINT32(irq_cfg, lan9118_state),
127
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118 = {
128
VMSTATE_UINT32(mac_mii_acc, lan9118_state),
129
VMSTATE_UINT32(mac_mii_data, lan9118_state),
130
VMSTATE_UINT32(mac_flow, lan9118_state),
131
- VMSTATE_UINT32(phy_status, lan9118_state),
132
- VMSTATE_UINT32(phy_control, lan9118_state),
133
- VMSTATE_UINT32(phy_advertise, lan9118_state),
134
- VMSTATE_UINT32(phy_int, lan9118_state),
135
- VMSTATE_UINT32(phy_int_mask, lan9118_state),
136
VMSTATE_INT32(eeprom_writable, lan9118_state),
137
VMSTATE_UINT8_ARRAY(eeprom, lan9118_state, 128),
138
VMSTATE_INT32(tx_fifo_size, lan9118_state),
139
@@ -XXX,XX +XXX,XX @@ static void lan9118_reload_eeprom(lan9118_state *s)
140
lan9118_mac_changed(s);
20
}
141
}
21
142
22
-static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
143
-static void phy_update_irq(lan9118_state *s)
23
+static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
144
+static void lan9118_update_irq(void *opaque, int n, int level)
24
{
145
{
25
+ uint64_t dest = s->pc_curr + diff;
146
- if (s->phy_int & s->phy_int_mask) {
26
+
147
+ lan9118_state *s = opaque;
27
if (use_goto_tb(s, dest)) {
148
+
28
tcg_gen_goto_tb(n);
149
+ if (level) {
29
gen_a64_set_pc_im(dest);
150
s->int_sts |= PHY_INT;
30
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
151
} else {
31
*/
152
s->int_sts &= ~PHY_INT;
32
static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
153
@@ -XXX,XX +XXX,XX @@ static void phy_update_irq(lan9118_state *s)
154
lan9118_update(s);
155
}
156
157
-static void phy_update_link(lan9118_state *s)
158
-{
159
- /* Autonegotiation status mirrors link status. */
160
- if (qemu_get_queue(s->nic)->link_down) {
161
- s->phy_status &= ~0x0024;
162
- s->phy_int |= PHY_INT_DOWN;
163
- } else {
164
- s->phy_status |= 0x0024;
165
- s->phy_int |= PHY_INT_ENERGYON;
166
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
167
- }
168
- phy_update_irq(s);
169
-}
170
-
171
static void lan9118_set_link(NetClientState *nc)
33
{
172
{
34
- uint64_t addr = s->pc_curr + sextract32(insn, 0, 26) * 4;
173
- phy_update_link(qemu_get_nic_opaque(nc));
35
+ int64_t diff = sextract32(insn, 0, 26) * 4;
174
-}
36
175
-
37
if (insn & (1U << 31)) {
176
-static void phy_reset(lan9118_state *s)
38
/* BL Branch with link */
177
-{
39
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
178
- s->phy_status = 0x7809;
40
179
- s->phy_control = 0x3000;
41
/* B Branch / BL Branch with link */
180
- s->phy_advertise = 0x01e1;
42
reset_btype(s);
181
- s->phy_int_mask = 0;
43
- gen_goto_tb(s, 0, addr);
182
- s->phy_int = 0;
44
+ gen_goto_tb(s, 0, diff);
183
- phy_update_link(s);
184
+ lan9118_phy_update_link(&LAN9118(qemu_get_nic_opaque(nc))->mii,
185
+ nc->link_down);
45
}
186
}
46
187
47
/* Compare and branch (immediate)
188
static void lan9118_reset(DeviceState *d)
48
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
189
@@ -XXX,XX +XXX,XX @@ static void lan9118_reset(DeviceState *d)
49
static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
190
s->read_word_n = 0;
50
{
191
s->write_word_n = 0;
51
unsigned int sf, op, rt;
192
52
- uint64_t addr;
193
- phy_reset(s);
53
+ int64_t diff;
194
-
54
TCGLabel *label_match;
195
s->eeprom_writable = 0;
55
TCGv_i64 tcg_cmp;
196
lan9118_reload_eeprom(s);
56
57
sf = extract32(insn, 31, 1);
58
op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
59
rt = extract32(insn, 0, 5);
60
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
61
+ diff = sextract32(insn, 5, 19) * 4;
62
63
tcg_cmp = read_cpu_reg(s, rt, sf);
64
label_match = gen_new_label();
65
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
66
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
67
tcg_cmp, 0, label_match);
68
69
- gen_goto_tb(s, 0, s->base.pc_next);
70
+ gen_goto_tb(s, 0, 4);
71
gen_set_label(label_match);
72
- gen_goto_tb(s, 1, addr);
73
+ gen_goto_tb(s, 1, diff);
74
}
197
}
75
198
@@ -XXX,XX +XXX,XX @@ static void do_tx_packet(lan9118_state *s)
76
/* Test and branch (immediate)
199
uint32_t status;
77
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
200
78
static void disas_test_b_imm(DisasContext *s, uint32_t insn)
201
/* FIXME: Honor TX disable, and allow queueing of packets. */
79
{
202
- if (s->phy_control & 0x4000) {
80
unsigned int bit_pos, op, rt;
203
+ if (s->mii.control & 0x4000) {
81
- uint64_t addr;
204
/* This assumes the receive routine doesn't touch the VLANClient. */
82
+ int64_t diff;
205
qemu_receive_packet(qemu_get_queue(s->nic), s->txp->data, s->txp->len);
83
TCGLabel *label_match;
84
TCGv_i64 tcg_cmp;
85
86
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
87
op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
88
- addr = s->pc_curr + sextract32(insn, 5, 14) * 4;
89
+ diff = sextract32(insn, 5, 14) * 4;
90
rt = extract32(insn, 0, 5);
91
92
tcg_cmp = tcg_temp_new_i64();
93
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
94
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
95
tcg_cmp, 0, label_match);
96
tcg_temp_free_i64(tcg_cmp);
97
- gen_goto_tb(s, 0, s->base.pc_next);
98
+ gen_goto_tb(s, 0, 4);
99
gen_set_label(label_match);
100
- gen_goto_tb(s, 1, addr);
101
+ gen_goto_tb(s, 1, diff);
102
}
103
104
/* Conditional branch (immediate)
105
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
106
static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
107
{
108
unsigned int cond;
109
- uint64_t addr;
110
+ int64_t diff;
111
112
if ((insn & (1 << 4)) || (insn & (1 << 24))) {
113
unallocated_encoding(s);
114
return;
115
}
116
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
117
+ diff = sextract32(insn, 5, 19) * 4;
118
cond = extract32(insn, 0, 4);
119
120
reset_btype(s);
121
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
122
/* genuinely conditional branches */
123
TCGLabel *label_match = gen_new_label();
124
arm_gen_test_cc(cond, label_match);
125
- gen_goto_tb(s, 0, s->base.pc_next);
126
+ gen_goto_tb(s, 0, 4);
127
gen_set_label(label_match);
128
- gen_goto_tb(s, 1, addr);
129
+ gen_goto_tb(s, 1, diff);
130
} else {
206
} else {
131
/* 0xe and 0xf are both "always" conditions */
207
@@ -XXX,XX +XXX,XX @@ static void tx_fifo_push(lan9118_state *s, uint32_t val)
132
- gen_goto_tb(s, 0, addr);
133
+ gen_goto_tb(s, 0, diff);
134
}
208
}
135
}
209
}
136
210
137
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
211
-static uint32_t do_phy_read(lan9118_state *s, int reg)
138
* any pending interrupts immediately.
212
-{
139
*/
213
- uint32_t val;
140
reset_btype(s);
214
-
141
- gen_goto_tb(s, 0, s->base.pc_next);
215
- switch (reg) {
142
+ gen_goto_tb(s, 0, 4);
216
- case 0: /* Basic Control */
143
return;
217
- return s->phy_control;
144
218
- case 1: /* Basic Status */
145
case 7: /* SB */
219
- return s->phy_status;
146
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
220
- case 2: /* ID1 */
147
* MB and end the TB instead.
221
- return 0x0007;
148
*/
222
- case 3: /* ID2 */
149
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
223
- return 0xc0d1;
150
- gen_goto_tb(s, 0, s->base.pc_next);
224
- case 4: /* Auto-neg advertisement */
151
+ gen_goto_tb(s, 0, 4);
225
- return s->phy_advertise;
152
return;
226
- case 5: /* Auto-neg Link Partner Ability */
153
227
- return 0x0f71;
154
default:
228
- case 6: /* Auto-neg Expansion */
155
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
229
- return 1;
156
switch (dc->base.is_jmp) {
230
- /* TODO 17, 18, 27, 29, 30, 31 */
157
case DISAS_NEXT:
231
- case 29: /* Interrupt source. */
158
case DISAS_TOO_MANY:
232
- val = s->phy_int;
159
- gen_goto_tb(dc, 1, dc->base.pc_next);
233
- s->phy_int = 0;
160
+ gen_goto_tb(dc, 1, 4);
234
- phy_update_irq(s);
161
break;
235
- return val;
162
default:
236
- case 30: /* Interrupt mask */
163
case DISAS_UPDATE_EXIT:
237
- return s->phy_int_mask;
164
diff --git a/target/arm/translate.c b/target/arm/translate.c
238
- default:
239
- qemu_log_mask(LOG_GUEST_ERROR,
240
- "do_phy_read: PHY read reg %d\n", reg);
241
- return 0;
242
- }
243
-}
244
-
245
-static void do_phy_write(lan9118_state *s, int reg, uint32_t val)
246
-{
247
- switch (reg) {
248
- case 0: /* Basic Control */
249
- if (val & 0x8000) {
250
- phy_reset(s);
251
- break;
252
- }
253
- s->phy_control = val & 0x7980;
254
- /* Complete autonegotiation immediately. */
255
- if (val & 0x1000) {
256
- s->phy_status |= 0x0020;
257
- }
258
- break;
259
- case 4: /* Auto-neg advertisement */
260
- s->phy_advertise = (val & 0x2d7f) | 0x80;
261
- break;
262
- /* TODO 17, 18, 27, 31 */
263
- case 30: /* Interrupt mask */
264
- s->phy_int_mask = val & 0xff;
265
- phy_update_irq(s);
266
- break;
267
- default:
268
- qemu_log_mask(LOG_GUEST_ERROR,
269
- "do_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
270
- }
271
-}
272
-
273
static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
274
{
275
switch (reg) {
276
@@ -XXX,XX +XXX,XX @@ static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
277
if (val & 2) {
278
DPRINTF("PHY write %d = 0x%04x\n",
279
(val >> 6) & 0x1f, s->mac_mii_data);
280
- do_phy_write(s, (val >> 6) & 0x1f, s->mac_mii_data);
281
+ lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
282
} else {
283
- s->mac_mii_data = do_phy_read(s, (val >> 6) & 0x1f);
284
+ s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
285
DPRINTF("PHY read %d = 0x%04x\n",
286
(val >> 6) & 0x1f, s->mac_mii_data);
287
}
288
@@ -XXX,XX +XXX,XX @@ static void lan9118_writel(void *opaque, hwaddr offset,
289
break;
290
case CSR_PMT_CTRL:
291
if (val & 0x400) {
292
- phy_reset(s);
293
+ lan9118_phy_reset(&s->mii);
294
}
295
s->pmt_ctrl &= ~0x34e;
296
s->pmt_ctrl |= (val & 0x34e);
297
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
298
const MemoryRegionOps *mem_ops =
299
s->mode_16bit ? &lan9118_16bit_mem_ops : &lan9118_mem_ops;
300
301
+ qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
302
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
303
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
304
+ return;
305
+ }
306
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
307
+
308
memory_region_init_io(&s->mmio, OBJECT(dev), mem_ops, s,
309
"lan9118-mmio", 0x100);
310
sysbus_init_mmio(sbd, &s->mmio);
311
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
312
new file mode 100644
313
index XXXXXXX..XXXXXXX
314
--- /dev/null
315
+++ b/hw/net/lan9118_phy.c
316
@@ -XXX,XX +XXX,XX @@
317
+/*
318
+ * SMSC LAN9118 PHY emulation
319
+ *
320
+ * Copyright (c) 2009 CodeSourcery, LLC.
321
+ * Written by Paul Brook
322
+ *
323
+ * This code is licensed under the GNU GPL v2
324
+ *
325
+ * Contributions after 2012-01-13 are licensed under the terms of the
326
+ * GNU GPL, version 2 or (at your option) any later version.
327
+ */
328
+
329
+#include "qemu/osdep.h"
330
+#include "hw/net/lan9118_phy.h"
331
+#include "hw/irq.h"
332
+#include "hw/resettable.h"
333
+#include "migration/vmstate.h"
334
+#include "qemu/log.h"
335
+
336
+#define PHY_INT_ENERGYON (1 << 7)
337
+#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
338
+#define PHY_INT_FAULT (1 << 5)
339
+#define PHY_INT_DOWN (1 << 4)
340
+#define PHY_INT_AUTONEG_LP (1 << 3)
341
+#define PHY_INT_PARFAULT (1 << 2)
342
+#define PHY_INT_AUTONEG_PAGE (1 << 1)
343
+
344
+static void lan9118_phy_update_irq(Lan9118PhyState *s)
345
+{
346
+ qemu_set_irq(s->irq, !!(s->ints & s->int_mask));
347
+}
348
+
349
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
350
+{
351
+ uint16_t val;
352
+
353
+ switch (reg) {
354
+ case 0: /* Basic Control */
355
+ return s->control;
356
+ case 1: /* Basic Status */
357
+ return s->status;
358
+ case 2: /* ID1 */
359
+ return 0x0007;
360
+ case 3: /* ID2 */
361
+ return 0xc0d1;
362
+ case 4: /* Auto-neg advertisement */
363
+ return s->advertise;
364
+ case 5: /* Auto-neg Link Partner Ability */
365
+ return 0x0f71;
366
+ case 6: /* Auto-neg Expansion */
367
+ return 1;
368
+ /* TODO 17, 18, 27, 29, 30, 31 */
369
+ case 29: /* Interrupt source. */
370
+ val = s->ints;
371
+ s->ints = 0;
372
+ lan9118_phy_update_irq(s);
373
+ return val;
374
+ case 30: /* Interrupt mask */
375
+ return s->int_mask;
376
+ default:
377
+ qemu_log_mask(LOG_GUEST_ERROR,
378
+ "lan9118_phy_read: PHY read reg %d\n", reg);
379
+ return 0;
380
+ }
381
+}
382
+
383
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
384
+{
385
+ switch (reg) {
386
+ case 0: /* Basic Control */
387
+ if (val & 0x8000) {
388
+ lan9118_phy_reset(s);
389
+ break;
390
+ }
391
+ s->control = val & 0x7980;
392
+ /* Complete autonegotiation immediately. */
393
+ if (val & 0x1000) {
394
+ s->status |= 0x0020;
395
+ }
396
+ break;
397
+ case 4: /* Auto-neg advertisement */
398
+ s->advertise = (val & 0x2d7f) | 0x80;
399
+ break;
400
+ /* TODO 17, 18, 27, 31 */
401
+ case 30: /* Interrupt mask */
402
+ s->int_mask = val & 0xff;
403
+ lan9118_phy_update_irq(s);
404
+ break;
405
+ default:
406
+ qemu_log_mask(LOG_GUEST_ERROR,
407
+ "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
408
+ }
409
+}
410
+
411
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
412
+{
413
+ s->link_down = link_down;
414
+
415
+ /* Autonegotiation status mirrors link status. */
416
+ if (link_down) {
417
+ s->status &= ~0x0024;
418
+ s->ints |= PHY_INT_DOWN;
419
+ } else {
420
+ s->status |= 0x0024;
421
+ s->ints |= PHY_INT_ENERGYON;
422
+ s->ints |= PHY_INT_AUTONEG_COMPLETE;
423
+ }
424
+ lan9118_phy_update_irq(s);
425
+}
426
+
427
+void lan9118_phy_reset(Lan9118PhyState *s)
428
+{
429
+ s->control = 0x3000;
430
+ s->status = 0x7809;
431
+ s->advertise = 0x01e1;
432
+ s->int_mask = 0;
433
+ s->ints = 0;
434
+ lan9118_phy_update_link(s, s->link_down);
435
+}
436
+
437
+static void lan9118_phy_reset_hold(Object *obj, ResetType type)
438
+{
439
+ Lan9118PhyState *s = LAN9118_PHY(obj);
440
+
441
+ lan9118_phy_reset(s);
442
+}
443
+
444
+static void lan9118_phy_init(Object *obj)
445
+{
446
+ Lan9118PhyState *s = LAN9118_PHY(obj);
447
+
448
+ qdev_init_gpio_out(DEVICE(s), &s->irq, 1);
449
+}
450
+
451
+static const VMStateDescription vmstate_lan9118_phy = {
452
+ .name = "lan9118-phy",
453
+ .version_id = 1,
454
+ .minimum_version_id = 1,
455
+ .fields = (const VMStateField[]) {
456
+ VMSTATE_UINT16(control, Lan9118PhyState),
457
+ VMSTATE_UINT16(status, Lan9118PhyState),
458
+ VMSTATE_UINT16(advertise, Lan9118PhyState),
459
+ VMSTATE_UINT16(ints, Lan9118PhyState),
460
+ VMSTATE_UINT16(int_mask, Lan9118PhyState),
461
+ VMSTATE_BOOL(link_down, Lan9118PhyState),
462
+ VMSTATE_END_OF_LIST()
463
+ }
464
+};
465
+
466
+static void lan9118_phy_class_init(ObjectClass *klass, void *data)
467
+{
468
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
469
+ DeviceClass *dc = DEVICE_CLASS(klass);
470
+
471
+ rc->phases.hold = lan9118_phy_reset_hold;
472
+ dc->vmsd = &vmstate_lan9118_phy;
473
+}
474
+
475
+static const TypeInfo types[] = {
476
+ {
477
+ .name = TYPE_LAN9118_PHY,
478
+ .parent = TYPE_SYS_BUS_DEVICE,
479
+ .instance_size = sizeof(Lan9118PhyState),
480
+ .instance_init = lan9118_phy_init,
481
+ .class_init = lan9118_phy_class_init,
482
+ }
483
+};
484
+
485
+DEFINE_TYPES(types)
486
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
165
index XXXXXXX..XXXXXXX 100644
487
index XXXXXXX..XXXXXXX 100644
166
--- a/target/arm/translate.c
488
--- a/hw/net/Kconfig
167
+++ b/target/arm/translate.c
489
+++ b/hw/net/Kconfig
168
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
490
@@ -XXX,XX +XXX,XX @@ config VMXNET3_PCI
169
* cpu_loop_exec. Any live exit_requests will be processed as we
491
config SMC91C111
170
* enter the next TB.
492
bool
171
*/
493
172
-static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
494
+config LAN9118_PHY
173
+static void gen_goto_tb(DisasContext *s, int n, int diff)
495
+ bool
174
{
496
+
175
+ target_ulong dest = s->pc_curr + diff;
497
config LAN9118
176
+
498
bool
177
if (translator_use_goto_tb(&s->base, dest)) {
499
+ select LAN9118_PHY
178
tcg_gen_goto_tb(n);
500
select PTIMER
179
gen_set_pc_im(s, dest);
501
180
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
502
config NE2000_ISA
181
* gen_jmp();
503
diff --git a/hw/net/meson.build b/hw/net/meson.build
182
* on the second call to gen_jmp().
504
index XXXXXXX..XXXXXXX 100644
183
*/
505
--- a/hw/net/meson.build
184
- gen_goto_tb(s, tbno, dest);
506
+++ b/hw/net/meson.build
185
+ gen_goto_tb(s, tbno, dest - s->pc_curr);
507
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMXNET3_PCI', if_true: files('vmxnet3.c'))
186
break;
508
187
case DISAS_UPDATE_NOCHAIN:
509
system_ss.add(when: 'CONFIG_SMC91C111', if_true: files('smc91c111.c'))
188
case DISAS_UPDATE_EXIT:
510
system_ss.add(when: 'CONFIG_LAN9118', if_true: files('lan9118.c'))
189
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
511
+system_ss.add(when: 'CONFIG_LAN9118_PHY', if_true: files('lan9118_phy.c'))
190
switch (dc->base.is_jmp) {
512
system_ss.add(when: 'CONFIG_NE2000_ISA', if_true: files('ne2000-isa.c'))
191
case DISAS_NEXT:
513
system_ss.add(when: 'CONFIG_OPENCORES_ETH', if_true: files('opencores_eth.c'))
192
case DISAS_TOO_MANY:
514
system_ss.add(when: 'CONFIG_XGMAC', if_true: files('xgmac.c'))
193
- gen_goto_tb(dc, 1, dc->base.pc_next);
194
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
195
break;
196
case DISAS_UPDATE_NOCHAIN:
197
gen_set_pc_im(dc, dc->base.pc_next);
198
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
199
gen_set_pc_im(dc, dc->base.pc_next);
200
gen_singlestep_exception(dc);
201
} else {
202
- gen_goto_tb(dc, 1, dc->base.pc_next);
203
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
204
}
205
}
206
}
207
--
515
--
208
2.25.1
516
2.34.1
From: Bernhard Beschow <shentey@gmail.com>

imx_fec models the same PHY as lan9118_phy. The code is almost the same with
imx_fec having more logging and tracing. Merge these improvements into
lan9118_phy and reuse in imx_fec to fix the code duplication.

Some migration state now resides in the new device model which breaks migration
compatibility for the following machines:
* imx25-pdk
* sabrelite
* mcimx7d-sabre
* mcimx6ul-evk

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-3-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/imx_fec.h | 9 ++-
 hw/net/imx_fec.c | 146 ++++-----------------------------
 hw/net/lan9118_phy.c | 82 ++++++++++++++++------
 hw/net/Kconfig | 1 +
 hw/net/trace-events | 10 +--
 5 files changed, 85 insertions(+), 163 deletions(-)
26
27
diff --git a/include/hw/net/imx_fec.h b/include/hw/net/imx_fec.h
28
index XXXXXXX..XXXXXXX 100644
29
--- a/include/hw/net/imx_fec.h
30
+++ b/include/hw/net/imx_fec.h
31
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(IMXFECState, IMX_FEC)
32
#define TYPE_IMX_ENET "imx.enet"
33
34
#include "hw/sysbus.h"
35
+#include "hw/net/lan9118_phy.h"
36
+#include "hw/irq.h"
37
#include "net/net.h"
38
39
#define ENET_EIR 1
40
@@ -XXX,XX +XXX,XX @@ struct IMXFECState {
41
uint32_t tx_descriptor[ENET_TX_RING_NUM];
42
uint32_t tx_ring_num;
43
44
- uint32_t phy_status;
45
- uint32_t phy_control;
46
- uint32_t phy_advertise;
47
- uint32_t phy_int;
48
- uint32_t phy_int_mask;
49
+ Lan9118PhyState mii;
50
+ IRQState mii_irq;
51
uint32_t phy_num;
52
bool phy_connected;
53
struct IMXFECState *phy_consumer;
54
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/net/imx_fec.c
57
+++ b/hw/net/imx_fec.c
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth_txdescs = {
59
60
static const VMStateDescription vmstate_imx_eth = {
61
.name = TYPE_IMX_FEC,
62
- .version_id = 2,
63
- .minimum_version_id = 2,
64
+ .version_id = 3,
65
+ .minimum_version_id = 3,
66
.fields = (const VMStateField[]) {
67
VMSTATE_UINT32_ARRAY(regs, IMXFECState, ENET_MAX),
68
VMSTATE_UINT32(rx_descriptor, IMXFECState),
69
VMSTATE_UINT32(tx_descriptor[0], IMXFECState),
70
- VMSTATE_UINT32(phy_status, IMXFECState),
71
- VMSTATE_UINT32(phy_control, IMXFECState),
72
- VMSTATE_UINT32(phy_advertise, IMXFECState),
73
- VMSTATE_UINT32(phy_int, IMXFECState),
74
- VMSTATE_UINT32(phy_int_mask, IMXFECState),
75
VMSTATE_END_OF_LIST()
76
},
77
.subsections = (const VMStateDescription * const []) {
78
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth = {
79
},
80
};
81
82
-#define PHY_INT_ENERGYON (1 << 7)
83
-#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
84
-#define PHY_INT_FAULT (1 << 5)
85
-#define PHY_INT_DOWN (1 << 4)
86
-#define PHY_INT_AUTONEG_LP (1 << 3)
87
-#define PHY_INT_PARFAULT (1 << 2)
88
-#define PHY_INT_AUTONEG_PAGE (1 << 1)
89
-
90
static void imx_eth_update(IMXFECState *s);
91
92
/*
93
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
94
* For now we don't handle any GPIO/interrupt line, so the OS will
95
* have to poll for the PHY status.
96
*/
97
-static void imx_phy_update_irq(IMXFECState *s)
98
+static void imx_phy_update_irq(void *opaque, int n, int level)
99
{
100
- imx_eth_update(s);
101
-}
102
-
103
-static void imx_phy_update_link(IMXFECState *s)
104
-{
105
- /* Autonegotiation status mirrors link status. */
106
- if (qemu_get_queue(s->nic)->link_down) {
107
- trace_imx_phy_update_link("down");
108
- s->phy_status &= ~0x0024;
109
- s->phy_int |= PHY_INT_DOWN;
110
- } else {
111
- trace_imx_phy_update_link("up");
112
- s->phy_status |= 0x0024;
113
- s->phy_int |= PHY_INT_ENERGYON;
114
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
115
- }
116
- imx_phy_update_irq(s);
117
+ imx_eth_update(opaque);
118
}
119
120
static void imx_eth_set_link(NetClientState *nc)
121
{
122
- imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
123
-}
124
-
125
-static void imx_phy_reset(IMXFECState *s)
126
-{
127
- trace_imx_phy_reset();
128
-
129
- s->phy_status = 0x7809;
130
- s->phy_control = 0x3000;
131
- s->phy_advertise = 0x01e1;
132
- s->phy_int_mask = 0;
133
- s->phy_int = 0;
134
- imx_phy_update_link(s);
135
+ lan9118_phy_update_link(&IMX_FEC(qemu_get_nic_opaque(nc))->mii,
136
+ nc->link_down);
137
}
138
139
static uint32_t imx_phy_read(IMXFECState *s, int reg)
140
{
141
- uint32_t val;
142
uint32_t phy = reg / 32;
143
144
if (!s->phy_connected) {
145
@@ -XXX,XX +XXX,XX @@ static uint32_t imx_phy_read(IMXFECState *s, int reg)
146
147
reg %= 32;
148
149
- switch (reg) {
150
- case 0: /* Basic Control */
151
- val = s->phy_control;
152
- break;
153
- case 1: /* Basic Status */
154
- val = s->phy_status;
155
- break;
156
- case 2: /* ID1 */
157
- val = 0x0007;
158
- break;
159
- case 3: /* ID2 */
160
- val = 0xc0d1;
161
- break;
162
- case 4: /* Auto-neg advertisement */
163
- val = s->phy_advertise;
164
- break;
165
- case 5: /* Auto-neg Link Partner Ability */
166
- val = 0x0f71;
167
- break;
168
- case 6: /* Auto-neg Expansion */
169
- val = 1;
170
- break;
171
- case 29: /* Interrupt source. */
172
- val = s->phy_int;
173
- s->phy_int = 0;
174
- imx_phy_update_irq(s);
175
- break;
176
- case 30: /* Interrupt mask */
177
- val = s->phy_int_mask;
178
- break;
179
- case 17:
180
- case 18:
181
- case 27:
182
- case 31:
183
- qemu_log_mask(LOG_UNIMP, "[%s.phy]%s: reg %d not implemented\n",
184
- TYPE_IMX_FEC, __func__, reg);
185
- val = 0;
186
- break;
187
- default:
188
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
189
- TYPE_IMX_FEC, __func__, reg);
190
- val = 0;
191
- break;
192
- }
193
-
194
- trace_imx_phy_read(val, phy, reg);
195
-
196
- return val;
197
+ return lan9118_phy_read(&s->mii, reg);
198
}
199
200
static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
201
@@ -XXX,XX +XXX,XX @@ static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
202
203
reg %= 32;
204
205
- trace_imx_phy_write(val, phy, reg);
206
-
207
- switch (reg) {
208
- case 0: /* Basic Control */
209
- if (val & 0x8000) {
210
- imx_phy_reset(s);
211
- } else {
212
- s->phy_control = val & 0x7980;
213
- /* Complete autonegotiation immediately. */
214
- if (val & 0x1000) {
215
- s->phy_status |= 0x0020;
216
- }
217
- }
218
- break;
219
- case 4: /* Auto-neg advertisement */
220
- s->phy_advertise = (val & 0x2d7f) | 0x80;
221
- break;
222
- case 30: /* Interrupt mask */
223
- s->phy_int_mask = val & 0xff;
224
- imx_phy_update_irq(s);
225
- break;
226
- case 17:
227
- case 18:
228
- case 27:
229
- case 31:
230
- qemu_log_mask(LOG_UNIMP, "[%s.phy)%s: reg %d not implemented\n",
231
- TYPE_IMX_FEC, __func__, reg);
232
- break;
233
- default:
234
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
235
- TYPE_IMX_FEC, __func__, reg);
236
- break;
237
- }
238
+ lan9118_phy_write(&s->mii, reg, val);
239
}
240
241
static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
242
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
243
244
s->rx_descriptor = 0;
245
memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
246
-
247
- /* We also reset the PHY */
248
- imx_phy_reset(s);
249
}
250
251
static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
252
@@ -XXX,XX +XXX,XX @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
253
sysbus_init_irq(sbd, &s->irq[0]);
254
sysbus_init_irq(sbd, &s->irq[1]);
255
256
+ qemu_init_irq(&s->mii_irq, imx_phy_update_irq, s, 0);
257
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
258
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
259
+ return;
260
+ }
261
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
262
+
263
qemu_macaddr_default_if_unset(&s->conf.macaddr);
264
265
s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
266
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
267
index XXXXXXX..XXXXXXX 100644
268
--- a/hw/net/lan9118_phy.c
269
+++ b/hw/net/lan9118_phy.c
270
@@ -XXX,XX +XXX,XX @@
271
* Copyright (c) 2009 CodeSourcery, LLC.
272
* Written by Paul Brook
273
*
274
+ * Copyright (c) 2013 Jean-Christophe Dubois. <jcd@tribudubois.net>
275
+ *
276
* This code is licensed under the GNU GPL v2
277
*
278
* Contributions after 2012-01-13 are licensed under the terms of the
279
@@ -XXX,XX +XXX,XX @@
280
#include "hw/resettable.h"
281
#include "migration/vmstate.h"
282
#include "qemu/log.h"
283
+#include "trace.h"
284
285
#define PHY_INT_ENERGYON (1 << 7)
286
#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
287
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
288
289
switch (reg) {
290
case 0: /* Basic Control */
291
- return s->control;
292
+ val = s->control;
293
+ break;
294
case 1: /* Basic Status */
295
- return s->status;
296
+ val = s->status;
297
+ break;
298
case 2: /* ID1 */
299
- return 0x0007;
300
+ val = 0x0007;
301
+ break;
302
case 3: /* ID2 */
303
- return 0xc0d1;
304
+ val = 0xc0d1;
305
+ break;
306
case 4: /* Auto-neg advertisement */
307
- return s->advertise;
308
+ val = s->advertise;
309
+ break;
310
case 5: /* Auto-neg Link Partner Ability */
311
- return 0x0f71;
312
+ val = 0x0f71;
313
+ break;
314
case 6: /* Auto-neg Expansion */
315
- return 1;
316
- /* TODO 17, 18, 27, 29, 30, 31 */
317
+ val = 1;
318
+ break;
319
case 29: /* Interrupt source. */
320
val = s->ints;
321
s->ints = 0;
322
lan9118_phy_update_irq(s);
323
- return val;
324
+ break;
325
case 30: /* Interrupt mask */
326
- return s->int_mask;
327
+ val = s->int_mask;
328
+ break;
329
+ case 17:
330
+ case 18:
331
+ case 27:
332
+ case 31:
333
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
334
+ __func__, reg);
335
+ val = 0;
336
+ break;
337
default:
338
- qemu_log_mask(LOG_GUEST_ERROR,
339
- "lan9118_phy_read: PHY read reg %d\n", reg);
340
- return 0;
341
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
342
+ __func__, reg);
343
+ val = 0;
344
+ break;
345
}
346
+
347
+ trace_lan9118_phy_read(val, reg);
348
+
349
+ return val;
350
}
351
352
void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
353
{
354
+ trace_lan9118_phy_write(val, reg);
355
+
356
switch (reg) {
357
case 0: /* Basic Control */
358
if (val & 0x8000) {
359
lan9118_phy_reset(s);
360
- break;
361
- }
362
- s->control = val & 0x7980;
363
- /* Complete autonegotiation immediately. */
364
- if (val & 0x1000) {
365
- s->status |= 0x0020;
366
+ } else {
367
+ s->control = val & 0x7980;
368
+ /* Complete autonegotiation immediately. */
369
+ if (val & 0x1000) {
370
+ s->status |= 0x0020;
371
+ }
372
}
373
break;
374
case 4: /* Auto-neg advertisement */
375
s->advertise = (val & 0x2d7f) | 0x80;
376
break;
377
- /* TODO 17, 18, 27, 31 */
378
case 30: /* Interrupt mask */
379
s->int_mask = val & 0xff;
380
lan9118_phy_update_irq(s);
381
break;
382
+ case 17:
383
+ case 18:
384
+ case 27:
385
+ case 31:
386
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
387
+ __func__, reg);
388
+ break;
389
default:
390
- qemu_log_mask(LOG_GUEST_ERROR,
391
- "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
392
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
393
+ __func__, reg);
394
+ break;
395
}
396
}
397
398
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
399
400
/* Autonegotiation status mirrors link status. */
401
if (link_down) {
402
+ trace_lan9118_phy_update_link("down");
403
s->status &= ~0x0024;
404
s->ints |= PHY_INT_DOWN;
405
} else {
406
+ trace_lan9118_phy_update_link("up");
407
s->status |= 0x0024;
408
s->ints |= PHY_INT_ENERGYON;
409
s->ints |= PHY_INT_AUTONEG_COMPLETE;
410
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
411
412
void lan9118_phy_reset(Lan9118PhyState *s)
413
{
414
+ trace_lan9118_phy_reset();
415
+
416
s->control = 0x3000;
417
s->status = 0x7809;
418
s->advertise = 0x01e1;
419
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_phy = {
420
.version_id = 1,
421
.minimum_version_id = 1,
422
.fields = (const VMStateField[]) {
423
- VMSTATE_UINT16(control, Lan9118PhyState),
424
VMSTATE_UINT16(status, Lan9118PhyState),
425
+ VMSTATE_UINT16(control, Lan9118PhyState),
426
VMSTATE_UINT16(advertise, Lan9118PhyState),
427
VMSTATE_UINT16(ints, Lan9118PhyState),
428
VMSTATE_UINT16(int_mask, Lan9118PhyState),
429
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
430
index XXXXXXX..XXXXXXX 100644
431
--- a/hw/net/Kconfig
432
+++ b/hw/net/Kconfig
433
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_SUN8I_EMAC
434
435
config IMX_FEC
436
bool
437
+ select LAN9118_PHY
438
439
config CADENCE
440
bool
441
diff --git a/hw/net/trace-events b/hw/net/trace-events
442
index XXXXXXX..XXXXXXX 100644
443
--- a/hw/net/trace-events
444
+++ b/hw/net/trace-events
445
@@ -XXX,XX +XXX,XX @@ allwinner_sun8i_emac_set_link(bool active) "Set link: active=%u"
446
allwinner_sun8i_emac_read(uint64_t offset, uint64_t val) "MMIO read: offset=0x%" PRIx64 " value=0x%" PRIx64
447
allwinner_sun8i_emac_write(uint64_t offset, uint64_t val) "MMIO write: offset=0x%" PRIx64 " value=0x%" PRIx64
448
449
+# lan9118_phy.c
450
+lan9118_phy_read(uint16_t val, int reg) "[0x%02x] -> 0x%04" PRIx16
451
+lan9118_phy_write(uint16_t val, int reg) "[0x%02x] <- 0x%04" PRIx16
452
+lan9118_phy_update_link(const char *s) "%s"
453
+lan9118_phy_reset(void) ""
454
+
455
# lance.c
456
lance_mem_readw(uint64_t addr, uint32_t ret) "addr=0x%"PRIx64"val=0x%04x"
457
lance_mem_writew(uint64_t addr, uint32_t val) "addr=0x%"PRIx64"val=0x%04x"
458
@@ -XXX,XX +XXX,XX @@ i82596_set_multicast(uint16_t count) "Added %d multicast entries"
459
i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"
460
461
# imx_fec.c
462
-imx_phy_read(uint32_t val, int phy, int reg) "0x%04"PRIx32" <= phy[%d].reg[%d]"
463
imx_phy_read_num(int phy, int configured) "read request from unconfigured phy %d (configured %d)"
464
-imx_phy_write(uint32_t val, int phy, int reg) "0x%04"PRIx32" => phy[%d].reg[%d]"
465
imx_phy_write_num(int phy, int configured) "write request to unconfigured phy %d (configured %d)"
466
-imx_phy_update_link(const char *s) "%s"
467
-imx_phy_reset(void) ""
468
imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
469
imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
470
imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
471
--
472
2.34.1
From: Bernhard Beschow <shentey@gmail.com>

Turns 0x70 into 0xe0 (== 0x70 << 1) which adds the missing MII_ANLPAR_TX and
fixes the MSB of selector field to be zero, as specified in the datasheet.

Fixes: 2a424990170b "LAN9118 emulation"
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-4-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
         val = s->advertise;
         break;
     case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0f71;
+        val = 0x0fe1;
         break;
     case 6: /* Auto-neg Expansion */
         val = 1;
--
2.34.1
From: Bernhard Beschow <shentey@gmail.com>
1
2
3
Prefer named constants over magic values for better readability.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
7
Tested-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20241102125724.532843-5-shentey@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/net/mii.h | 6 +++++
12
hw/net/lan9118_phy.c | 63 ++++++++++++++++++++++++++++----------------
13
2 files changed, 46 insertions(+), 23 deletions(-)
14
15
diff --git a/include/hw/net/mii.h b/include/hw/net/mii.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/net/mii.h
18
+++ b/include/hw/net/mii.h
19
@@ -XXX,XX +XXX,XX @@
20
#define MII_BMSR_JABBER (1 << 1) /* Jabber detected */
21
#define MII_BMSR_EXTCAP (1 << 0) /* Ext-reg capability */
22
23
+#define MII_ANAR_RFAULT (1 << 13) /* Say we can detect faults */
24
#define MII_ANAR_PAUSE_ASYM (1 << 11) /* Try for asymmetric pause */
25
#define MII_ANAR_PAUSE (1 << 10) /* Try for pause */
26
#define MII_ANAR_TXFD (1 << 8)
27
@@ -XXX,XX +XXX,XX @@
28
#define MII_ANAR_10FD (1 << 6)
29
#define MII_ANAR_10 (1 << 5)
30
#define MII_ANAR_CSMACD (1 << 0)
31
+#define MII_ANAR_SELECT (0x001f) /* Selector bits */
32
33
#define MII_ANLPAR_ACK (1 << 14)
34
#define MII_ANLPAR_PAUSEASY (1 << 11) /* can pause asymmetrically */
35
@@ -XXX,XX +XXX,XX @@
36
#define RTL8201CP_PHYID1 0x0000
37
#define RTL8201CP_PHYID2 0x8201
38
39
+/* SMSC LAN9118 */
40
+#define SMSCLAN9118_PHYID1 0x0007
41
+#define SMSCLAN9118_PHYID2 0xc0d1
42
+
43
/* RealTek 8211E */
44
#define RTL8211E_PHYID1 0x001c
45
#define RTL8211E_PHYID2 0xc915
46
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/net/lan9118_phy.c
49
+++ b/hw/net/lan9118_phy.c
50
@@ -XXX,XX +XXX,XX @@
51
52
#include "qemu/osdep.h"
53
#include "hw/net/lan9118_phy.h"
54
+#include "hw/net/mii.h"
55
#include "hw/irq.h"
56
#include "hw/resettable.h"
57
#include "migration/vmstate.h"
58
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
59
uint16_t val;
60
61
switch (reg) {
62
- case 0: /* Basic Control */
63
+ case MII_BMCR:
64
val = s->control;
65
break;
66
- case 1: /* Basic Status */
67
+ case MII_BMSR:
68
val = s->status;
69
break;
70
- case 2: /* ID1 */
71
- val = 0x0007;
72
+ case MII_PHYID1:
73
+ val = SMSCLAN9118_PHYID1;
74
break;
75
- case 3: /* ID2 */
76
- val = 0xc0d1;
77
+ case MII_PHYID2:
78
+ val = SMSCLAN9118_PHYID2;
79
break;
80
- case 4: /* Auto-neg advertisement */
81
+ case MII_ANAR:
82
val = s->advertise;
83
break;
84
- case 5: /* Auto-neg Link Partner Ability */
85
- val = 0x0fe1;
86
+ case MII_ANLPAR:
87
+ val = MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
88
+ MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
89
+ MII_ANLPAR_10 | MII_ANLPAR_CSMACD;
90
break;
91
- case 6: /* Auto-neg Expansion */
92
- val = 1;
93
+ case MII_ANER:
94
+ val = MII_ANER_NWAY;
95
break;
96
case 29: /* Interrupt source. */
97
val = s->ints;
98
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
99
trace_lan9118_phy_write(val, reg);
100
101
switch (reg) {
102
- case 0: /* Basic Control */
103
- if (val & 0x8000) {
104
+ case MII_BMCR:
105
+ if (val & MII_BMCR_RESET) {
106
lan9118_phy_reset(s);
107
} else {
108
- s->control = val & 0x7980;
109
+ s->control = val & (MII_BMCR_LOOPBACK | MII_BMCR_SPEED100 |
110
+ MII_BMCR_AUTOEN | MII_BMCR_PDOWN | MII_BMCR_FD |
111
+ MII_BMCR_CTST);
112
/* Complete autonegotiation immediately. */
113
- if (val & 0x1000) {
114
- s->status |= 0x0020;
115
+ if (val & MII_BMCR_AUTOEN) {
116
+ s->status |= MII_BMSR_AN_COMP;
117
}
118
}
119
break;
120
- case 4: /* Auto-neg advertisement */
121
- s->advertise = (val & 0x2d7f) | 0x80;
122
+ case MII_ANAR:
123
+ s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
124
+ MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
125
+ MII_ANAR_SELECT))
126
+ | MII_ANAR_TX;
127
break;
128
case 30: /* Interrupt mask */
129
s->int_mask = val & 0xff;
130
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
131
/* Autonegotiation status mirrors link status. */
132
if (link_down) {
133
trace_lan9118_phy_update_link("down");
134
- s->status &= ~0x0024;
135
+ s->status &= ~(MII_BMSR_AN_COMP | MII_BMSR_LINK_ST);
136
s->ints |= PHY_INT_DOWN;
137
} else {
138
trace_lan9118_phy_update_link("up");
139
- s->status |= 0x0024;
140
+ s->status |= MII_BMSR_AN_COMP | MII_BMSR_LINK_ST;
141
s->ints |= PHY_INT_ENERGYON;
142
s->ints |= PHY_INT_AUTONEG_COMPLETE;
143
}
144
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_reset(Lan9118PhyState *s)
145
{
146
trace_lan9118_phy_reset();
147
148
- s->control = 0x3000;
149
- s->status = 0x7809;
150
- s->advertise = 0x01e1;
151
+ s->control = MII_BMCR_AUTOEN | MII_BMCR_SPEED100;
152
+ s->status = MII_BMSR_100TX_FD
153
+ | MII_BMSR_100TX_HD
154
+ | MII_BMSR_10T_FD
155
+ | MII_BMSR_10T_HD
156
+ | MII_BMSR_AUTONEG
157
+ | MII_BMSR_EXTCAP;
158
+ s->advertise = MII_ANAR_TXFD
159
+ | MII_ANAR_TX
160
+ | MII_ANAR_10FD
161
+ | MII_ANAR_10
162
+ | MII_ANAR_CSMACD;
163
s->int_mask = 0;
164
s->ints = 0;
165
lan9118_phy_update_link(s, s->link_down);
166
--
167
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.
3
The real device advertises this mode and the device model already advertises
4
100 mbps half duplex and 10 mbps full+half duplex. So advertise this mode to
5
make the model more realistic.
4
6
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
7
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
9
Tested-by: Guenter Roeck <linux@roeck-us.net>
10
Message-id: 20241102125724.532843-6-shentey@gmail.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
12
---
10
target/arm/ptw.c | 6 ++++--
13
hw/net/lan9118_phy.c | 4 ++--
11
1 file changed, 4 insertions(+), 2 deletions(-)
14
1 file changed, 2 insertions(+), 2 deletions(-)
12
15
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/ptw.c
18
--- a/hw/net/lan9118_phy.c
16
+++ b/target/arm/ptw.c
19
+++ b/hw/net/lan9118_phy.c
17
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
20
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
18
bool in_secure;
21
break;
19
bool in_debug;
22
case MII_ANAR:
20
bool out_secure;
23
s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
21
+ bool out_be;
24
- MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
22
hwaddr out_phys;
25
- MII_ANAR_SELECT))
23
} S1Translate;
26
+ MII_ANAR_PAUSE | MII_ANAR_TXFD | MII_ANAR_10FD |
24
27
+ MII_ANAR_10 | MII_ANAR_SELECT))
25
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
28
| MII_ANAR_TX;
26
29
break;
27
ptw->out_secure = is_secure;
30
case 30: /* Interrupt mask */
28
ptw->out_phys = addr;
29
+ ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
30
return true;
31
}
32
33
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
34
addr = ptw->out_phys;
35
attrs.secure = ptw->out_secure;
36
as = arm_addressspace(cs, attrs);
37
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
38
+ if (ptw->out_be) {
39
data = address_space_ldl_be(as, addr, attrs, &result);
40
} else {
41
data = address_space_ldl_le(as, addr, attrs, &result);
42
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
43
addr = ptw->out_phys;
44
attrs.secure = ptw->out_secure;
45
as = arm_addressspace(cs, attrs);
46
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
47
+ if (ptw->out_be) {
48
data = address_space_ldq_be(as, addr, attrs, &result);
49
} else {
50
data = address_space_ldq_le(as, addr, attrs, &result);
51
--
31
--
52
2.25.1
32
2.34.1
For IEEE fused multiply-add, the (0 * inf) + NaN case should raise
2
Invalid for the multiplication of 0 by infinity. Currently we handle
3
this in the per-architecture ifdef ladder in pickNaNMulAdd().
4
However, since this isn't really architecture specific we can hoist
5
it up to the generic code.
1
6
7
For the cases where the infzero test in pickNaNMulAdd was
8
returning 2, we can delete the check entirely and allow the
9
code to fall into the normal pick-a-NaN handling, because this
10
will return 2 anyway (input 'c' being the only NaN in this case).
11
For the cases where infzero was returning 3 to indicate "return
12
the default NaN", we must retain that "return 3".
13
14
For Arm, this looks like it might be a behaviour change because we
15
used to set float_flag_invalid | float_flag_invalid_imz only if C is
16
a quiet NaN. However, it is not, because Arm target code never looks
17
at float_flag_invalid_imz, and for the (0 * inf) + SNaN case we
18
already raised float_flag_invalid via the "abc_mask &
19
float_cmask_snan" check in pick_nan_muladd.
20
21
For any target architecture using the "default implementation" at the
22
bottom of the ifdef, this is a behaviour change but will be fixing a
23
bug (where we failed to raise the Invalid exception for (0 * inf +
24
QNaN)). The architectures using the default case are:
25
* hppa
26
* i386
27
* sh4
28
* tricore
29
30
The x86, Tricore and SH4 CPU architecture manuals are clear that this
31
should have raised Invalid; HPPA is a bit vaguer but still seems
32
clear enough.
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20241202131347.498124-2-peter.maydell@linaro.org
37
---
38
fpu/softfloat-parts.c.inc | 13 +++++++------
39
fpu/softfloat-specialize.c.inc | 29 +----------------------------
40
2 files changed, 8 insertions(+), 34 deletions(-)
41
42
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
43
index XXXXXXX..XXXXXXX 100644
44
--- a/fpu/softfloat-parts.c.inc
45
+++ b/fpu/softfloat-parts.c.inc
46
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
47
int ab_mask, int abc_mask)
48
{
49
int which;
50
+ bool infzero = (ab_mask == float_cmask_infzero);
51
52
if (unlikely(abc_mask & float_cmask_snan)) {
53
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
54
}
55
56
- which = pickNaNMulAdd(a->cls, b->cls, c->cls,
57
- ab_mask == float_cmask_infzero, s);
58
+ if (infzero) {
59
+ /* This is (0 * inf) + NaN or (inf * 0) + NaN */
60
+ float_raise(float_flag_invalid | float_flag_invalid_imz, s);
61
+ }
62
+
63
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
64
65
if (s->default_nan_mode || which == 3) {
66
- /*
67
- * Note that this check is after pickNaNMulAdd so that function
68
- * has an opportunity to set the Invalid flag for infzero.
69
- */
70
parts_default_nan(a, s);
71
return a;
72
}
73
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
74
index XXXXXXX..XXXXXXX 100644
75
--- a/fpu/softfloat-specialize.c.inc
76
+++ b/fpu/softfloat-specialize.c.inc
77
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
78
* the default NaN
79
*/
80
if (infzero && is_qnan(c_cls)) {
81
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
82
return 3;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
86
* case sets InvalidOp and returns the default NaN
87
*/
88
if (infzero) {
89
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
90
return 3;
91
}
92
/* Prefer sNaN over qNaN, in the a, b, c order. */
93
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
94
* For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
95
* case sets InvalidOp and returns the input value 'c'
96
*/
97
- if (infzero) {
98
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
99
- return 2;
100
- }
101
/* Prefer sNaN over qNaN, in the c, a, b order. */
102
if (is_snan(c_cls)) {
103
return 2;
104
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
105
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
106
* case sets InvalidOp and returns the input value 'c'
107
*/
108
- if (infzero) {
109
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
110
- return 2;
111
- }
112
+
113
/* Prefer sNaN over qNaN, in the c, a, b order. */
114
if (is_snan(c_cls)) {
115
return 2;
116
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
117
* to return an input NaN if we have one (ie c) rather than generating
118
* a default NaN
119
*/
120
- if (infzero) {
121
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
122
- return 2;
123
- }
124
125
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
126
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
127
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
128
return 1;
129
}
130
#elif defined(TARGET_RISCV)
131
- /* For RISC-V, InvalidOp is set when multiplicands are Inf and zero */
132
- if (infzero) {
133
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
134
- }
135
return 3; /* default NaN */
136
#elif defined(TARGET_S390X)
137
if (infzero) {
138
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
139
return 3;
140
}
141
142
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
143
return 2;
144
}
145
#elif defined(TARGET_SPARC)
146
- /* For (inf,0,nan) return c. */
147
- if (infzero) {
148
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
149
- return 2;
150
- }
151
/* Prefer SNaN over QNaN, order C, B, A. */
152
if (is_snan(c_cls)) {
153
return 2;
154
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
155
* For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
156
* an input NaN if we have one (ie c).
157
*/
158
- if (infzero) {
159
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
160
- return 2;
161
- }
162
if (status->use_first_nan) {
163
if (is_nan(a_cls)) {
164
return 0;
165
--
166
2.34.1
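The behaviour being standardised here is easy to probe on the host as well. A small standalone C program (illustrative only: it exercises the host libm, not QEMU, and IEEE 754-2008 leaves the quiet-NaN addend case implementation-defined, which is exactly why the result differs per target):

    /* Probe whether this host's fma() raises Invalid for 0 * Inf + qNaN.
     * Build with something like: cc -std=c11 -O0 probe.c -lm
     */
    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        volatile double r = fma(0.0, INFINITY, NAN);   /* NAN is a quiet NaN */
        printf("result=%f, FE_INVALID %s\n", r,
               fetestexcept(FE_INVALID) ? "raised" : "not raised");
        return 0;
    }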
If the target sets default_nan_mode then we're always going to return
2
the default NaN, and pickNaNMulAdd() no longer has any side effects.
3
For consistency with pickNaN(), check for default_nan_mode before
4
calling pickNaNMulAdd().
1
5
6
When we convert pickNaNMulAdd() to allow runtime selection of the NaN
7
propagation rule, this means we won't have to make the targets which
8
use default_nan_mode also set a propagation rule.
9
10
Since RISC-V always uses default_nan_mode, this allows us to remove
11
its ifdef case from pickNaNMulAdd().
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20241202131347.498124-3-peter.maydell@linaro.org
16
---
17
fpu/softfloat-parts.c.inc | 8 ++++++--
18
fpu/softfloat-specialize.c.inc | 9 +++++++--
19
2 files changed, 13 insertions(+), 4 deletions(-)
20
21
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
22
index XXXXXXX..XXXXXXX 100644
23
--- a/fpu/softfloat-parts.c.inc
24
+++ b/fpu/softfloat-parts.c.inc
25
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
26
float_raise(float_flag_invalid | float_flag_invalid_imz, s);
27
}
28
29
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
30
+ if (s->default_nan_mode) {
31
+ which = 3;
32
+ } else {
33
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
34
+ }
35
36
- if (s->default_nan_mode || which == 3) {
37
+ if (which == 3) {
38
parts_default_nan(a, s);
39
return a;
40
}
41
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
42
index XXXXXXX..XXXXXXX 100644
43
--- a/fpu/softfloat-specialize.c.inc
44
+++ b/fpu/softfloat-specialize.c.inc
45
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
46
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
47
bool infzero, float_status *status)
48
{
49
+ /*
50
+ * We guarantee not to require the target to tell us how to
51
+ * pick a NaN if we're always returning the default NaN.
52
+ * But if we're not in default-NaN mode then the target must
53
+ * specify.
54
+ */
55
+ assert(!status->default_nan_mode);
56
#if defined(TARGET_ARM)
57
/* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
58
* the default NaN
59
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
60
} else {
61
return 1;
62
}
63
-#elif defined(TARGET_RISCV)
64
- return 3; /* default NaN */
65
#elif defined(TARGET_S390X)
66
if (infzero) {
67
return 3;
68
--
69
2.34.1
1
IEEE 754 does not define a fixed rule for what NaN to return in
2
the case of a fused multiply-add of inf * 0 + NaN. Different
3
architectures thus do different things:
4
* some return the default NaN
5
* some return the input NaN
6
* Arm returns the default NaN if the input NaN is quiet,
7
and the input NaN if it is signalling
8
9
We want to make this logic be runtime selected rather than
10
hardcoded into the binary, because:
11
* this will let us have multiple targets in one QEMU binary
12
* the Arm FEAT_AFP architectural feature includes letting
13
the guest select a NaN propagation rule at runtime
14
15
In this commit we add an enum for the propagation rule, the field in
16
float_status, and the corresponding getters and setters. We change
17
pickNaNMulAdd to honour this, but because all targets still leave
18
this field at its default 0 value, the fallback logic will pick the
19
rule type with the old ifdef ladder.
20
21
Note that four architectures both use the muladd softfloat functions
22
and did not have a branch of the ifdef ladder to specify their
23
behaviour (and so were ending up with the "default" case, probably
24
wrongly): i386, HPPA, SH4 and Tricore. SH4 and Tricore both set
25
default_nan_mode, and so will never get into pickNaNMulAdd(). For
26
HPPA and i386 we retain the same behaviour as the old default-case,
27
which is to not ever return the default NaN. This might not be
28
correct but it is not a behaviour change.
29
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
32
Message-id: 20241202131347.498124-4-peter.maydell@linaro.org
33
---
34
include/fpu/softfloat-helpers.h | 11 ++++
35
include/fpu/softfloat-types.h | 23 +++++++++
36
fpu/softfloat-specialize.c.inc | 91 ++++++++++++++++++++++-----------
37
3 files changed, 95 insertions(+), 30 deletions(-)
38
39
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
40
index XXXXXXX..XXXXXXX 100644
41
--- a/include/fpu/softfloat-helpers.h
42
+++ b/include/fpu/softfloat-helpers.h
43
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
44
status->float_2nan_prop_rule = rule;
45
}
46
47
+static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
+ float_status *status)
49
+{
50
+ status->float_infzeronan_rule = rule;
51
+}
52
+
53
static inline void set_flush_to_zero(bool val, float_status *status)
54
{
55
status->flush_to_zero = val;
56
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
57
return status->float_2nan_prop_rule;
58
}
59
60
+static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
61
+{
62
+ return status->float_infzeronan_rule;
63
+}
64
+
65
static inline bool get_flush_to_zero(float_status *status)
66
{
67
return status->flush_to_zero;
68
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
69
index XXXXXXX..XXXXXXX 100644
70
--- a/include/fpu/softfloat-types.h
71
+++ b/include/fpu/softfloat-types.h
72
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
73
float_2nan_prop_x87,
74
} Float2NaNPropRule;
75
76
+/*
77
+ * Rule for result of fused multiply-add 0 * Inf + NaN.
78
+ * This must be a NaN, but implementations differ on whether this
79
+ * is the input NaN or the default NaN.
80
+ *
81
+ * You don't need to set this if default_nan_mode is enabled.
82
+ * When not in default-NaN mode, it is an error for the target
83
+ * not to set the rule in float_status if it uses muladd, and we
84
+ * will assert if we need to handle an input NaN and no rule was
85
+ * selected.
86
+ */
87
+typedef enum __attribute__((__packed__)) {
88
+ /* No propagation rule specified */
89
+ float_infzeronan_none = 0,
90
+ /* Result is never the default NaN (so always the input NaN) */
91
+ float_infzeronan_dnan_never,
92
+ /* Result is always the default NaN */
93
+ float_infzeronan_dnan_always,
94
+ /* Result is the default NaN if the input NaN is quiet */
95
+ float_infzeronan_dnan_if_qnan,
96
+} FloatInfZeroNaNRule;
97
+
98
/*
99
* Floating Point Status. Individual architectures may maintain
100
* several versions of float_status for different functions. The
101
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
102
FloatRoundMode float_rounding_mode;
103
FloatX80RoundPrec floatx80_rounding_precision;
104
Float2NaNPropRule float_2nan_prop_rule;
105
+ FloatInfZeroNaNRule float_infzeronan_rule;
106
bool tininess_before_rounding;
107
/* should denormalised results go to zero and set the inexact flag? */
108
bool flush_to_zero;
109
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
110
index XXXXXXX..XXXXXXX 100644
111
--- a/fpu/softfloat-specialize.c.inc
112
+++ b/fpu/softfloat-specialize.c.inc
113
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
114
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
115
bool infzero, float_status *status)
116
{
117
+ FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
118
+
119
/*
120
* We guarantee not to require the target to tell us how to
121
* pick a NaN if we're always returning the default NaN.
122
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
123
* specify.
124
*/
125
assert(!status->default_nan_mode);
126
+
127
+ if (rule == float_infzeronan_none) {
128
+ /*
129
+ * Temporarily fall back to ifdef ladder
130
+ */
131
#if defined(TARGET_ARM)
132
- /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
133
- * the default NaN
134
- */
135
- if (infzero && is_qnan(c_cls)) {
136
- return 3;
137
+ /*
138
+ * For ARM, the (inf,zero,qnan) case returns the default NaN,
139
+ * but (inf,zero,snan) returns the input NaN.
140
+ */
141
+ rule = float_infzeronan_dnan_if_qnan;
142
+#elif defined(TARGET_MIPS)
143
+ if (snan_bit_is_one(status)) {
144
+ /*
145
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
146
+ * case sets InvalidOp and returns the default NaN
147
+ */
148
+ rule = float_infzeronan_dnan_always;
149
+ } else {
150
+ /*
151
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
152
+ * case sets InvalidOp and returns the input value 'c'
153
+ */
154
+ rule = float_infzeronan_dnan_never;
155
+ }
156
+#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
157
+ defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
158
+ defined(TARGET_I386) || defined(TARGET_LOONGARCH)
159
+ /*
160
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
161
+ * case sets InvalidOp and returns the input value 'c'
162
+ */
163
+ /*
164
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
165
+ * to return an input NaN if we have one (ie c) rather than generating
166
+ * a default NaN
167
+ */
168
+ rule = float_infzeronan_dnan_never;
169
+#elif defined(TARGET_S390X)
170
+ rule = float_infzeronan_dnan_always;
171
+#endif
172
}
173
174
+ if (infzero) {
175
+ /*
176
+ * Inf * 0 + NaN -- some implementations return the default NaN here,
177
+ * and some return the input NaN.
178
+ */
179
+ switch (rule) {
180
+ case float_infzeronan_dnan_never:
181
+ return 2;
182
+ case float_infzeronan_dnan_always:
183
+ return 3;
184
+ case float_infzeronan_dnan_if_qnan:
185
+ return is_qnan(c_cls) ? 3 : 2;
186
+ default:
187
+ g_assert_not_reached();
188
+ }
189
+ }
190
+
191
+#if defined(TARGET_ARM)
192
+
193
/* This looks different from the ARM ARM pseudocode, because the ARM ARM
194
* puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
195
*/
196
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
197
}
198
#elif defined(TARGET_MIPS)
199
if (snan_bit_is_one(status)) {
200
- /*
201
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
202
- * case sets InvalidOp and returns the default NaN
203
- */
204
- if (infzero) {
205
- return 3;
206
- }
207
/* Prefer sNaN over qNaN, in the a, b, c order. */
208
if (is_snan(a_cls)) {
209
return 0;
210
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
211
return 2;
212
}
213
} else {
214
- /*
215
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
216
- * case sets InvalidOp and returns the input value 'c'
217
- */
218
/* Prefer sNaN over qNaN, in the c, a, b order. */
219
if (is_snan(c_cls)) {
220
return 2;
221
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
222
}
223
}
224
#elif defined(TARGET_LOONGARCH64)
225
- /*
226
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
227
- * case sets InvalidOp and returns the input value 'c'
228
- */
229
-
230
/* Prefer sNaN over qNaN, in the c, a, b order. */
231
if (is_snan(c_cls)) {
232
return 2;
233
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
234
return 1;
235
}
236
#elif defined(TARGET_PPC)
237
- /* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
238
- * to return an input NaN if we have one (ie c) rather than generating
239
- * a default NaN
240
- */
241
-
242
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
243
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
244
*/
245
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
246
return 1;
247
}
248
#elif defined(TARGET_S390X)
249
- if (infzero) {
250
- return 3;
251
- }
252
-
253
if (is_snan(a_cls)) {
254
return 0;
255
} else if (is_snan(b_cls)) {
256
--
257
2.34.1
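For a concrete picture of how the new rule is consumed, here is a minimal sketch against the softfloat API (in the spirit of tests/fp/fp-bench.c; the setter and rule names are the ones added in this patch, the rest is the existing make_float64/float64_muladd API, so take the surrounding details as assumptions rather than a buildable unit):

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    /* 0 * Inf + qNaN with the Arm-style rule selected at runtime. */
    static float64 demo_infzeronan(void)
    {
        float_status st = { 0 };        /* round-to-nearest-even, no flush-to-zero */
        float64 zero = make_float64(0);
        float64 inf  = make_float64(0x7ff0000000000000ULL);
        float64 qnan = make_float64(0x7ff8000000000000ULL);

        /* Default NaN if the addend NaN is quiet, input NaN if signalling */
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);

        /*
         * Raises float_flag_invalid (and invalid_imz) in st; with this rule
         * and a quiet NaN addend the result is the default NaN. With
         * float_infzeronan_dnan_never it would be the input qNaN instead.
         */
        return float64_muladd(zero, inf, qnan, 0, &st);
    }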
Explicitly set a rule in the softfloat tests for the inf-zero-nan
2
muladd special case. In meson.build we put -DTARGET_ARM in fpcflags,
3
and so we should select here the Arm rule of
4
float_infzeronan_dnan_if_qnan.
1
5
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241202131347.498124-5-peter.maydell@linaro.org
9
---
10
tests/fp/fp-bench.c | 5 +++++
11
tests/fp/fp-test.c | 5 +++++
12
2 files changed, 10 insertions(+)
13
14
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/fp/fp-bench.c
17
+++ b/tests/fp/fp-bench.c
18
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
19
{
20
bench_func_t f;
21
22
+ /*
23
+ * These implementation-defined choices for various things IEEE
24
+ * doesn't specify match those used by the Arm architecture.
25
+ */
26
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
27
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
28
29
f = bench_funcs[operation][precision];
30
g_assert(f);
31
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/tests/fp/fp-test.c
34
+++ b/tests/fp/fp-test.c
35
@@ -XXX,XX +XXX,XX @@ void run_test(void)
36
{
37
unsigned int i;
38
39
+ /*
40
+ * These implementation-defined choices for various things IEEE
41
+ * doesn't specify match those used by the Arm architecture.
42
+ */
43
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
44
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
45
46
genCases_setLevel(test_level);
47
verCases_maxErrorCount = n_max_errors;
48
--
49
2.34.1
Set the FloatInfZeroNaNRule explicitly for the Arm target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
7
---
8
target/arm/cpu.c | 3 +++
9
fpu/softfloat-specialize.c.inc | 8 +-------
10
2 files changed, 4 insertions(+), 7 deletions(-)
11
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
15
+++ b/target/arm/cpu.c
16
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
17
* * tininess-before-rounding
18
* * 2-input NaN propagation prefers SNaN over QNaN, and then
19
* operand A over operand B (see FPProcessNaNs() pseudocode)
20
+ * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
21
+ * and the input NaN if it is signalling
22
*/
23
static void arm_set_default_fp_behaviours(float_status *s)
24
{
25
set_float_detect_tininess(float_tininess_before_rounding, s);
26
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
27
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
28
}
29
30
static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
31
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
32
index XXXXXXX..XXXXXXX 100644
33
--- a/fpu/softfloat-specialize.c.inc
34
+++ b/fpu/softfloat-specialize.c.inc
35
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
36
/*
37
* Temporarily fall back to ifdef ladder
38
*/
39
-#if defined(TARGET_ARM)
40
- /*
41
- * For ARM, the (inf,zero,qnan) case returns the default NaN,
42
- * but (inf,zero,snan) returns the input NaN.
43
- */
44
- rule = float_infzeronan_dnan_if_qnan;
45
-#elif defined(TARGET_MIPS)
46
+#if defined(TARGET_MIPS)
47
if (snan_bit_is_one(status)) {
48
/*
49
* For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
50
--
51
2.34.1
From: Baruch Siach <baruch@tkos.co.il>
1
Set the FloatInfZeroNaNRule explicitly for s390, so we
2
can remove the ifdef from pickNaNMulAdd().
2
3
3
The PL011 TRM says that "UARTIBRD = 0 is invalid and UARTFBRD is ignored
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
when this is the case". But the code looks at FBRD for the invalid case.
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Fix this.
6
Message-id: 20241202131347.498124-7-peter.maydell@linaro.org
7
---
8
target/s390x/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 2 insertions(+), 2 deletions(-)
6
11
7
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
12
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
8
Message-id: 1408f62a2e45665816527d4845ffde650957d5ab.1665051588.git.baruchs-c@neureality.ai
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/char/pl011.c | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/hw/char/pl011.c b/hw/char/pl011.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/char/pl011.c
14
--- a/target/s390x/cpu.c
18
+++ b/hw/char/pl011.c
15
+++ b/target/s390x/cpu.c
19
@@ -XXX,XX +XXX,XX @@ static unsigned int pl011_get_baudrate(const PL011State *s)
16
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
20
{
17
set_float_detect_tininess(float_tininess_before_rounding,
21
uint64_t clk;
18
&env->fpu_status);
22
19
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
23
- if (s->fbrd == 0) {
20
+ set_float_infzeronan_rule(float_infzeronan_dnan_always,
24
+ if (s->ibrd == 0) {
21
+ &env->fpu_status);
25
return 0;
22
/* fall through */
23
case RESET_TYPE_S390_CPU_NORMAL:
24
env->psw.mask &= ~PSW_MASK_RI;
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
* a default NaN
31
*/
32
rule = float_infzeronan_dnan_never;
33
-#elif defined(TARGET_S390X)
34
- rule = float_infzeronan_dnan_always;
35
#endif
26
}
36
}
27
37
28
--
38
--
29
2.25.1
39
2.34.1
Set the FloatInfZeroNaNRule explicitly for the PPC target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-8-peter.maydell@linaro.org
7
---
8
target/ppc/cpu_init.c | 7 +++++++
9
fpu/softfloat-specialize.c.inc | 7 +------
10
2 files changed, 8 insertions(+), 6 deletions(-)
11
12
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/ppc/cpu_init.c
15
+++ b/target/ppc/cpu_init.c
16
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
17
*/
18
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
19
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
20
+ /*
21
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
22
+ * to return an input NaN if we have one (ie c) rather than generating
23
+ * a default NaN
24
+ */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
26
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);
27
28
for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
29
ppc_spr_t *spr = &env->spr_cb[i];
30
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
31
index XXXXXXX..XXXXXXX 100644
32
--- a/fpu/softfloat-specialize.c.inc
33
+++ b/fpu/softfloat-specialize.c.inc
34
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
35
*/
36
rule = float_infzeronan_dnan_never;
37
}
38
-#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
39
+#elif defined(TARGET_SPARC) || \
40
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
41
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
42
/*
43
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
44
* case sets InvalidOp and returns the input value 'c'
45
*/
46
- /*
47
- * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
48
- * to return an input NaN if we have one (ie c) rather than generating
49
- * a default NaN
50
- */
51
rule = float_infzeronan_dnan_never;
52
#endif
53
}
54
--
55
2.34.1
Set the FloatInfZeroNaNRule explicitly for the MIPS target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-9-peter.maydell@linaro.org
7
---
8
target/mips/fpu_helper.h | 9 +++++++++
9
target/mips/msa.c | 4 ++++
10
fpu/softfloat-specialize.c.inc | 16 +---------------
11
3 files changed, 14 insertions(+), 15 deletions(-)
12
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/mips/fpu_helper.h
16
+++ b/target/mips/fpu_helper.h
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_flush_mode(CPUMIPSState *env)
18
static inline void restore_snan_bit_mode(CPUMIPSState *env)
19
{
20
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
21
+ FloatInfZeroNaNRule izn_rule;
22
23
/*
24
* With nan2008, SNaNs are silenced in the usual way.
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
26
*/
27
set_snan_bit_is_one(!nan2008, &env->active_fpu.fp_status);
28
set_default_nan_mode(!nan2008, &env->active_fpu.fp_status);
29
+ /*
30
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
31
+ * case sets InvalidOp and returns the default NaN.
32
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
33
+ * case sets InvalidOp and returns the input value 'c'.
34
+ */
35
+ izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
36
+ set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
37
}
38
39
static inline void restore_fp_status(CPUMIPSState *env)
40
diff --git a/target/mips/msa.c b/target/mips/msa.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/mips/msa.c
43
+++ b/target/mips/msa.c
44
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
45
46
/* set proper signanling bit meaning ("1" means "quiet") */
47
set_snan_bit_is_one(0, &env->active_tc.msa_fp_status);
48
+
49
+ /* Inf * 0 + NaN returns the input NaN */
50
+ set_float_infzeronan_rule(float_infzeronan_dnan_never,
51
+ &env->active_tc.msa_fp_status);
52
}
53
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
54
index XXXXXXX..XXXXXXX 100644
55
--- a/fpu/softfloat-specialize.c.inc
56
+++ b/fpu/softfloat-specialize.c.inc
57
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
58
/*
59
* Temporarily fall back to ifdef ladder
60
*/
61
-#if defined(TARGET_MIPS)
62
- if (snan_bit_is_one(status)) {
63
- /*
64
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
65
- * case sets InvalidOp and returns the default NaN
66
- */
67
- rule = float_infzeronan_dnan_always;
68
- } else {
69
- /*
70
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
71
- * case sets InvalidOp and returns the input value 'c'
72
- */
73
- rule = float_infzeronan_dnan_never;
74
- }
75
-#elif defined(TARGET_SPARC) || \
76
+#if defined(TARGET_SPARC) || \
77
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
78
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
79
/*
80
--
81
2.34.1
Set the FloatInfZeroNaNRule explicitly for the SPARC target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-10-peter.maydell@linaro.org
7
---
8
target/sparc/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 3 +--
10
2 files changed, 3 insertions(+), 2 deletions(-)
11
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sparc/cpu.c
15
+++ b/target/sparc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
17
* the CPU state struct so it won't get zeroed on reset.
18
*/
19
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
20
+ /* For inf * 0 + NaN, return the input NaN */
21
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
22
23
cpu_exec_realizefn(cs, &local_err);
24
if (local_err != NULL) {
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
/*
31
* Temporarily fall back to ifdef ladder
32
*/
33
-#if defined(TARGET_SPARC) || \
34
- defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
35
+#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
36
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
37
/*
38
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
39
--
40
2.34.1
Set the FloatInfZeroNaNRule explicitly for the xtensa target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-11-peter.maydell@linaro.org
7
---
8
target/xtensa/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 +-
10
2 files changed, 3 insertions(+), 1 deletion(-)
11
12
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/xtensa/cpu.c
15
+++ b/target/xtensa/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
17
reset_mmu(env);
18
cs->halted = env->runstall;
19
#endif
20
+ /* For inf * 0 + NaN, return the input NaN */
21
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
22
set_no_signaling_nans(!dfpu, &env->fp_status);
23
xtensa_use_first_nan(env, !dfpu);
24
}
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
/*
31
* Temporarily fall back to ifdef ladder
32
*/
33
-#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
34
+#if defined(TARGET_HPPA) || \
35
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
36
/*
37
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
38
--
39
2.34.1
Set the FloatInfZeroNaNRule explicitly for the x86 target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-12-peter.maydell@linaro.org
6
---
7
target/i386/tcg/fpu_helper.c | 7 +++++++
8
fpu/softfloat-specialize.c.inc | 2 +-
9
2 files changed, 8 insertions(+), 1 deletion(-)
10
11
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/i386/tcg/fpu_helper.c
14
+++ b/target/i386/tcg/fpu_helper.c
15
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
16
*/
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->mmx_status);
18
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->sse_status);
19
+ /*
20
+ * Only SSE has multiply-add instructions. In the SDM Section 14.5.2
21
+ * "Fused-Multiply-ADD (FMA) Numeric Behavior" the NaN handling is
22
+ * specified -- for 0 * inf + NaN the input NaN is selected, and if
23
+ * there are multiple input NaNs they are selected in the order a, b, c.
24
+ */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
26
}
27
28
static inline uint8_t save_exception_flags(CPUX86State *env)
29
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
31
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
34
* Temporarily fall back to ifdef ladder
35
*/
36
#if defined(TARGET_HPPA) || \
37
- defined(TARGET_I386) || defined(TARGET_LOONGARCH)
38
+ defined(TARGET_LOONGARCH)
39
/*
40
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
41
* case sets InvalidOp and returns the input value 'c'
42
--
43
2.34.1
Set the FloatInfZeroNaNRule explicitly for the loongarch target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-13-peter.maydell@linaro.org
6
---
7
target/loongarch/tcg/fpu_helper.c | 5 +++++
8
fpu/softfloat-specialize.c.inc | 7 +------
9
2 files changed, 6 insertions(+), 6 deletions(-)
10
11
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/loongarch/tcg/fpu_helper.c
14
+++ b/target/loongarch/tcg/fpu_helper.c
15
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
16
&env->fp_status);
17
set_flush_to_zero(0, &env->fp_status);
18
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
19
+ /*
20
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
21
+ * case sets InvalidOp and returns the input value 'c'
22
+ */
23
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
24
}
25
26
int ieee_ex_to_loongarch(int xcpt)
27
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
28
index XXXXXXX..XXXXXXX 100644
29
--- a/fpu/softfloat-specialize.c.inc
30
+++ b/fpu/softfloat-specialize.c.inc
31
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
32
/*
33
* Temporarily fall back to ifdef ladder
34
*/
35
-#if defined(TARGET_HPPA) || \
36
- defined(TARGET_LOONGARCH)
37
- /*
38
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
39
- * case sets InvalidOp and returns the input value 'c'
40
- */
41
+#if defined(TARGET_HPPA)
42
rule = float_infzeronan_dnan_never;
43
#endif
44
}
45
--
46
2.34.1
Set the FloatInfZeroNaNRule explicitly for the HPPA target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
As this is the last target to be converted to explicitly setting
5
the rule, we can remove the fallback code in pickNaNMulAdd()
6
entirely.
7
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20241202131347.498124-14-peter.maydell@linaro.org
11
---
12
target/hppa/fpu_helper.c | 2 ++
13
fpu/softfloat-specialize.c.inc | 13 +------------
14
2 files changed, 3 insertions(+), 12 deletions(-)
15
16
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/hppa/fpu_helper.c
19
+++ b/target/hppa/fpu_helper.c
20
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
21
* HPPA does note implement a CPU reset method at all...
22
*/
23
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
24
+ /* For inf * 0 + NaN, return the input NaN */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
26
}
27
28
void cpu_hppa_loaded_fr0(CPUHPPAState *env)
29
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
31
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
34
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
35
bool infzero, float_status *status)
36
{
37
- FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
38
-
39
/*
40
* We guarantee not to require the target to tell us how to
41
* pick a NaN if we're always returning the default NaN.
42
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
43
*/
44
assert(!status->default_nan_mode);
45
46
- if (rule == float_infzeronan_none) {
47
- /*
48
- * Temporarily fall back to ifdef ladder
49
- */
50
-#if defined(TARGET_HPPA)
51
- rule = float_infzeronan_dnan_never;
52
-#endif
53
- }
54
-
55
if (infzero) {
56
/*
57
* Inf * 0 + NaN -- some implementations return the default NaN here,
58
* and some return the input NaN.
59
*/
60
- switch (rule) {
61
+ switch (status->float_infzeronan_rule) {
62
case float_infzeronan_dnan_never:
63
return 2;
64
case float_infzeronan_dnan_always:
65
--
66
2.34.1
The new implementation of pickNaNMulAdd() will find it convenient
2
to know whether at least one of the three arguments to the muladd
3
was a signaling NaN. We already calculate that in the caller,
4
so pass it in as a new bool have_snan.
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-15-peter.maydell@linaro.org
9
---
10
fpu/softfloat-parts.c.inc | 5 +++--
11
fpu/softfloat-specialize.c.inc | 2 +-
12
2 files changed, 4 insertions(+), 3 deletions(-)
13
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
16
--- a/fpu/softfloat-parts.c.inc
17
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
19
{
20
int which;
21
bool infzero = (ab_mask == float_cmask_infzero);
22
+ bool have_snan = (abc_mask & float_cmask_snan);
23
24
- if (unlikely(abc_mask & float_cmask_snan)) {
25
+ if (unlikely(have_snan)) {
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
27
}
28
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
30
if (s->default_nan_mode) {
31
which = 3;
32
} else {
33
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
34
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
35
}
36
37
if (which == 3) {
38
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
39
index XXXXXXX..XXXXXXX 100644
40
--- a/fpu/softfloat-specialize.c.inc
41
+++ b/fpu/softfloat-specialize.c.inc
42
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
43
| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
44
*----------------------------------------------------------------------------*/
45
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
46
- bool infzero, float_status *status)
47
+ bool infzero, bool have_snan, float_status *status)
48
{
49
/*
50
* We guarantee not to require the target to tell us how to
51
--
52
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>
1
IEEE 754 does not define a fixed rule for which NaN to pick as the
2
2
result if both operands of a 3-operand fused multiply-add operation
3
Consolidate most of the inputs and outputs of S1_ptw_translate
3
are NaNs. As a result different architectures have ended up with
4
into a single structure. Plumb this through arm_ld*_ptw from
4
different rules for propagating NaNs.
5
the controlling get_phys_addr_* routine.
5
6
6
QEMU currently hardcodes the NaN propagation logic into the binary
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
because pickNaNMulAdd() has an ifdef ladder for different targets.
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
We want to make the propagation rule instead be selectable at
9
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
9
runtime, because:
10
* this will let us have multiple targets in one QEMU binary
11
* the Arm FEAT_AFP architectural feature includes letting
12
the guest select a NaN propagation rule at runtime
13
14
In this commit we add an enum for the propagation rule, the field in
15
float_status, and the corresponding getters and setters. We change
16
pickNaNMulAdd to honour this, but because all targets still leave
17
this field at its default 0 value, the fallback logic will pick the
18
rule type with the old ifdef ladder.
19
20
It's valid not to set a propagation rule if default_nan_mode is
21
enabled, because in that case there's no need to pick a NaN; all the
22
callers of pickNaNMulAdd() catch this case and skip calling it.
23
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-id: 20241202131347.498124-16-peter.maydell@linaro.org
11
---
27
---
12
target/arm/ptw.c | 140 ++++++++++++++++++++++++++---------------------
28
include/fpu/softfloat-helpers.h | 11 +++
13
1 file changed, 79 insertions(+), 61 deletions(-)
29
include/fpu/softfloat-types.h | 55 +++++++++++
14
30
fpu/softfloat-specialize.c.inc | 167 ++++++++------------------------
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
31
3 files changed, 107 insertions(+), 126 deletions(-)
32
33
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
16
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/ptw.c
35
--- a/include/fpu/softfloat-helpers.h
18
+++ b/target/arm/ptw.c
36
+++ b/include/fpu/softfloat-helpers.h
19
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
20
#include "idau.h"
38
status->float_2nan_prop_rule = rule;
21
22
23
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
24
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
25
- bool is_secure, bool s1_is_el0,
26
+typedef struct S1Translate {
27
+ ARMMMUIdx in_mmu_idx;
28
+ bool in_secure;
29
+ bool out_secure;
30
+ hwaddr out_phys;
31
+} S1Translate;
32
+
33
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
34
+ uint64_t address,
35
+ MMUAccessType access_type, bool s1_is_el0,
36
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
37
__attribute__((nonnull));
38
39
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
40
}
39
}
41
40
42
/* Translate a S1 pagetable walk through S2 if needed. */
41
+static inline void set_float_3nan_prop_rule(Float3NaNPropRule rule,
43
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
42
+ float_status *status)
44
- hwaddr addr, bool *is_secure_ptr,
43
+{
45
- ARMMMUFaultInfo *fi)
44
+ status->float_3nan_prop_rule = rule;
46
+static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
45
+}
47
+ hwaddr addr, ARMMMUFaultInfo *fi)
46
+
47
static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
float_status *status)
48
{
49
{
49
- bool is_secure = *is_secure_ptr;
50
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
50
+ bool is_secure = ptw->in_secure;
51
return status->float_2nan_prop_rule;
51
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
52
53
- if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
54
+ if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
55
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
56
GetPhysAddrResult s2 = {};
57
+ S1Translate s2ptw = {
58
+ .in_mmu_idx = s2_mmu_idx,
59
+ .in_secure = is_secure,
60
+ };
61
uint64_t hcr;
62
int ret;
63
64
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
65
- is_secure, false, &s2, fi);
66
+ ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
67
+ false, &s2, fi);
68
if (ret) {
69
assert(fi->type != ARMFault_None);
70
fi->s2addr = addr;
71
fi->stage2 = true;
72
fi->s1ptw = true;
73
fi->s1ns = !is_secure;
74
- return ~0;
75
+ return false;
76
}
77
78
hcr = arm_hcr_el2_eff_secstate(env, is_secure);
79
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
80
fi->stage2 = true;
81
fi->s1ptw = true;
82
fi->s1ns = !is_secure;
83
- return ~0;
84
+ return false;
85
}
86
87
if (arm_is_secure_below_el3(env)) {
88
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
89
} else {
90
is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
91
}
92
- *is_secure_ptr = is_secure;
93
} else {
94
assert(!is_secure);
95
}
96
97
addr = s2.f.phys_addr;
98
}
99
- return addr;
100
+
101
+ ptw->out_secure = is_secure;
102
+ ptw->out_phys = addr;
103
+ return true;
104
}
52
}
105
53
106
/* All loads done in the course of a page table walk go through here. */
54
+static inline Float3NaNPropRule get_float_3nan_prop_rule(float_status *status)
107
-static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
55
+{
108
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
56
+ return status->float_3nan_prop_rule;
109
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
57
+}
110
+ ARMMMUFaultInfo *fi)
58
+
59
static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
111
{
60
{
112
CPUState *cs = env_cpu(env);
61
return status->float_infzeronan_rule;
113
MemTxAttrs attrs = {};
62
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
114
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
63
index XXXXXXX..XXXXXXX 100644
115
AddressSpace *as;
64
--- a/include/fpu/softfloat-types.h
116
uint32_t data;
65
+++ b/include/fpu/softfloat-types.h
117
66
@@ -XXX,XX +XXX,XX @@ this code that are retained.
118
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
67
#ifndef SOFTFLOAT_TYPES_H
119
- attrs.secure = is_secure;
68
#define SOFTFLOAT_TYPES_H
120
- as = arm_addressspace(cs, attrs);
69
121
- if (fi->s1ptw) {
70
+#include "hw/registerfields.h"
122
+ if (!S1_ptw_translate(env, ptw, addr, fi)) {
71
+
123
return 0;
72
/*
124
}
73
* Software IEC/IEEE floating-point types.
125
- if (regime_translation_big_endian(env, mmu_idx)) {
74
*/
126
+ addr = ptw->out_phys;
75
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
127
+ attrs.secure = ptw->out_secure;
76
float_2nan_prop_x87,
128
+ as = arm_addressspace(cs, attrs);
77
} Float2NaNPropRule;
129
+ if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
78
130
data = address_space_ldl_be(as, addr, attrs, &result);
79
+/*
131
} else {
80
+ * 3-input NaN propagation rule, for fused multiply-add. Individual
132
data = address_space_ldl_le(as, addr, attrs, &result);
81
+ * architectures have different rules for which input NaN is
133
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
82
+ * propagated to the output when there is more than one NaN on the
134
return 0;
83
+ * input.
135
}
84
+ *
136
85
+ * If default_nan_mode is enabled then it is valid not to set a NaN
137
-static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
86
+ * propagation rule, because the softfloat code guarantees not to try
138
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
87
+ * to pick a NaN to propagate in default NaN mode. When not in
139
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
88
+ * default-NaN mode, it is an error for the target not to set the rule
140
+ ARMMMUFaultInfo *fi)
89
+ * in float_status if it uses a muladd, and we will assert if we need
90
+ * to handle an input NaN and no rule was selected.
91
+ *
92
+ * The naming scheme for Float3NaNPropRule values is:
93
+ * float_3nan_prop_s_abc:
94
+ * = "Prefer SNaN over QNaN, then operand A over B over C"
95
+ * float_3nan_prop_abc:
96
+ * = "Prefer A over B over C regardless of SNaN vs QNAN"
97
+ *
98
+ * For QEMU, the multiply-add operation is A * B + C.
99
+ */
100
+
101
+/*
102
+ * We set the Float3NaNPropRule enum values up so we can select the
103
+ * right value in pickNaNMulAdd in a data driven way.
104
+ */
105
+FIELD(3NAN, 1ST, 0, 2) /* which operand is most preferred ? */
106
+FIELD(3NAN, 2ND, 2, 2) /* which operand is next most preferred ? */
107
+FIELD(3NAN, 3RD, 4, 2) /* which operand is least preferred ? */
108
+FIELD(3NAN, SNAN, 6, 1) /* do we prefer SNaN over QNaN ? */
109
+
110
+#define PROPRULE(X, Y, Z) \
111
+ ((X << R_3NAN_1ST_SHIFT) | (Y << R_3NAN_2ND_SHIFT) | (Z << R_3NAN_3RD_SHIFT))
112
+
113
+typedef enum __attribute__((__packed__)) {
114
+ float_3nan_prop_none = 0, /* No propagation rule specified */
115
+ float_3nan_prop_abc = PROPRULE(0, 1, 2),
116
+ float_3nan_prop_acb = PROPRULE(0, 2, 1),
117
+ float_3nan_prop_bac = PROPRULE(1, 0, 2),
118
+ float_3nan_prop_bca = PROPRULE(1, 2, 0),
119
+ float_3nan_prop_cab = PROPRULE(2, 0, 1),
120
+ float_3nan_prop_cba = PROPRULE(2, 1, 0),
121
+ float_3nan_prop_s_abc = float_3nan_prop_abc | R_3NAN_SNAN_MASK,
122
+ float_3nan_prop_s_acb = float_3nan_prop_acb | R_3NAN_SNAN_MASK,
123
+ float_3nan_prop_s_bac = float_3nan_prop_bac | R_3NAN_SNAN_MASK,
124
+ float_3nan_prop_s_bca = float_3nan_prop_bca | R_3NAN_SNAN_MASK,
125
+ float_3nan_prop_s_cab = float_3nan_prop_cab | R_3NAN_SNAN_MASK,
126
+ float_3nan_prop_s_cba = float_3nan_prop_cba | R_3NAN_SNAN_MASK,
127
+} Float3NaNPropRule;
128
+
129
+#undef PROPRULE
130
+
131
/*
132
* Rule for result of fused multiply-add 0 * Inf + NaN.
133
* This must be a NaN, but implementations differ on whether this
134
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
135
FloatRoundMode float_rounding_mode;
136
     FloatX80RoundPrec floatx80_rounding_precision;
     Float2NaNPropRule float_2nan_prop_rule;
+    Float3NaNPropRule float_3nan_prop_rule;
     FloatInfZeroNaNRule float_infzeronan_rule;
     bool tininess_before_rounding;
     /* should denormalised results go to zero and set the inexact flag? */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, bool have_snan, float_status *status)
 {
+    FloatClass cls[3] = { a_cls, b_cls, c_cls };
+    Float3NaNPropRule rule = status->float_3nan_prop_rule;
+    int which;
+
     /*
      * We guarantee not to require the target to tell us how to
      * pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         }
     }
 
+    if (rule == float_3nan_prop_none) {
 #if defined(TARGET_ARM)
-
-    /* This looks different from the ARM ARM pseudocode, because the ARM ARM
-     * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
-     */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else {
-        return 1;
-    }
+    /*
+     * This looks different from the ARM ARM pseudocode, because the ARM ARM
+     * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
+     */
+    rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_MIPS)
-    if (snan_bit_is_one(status)) {
-        /* Prefer sNaN over qNaN, in the a, b, c order. */
-        if (is_snan(a_cls)) {
-            return 0;
-        } else if (is_snan(b_cls)) {
-            return 1;
-        } else if (is_snan(c_cls)) {
-            return 2;
-        } else if (is_qnan(a_cls)) {
-            return 0;
-        } else if (is_qnan(b_cls)) {
-            return 1;
+    if (snan_bit_is_one(status)) {
+        rule = float_3nan_prop_s_abc;
     } else {
-            return 2;
+        rule = float_3nan_prop_s_cab;
     }
-    } else {
-        /* Prefer sNaN over qNaN, in the c, a, b order. */
-        if (is_snan(c_cls)) {
-            return 2;
-        } else if (is_snan(a_cls)) {
-            return 0;
-        } else if (is_snan(b_cls)) {
-            return 1;
-        } else if (is_qnan(c_cls)) {
-            return 2;
-        } else if (is_qnan(a_cls)) {
-            return 0;
-        } else {
-            return 1;
-        }
-    }
 #elif defined(TARGET_LOONGARCH64)
-    /* Prefer sNaN over qNaN, in the c, a, b order. */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else {
-        return 1;
-    }
+    rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_PPC)
-    /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
-     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
-     */
-    if (is_nan(a_cls)) {
-        return 0;
-    } else if (is_nan(c_cls)) {
-        return 2;
-    } else {
-        return 1;
-    }
+    /*
+     * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
+     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
+     */
+    rule = float_3nan_prop_acb;
 #elif defined(TARGET_S390X)
-    if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_qnan(a_cls)) {
-        return 0;
-    } else if (is_qnan(b_cls)) {
-        return 1;
-    } else {
-        return 2;
-    }
+    rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
-    /* Prefer SNaN over QNaN, order C, B, A. */
-    if (is_snan(c_cls)) {
-        return 2;
-    } else if (is_snan(b_cls)) {
-        return 1;
-    } else if (is_snan(a_cls)) {
-        return 0;
-    } else if (is_qnan(c_cls)) {
-        return 2;
-    } else if (is_qnan(b_cls)) {
-        return 1;
-    } else {
-        return 0;
-    }
+    rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
-    /*
-     * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
-     * an input NaN if we have one (ie c).
-     */
-    if (status->use_first_nan) {
-        if (is_nan(a_cls)) {
-            return 0;
-        } else if (is_nan(b_cls)) {
-            return 1;
+    if (status->use_first_nan) {
+        rule = float_3nan_prop_abc;
     } else {
-            return 2;
+        rule = float_3nan_prop_cba;
     }
-    } else {
-        if (is_nan(c_cls)) {
-            return 2;
-        } else if (is_nan(b_cls)) {
-            return 1;
-        } else {
-            return 0;
-        }
-    }
 #else
-    /* A default implementation: prefer a to b to c.
-     * This is unlikely to actually match any real implementation.
-     */
-    if (is_nan(a_cls)) {
-        return 0;
-    } else if (is_nan(b_cls)) {
-        return 1;
-    } else {
-        return 2;
-    }
+    rule = float_3nan_prop_abc;
 #endif
+    }
+
+    assert(rule != float_3nan_prop_none);
+    if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
+        /* We have at least one SNaN input and should prefer it */
+        do {
+            which = rule & R_3NAN_1ST_MASK;
+            rule >>= R_3NAN_1ST_LENGTH;
+        } while (!is_snan(cls[which]));
+    } else {
+        do {
+            which = rule & R_3NAN_1ST_MASK;
+            rule >>= R_3NAN_1ST_LENGTH;
+        } while (!is_nan(cls[which]));
+    }
+    return which;
 }
 
 /*----------------------------------------------------------------------------
-- 
2.34.1
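
A note on how the encoded rule drives the selection loops above: each
Float3NaNPropRule value packs an operand order into three 2-bit fields plus
a "prefer signalling NaNs" flag, and the do/while loops peel off one field
per iteration until they hit a NaN. The standalone sketch below mirrors that
decode; the field layout and constants are made up for illustration (the
real R_3NAN_* definitions live in include/fpu/softfloat-types.h), so don't
read exact values out of it.

    /* Illustrative decode only: field layout assumed, not copied from QEMU */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define R_1ST_MASK   0x3u         /* 2-bit field: 0 = a, 1 = b, 2 = c */
    #define R_1ST_LENGTH 2
    #define R_SNAN_MASK  (1u << 6)    /* "prefer a signalling NaN" flag */

    /* "SNaN first, then c, a, b" -- the Arm-style ordering */
    #define PROP_S_CAB   (R_SNAN_MASK | (2u << 0) | (0u << 2) | (1u << 4))

    int main(void)
    {
        bool nan[3]  = { true, false, true };    /* a and c are NaNs... */
        bool snan[3] = { false, false, false };  /* ...but neither is signalling */
        bool have_snan = snan[0] || snan[1] || snan[2];
        unsigned rule = PROP_S_CAB;
        int which;

        if (have_snan && (rule & R_SNAN_MASK)) {
            do {                                 /* first SNaN in rule order */
                which = rule & R_1ST_MASK;
                rule >>= R_1ST_LENGTH;
            } while (!snan[which]);
        } else {
            do {                                 /* first NaN in rule order */
                which = rule & R_1ST_MASK;
                rule >>= R_1ST_LENGTH;
            } while (!nan[which]);
        }
        printf("picked operand %d (2 means c)\n", which);
        assert(which == 2);
        return 0;
    }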
Explicitly set a rule in the softfloat tests for propagating NaNs in
the muladd case. In meson.build we put -DTARGET_ARM in fpcflags, and
so we should select here the Arm rule of float_3nan_prop_s_cab.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-17-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c | 1 +
 tests/fp/fp-test.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
      * doesn't specify match those used by the Arm architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
 
     f = bench_funcs[operation][precision];
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
      * doesn't specify match those used by the Arm architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
 
     genCases_setLevel(test_level);
-- 
2.34.1
Set the Float3NaNPropRule explicitly for Arm, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
---
 target/arm/cpu.c               | 5 +++++
 fpu/softfloat-specialize.c.inc | 8 +-------
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
  * * tininess-before-rounding
  * * 2-input NaN propagation prefers SNaN over QNaN, and then
  *   operand A over operand B (see FPProcessNaNs() pseudocode)
+ * * 3-input NaN propagation prefers SNaN over QNaN, and then
+ *   operand C over A over B (see FPProcessNaNs3() pseudocode,
+ *   but note that for QEMU muladd is a * b + c, whereas for
+ *   the pseudocode function the arguments are in the order c, a, b.
  * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
  *   and the input NaN if it is signalling
  */
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
 {
     set_float_detect_tininess(float_tininess_before_rounding, s);
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
 }
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_ARM)
-    /*
-     * This looks different from the ARM ARM pseudocode, because the ARM ARM
-     * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
-     */
-    rule = float_3nan_prop_s_cab;
-#elif defined(TARGET_MIPS)
+#if defined(TARGET_MIPS)
     if (snan_bit_is_one(status)) {
         rule = float_3nan_prop_s_abc;
     } else {
-- 
2.34.1
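
To see the Arm rule in action once it is selected at runtime, here is a
minimal usage sketch (not part of the series; it assumes it is compiled
inside a QEMU target build where fpu/softfloat.h is available, and the NaN
bit patterns are hand-picked for the example):

    /* Sketch only: relies on QEMU's softfloat API being linked in */
    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static void demo_arm_muladd_nan(void)
    {
        float_status st = { };

        /* the same choice arm_set_default_fp_behaviours() now makes */
        set_float_3nan_prop_rule(float_3nan_prop_s_cab, &st);

        float64 qnan_a = make_float64(0x7ff8000000000001ULL); /* quiet NaN */
        float64 snan_b = make_float64(0x7ff4000000000001ULL); /* signalling NaN */
        float64 qnan_c = make_float64(0x7ff8000000000002ULL); /* quiet NaN */

        /*
         * s_cab: a signalling NaN wins over any quiet NaN, so the result
         * here is derived from b (quieted) and float_flag_invalid is set.
         * With only quiet NaN inputs, c would be preferred over a over b.
         */
        float64 r = float64_muladd(qnan_a, snan_b, qnan_c, 0, &st);
        (void)r;
    }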
Set the Float3NaNPropRule explicitly for loongarch, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-19-peter.maydell@linaro.org
---
 target/loongarch/tcg/fpu_helper.c | 1 +
 fpu/softfloat-specialize.c.inc    | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
      * case sets InvalidOp and returns the input value 'c'
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
 }
 
 int ieee_ex_to_loongarch(int xcpt)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         rule = float_3nan_prop_s_cab;
     }
-#elif defined(TARGET_LOONGARCH64)
-    rule = float_3nan_prop_s_cab;
 #elif defined(TARGET_PPC)
     /*
      * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
-- 
2.34.1
Set the Float3NaNPropRule explicitly for PPC, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-20-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c          | 8 ++++++++
 fpu/softfloat-specialize.c.inc | 6 ------
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+    /*
+     * NaN propagation for fused multiply-add:
+     * if fRA is a NaN return it; otherwise if fRB is a NaN return it;
+     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
+     * whereas QEMU labels the operands as (a * b) + c.
+     */
+    set_float_3nan_prop_rule(float_3nan_prop_acb, &env->fp_status);
+    set_float_3nan_prop_rule(float_3nan_prop_acb, &env->vec_status);
     /*
      * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
      * to return an input NaN if we have one (ie c) rather than generating
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         rule = float_3nan_prop_s_cab;
     }
-#elif defined(TARGET_PPC)
-    /*
-     * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
-     * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
-     */
-    rule = float_3nan_prop_acb;
 #elif defined(TARGET_S390X)
     rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
-- 
2.34.1
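
For reference, the acb choice can be checked by lining up the operand labels
(a worked note, not from the patch itself):

    PPC fmadd:   (fRA * fRC) + fRB
    QEMU muladd: (a   * b  ) + c          =>  fRA = a, fRC = b, fRB = c

    architected preference: fRA, then fRB, then fRC
                          =  a,   then c,   then b  =>  float_3nan_prop_acb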
Set the Float3NaNPropRule explicitly for s390x, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-21-peter.maydell@linaro.org
---
 target/s390x/cpu.c             | 1 +
 fpu/softfloat-specialize.c.inc | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
     set_float_detect_tininess(float_tininess_before_rounding,
                               &env->fpu_status);
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
+    set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_always,
                               &env->fpu_status);
     /* fall through */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         rule = float_3nan_prop_s_cab;
     }
-#elif defined(TARGET_S390X)
-    rule = float_3nan_prop_s_abc;
 #elif defined(TARGET_SPARC)
     rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
-- 
2.34.1
Set the Float3NaNPropRule explicitly for SPARC, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-22-peter.maydell@linaro.org
---
 target/sparc/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
      * the CPU state struct so it won't get zeroed on reset.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+    /* For fused-multiply add, prefer SNaN over QNaN, then C->B->A */
+    set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         rule = float_3nan_prop_s_cab;
     }
-#elif defined(TARGET_SPARC)
-    rule = float_3nan_prop_s_cba;
 #elif defined(TARGET_XTENSA)
     if (status->use_first_nan) {
         rule = float_3nan_prop_abc;
-- 
2.34.1
Set the Float3NaNPropRule explicitly for MIPS, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-23-peter.maydell@linaro.org
---
 target/mips/fpu_helper.h       | 4 ++++
 target/mips/msa.c              | 3 +++
 fpu/softfloat-specialize.c.inc | 8 +-------
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
 {
     bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
     FloatInfZeroNaNRule izn_rule;
+    Float3NaNPropRule nan3_rule;
 
     /*
      * With nan2008, SNaNs are silenced in the usual way.
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
      */
     izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
     set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
+    nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
+    set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
+
 }
 
 static inline void restore_fp_status(CPUMIPSState *env)
diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
     set_float_2nan_prop_rule(float_2nan_prop_s_ab,
                              &env->active_tc.msa_fp_status);
 
+    set_float_3nan_prop_rule(float_3nan_prop_s_cab,
+                             &env->active_tc.msa_fp_status);
+
     /* clear float_status exception flags */
     set_float_exception_flags(0, &env->active_tc.msa_fp_status);
 
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_MIPS)
-    if (snan_bit_is_one(status)) {
-        rule = float_3nan_prop_s_abc;
-    } else {
-        rule = float_3nan_prop_s_cab;
-    }
-#elif defined(TARGET_XTENSA)
+#if defined(TARGET_XTENSA)
     if (status->use_first_nan) {
         rule = float_3nan_prop_abc;
     } else {
-- 
2.34.1
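
Spelled out, the two runtime selections made above are (assuming the rule
names keep their obvious meaning of "scan operands in that order, signalling
NaNs first"):

    FCR31.NAN2008 = 0 (legacy encoding): float_3nan_prop_s_abc -> SNaN first, then a, b, c
    FCR31.NAN2008 = 1 (nan2008):         float_3nan_prop_s_cab -> SNaN first, then c, a, b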
Set the Float3NaNPropRule explicitly for xtensa, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-24-peter.maydell@linaro.org
---
 target/xtensa/fpu_helper.c     | 2 ++
 fpu/softfloat-specialize.c.inc | 8 --------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
     set_use_first_nan(use_first, &env->fp_status);
     set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
                              &env->fp_status);
+    set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
+                             &env->fp_status);
 }
 
 void HELPER(wur_fpu2k_fcr)(CPUXtensaState *env, uint32_t v)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_XTENSA)
-    if (status->use_first_nan) {
-        rule = float_3nan_prop_abc;
-    } else {
-        rule = float_3nan_prop_cba;
-    }
-#else
     rule = float_3nan_prop_abc;
-#endif
     }
 
     assert(rule != float_3nan_prop_none);
-- 
2.34.1
Set the Float3NaNPropRule explicitly for i386. We had no
i386-specific behaviour in the old ifdef ladder, so we were using the
default "prefer a then b then c" fallback; this is actually the
correct per-the-spec handling for i386.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-25-peter.maydell@linaro.org
---
 target/i386/tcg/fpu_helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
      * there are multiple input NaNs they are selected in the order a, b, c.
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
+    set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
 }
 
 static inline uint8_t save_exception_flags(CPUX86State *env)
-- 
2.34.1
Set the Float3NaNPropRule explicitly for HPPA, and remove the
ifdef from pickNaNMulAdd().

HPPA is the only target that was using the default branch of the
ifdef ladder (other targets either do not use muladd or set
default_nan_mode), so we can remove the ifdef fallback entirely now
(allowing the "rule not set" case to fall into the default of the
switch statement and assert).

We add a TODO note that the HPPA rule is probably wrong; this is
not a behavioural change for this refactoring.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-26-peter.maydell@linaro.org
---
 target/hppa/fpu_helper.c       | 8 ++++++++
 fpu/softfloat-specialize.c.inc | 4 ----
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
      * HPPA does note implement a CPU reset method at all...
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+    /*
+     * TODO: The HPPA architecture reference only documents its NaN
+     * propagation rule for 2-operand operations. Testing on real hardware
+     * might be necessary to confirm whether this order for muladd is correct.
+     * Not preferring the SNaN is almost certainly incorrect as it diverges
+     * from the documented rules for 2-operand operations.
+     */
+    set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         }
     }
 
-    if (rule == float_3nan_prop_none) {
-        rule = float_3nan_prop_abc;
-    }
-
     assert(rule != float_3nan_prop_none);
     if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
         /* We have at least one SNaN input and should prefer it */
-- 
2.34.1
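
With the fallback gone, a target that reaches pickNaNMulAdd() without having
picked a rule will now trip the assertion, so any new muladd-using target
needs to choose one in its FP init, along these lines (a sketch with made-up
names, not a real QEMU target):

    /* hypothetical target: CPUMyCPUState and the chosen rules are placeholders */
    static void mycpu_reset_fp_status(CPUMyCPUState *env)
    {
        set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
        /* required before any muladd can propagate an input NaN */
        set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fp_status);
        set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
    }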
The use_first_nan field in float_status was an xtensa-specific way to
select at runtime from two different NaN propagation rules. Now that
xtensa is using the target-agnostic NaN propagation rule selection
that we've just added, we can remove use_first_nan, because there is
no longer any code that reads it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-27-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h | 5 -----
 include/fpu/softfloat-types.h   | 1 -
 target/xtensa/fpu_helper.c      | 1 -
 3 files changed, 7 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_snan_bit_is_one(bool val, float_status *status)
     status->snan_bit_is_one = val;
 }
 
-static inline void set_use_first_nan(bool val, float_status *status)
-{
-    status->use_first_nan = val;
-}
-
 static inline void set_no_signaling_nans(bool val, float_status *status)
 {
     status->no_signaling_nans = val;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
      * softfloat-specialize.inc.c)
      */
     bool snan_bit_is_one;
-    bool use_first_nan;
     bool no_signaling_nans;
     /* should overflowed results subtract re_bias to its exponent? */
     bool rebias_overflow;
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ static const struct {
 
 void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
 {
-    set_use_first_nan(use_first, &env->fp_status);
     set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
                              &env->fp_status);
     set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
-- 
2.34.1
diff view generated by jsdifflib
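For readers unfamiliar with the rule names used above: in the two-operand case the only question the removed use_first_nan flag ever answered was whose payload survives when both inputs are NaNs. A standalone illustration with concrete float32 bit patterns (just a sketch, not softfloat internals):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Both arguments are assumed to already be quiet-NaN bit patterns. */
    static uint32_t propagate(uint32_t a, uint32_t b, int use_first)
    {
        return use_first ? a : b;
    }

    int main(void)
    {
        uint32_t a = 0x7fc00001;   /* quiet NaN carrying payload 1 */
        uint32_t b = 0x7fc00002;   /* quiet NaN carrying payload 2 */

        printf("first:  0x%08" PRIx32 "\n", propagate(a, b, 1));  /* a's payload */
        printf("second: 0x%08" PRIx32 "\n", propagate(a, b, 0));  /* b's payload */
        return 0;
    }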
New patch
1
Currently m68k_cpu_reset_hold() calls floatx80_default_nan(NULL)
2
to get the NaN bit pattern to reset the FPU registers. This
3
works because it happens that our implementation of
4
floatx80_default_nan() doesn't actually look at the float_status
5
pointer except for TARGET_MIPS. However, this isn't guaranteed,
6
and to be able to remove the ifdef in floatx80_default_nan()
7
we're going to need a real float_status here.
1
8
9
Rearrange m68k_cpu_reset_hold() so that we initialize env->fp_status
10
earlier, and thus can pass it to floatx80_default_nan().
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20241202131347.498124-28-peter.maydell@linaro.org
15
---
16
target/m68k/cpu.c | 12 +++++++-----
17
1 file changed, 7 insertions(+), 5 deletions(-)
18
19
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/m68k/cpu.c
22
+++ b/target/m68k/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
24
CPUState *cs = CPU(obj);
25
M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
26
CPUM68KState *env = cpu_env(cs);
27
- floatx80 nan = floatx80_default_nan(NULL);
28
+ floatx80 nan;
29
int i;
30
31
if (mcc->parent_phases.hold) {
32
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
33
#else
34
cpu_m68k_set_sr(env, SR_S | SR_I);
35
#endif
36
- for (i = 0; i < 8; i++) {
37
- env->fregs[i].d = nan;
38
- }
39
- cpu_m68k_set_fpcr(env, 0);
40
/*
41
* M68000 FAMILY PROGRAMMER'S REFERENCE MANUAL
42
* 3.4 FLOATING-POINT INSTRUCTION DETAILS
43
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
44
* preceding paragraph for nonsignaling NaNs.
45
*/
46
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
47
+
48
+ nan = floatx80_default_nan(&env->fp_status);
49
+ for (i = 0; i < 8; i++) {
50
+ env->fregs[i].d = nan;
51
+ }
52
+ cpu_m68k_set_fpcr(env, 0);
53
env->fpsr = 0;
54
55
/* TODO: We should set PC from the interrupt vector. */
56
--
57
2.34.1
diff view generated by jsdifflib
New patch
1
We create our 128-bit default NaN by calling parts64_default_nan()
2
and then adjusting the result. We can do the same trick for creating
3
the floatx80 default NaN, which lets us drop a target ifdef.
1
4
5
floatx80 is used only by:
6
i386
7
m68k
8
arm nwfpe old floating-point emulation support
9
(which is essentially dead, especially the parts involving floatx80)
10
PPC (only in the xsrqpxp instruction, which just rounds an input
11
value by converting to floatx80 and back, so will never generate
12
the default NaN)
13
14
The floatx80 default NaN as currently implemented is:
15
m68k: sign = 0, exp = 1...1, int = 1, frac = 1....1
16
i386: sign = 1, exp = 1...1, int = 1, frac = 10...0
17
18
These are the same as the parts64_default_nan for these architectures.
19
20
This is technically a possible behaviour change for arm linux-user
21
nwfpe emulation, because the default NaN will now have the
22
sign bit clear. But we were already generating a different floatx80
23
default NaN from the real kernel emulation we are supposedly
24
following, which appears to use an all-bits-1 value:
25
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L267
26
27
This won't affect the only "real" use of the nwfpe emulation, which
28
is ancient binaries that used it as part of the old floating point
29
calling convention; that only uses loads and stores of 32 and 64 bit
30
floats, not any of the floatx80 behaviour the original hardware had.
31
We also get the nwfpe float64 default NaN value wrong:
32
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L166
33
so if we ever cared about this obscure corner the right fix would be
34
to correct that so nwfpe used its own default-NaN setting rather
35
than the Arm VFP one.
36
37
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
38
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
39
Message-id: 20241202131347.498124-29-peter.maydell@linaro.org
40
---
41
fpu/softfloat-specialize.c.inc | 20 ++++++++++----------
42
1 file changed, 10 insertions(+), 10 deletions(-)
43
44
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
45
index XXXXXXX..XXXXXXX 100644
46
--- a/fpu/softfloat-specialize.c.inc
47
+++ b/fpu/softfloat-specialize.c.inc
48
@@ -XXX,XX +XXX,XX @@ static void parts128_silence_nan(FloatParts128 *p, float_status *status)
49
floatx80 floatx80_default_nan(float_status *status)
50
{
51
floatx80 r;
52
+ /*
53
+ * Extrapolate from the choices made by parts64_default_nan to fill
54
+ * in the floatx80 format. We assume that floatx80's explicit
55
+ * integer bit is always set (this is true for i386 and m68k,
56
+ * which are the only real users of this format).
57
+ */
58
+ FloatParts64 p64;
59
+ parts64_default_nan(&p64, status);
60
61
- /* None of the targets that have snan_bit_is_one use floatx80. */
62
- assert(!snan_bit_is_one(status));
63
-#if defined(TARGET_M68K)
64
- r.low = UINT64_C(0xFFFFFFFFFFFFFFFF);
65
- r.high = 0x7FFF;
66
-#else
67
- /* X86 */
68
- r.low = UINT64_C(0xC000000000000000);
69
- r.high = 0xFFFF;
70
-#endif
71
+ r.high = 0x7FFF | (p64.sign << 15);
72
+ r.low = (1ULL << DECOMPOSED_BINARY_POINT) | p64.frac;
73
return r;
74
}
75
76
--
77
2.34.1
diff view generated by jsdifflib
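As a quick check of the claim above that the two parts64 patterns reproduce the old hard-coded values, here is a standalone calculation (not QEMU code; it assumes the fraction sits just below an explicit integer bit at bit 63, which is what the patch's use of DECOMPOSED_BINARY_POINT implies):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BINARY_POINT 63   /* assumed position of the explicit integer bit */

    static void show(const char *name, int sign, uint64_t frac)
    {
        uint16_t high = 0x7FFF | (sign << 15);          /* sign + all-ones exponent */
        uint64_t low  = (1ULL << BINARY_POINT) | frac;  /* integer bit + fraction */

        printf("%s: high=0x%04" PRIx16 " low=0x%016" PRIx64 "\n", name, high, low);
    }

    int main(void)
    {
        /* m68k: sign clear, all fraction bits set */
        show("m68k", 0, (1ULL << BINARY_POINT) - 1);   /* 0x7fff / 0xffffffffffffffff */
        /* i386: sign set, only the most significant fraction bit set */
        show("i386", 1, 1ULL << (BINARY_POINT - 1));   /* 0xffff / 0xc000000000000000 */
        return 0;
    }

The two printed values match the constants removed from the #ifdef ladder.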
New patch
1
In target/loongarch's helper_fclass_s() and helper_fclass_d() we pass
2
a zero-initialized float_status struct to float32_is_quiet_nan() and
3
float64_is_quiet_nan(), with the cryptic comment "for
4
snan_bit_is_one".
1
5
6
This pattern appears to have been copied from target/riscv, where it
7
is used because the functions there do not have ready access to the
8
CPU state struct. The comment presumably refers to the fact that the
9
main reason the is_quiet_nan() functions want the float_status is
10
because they want to know about the snan_bit_is_one config.
11
12
In the loongarch helpers, though, we have the CPU state struct
13
to hand. Use the usual env->fp_status here. This avoids our needing
14
to track that we need to update the initializer of the local
15
float_status structs when the core softfloat code adds new
16
options for targets to configure their behaviour.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20241202131347.498124-30-peter.maydell@linaro.org
21
---
22
target/loongarch/tcg/fpu_helper.c | 6 ++----
23
1 file changed, 2 insertions(+), 4 deletions(-)
24
25
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/loongarch/tcg/fpu_helper.c
28
+++ b/target/loongarch/tcg/fpu_helper.c
29
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_s(CPULoongArchState *env, uint64_t fj)
30
} else if (float32_is_zero_or_denormal(f)) {
31
return sign ? 1 << 4 : 1 << 8;
32
} else if (float32_is_any_nan(f)) {
33
- float_status s = { }; /* for snan_bit_is_one */
34
- return float32_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
35
+ return float32_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
36
} else {
37
return sign ? 1 << 3 : 1 << 7;
38
}
39
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_d(CPULoongArchState *env, uint64_t fj)
40
} else if (float64_is_zero_or_denormal(f)) {
41
return sign ? 1 << 4 : 1 << 8;
42
} else if (float64_is_any_nan(f)) {
43
- float_status s = { }; /* for snan_bit_is_one */
44
- return float64_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
45
+ return float64_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
46
} else {
47
return sign ? 1 << 3 : 1 << 7;
48
}
49
--
50
2.34.1
diff view generated by jsdifflib
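The reason these predicates take a float_status at all is that what counts as a quiet NaN is a per-target convention. A minimal standalone sketch of the distinction for float32 (illustrative only, not the softfloat implementation):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool f32_is_any_nan(uint32_t f)
    {
        return (f & 0x7fffffff) > 0x7f800000;   /* exponent all ones, fraction != 0 */
    }

    /* IEEE 754-2008 style targets treat a set top fraction bit as "quiet";
     * snan_bit_is_one targets (e.g. HPPA, legacy MIPS) use the opposite meaning. */
    static bool f32_is_quiet_nan(uint32_t f, bool snan_bit_is_one)
    {
        bool top_frac_bit = f & 0x00400000;

        if (!f32_is_any_nan(f)) {
            return false;
        }
        return snan_bit_is_one ? !top_frac_bit : top_frac_bit;
    }

    int main(void)
    {
        uint32_t bits = 0x7fc00000;   /* top fraction bit set */

        printf("2008-style: %d, snan_bit_is_one: %d\n",
               f32_is_quiet_nan(bits, false), f32_is_quiet_nan(bits, true));
        return 0;
    }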
New patch
1
In the frem helper, we have a local float_status because we want to
2
execute the floatx80_div() with a custom rounding mode. Instead of
3
zero-initializing the local float_status and then having to set it up
4
with the m68k standard behaviour (including the NaN propagation rule
5
and copying the rounding precision from env->fp_status), initialize
6
it as a complete copy of env->fp_status. This will avoid our having
7
to add new code in this function for every new config knob we add
8
to fp_status.
1
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241202131347.498124-31-peter.maydell@linaro.org
13
---
14
target/m68k/fpu_helper.c | 6 ++----
15
1 file changed, 2 insertions(+), 4 deletions(-)
16
17
diff --git a/target/m68k/fpu_helper.c b/target/m68k/fpu_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/m68k/fpu_helper.c
20
+++ b/target/m68k/fpu_helper.c
21
@@ -XXX,XX +XXX,XX @@ void HELPER(frem)(CPUM68KState *env, FPReg *res, FPReg *val0, FPReg *val1)
22
23
fp_rem = floatx80_rem(val1->d, val0->d, &env->fp_status);
24
if (!floatx80_is_any_nan(fp_rem)) {
25
- float_status fp_status = { };
26
+ /* Use local temporary fp_status to set different rounding mode */
27
+ float_status fp_status = env->fp_status;
28
uint32_t quotient;
29
int sign;
30
31
/* Calculate quotient directly using round to nearest mode */
32
- set_float_2nan_prop_rule(float_2nan_prop_ab, &fp_status);
33
set_float_rounding_mode(float_round_nearest_even, &fp_status);
34
- set_floatx80_rounding_precision(
35
- get_floatx80_rounding_precision(&env->fp_status), &fp_status);
36
fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);
37
38
sign = extractFloatx80Sign(fp_quot.d);
39
--
40
2.34.1
diff view generated by jsdifflib
New patch
1
In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we do the conversion
2
from float64 to floatx80 using a scratch float_status, because we
3
don't want the conversion to affect the CPU's floating point exception
4
status. Currently we use a zero-initialized float_status. This will
5
get steadily more awkward as we add config knobs to float_status
6
that the target must initialize. Avoid having to add any of that
7
configuration here by instead initializing our local float_status
8
from the env->fp_status.
1
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241202131347.498124-32-peter.maydell@linaro.org
13
---
14
target/m68k/helper.c | 6 ++++--
15
1 file changed, 4 insertions(+), 2 deletions(-)
16
17
diff --git a/target/m68k/helper.c b/target/m68k/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/m68k/helper.c
20
+++ b/target/m68k/helper.c
21
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_get_reg(CPUState *cs, GByteArray *mem_buf, int n)
22
CPUM68KState *env = &cpu->env;
23
24
if (n < 8) {
25
- float_status s = {};
26
+ /* Use scratch float_status so any exceptions don't change CPU state */
27
+ float_status s = env->fp_status;
28
return gdb_get_reg64(mem_buf, floatx80_to_float64(env->fregs[n].d, &s));
29
}
30
switch (n) {
31
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_set_reg(CPUState *cs, uint8_t *mem_buf, int n)
32
CPUM68KState *env = &cpu->env;
33
34
if (n < 8) {
35
- float_status s = {};
36
+ /* Use scratch float_status so any exceptions don't change CPU state */
37
+ float_status s = env->fp_status;
38
env->fregs[n].d = float64_to_floatx80(ldq_be_p(mem_buf), &s);
39
return 8;
40
}
41
--
42
2.34.1
diff view generated by jsdifflib
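The last three patches all make the same change for the same reason: seeding a scratch float_status by structure copy inherits every configuration field, present and future, whereas zero-initialisation silently resets them all. A toy illustration of the difference (hypothetical struct, not QEMU's float_status):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int  rounding_mode;       /* 0 == nearest-even in this toy example */
        bool flush_to_zero;
        bool snan_bit_is_one;     /* imagine more knobs being added over time */
    } toy_status;

    int main(void)
    {
        toy_status env_status = {
            .rounding_mode = 3, .flush_to_zero = true, .snan_bit_is_one = true,
        };

        toy_status zeroed = { 0 };          /* the "= { }" of the old code: all config lost */
        toy_status copied = env_status;     /* keeps the config, then override one field */
        copied.rounding_mode = 0;           /* e.g. force round-to-nearest locally */

        printf("zeroed: ftz=%d snan1=%d   copied: ftz=%d snan1=%d\n",
               zeroed.flush_to_zero, zeroed.snan_bit_is_one,
               copied.flush_to_zero, copied.snan_bit_is_one);
        return 0;
    }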
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In the helper functions flcmps and flcmpd we use a scratch float_status
2
so that we don't change the CPU state if the comparison raises any
3
floating point exception flags. Instead of zero-initializing this
4
scratch float_status, initialize it as a copy of env->fp_status. This
5
avoids the need to explicitly initialize settings like the NaN
6
propagation rule or others we might add to softfloat in future.
2
7
3
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
8
To do this we need to pass the CPU env pointer in to the helper.
4
arm_ldq_ptw. Use probe_access_full to find the host address,
5
and if so use a host load. If the probe fails, we've got our
6
fault info already. On the off chance that page tables are not
7
in RAM, continue to use the address_space_ld* functions.
8
9
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241202131347.498124-33-peter.maydell@linaro.org
13
---
13
---
14
target/arm/cpu.h | 5 +
14
target/sparc/helper.h | 4 ++--
15
target/arm/ptw.c | 196 +++++++++++++++++++++++++---------------
15
target/sparc/fop_helper.c | 8 ++++----
16
target/arm/tlb_helper.c | 17 +++-
16
target/sparc/translate.c | 4 ++--
17
3 files changed, 144 insertions(+), 74 deletions(-)
17
3 files changed, 8 insertions(+), 8 deletions(-)
18
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/sparc/helper.h b/target/sparc/helper.h
20
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
21
--- a/target/sparc/helper.h
22
+++ b/target/arm/cpu.h
22
+++ b/target/sparc/helper.h
23
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(fcmpd, TCG_CALL_NO_WG, i32, env, f64, f64)
24
target_ulong flags2;
24
DEF_HELPER_FLAGS_3(fcmped, TCG_CALL_NO_WG, i32, env, f64, f64)
25
} CPUARMTBFlags;
25
DEF_HELPER_FLAGS_3(fcmpq, TCG_CALL_NO_WG, i32, env, i128, i128)
26
26
DEF_HELPER_FLAGS_3(fcmpeq, TCG_CALL_NO_WG, i32, env, i128, i128)
27
+typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
27
-DEF_HELPER_FLAGS_2(flcmps, TCG_CALL_NO_RWG_SE, i32, f32, f32)
28
+
28
-DEF_HELPER_FLAGS_2(flcmpd, TCG_CALL_NO_RWG_SE, i32, f64, f64)
29
typedef struct CPUArchState {
29
+DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)
30
/* Regs for current mode. */
30
+DEF_HELPER_FLAGS_3(flcmpd, TCG_CALL_NO_RWG_SE, i32, env, f64, f64)
31
uint32_t regs[16];
31
DEF_HELPER_2(raise_exception, noreturn, env, int)
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
32
33
struct CPUBreakpoint *cpu_breakpoint[16];
33
DEF_HELPER_FLAGS_3(faddd, TCG_CALL_NO_WG, f64, env, f64, f64)
34
struct CPUWatchpoint *cpu_watchpoint[16];
34
diff --git a/target/sparc/fop_helper.c b/target/sparc/fop_helper.c
35
36
+ /* Optional fault info across tlb lookup. */
37
+ ARMMMUFaultInfo *tlb_fi;
38
+
39
/* Fields up to this point are cleared by a CPU reset */
40
struct {} end_reset_fields;
41
42
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
43
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/ptw.c
36
--- a/target/sparc/fop_helper.c
45
+++ b/target/arm/ptw.c
37
+++ b/target/sparc/fop_helper.c
46
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@ uint32_t helper_fcmpeq(CPUSPARCState *env, Int128 src1, Int128 src2)
47
#include "qemu/osdep.h"
39
return finish_fcmp(env, r, GETPC());
48
#include "qemu/log.h"
49
#include "qemu/range.h"
50
+#include "exec/exec-all.h"
51
#include "cpu.h"
52
#include "internals.h"
53
#include "idau.h"
54
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
55
bool out_secure;
56
bool out_be;
57
hwaddr out_phys;
58
+ void *out_host;
59
} S1Translate;
60
61
static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
62
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
63
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
64
}
40
}
65
41
66
-static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
42
-uint32_t helper_flcmps(float32 src1, float32 src2)
67
+static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
43
+uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2)
68
{
44
{
69
/*
45
/*
70
* For an S1 page table walk, the stage 1 attributes are always
46
* FLCMP never raises an exception nor modifies any FSR fields.
71
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
47
* Perform the comparison with a dummy fp environment.
72
* With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
73
* when cacheattrs.attrs bit [2] is 0.
74
*/
48
*/
75
- assert(cacheattrs.is_s2_format);
49
- float_status discard = { };
76
if (hcr & HCR_FWB) {
50
+ float_status discard = env->fp_status;
77
- return (cacheattrs.attrs & 0x4) == 0;
51
FloatRelation r;
78
+ return (attrs & 0x4) == 0;
52
79
} else {
53
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
80
- return (cacheattrs.attrs & 0xc) == 0;
54
@@ -XXX,XX +XXX,XX @@ uint32_t helper_flcmps(float32 src1, float32 src2)
81
+ return (attrs & 0xc) == 0;
55
g_assert_not_reached();
82
}
83
}
56
}
84
57
85
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
58
-uint32_t helper_flcmpd(float64 src1, float64 src2)
86
hwaddr addr, ARMMMUFaultInfo *fi)
59
+uint32_t helper_flcmpd(CPUSPARCState *env, float64 src1, float64 src2)
87
{
60
{
88
bool is_secure = ptw->in_secure;
61
- float_status discard = { };
89
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
62
+ float_status discard = env->fp_status;
90
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
63
FloatRelation r;
91
+ bool s2_phys = false;
64
92
+ uint8_t pte_attrs;
65
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
93
+ bool pte_secure;
66
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
94
67
index XXXXXXX..XXXXXXX 100644
95
- if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
68
--- a/target/sparc/translate.c
96
- !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
69
+++ b/target/sparc/translate.c
97
- GetPhysAddrResult s2 = {};
70
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPs(DisasContext *dc, arg_FLCMPs *a)
98
- S1Translate s2ptw = {
71
99
- .in_mmu_idx = s2_mmu_idx,
72
src1 = gen_load_fpr_F(dc, a->rs1);
100
- .in_secure = is_secure,
73
src2 = gen_load_fpr_F(dc, a->rs2);
101
- .in_debug = ptw->in_debug,
74
- gen_helper_flcmps(cpu_fcc[a->cc], src1, src2);
102
- };
75
+ gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);
103
- uint64_t hcr;
76
return advance_pc(dc);
104
- int ret;
105
+ if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
106
+ || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
107
+ s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
108
+ s2_phys = true;
109
+ }
110
111
- ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
112
- false, &s2, fi);
113
- if (ret) {
114
- assert(fi->type != ARMFault_None);
115
- fi->s2addr = addr;
116
- fi->stage2 = true;
117
- fi->s1ptw = true;
118
- fi->s1ns = !is_secure;
119
- return false;
120
+ if (unlikely(ptw->in_debug)) {
121
+ /*
122
+ * From gdbstub, do not use softmmu so that we don't modify the
123
+ * state of the cpu at all, including softmmu tlb contents.
124
+ */
125
+ if (s2_phys) {
126
+ ptw->out_phys = addr;
127
+ pte_attrs = 0;
128
+ pte_secure = is_secure;
129
+ } else {
130
+ S1Translate s2ptw = {
131
+ .in_mmu_idx = s2_mmu_idx,
132
+ .in_secure = is_secure,
133
+ .in_debug = true,
134
+ };
135
+ GetPhysAddrResult s2 = { };
136
+ if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
137
+ false, &s2, fi)) {
138
+ goto fail;
139
+ }
140
+ ptw->out_phys = s2.f.phys_addr;
141
+ pte_attrs = s2.cacheattrs.attrs;
142
+ pte_secure = s2.f.attrs.secure;
143
}
144
+ ptw->out_host = NULL;
145
+ } else {
146
+ CPUTLBEntryFull *full;
147
+ int flags;
148
149
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
150
- if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
151
+ env->tlb_fi = fi;
152
+ flags = probe_access_full(env, addr, MMU_DATA_LOAD,
153
+ arm_to_core_mmu_idx(s2_mmu_idx),
154
+ true, &ptw->out_host, &full, 0);
155
+ env->tlb_fi = NULL;
156
+
157
+ if (unlikely(flags & TLB_INVALID_MASK)) {
158
+ goto fail;
159
+ }
160
+ ptw->out_phys = full->phys_addr;
161
+ pte_attrs = full->pte_attrs;
162
+ pte_secure = full->attrs.secure;
163
+ }
164
+
165
+ if (!s2_phys) {
166
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
167
+
168
+ if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
169
/*
170
* PTW set and S1 walk touched S2 Device memory:
171
* generate Permission fault.
172
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
173
fi->s1ns = !is_secure;
174
return false;
175
}
176
-
177
- if (arm_is_secure_below_el3(env)) {
178
- /* Check if page table walk is to secure or non-secure PA space. */
179
- if (is_secure) {
180
- is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
181
- } else {
182
- is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
183
- }
184
- } else {
185
- assert(!is_secure);
186
- }
187
-
188
- addr = s2.f.phys_addr;
189
}
190
191
- ptw->out_secure = is_secure;
192
- ptw->out_phys = addr;
193
- ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
194
+ /* Check if page table walk is to secure or non-secure PA space. */
195
+ ptw->out_secure = (is_secure
196
+ && !(pte_secure
197
+ ? env->cp15.vstcr_el2 & VSTCR_SW
198
+ : env->cp15.vtcr_el2 & VTCR_NSW));
199
+ ptw->out_be = regime_translation_big_endian(env, mmu_idx);
200
return true;
201
+
202
+ fail:
203
+ assert(fi->type != ARMFault_None);
204
+ fi->s2addr = addr;
205
+ fi->stage2 = true;
206
+ fi->s1ptw = true;
207
+ fi->s1ns = !is_secure;
208
+ return false;
209
}
77
}
210
78
211
/* All loads done in the course of a page table walk go through here. */
79
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPd(DisasContext *dc, arg_FLCMPd *a)
212
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
80
213
ARMMMUFaultInfo *fi)
81
src1 = gen_load_fpr_D(dc, a->rs1);
214
{
82
src2 = gen_load_fpr_D(dc, a->rs2);
215
CPUState *cs = env_cpu(env);
83
- gen_helper_flcmpd(cpu_fcc[a->cc], src1, src2);
216
- MemTxAttrs attrs = {};
84
+ gen_helper_flcmpd(cpu_fcc[a->cc], tcg_env, src1, src2);
217
- MemTxResult result = MEMTX_OK;
85
return advance_pc(dc);
218
- AddressSpace *as;
219
uint32_t data;
220
221
if (!S1_ptw_translate(env, ptw, addr, fi)) {
222
+ /* Failure. */
223
+ assert(fi->s1ptw);
224
return 0;
225
}
226
- addr = ptw->out_phys;
227
- attrs.secure = ptw->out_secure;
228
- as = arm_addressspace(cs, attrs);
229
- if (ptw->out_be) {
230
- data = address_space_ldl_be(as, addr, attrs, &result);
231
+
232
+ if (likely(ptw->out_host)) {
233
+ /* Page tables are in RAM, and we have the host address. */
234
+ if (ptw->out_be) {
235
+ data = ldl_be_p(ptw->out_host);
236
+ } else {
237
+ data = ldl_le_p(ptw->out_host);
238
+ }
239
} else {
240
- data = address_space_ldl_le(as, addr, attrs, &result);
241
+ /* Page tables are in MMIO. */
242
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
243
+ AddressSpace *as = arm_addressspace(cs, attrs);
244
+ MemTxResult result = MEMTX_OK;
245
+
246
+ if (ptw->out_be) {
247
+ data = address_space_ldl_be(as, ptw->out_phys, attrs, &result);
248
+ } else {
249
+ data = address_space_ldl_le(as, ptw->out_phys, attrs, &result);
250
+ }
251
+ if (unlikely(result != MEMTX_OK)) {
252
+ fi->type = ARMFault_SyncExternalOnWalk;
253
+ fi->ea = arm_extabort_type(result);
254
+ return 0;
255
+ }
256
}
257
- if (result == MEMTX_OK) {
258
- return data;
259
- }
260
- fi->type = ARMFault_SyncExternalOnWalk;
261
- fi->ea = arm_extabort_type(result);
262
- return 0;
263
+ return data;
264
}
86
}
265
87
266
static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
267
ARMMMUFaultInfo *fi)
268
{
269
CPUState *cs = env_cpu(env);
270
- MemTxAttrs attrs = {};
271
- MemTxResult result = MEMTX_OK;
272
- AddressSpace *as;
273
uint64_t data;
274
275
if (!S1_ptw_translate(env, ptw, addr, fi)) {
276
+ /* Failure. */
277
+ assert(fi->s1ptw);
278
return 0;
279
}
280
- addr = ptw->out_phys;
281
- attrs.secure = ptw->out_secure;
282
- as = arm_addressspace(cs, attrs);
283
- if (ptw->out_be) {
284
- data = address_space_ldq_be(as, addr, attrs, &result);
285
+
286
+ if (likely(ptw->out_host)) {
287
+ /* Page tables are in RAM, and we have the host address. */
288
+ if (ptw->out_be) {
289
+ data = ldq_be_p(ptw->out_host);
290
+ } else {
291
+ data = ldq_le_p(ptw->out_host);
292
+ }
293
} else {
294
- data = address_space_ldq_le(as, addr, attrs, &result);
295
+ /* Page tables are in MMIO. */
296
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
297
+ AddressSpace *as = arm_addressspace(cs, attrs);
298
+ MemTxResult result = MEMTX_OK;
299
+
300
+ if (ptw->out_be) {
301
+ data = address_space_ldq_be(as, ptw->out_phys, attrs, &result);
302
+ } else {
303
+ data = address_space_ldq_le(as, ptw->out_phys, attrs, &result);
304
+ }
305
+ if (unlikely(result != MEMTX_OK)) {
306
+ fi->type = ARMFault_SyncExternalOnWalk;
307
+ fi->ea = arm_extabort_type(result);
308
+ return 0;
309
+ }
310
}
311
- if (result == MEMTX_OK) {
312
- return data;
313
- }
314
- fi->type = ARMFault_SyncExternalOnWalk;
315
- fi->ea = arm_extabort_type(result);
316
- return 0;
317
+ return data;
318
}
319
320
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
321
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/tlb_helper.c
324
+++ b/target/arm/tlb_helper.c
325
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
326
bool probe, uintptr_t retaddr)
327
{
328
ARMCPU *cpu = ARM_CPU(cs);
329
- ARMMMUFaultInfo fi = {};
330
GetPhysAddrResult res = {};
331
+ ARMMMUFaultInfo local_fi, *fi;
332
int ret;
333
334
+ /*
335
+ * Allow S1_ptw_translate to see any fault generated here.
336
+ * Since this may recurse, read and clear.
337
+ */
338
+ fi = cpu->env.tlb_fi;
339
+ if (fi) {
340
+ cpu->env.tlb_fi = NULL;
341
+ } else {
342
+ fi = memset(&local_fi, 0, sizeof(local_fi));
343
+ }
344
+
345
/*
346
* Walk the page table and (if the mapping exists) add the page
347
* to the TLB. On success, return true. Otherwise, if probing,
348
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
349
*/
350
ret = get_phys_addr(&cpu->env, address, access_type,
351
core_to_arm_mmu_idx(&cpu->env, mmu_idx),
352
- &res, &fi);
353
+ &res, fi);
354
if (likely(!ret)) {
355
/*
356
* Map a single [sub]page. Regions smaller than our declared
357
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
358
} else {
359
/* now we have a real cpu fault */
360
cpu_restore_state(cs, retaddr, true);
361
- arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
362
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, fi);
363
}
364
}
365
#else
366
--
88
--
367
2.25.1
89
2.34.1
diff view generated by jsdifflib
New patch
1
In the helper_compute_fprf functions, we pass a dummy float_status
2
in to the is_signaling_nan() function. This is unnecessary, because
3
we have convenient access to the CPU env pointer here and that
4
is already set up with the correct values for the snan_bit_is_one
5
and no_signaling_nans config settings. is_signaling_nan() doesn't
6
ever update the fp_status with any exception flags, so there is
7
no reason not to use env->fp_status here.
1
8
9
Use env->fp_status instead of the dummy fp_status.
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20241202131347.498124-34-peter.maydell@linaro.org
14
---
15
target/ppc/fpu_helper.c | 3 +--
16
1 file changed, 1 insertion(+), 2 deletions(-)
17
18
diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/ppc/fpu_helper.c
21
+++ b/target/ppc/fpu_helper.c
22
@@ -XXX,XX +XXX,XX @@ void helper_compute_fprf_##tp(CPUPPCState *env, tp arg) \
23
} else if (tp##_is_infinity(arg)) { \
24
fprf = neg ? 0x09 << FPSCR_FPRF : 0x05 << FPSCR_FPRF; \
25
} else { \
26
- float_status dummy = { }; /* snan_bit_is_one = 0 */ \
27
- if (tp##_is_signaling_nan(arg, &dummy)) { \
28
+ if (tp##_is_signaling_nan(arg, &env->fp_status)) { \
29
fprf = 0x00 << FPSCR_FPRF; \
30
} else { \
31
fprf = 0x11 << FPSCR_FPRF; \
32
--
33
2.34.1
diff view generated by jsdifflib
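As with the quiet-NaN predicate earlier, the signalling-NaN test used here only reads the operand's bits and the status configuration; it never raises exception flags. For the common (not snan_bit_is_one) convention it reduces to a bit test, sketched here for float64 (illustrative, not the softfloat code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* IEEE 754-2008 style: a NaN is signalling when the top fraction bit is
     * clear (and the fraction is non-zero, or it would be an infinity). */
    static bool f64_is_signaling_nan(uint64_t f)
    {
        uint64_t exp  = (f >> 52) & 0x7ff;
        uint64_t frac = f & 0x000fffffffffffffULL;

        return exp == 0x7ff && frac != 0 && !(frac & 0x0008000000000000ULL);
    }

    int main(void)
    {
        printf("%d %d\n",
               f64_is_signaling_nan(0x7ff0000000000001ULL),   /* 1: signalling */
               f64_is_signaling_nan(0x7ff8000000000000ULL));  /* 0: quiet */
        return 0;
    }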
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
3
Now that float_status has a bunch of fp parameters,
4
In is_guarded_page, use probe_access_full instead of just guessing
4
it is easier to copy an existing structure than create
5
that the tlb entry is still present. Also handles the FIXME about
5
one from scratch. Begin by copying the structure that
6
executing from device memory.
6
corresponds to the FPSR and make only the adjustments
7
required for BFloat16 semantics.
7
8
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241203203949.483774-2-richard.henderson@linaro.org
10
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
14
---
13
target/arm/cpu-param.h | 9 +++++----
15
target/arm/tcg/vec_helper.c | 20 +++++++-------------
14
target/arm/cpu.h | 13 -------------
16
1 file changed, 7 insertions(+), 13 deletions(-)
15
target/arm/internals.h | 1 +
16
target/arm/ptw.c | 7 ++++---
17
target/arm/translate-a64.c | 21 ++++++++++-----------
18
5 files changed, 20 insertions(+), 31 deletions(-)
19
17
20
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
18
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
21
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu-param.h
20
--- a/target/arm/tcg/vec_helper.c
23
+++ b/target/arm/cpu-param.h
21
+++ b/target/arm/tcg/vec_helper.c
24
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ bool is_ebf(CPUARMState *env, float_status *statusp, float_status *oddstatusp)
25
*
23
* no effect on AArch32 instructions.
26
* For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
24
*/
27
* Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
25
bool ebf = is_a64(env) && env->vfp.fpcr & FPCR_EBF;
28
- * For shareability, as in the SH field of the VMSAv8-64 PTEs.
26
- *statusp = (float_status){
29
+ * For shareability and guarded, as in the SH and GP fields respectively
27
- .tininess_before_rounding = float_tininess_before_rounding,
30
+ * of the VMSAv8-64 PTEs.
28
- .float_rounding_mode = float_round_to_odd_inf,
31
*/
29
- .flush_to_zero = true,
32
# define TARGET_PAGE_ENTRY_EXTRA \
30
- .flush_inputs_to_zero = true,
33
- uint8_t pte_attrs; \
31
- .default_nan_mode = true,
34
- uint8_t shareability;
32
- };
33
+
34
+ *statusp = env->vfp.fp_status;
35
+ set_default_nan_mode(true, statusp);
36
37
if (ebf) {
38
- float_status *fpst = &env->vfp.fp_status;
39
- set_flush_to_zero(get_flush_to_zero(fpst), statusp);
40
- set_flush_inputs_to_zero(get_flush_inputs_to_zero(fpst), statusp);
41
- set_float_rounding_mode(get_float_rounding_mode(fpst), statusp);
35
-
42
-
36
+ uint8_t pte_attrs; \
43
/* EBF=1 needs to do a step with round-to-odd semantics */
37
+ uint8_t shareability; \
44
*oddstatusp = *statusp;
38
+ bool guarded;
45
set_float_rounding_mode(float_round_to_odd, oddstatusp);
39
#endif
46
+ } else {
40
47
+ set_flush_to_zero(true, statusp);
41
#define NB_MMU_MODES 8
48
+ set_flush_inputs_to_zero(true, statusp);
42
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
49
+ set_float_rounding_mode(float_round_to_odd_inf, statusp);
43
index XXXXXXX..XXXXXXX 100644
50
}
44
--- a/target/arm/cpu.h
45
+++ b/target/arm/cpu.h
46
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
47
/* Shared between translate-sve.c and sve_helper.c. */
48
extern const uint64_t pred_esz_masks[5];
49
50
-/* Helper for the macros below, validating the argument type. */
51
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
52
-{
53
- return x;
54
-}
55
-
51
-
56
-/*
52
return ebf;
57
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
58
- * Using these should be a bit more self-documenting than using the
59
- * generic target bits directly.
60
- */
61
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
62
-
63
/*
64
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
65
* Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
66
diff --git a/target/arm/internals.h b/target/arm/internals.h
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/internals.h
69
+++ b/target/arm/internals.h
70
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
71
unsigned int attrs:8;
72
unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
73
bool is_s2_format:1;
74
+ bool guarded:1; /* guarded bit of the v8-64 PTE */
75
} ARMCacheAttrs;
76
77
/* Fields that are valid upon success. */
78
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/ptw.c
81
+++ b/target/arm/ptw.c
82
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
83
*/
84
result->f.attrs.secure = false;
85
}
86
- /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
87
- if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
88
- arm_tlb_bti_gp(&result->f.attrs) = true;
89
+
90
+ /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
91
+ if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
92
+ result->f.guarded = guarded;
93
}
94
95
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
96
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate-a64.c
99
+++ b/target/arm/translate-a64.c
100
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
101
#ifdef CONFIG_USER_ONLY
102
return page_get_flags(addr) & PAGE_BTI;
103
#else
104
+ CPUTLBEntryFull *full;
105
+ void *host;
106
int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
107
- unsigned int index = tlb_index(env, mmu_idx, addr);
108
- CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
109
+ int flags;
110
111
/*
112
* We test this immediately after reading an insn, which means
113
- * that any normal page must be in the TLB. The only exception
114
- * would be for executing from flash or device memory, which
115
- * does not retain the TLB entry.
116
- *
117
- * FIXME: Assume false for those, for now. We could use
118
- * arm_cpu_get_phys_page_attrs_debug to re-read the page
119
- * table entry even for that case.
120
+ * that the TLB entry must be present and valid, and thus this
121
+ * access will never raise an exception.
122
*/
123
- return (tlb_hit(entry->addr_code, addr) &&
124
- arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
125
+ flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
126
+ false, &host, &full, 0);
127
+ assert(!(flags & TLB_INVALID_MASK));
128
+
129
+ return full->guarded;
130
#endif
131
}
53
}
132
54
133
--
55
--
134
2.25.1
56
2.34.1
57
58
diff view generated by jsdifflib
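One of the two adjustments made above is turning on default_nan_mode for the BFloat16 status. As a reminder of what that flag changes, here is a small sketch (not softfloat internals): when it is set, a NaN result is replaced by the target's default NaN instead of propagating an input NaN's payload.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DEFAULT_NAN_F32 0x7fc00000u   /* example: Arm-style default NaN */

    static uint32_t nan_result_f32(uint32_t input_nan, bool default_nan_mode)
    {
        return default_nan_mode ? DEFAULT_NAN_F32 : input_nan;
    }

    int main(void)
    {
        uint32_t in = 0x7fc12345;   /* quiet NaN carrying a payload */

        printf("propagated: 0x%08" PRIx32 "\n", nan_result_f32(in, false));
        printf("default:    0x%08" PRIx32 "\n", nan_result_f32(in, true));
        return 0;
    }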
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Currently we hardcode the default NaN value in parts64_default_nan()
2
using a compile-time ifdef ladder. This is awkward for two cases:
3
* for single-QEMU-binary we can't hard-code target-specifics like this
4
* for Arm FEAT_AFP the default NaN value depends on FPCR.AH
5
(specifically the sign bit is different)
2
6
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
7
Add a field to float_status to specify the default NaN value; fall
8
back to the old ifdef behaviour if these are not set.
4
9
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
The default NaN value is specified by setting a uint8_t to a
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
pattern corresponding to the sign and upper fraction parts of
7
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
12
the NaN; the lower bits of the fraction are set from bit 0 of
13
the pattern.
14
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20241202131347.498124-35-peter.maydell@linaro.org
9
---
18
---
10
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++-----------
19
include/fpu/softfloat-helpers.h | 11 +++++++
11
1 file changed, 29 insertions(+), 12 deletions(-)
20
include/fpu/softfloat-types.h | 10 ++++++
21
fpu/softfloat-specialize.c.inc | 55 ++++++++++++++++++++-------------
22
3 files changed, 54 insertions(+), 22 deletions(-)
12
23
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
24
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
14
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-a64.c
26
--- a/include/fpu/softfloat-helpers.h
16
+++ b/target/arm/translate-a64.c
27
+++ b/include/fpu/softfloat-helpers.h
17
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
28
@@ -XXX,XX +XXX,XX @@ static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
18
}
29
status->float_infzeronan_rule = rule;
19
}
30
}
20
31
21
+static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
32
+static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
33
+ float_status *status)
22
+{
34
+{
23
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
35
+ status->default_nan_pattern = dnan_pattern;
24
+}
36
+}
25
+
37
+
26
void gen_a64_update_pc(DisasContext *s, target_long diff)
38
static inline void set_flush_to_zero(bool val, float_status *status)
27
{
39
{
28
- tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
40
status->flush_to_zero = val;
29
+ gen_pc_plus_diff(s, cpu_pc, diff);
41
@@ -XXX,XX +XXX,XX @@ static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status
42
return status->float_infzeronan_rule;
30
}
43
}
31
44
32
/*
45
+static inline uint8_t get_float_default_nan_pattern(float_status *status)
33
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
46
+{
34
47
+ return status->default_nan_pattern;
35
if (insn & (1U << 31)) {
48
+}
36
/* BL Branch with link */
49
+
37
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
50
static inline bool get_flush_to_zero(float_status *status)
38
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
39
}
40
41
/* B Branch / BL Branch with link */
42
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
43
default:
44
goto do_unallocated;
45
}
46
- gen_a64_set_pc(s, dst);
47
/* BLR also needs to load return address */
48
if (opc == 1) {
49
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
50
+ TCGv_i64 lr = cpu_reg(s, 30);
51
+ if (dst == lr) {
52
+ TCGv_i64 tmp = new_tmp_a64(s);
53
+ tcg_gen_mov_i64(tmp, dst);
54
+ dst = tmp;
55
+ }
56
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
57
}
58
+ gen_a64_set_pc(s, dst);
59
break;
60
61
case 8: /* BRAA */
62
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
63
} else {
64
dst = cpu_reg(s, rn);
65
}
66
- gen_a64_set_pc(s, dst);
67
/* BLRAA also needs to load return address */
68
if (opc == 9) {
69
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
70
+ TCGv_i64 lr = cpu_reg(s, 30);
71
+ if (dst == lr) {
72
+ TCGv_i64 tmp = new_tmp_a64(s);
73
+ tcg_gen_mov_i64(tmp, dst);
74
+ dst = tmp;
75
+ }
76
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
77
}
78
+ gen_a64_set_pc(s, dst);
79
break;
80
81
case 4: /* ERET */
82
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
83
84
tcg_rt = cpu_reg(s, rt);
85
86
- clean_addr = tcg_constant_i64(s->pc_curr + imm);
87
+ clean_addr = new_tmp_a64(s);
88
+ gen_pc_plus_diff(s, clean_addr, imm);
89
if (is_vector) {
90
do_fp_ld(s, rt, clean_addr, size);
91
} else {
92
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
93
static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
94
{
51
{
95
unsigned int page, rd;
52
return status->flush_to_zero;
96
- uint64_t base;
53
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
97
- uint64_t offset;
54
index XXXXXXX..XXXXXXX 100644
98
+ int64_t offset;
55
--- a/include/fpu/softfloat-types.h
99
56
+++ b/include/fpu/softfloat-types.h
100
page = extract32(insn, 31, 1);
57
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
101
/* SignExtend(immhi:immlo) -> offset */
58
/* should denormalised inputs go to zero and set the input_denormal flag? */
102
offset = sextract64(insn, 5, 19);
59
bool flush_inputs_to_zero;
103
offset = offset << 2 | extract32(insn, 29, 2);
60
bool default_nan_mode;
104
rd = extract32(insn, 0, 5);
61
+ /*
105
- base = s->pc_curr;
62
+ * The pattern to use for the default NaN. Here the high bit specifies
106
63
+ * the default NaN's sign bit, and bits 6..0 specify the high bits of the
107
if (page) {
64
+ * fractional part. The low bits of the fractional part are copies of bit 0.
108
/* ADRP (page based) */
65
+ * The exponent of the default NaN is (as for any NaN) always all 1s.
109
- base &= ~0xfff;
66
+ * Note that a value of 0 here is not a valid NaN. The target must set
110
offset <<= 12;
67
+ * this to the correct non-zero value, or we will assert when trying to
111
+ /* The page offset is ok for TARGET_TB_PCREL. */
68
+ * create a default NaN.
112
+ offset -= s->pc_curr & 0xfff;
69
+ */
113
}
70
+ uint8_t default_nan_pattern;
114
71
/*
115
- tcg_gen_movi_i64(cpu_reg(s, rd), base + offset);
72
* The flags below are not used on all specializations and may
116
+ gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
73
* constant fold away (see snan_bit_is_one()/no_signalling_nans() in
117
}
74
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
118
75
index XXXXXXX..XXXXXXX 100644
119
/*
76
--- a/fpu/softfloat-specialize.c.inc
77
+++ b/fpu/softfloat-specialize.c.inc
78
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
79
{
80
bool sign = 0;
81
uint64_t frac;
82
+ uint8_t dnan_pattern = status->default_nan_pattern;
83
84
+ if (dnan_pattern == 0) {
85
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
86
- /* !snan_bit_is_one, set all bits */
87
- frac = (1ULL << DECOMPOSED_BINARY_POINT) - 1;
88
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
89
+ /* Sign bit clear, all frac bits set */
90
+ dnan_pattern = 0b01111111;
91
+#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
92
|| defined(TARGET_MICROBLAZE)
93
- /* !snan_bit_is_one, set sign and msb */
94
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
95
- sign = 1;
96
+ /* Sign bit set, most significant frac bit set */
97
+ dnan_pattern = 0b11000000;
98
#elif defined(TARGET_HPPA)
99
- /* snan_bit_is_one, set msb-1. */
100
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 2);
101
+ /* Sign bit clear, msb-1 frac bit set */
102
+ dnan_pattern = 0b00100000;
103
#elif defined(TARGET_HEXAGON)
104
- sign = 1;
105
- frac = ~0ULL;
106
+ /* Sign bit set, all frac bits set. */
107
+ dnan_pattern = 0b11111111;
108
#else
109
- /*
110
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
111
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
112
- * do not have floating-point.
113
- */
114
- if (snan_bit_is_one(status)) {
115
- /* set all bits other than msb */
116
- frac = (1ULL << (DECOMPOSED_BINARY_POINT - 1)) - 1;
117
- } else {
118
- /* set msb */
119
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
120
- }
121
+ /*
122
+ * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
123
+ * S390, SH4, TriCore, and Xtensa. Our other supported targets
124
+ * do not have floating-point.
125
+ */
126
+ if (snan_bit_is_one(status)) {
127
+ /* sign bit clear, set all frac bits other than msb */
128
+ dnan_pattern = 0b00111111;
129
+ } else {
130
+ /* sign bit clear, set frac msb */
131
+ dnan_pattern = 0b01000000;
132
+ }
133
#endif
134
+ }
135
+ assert(dnan_pattern != 0);
136
+
137
+ sign = dnan_pattern >> 7;
138
+ /*
139
+ * Place default_nan_pattern [6:0] into bits [62:56],
140
+ * and replecate bit [0] down into [55:0]
141
+ */
142
+ frac = deposit64(0, DECOMPOSED_BINARY_POINT - 7, 7, dnan_pattern);
143
+ frac = deposit64(frac, 0, DECOMPOSED_BINARY_POINT - 7, -(dnan_pattern & 1));
144
145
*p = (FloatParts64) {
146
.cls = float_class_qnan,
120
--
147
--
121
2.25.1
148
2.34.1
diff view generated by jsdifflib
New patch
1
Set the default NaN pattern explicitly for the tests/fp code.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-36-peter.maydell@linaro.org
6
---
7
tests/fp/fp-bench.c | 1 +
8
tests/fp/fp-test-log2.c | 1 +
9
tests/fp/fp-test.c | 1 +
10
3 files changed, 3 insertions(+)
11
12
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tests/fp/fp-bench.c
15
+++ b/tests/fp/fp-bench.c
16
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
17
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
18
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
19
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
20
+ set_float_default_nan_pattern(0b01000000, &soft_status);
21
22
f = bench_funcs[operation][precision];
23
g_assert(f);
24
diff --git a/tests/fp/fp-test-log2.c b/tests/fp/fp-test-log2.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/tests/fp/fp-test-log2.c
27
+++ b/tests/fp/fp-test-log2.c
28
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
29
int i;
30
31
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
32
+ set_float_default_nan_pattern(0b01000000, &qsf);
33
set_float_rounding_mode(float_round_nearest_even, &qsf);
34
35
test.d = 0.0;
36
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/tests/fp/fp-test.c
39
+++ b/tests/fp/fp-test.c
40
@@ -XXX,XX +XXX,XX @@ void run_test(void)
41
*/
42
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
43
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
44
+ set_float_default_nan_pattern(0b01000000, &qsf);
45
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
46
47
genCases_setLevel(test_level);
48
--
49
2.34.1
diff view generated by jsdifflib
New patch
1
Set the default NaN pattern explicitly, and remove the ifdef from
2
parts64_default_nan().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-37-peter.maydell@linaro.org
7
---
8
target/microblaze/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 3 +--
10
2 files changed, 3 insertions(+), 2 deletions(-)
11
12
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/microblaze/cpu.c
15
+++ b/target/microblaze/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj, ResetType type)
17
* this architecture.
18
*/
19
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
20
+ /* Default NaN: sign bit set, most significant frac bit set */
21
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);
22
23
#if defined(CONFIG_USER_ONLY)
24
/* start in user mode with interrupts enabled. */
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
30
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
31
/* Sign bit clear, all frac bits set */
32
dnan_pattern = 0b01111111;
33
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
34
- || defined(TARGET_MICROBLAZE)
35
+#elif defined(TARGET_I386) || defined(TARGET_X86_64)
36
/* Sign bit set, most significant frac bit set */
37
dnan_pattern = 0b11000000;
38
#elif defined(TARGET_HPPA)
39
--
40
2.34.1
diff view generated by jsdifflib
New patch
1
Set the default NaN pattern explicitly, and remove the ifdef from
2
parts64_default_nan().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-38-peter.maydell@linaro.org
7
---
8
target/i386/tcg/fpu_helper.c | 4 ++++
9
fpu/softfloat-specialize.c.inc | 3 ---
10
2 files changed, 4 insertions(+), 3 deletions(-)
11
12
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/i386/tcg/fpu_helper.c
15
+++ b/target/i386/tcg/fpu_helper.c
16
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
17
*/
18
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
19
set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
20
+ /* Default NaN: sign bit set, most significant frac bit set */
21
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);
22
+ set_float_default_nan_pattern(0b11000000, &env->mmx_status);
23
+ set_float_default_nan_pattern(0b11000000, &env->sse_status);
24
}
25
26
static inline uint8_t save_exception_flags(CPUX86State *env)
27
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
28
index XXXXXXX..XXXXXXX 100644
29
--- a/fpu/softfloat-specialize.c.inc
30
+++ b/fpu/softfloat-specialize.c.inc
31
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
32
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
33
/* Sign bit clear, all frac bits set */
34
dnan_pattern = 0b01111111;
35
-#elif defined(TARGET_I386) || defined(TARGET_X86_64)
36
- /* Sign bit set, most significant frac bit set */
37
- dnan_pattern = 0b11000000;
38
#elif defined(TARGET_HPPA)
39
/* Sign bit clear, msb-1 frac bit set */
40
dnan_pattern = 0b00100000;
41
--
42
2.34.1
diff view generated by jsdifflib
New patch
1
Set the default NaN pattern explicitly, and remove the ifdef from
2
parts64_default_nan().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-39-peter.maydell@linaro.org
7
---
8
target/hppa/fpu_helper.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 3 ---
10
2 files changed, 2 insertions(+), 3 deletions(-)
11
12
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/hppa/fpu_helper.c
15
+++ b/target/hppa/fpu_helper.c
16
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
17
set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
18
/* For inf * 0 + NaN, return the input NaN */
19
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
+ /* Default NaN: sign bit clear, msb-1 frac bit set */
21
+ set_float_default_nan_pattern(0b00100000, &env->fp_status);
22
}
23
24
void cpu_hppa_loaded_fr0(CPUHPPAState *env)
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
30
#if defined(TARGET_SPARC) || defined(TARGET_M68K)
31
/* Sign bit clear, all frac bits set */
32
dnan_pattern = 0b01111111;
33
-#elif defined(TARGET_HPPA)
34
- /* Sign bit clear, msb-1 frac bit set */
35
- dnan_pattern = 0b00100000;
36
#elif defined(TARGET_HEXAGON)
37
/* Sign bit set, all frac bits set. */
38
dnan_pattern = 0b11111111;
39
--
40
2.34.1
diff view generated by jsdifflib
New patch
1
Set the default NaN pattern explicitly for the alpha target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-40-peter.maydell@linaro.org
6
---
7
target/alpha/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/alpha/cpu.c
13
+++ b/target/alpha/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_initfn(Object *obj)
15
* operand in Fa. That is float_2nan_prop_ba.
16
*/
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
18
+ /* Default NaN: sign bit clear, msb frac bit set */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
#if defined(CONFIG_USER_ONLY)
21
env->flags = ENV_FLAG_PS_USER | ENV_FLAG_FEN;
22
cpu_alpha_store_fpcr(env, (uint64_t)(FPCR_INVD | FPCR_DZED | FPCR_OVFD
23
--
24
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for the arm target.
2
This includes setting it for the old linux-user nwfpe emulation.
3
For nwfpe, our default doesn't match the real kernel, but we
4
avoid making a behaviour change in this commit.
2
5
3
The return type of the functions is already bool, but in a few
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
instances we used an integer type with the return statement.
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
9
---
10
linux-user/arm/nwfpe/fpa11.c | 5 +++++
11
target/arm/cpu.c | 2 ++
12
2 files changed, 7 insertions(+)
5
13
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
diff --git a/linux-user/arm/nwfpe/fpa11.c b/linux-user/arm/nwfpe/fpa11.c
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/ptw.c | 7 +++----
12
1 file changed, 3 insertions(+), 4 deletions(-)
13
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/ptw.c
16
--- a/linux-user/arm/nwfpe/fpa11.c
17
+++ b/target/arm/ptw.c
17
+++ b/linux-user/arm/nwfpe/fpa11.c
18
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
18
@@ -XXX,XX +XXX,XX @@ void resetFPA11(void)
19
result->f.lg_page_size = TARGET_PAGE_BITS;
19
* this late date.
20
result->cacheattrs.shareability = shareability;
20
*/
21
result->cacheattrs.attrs = memattr;
21
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &fpa11->fp_status);
22
- return 0;
22
+ /*
23
+ return false;
23
+ * Use the same default NaN value as Arm VFP. This doesn't match
24
+ * the Linux kernel's nwfpe emulation, which uses an all-1s value.
25
+ */
26
+ set_float_default_nan_pattern(0b01000000, &fpa11->fp_status);
24
}
27
}
25
28
26
static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
29
void SetRoundingMode(const unsigned int opcode)
27
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
35
* the pseudocode function the arguments are in the order c, a, b.
36
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
37
* and the input NaN if it is signalling
38
+ * * Default NaN has sign bit clear, msb frac bit set
39
*/
40
static void arm_set_default_fp_behaviours(float_status *s)
28
{
41
{
29
hwaddr ipa;
42
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
30
int s1_prot;
43
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
31
- int ret;
44
set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
32
bool is_secure = ptw->in_secure;
45
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
33
- bool ipa_secure, s2walk_secure;
46
+ set_float_default_nan_pattern(0b01000000, s);
34
+ bool ret, ipa_secure, s2walk_secure;
35
ARMCacheAttrs cacheattrs1;
36
bool is_el0;
37
uint64_t hcr;
38
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
39
&& (ipa_secure
40
|| !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
41
42
- return 0;
43
+ return false;
44
}
47
}
45
48
46
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
49
static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
47
--
50
--
48
2.25.1
51
2.34.1
New patch
1
Set the default NaN pattern explicitly for loongarch.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-42-peter.maydell@linaro.org
6
---
7
target/loongarch/tcg/fpu_helper.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/loongarch/tcg/fpu_helper.c
13
+++ b/target/loongarch/tcg/fpu_helper.c
14
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
15
*/
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
17
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
18
+ /* Default NaN: sign bit clear, msb frac bit set */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
}
21
22
int ieee_ex_to_loongarch(int xcpt)
23
--
24
2.34.1
New patch
1
Set the default NaN pattern explicitly for m68k.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-43-peter.maydell@linaro.org
6
---
7
target/m68k/cpu.c | 2 ++
8
fpu/softfloat-specialize.c.inc | 2 +-
9
2 files changed, 3 insertions(+), 1 deletion(-)
10
11
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/m68k/cpu.c
14
+++ b/target/m68k/cpu.c
15
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
16
* preceding paragraph for nonsignaling NaNs.
17
*/
18
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
19
+ /* Default NaN: sign bit clear, all frac bits set */
20
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);
21
22
nan = floatx80_default_nan(&env->fp_status);
23
for (i = 0; i < 8; i++) {
24
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
index XXXXXXX..XXXXXXX 100644
26
--- a/fpu/softfloat-specialize.c.inc
27
+++ b/fpu/softfloat-specialize.c.inc
28
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
29
uint8_t dnan_pattern = status->default_nan_pattern;
30
31
if (dnan_pattern == 0) {
32
-#if defined(TARGET_SPARC) || defined(TARGET_M68K)
33
+#if defined(TARGET_SPARC)
34
/* Sign bit clear, all frac bits set */
35
dnan_pattern = 0b01111111;
36
#elif defined(TARGET_HEXAGON)
37
--
38
2.34.1
New patch
1
Set the default NaN pattern explicitly for MIPS. Note that this
2
is our only target which currently changes the default NaN
3
at runtime (which it was previously doing indirectly when it
4
changed the snan_bit_is_one setting).
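For reference, and assuming the conversion is behaviour-preserving as the
rest of this series intends, the two patterns should reproduce the usual
MIPS binary32 default NaNs. A hypothetical sanity check, not in the patch
('nan2008' is the local flag already computed in restore_snan_bit_mode()):

    /*
     * nan2008: 0b01000000 -> 0x7fc00000 (only the fraction msb set)
     * legacy : 0b00111111 -> 0x7fbfffff (all fraction bits except the msb)
     */
    assert(float32_val(float32_default_nan(&env->active_fpu.fp_status)) ==
           (nan2008 ? 0x7fc00000 : 0x7fbfffff));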
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-44-peter.maydell@linaro.org
9
---
10
target/mips/fpu_helper.h | 7 +++++++
11
target/mips/msa.c | 3 +++
12
2 files changed, 10 insertions(+)
13
14
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/mips/fpu_helper.h
17
+++ b/target/mips/fpu_helper.h
18
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
19
set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
20
nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
21
set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
22
+ /*
23
+ * With nan2008, the default NaN value has the sign bit clear and the
24
+ * frac msb set; with the older mode, the sign bit is clear, and all
25
+ * frac bits except the msb are set.
26
+ */
27
+ set_float_default_nan_pattern(nan2008 ? 0b01000000 : 0b00111111,
28
+ &env->active_fpu.fp_status);
29
30
}
31
32
diff --git a/target/mips/msa.c b/target/mips/msa.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/mips/msa.c
35
+++ b/target/mips/msa.c
36
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
37
/* Inf * 0 + NaN returns the input NaN */
38
set_float_infzeronan_rule(float_infzeronan_dnan_never,
39
&env->active_tc.msa_fp_status);
40
+ /* Default NaN: sign bit clear, frac msb set */
41
+ set_float_default_nan_pattern(0b01000000,
42
+ &env->active_tc.msa_fp_status);
43
}
44
--
45
2.34.1
New patch
1
Set the default NaN pattern explicitly for openrisc.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-45-peter.maydell@linaro.org
6
---
7
target/openrisc/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/openrisc/cpu.c
13
+++ b/target/openrisc/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
15
*/
16
set_float_2nan_prop_rule(float_2nan_prop_x87, &cpu->env.fp_status);
17
18
+ /* Default NaN: sign bit clear, frac msb set */
19
+ set_float_default_nan_pattern(0b01000000, &cpu->env.fp_status);
20
21
#ifndef CONFIG_USER_ONLY
22
cpu->env.picmr = 0x00000000;
23
--
24
2.34.1
New patch
1
Set the default NaN pattern explicitly for ppc.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-46-peter.maydell@linaro.org
6
---
7
target/ppc/cpu_init.c | 4 ++++
8
1 file changed, 4 insertions(+)
9
10
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/ppc/cpu_init.c
13
+++ b/target/ppc/cpu_init.c
14
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
15
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);
17
18
+ /* Default NaN: sign bit clear, set frac msb */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
+ set_float_default_nan_pattern(0b01000000, &env->vec_status);
21
+
22
for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
23
ppc_spr_t *spr = &env->spr_cb[i];
24
25
--
26
2.34.1
New patch
1
Set the default NaN pattern explicitly for sh4. Note that sh4
is one of only three targets (the others being HPPA and
sometimes MIPS) that have snan_bit_is_one set.
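A brief aside on why the pattern differs here (illustrative, not from the
patch): when snan_bit_is_one is set, a NaN with the fraction msb set is a
signalling NaN, so the default (quiet) NaN must leave that bit clear, which
is why sh4 uses 0b00111111 rather than the usual 0b01000000. A hypothetical
local check:

    assert(!float32_is_signaling_nan(float32_default_nan(&env->fp_status),
                                     &env->fp_status));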
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-47-peter.maydell@linaro.org
8
---
9
target/sh4/cpu.c | 2 ++
10
1 file changed, 2 insertions(+)
11
12
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sh4/cpu.c
15
+++ b/target/sh4/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_reset_hold(Object *obj, ResetType type)
17
set_flush_to_zero(1, &env->fp_status);
18
#endif
19
set_default_nan_mode(1, &env->fp_status);
20
+ /* sign bit clear, set all frac bits other than msb */
21
+ set_float_default_nan_pattern(0b00111111, &env->fp_status);
22
}
23
24
static void superh_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
25
--
26
2.34.1
New patch
1
Set the default NaN pattern explicitly for rx.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-48-peter.maydell@linaro.org
6
---
7
target/rx/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/rx/cpu.c
13
+++ b/target/rx/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj, ResetType type)
15
* then prefer dest over source", which is float_2nan_prop_s_ab.
16
*/
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
18
+ /* Default NaN value: sign bit clear, set frac msb */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
}
21
22
static ObjectClass *rx_cpu_class_by_name(const char *cpu_model)
23
--
24
2.34.1
New patch
1
Set the default NaN pattern explicitly for s390x.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-49-peter.maydell@linaro.org
6
---
7
target/s390x/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/s390x/cpu.c
13
+++ b/target/s390x/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
15
set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
16
set_float_infzeronan_rule(float_infzeronan_dnan_always,
17
&env->fpu_status);
18
+ /* Default NaN value: sign bit clear, frac msb set */
19
+ set_float_default_nan_pattern(0b01000000, &env->fpu_status);
20
/* fall through */
21
case RESET_TYPE_S390_CPU_NORMAL:
22
env->psw.mask &= ~PSW_MASK_RI;
23
--
24
2.34.1
New patch
1
Set the default NaN pattern explicitly for SPARC, and remove
2
the ifdef from parts64_default_nan.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-50-peter.maydell@linaro.org
7
---
8
target/sparc/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 5 +----
10
2 files changed, 3 insertions(+), 4 deletions(-)
11
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sparc/cpu.c
15
+++ b/target/sparc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
17
set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
18
/* For inf * 0 + NaN, return the input NaN */
19
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
+ /* Default NaN value: sign bit clear, all frac bits set */
21
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);
22
23
cpu_exec_realizefn(cs, &local_err);
24
if (local_err != NULL) {
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
30
uint8_t dnan_pattern = status->default_nan_pattern;
31
32
if (dnan_pattern == 0) {
33
-#if defined(TARGET_SPARC)
34
- /* Sign bit clear, all frac bits set */
35
- dnan_pattern = 0b01111111;
36
-#elif defined(TARGET_HEXAGON)
37
+#if defined(TARGET_HEXAGON)
38
/* Sign bit set, all frac bits set. */
39
dnan_pattern = 0b11111111;
40
#else
41
--
42
2.34.1
1
Currently the microdrive code uses device_legacy_reset() to reset
1
Set the default NaN pattern explicitly for xtensa.
2
itself, and has its reset method call reset on the IDE bus as the
3
last thing it does. Switch to using device_cold_reset().
4
5
The only concrete microdrive device is the TYPE_DSCM1XXXX; it is not
6
command-line pluggable, so it is used only by the old pxa2xx Arm
7
boards 'akita', 'borzoi', 'spitz', 'terrier' and 'tosa'.
8
9
You might think that this would result in the IDE bus being
10
reset automatically, but it does not, because the IDEBus type
11
does not set the BusClass::reset method. Instead the controller
12
must explicitly call ide_bus_reset(). We therefore leave that
13
call in md_reset().
14
15
Note also that because the PCMCIA card device is a direct subclass of
16
TYPE_DEVICE and we don't model the PCMCIA controller-to-card
17
interface as a qbus, PCMCIA cards are not on any qbus and so they
18
don't get reset when the system is reset. The reset only happens via
19
the dscm1xxxx_attach() and dscm1xxxx_detach() functions during
20
machine creation.
21
22
Because our aim here is merely to try to get rid of calls to the
23
device_legacy_reset() function, we leave these other dubious
24
reset-related issues alone. (They all stem from this code being
25
absolutely ancient.)
26
2
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Message-id: 20221013174042.1602926-1-peter.maydell@linaro.org
5
Message-id: 20241202131347.498124-51-peter.maydell@linaro.org
30
---
6
---
31
hw/ide/microdrive.c | 8 ++++----
7
target/xtensa/cpu.c | 2 ++
32
1 file changed, 4 insertions(+), 4 deletions(-)
8
1 file changed, 2 insertions(+)
33
9
34
diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
10
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
35
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
36
--- a/hw/ide/microdrive.c
12
--- a/target/xtensa/cpu.c
37
+++ b/hw/ide/microdrive.c
13
+++ b/target/xtensa/cpu.c
38
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
14
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
39
case 0x00:    /* Configuration Option Register */
15
/* For inf * 0 + NaN, return the input NaN */
40
s->opt = value & 0xcf;
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
41
if (value & OPT_SRESET) {
17
set_no_signaling_nans(!dfpu, &env->fp_status);
42
- device_legacy_reset(DEVICE(s));
18
+ /* Default NaN value: sign bit clear, set frac msb */
43
+ device_cold_reset(DEVICE(s));
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
44
}
20
xtensa_use_first_nan(env, !dfpu);
45
md_interrupt_update(s);
46
break;
47
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
48
case 0xe:    /* Device Control */
49
s->ctrl = value;
50
if (value & CTRL_SRST) {
51
- device_legacy_reset(DEVICE(s));
52
+ device_cold_reset(DEVICE(s));
53
}
54
md_interrupt_update(s);
55
break;
56
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
57
md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
58
md->io_base = 0x0;
59
60
- device_legacy_reset(DEVICE(md));
61
+ device_cold_reset(DEVICE(md));
62
md_interrupt_update(md);
63
64
return 0;
65
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
66
{
67
MicroDriveState *md = MICRODRIVE(card);
68
69
- device_legacy_reset(DEVICE(md));
70
+ device_cold_reset(DEVICE(md));
71
return 0;
72
}
21
}
73
22
74
--
23
--
75
2.25.1
24
2.34.1
76
77
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for hexagon.
2
Remove the ifdef from parts64_default_nan(); the only
3
remaining unconverted targets all use the default case.
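For context (illustrative arithmetic only, not part of the patch), the
Hexagon pattern expands to an all-ones binary32 value, matching what the
old TARGET_HEXAGON ifdef produced:

    /* sign set, exponent all ones, all fraction bits set */
    uint32_t hexagon_dnan32 = (1u << 31) | (0xffu << 23) | ((1u << 23) - 1);
    /* == 0xffffffff */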
2
4
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-52-peter.maydell@linaro.org
8
---
9
target/hexagon/cpu.c | 2 ++
10
fpu/softfloat-specialize.c.inc | 5 -----
11
2 files changed, 2 insertions(+), 5 deletions(-)
4
12
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate.c | 37 +++++++++++++++++++++----------------
11
1 file changed, 21 insertions(+), 16 deletions(-)
12
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
15
--- a/target/hexagon/cpu.c
16
+++ b/target/arm/translate.c
16
+++ b/target/hexagon/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static uint32_t read_pc(DisasContext *s)
17
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_reset_hold(Object *obj, ResetType type)
18
return s->pc_curr + (s->thumb ? 4 : 8);
18
19
set_default_nan_mode(1, &env->fp_status);
20
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
21
+ /* Default NaN value: sign bit set, all frac bits set */
22
+ set_float_default_nan_pattern(0b11111111, &env->fp_status);
19
}
23
}
20
24
21
+/* The pc_curr difference for an architectural jump. */
25
static void hexagon_cpu_disas_set_info(CPUState *s, disassemble_info *info)
22
+static target_long jmp_diff(DisasContext *s, target_long diff)
26
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
23
+{
27
index XXXXXXX..XXXXXXX 100644
24
+ return diff + (s->thumb ? 4 : 8);
28
--- a/fpu/softfloat-specialize.c.inc
25
+}
29
+++ b/fpu/softfloat-specialize.c.inc
26
+
30
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
27
/* Set a variable to the value of a CPU register. */
31
uint8_t dnan_pattern = status->default_nan_pattern;
28
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
32
29
{
33
if (dnan_pattern == 0) {
30
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
34
-#if defined(TARGET_HEXAGON)
31
* cpu_loop_exec. Any live exit_requests will be processed as we
35
- /* Sign bit set, all frac bits set. */
32
* enter the next TB.
36
- dnan_pattern = 0b11111111;
33
*/
37
-#else
34
-static void gen_goto_tb(DisasContext *s, int n, int diff)
38
/*
35
+static void gen_goto_tb(DisasContext *s, int n, target_long diff)
39
* This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
36
{
40
* S390, SH4, TriCore, and Xtensa. Our other supported targets
37
target_ulong dest = s->pc_curr + diff;
41
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
38
42
/* sign bit clear, set frac msb */
39
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
43
dnan_pattern = 0b01000000;
40
}
44
}
41
45
-#endif
42
/* Jump, specifying which TB number to use if we gen_goto_tb() */
43
-static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
44
+static void gen_jmp_tb(DisasContext *s, target_long diff, int tbno)
45
{
46
- int diff = dest - s->pc_curr;
47
-
48
if (unlikely(s->ss_active)) {
49
/* An indirect jump so that we still trigger the debug exception. */
50
gen_update_pc(s, diff);
51
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
52
}
46
}
53
}
47
assert(dnan_pattern != 0);
54
55
-static inline void gen_jmp(DisasContext *s, uint32_t dest)
56
+static inline void gen_jmp(DisasContext *s, target_long diff)
57
{
58
- gen_jmp_tb(s, dest, 0);
59
+ gen_jmp_tb(s, diff, 0);
60
}
61
62
static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
63
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
64
65
static bool trans_B(DisasContext *s, arg_i *a)
66
{
67
- gen_jmp(s, read_pc(s) + a->imm);
68
+ gen_jmp(s, jmp_diff(s, a->imm));
69
return true;
70
}
71
72
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
73
return true;
74
}
75
arm_skip_unless(s, a->cond);
76
- gen_jmp(s, read_pc(s) + a->imm);
77
+ gen_jmp(s, jmp_diff(s, a->imm));
78
return true;
79
}
80
81
static bool trans_BL(DisasContext *s, arg_i *a)
82
{
83
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
84
- gen_jmp(s, read_pc(s) + a->imm);
85
+ gen_jmp(s, jmp_diff(s, a->imm));
86
return true;
87
}
88
89
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
90
}
91
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
92
store_cpu_field_constant(!s->thumb, thumb);
93
- gen_jmp(s, (read_pc(s) & ~3) + a->imm);
94
+ /* This jump is computed from an aligned PC: subtract off the low bits. */
95
+ gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
96
return true;
97
}
98
99
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
100
* when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
101
*/
102
}
103
- gen_jmp_tb(s, s->base.pc_next, 1);
104
+ gen_jmp_tb(s, curr_insn_len(s), 1);
105
106
gen_set_label(nextlabel);
107
- gen_jmp(s, read_pc(s) + a->imm);
108
+ gen_jmp(s, jmp_diff(s, a->imm));
109
return true;
110
}
111
112
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
113
114
if (a->f) {
115
/* Loop-forever: just jump back to the loop start */
116
- gen_jmp(s, read_pc(s) - a->imm);
117
+ gen_jmp(s, jmp_diff(s, -a->imm));
118
return true;
119
}
120
121
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
122
tcg_temp_free_i32(decr);
123
}
124
/* Jump back to the loop start */
125
- gen_jmp(s, read_pc(s) - a->imm);
126
+ gen_jmp(s, jmp_diff(s, -a->imm));
127
128
gen_set_label(loopend);
129
if (a->tp) {
130
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
131
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
132
}
133
/* End TB, continuing to following insn */
134
- gen_jmp_tb(s, s->base.pc_next, 1);
135
+ gen_jmp_tb(s, curr_insn_len(s), 1);
136
return true;
137
}
138
139
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
140
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
141
tmp, 0, s->condlabel);
142
tcg_temp_free_i32(tmp);
143
- gen_jmp(s, read_pc(s) + a->imm);
144
+ gen_jmp(s, jmp_diff(s, a->imm));
145
return true;
146
}
147
48
148
--
49
--
149
2.25.1
50
2.34.1
New patch
1
Set the default NaN pattern explicitly for riscv.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-53-peter.maydell@linaro.org
6
---
7
target/riscv/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/riscv/cpu.c
13
+++ b/target/riscv/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj, ResetType type)
15
cs->exception_index = RISCV_EXCP_NONE;
16
env->load_res = -1;
17
set_default_nan_mode(1, &env->fp_status);
18
+ /* Default NaN value: sign bit clear, frac msb set */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
env->vill = true;
21
22
#ifndef CONFIG_USER_ONLY
23
--
24
2.34.1
New patch
1
Set the default NaN pattern explicitly for tricore.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-54-peter.maydell@linaro.org
6
---
7
target/tricore/helper.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/target/tricore/helper.c b/target/tricore/helper.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/tricore/helper.c
13
+++ b/target/tricore/helper.c
14
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env)
15
set_flush_to_zero(1, &env->fp_status);
16
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
17
set_default_nan_mode(1, &env->fp_status);
18
+ /* Default NaN pattern: sign bit clear, frac msb set */
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
20
}
21
22
uint32_t psw_read(CPUTriCoreState *env)
23
--
24
2.34.1
New patch
1
Now that all our targets have been converted to explicitly specify
their pattern for the default NaN value, we can remove the remaining
fallback code in parts64_default_nan().
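In other words, every FP-capable target is now expected to make a call of
this shape during reset/realize (the pattern value here is only an example;
each target supplies its own architected value), and a target that forgets
will trip the assert(dnan_pattern != 0) that remains:

    set_float_default_nan_pattern(0b01000000, &env->fp_status);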
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-55-peter.maydell@linaro.org
8
---
9
fpu/softfloat-specialize.c.inc | 14 --------------
10
1 file changed, 14 deletions(-)
11
12
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
13
index XXXXXXX..XXXXXXX 100644
14
--- a/fpu/softfloat-specialize.c.inc
15
+++ b/fpu/softfloat-specialize.c.inc
16
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
17
uint64_t frac;
18
uint8_t dnan_pattern = status->default_nan_pattern;
19
20
- if (dnan_pattern == 0) {
21
- /*
22
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
23
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
24
- * do not have floating-point.
25
- */
26
- if (snan_bit_is_one(status)) {
27
- /* sign bit clear, set all frac bits other than msb */
28
- dnan_pattern = 0b00111111;
29
- } else {
30
- /* sign bit clear, set frac msb */
31
- dnan_pattern = 0b01000000;
32
- }
33
- }
34
assert(dnan_pattern != 0);
35
36
sign = dnan_pattern >> 7;
37
--
38
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
3
Inline pickNaNMulAdd into its only caller. This makes
4
one assert redundant with the immediately preceding IF.
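For readers skimming the diff, the inf * 0 + NaN decision being inlined
boils down to the following standalone sketch (not the QEMU code; 'c' is
the addend operand):

    static bool infzero_wants_default_nan(FloatInfZeroNaNRule rule,
                                          bool c_is_qnan)
    {
        switch (rule) {
        case float_infzeronan_dnan_never:   return false;      /* keep the input NaN */
        case float_infzeronan_dnan_always:  return true;       /* use the default NaN */
        case float_infzeronan_dnan_if_qnan: return c_is_qnan;  /* default NaN only for a quiet c */
        default:
            g_assert_not_reached();
        }
    }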
4
5
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-3-richard.henderson@linaro.org
9
[PMM: keep comment from old code in new location]
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/translate.c | 38 +++++++++++++++++++++-----------------
12
fpu/softfloat-parts.c.inc | 41 +++++++++++++++++++++++++-
11
1 file changed, 21 insertions(+), 17 deletions(-)
13
fpu/softfloat-specialize.c.inc | 54 ----------------------------------
14
2 files changed, 40 insertions(+), 55 deletions(-)
12
15
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
18
--- a/fpu/softfloat-parts.c.inc
16
+++ b/target/arm/translate.c
19
+++ b/fpu/softfloat-parts.c.inc
17
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
21
}
22
23
if (s->default_nan_mode) {
24
+ /*
25
+ * We guarantee not to require the target to tell us how to
26
+ * pick a NaN if we're always returning the default NaN.
27
+ * But if we're not in default-NaN mode then the target must
28
+ * specify.
29
+ */
30
which = 3;
31
+ } else if (infzero) {
32
+ /*
33
+ * Inf * 0 + NaN -- some implementations return the
34
+ * default NaN here, and some return the input NaN.
35
+ */
36
+ switch (s->float_infzeronan_rule) {
37
+ case float_infzeronan_dnan_never:
38
+ which = 2;
39
+ break;
40
+ case float_infzeronan_dnan_always:
41
+ which = 3;
42
+ break;
43
+ case float_infzeronan_dnan_if_qnan:
44
+ which = is_qnan(c->cls) ? 3 : 2;
45
+ break;
46
+ default:
47
+ g_assert_not_reached();
48
+ }
49
} else {
50
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
51
+ FloatClass cls[3] = { a->cls, b->cls, c->cls };
52
+ Float3NaNPropRule rule = s->float_3nan_prop_rule;
53
+
54
+ assert(rule != float_3nan_prop_none);
55
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
56
+ /* We have at least one SNaN input and should prefer it */
57
+ do {
58
+ which = rule & R_3NAN_1ST_MASK;
59
+ rule >>= R_3NAN_1ST_LENGTH;
60
+ } while (!is_snan(cls[which]));
61
+ } else {
62
+ do {
63
+ which = rule & R_3NAN_1ST_MASK;
64
+ rule >>= R_3NAN_1ST_LENGTH;
65
+ } while (!is_nan(cls[which]));
66
+ }
67
}
68
69
if (which == 3) {
70
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
71
index XXXXXXX..XXXXXXX 100644
72
--- a/fpu/softfloat-specialize.c.inc
73
+++ b/fpu/softfloat-specialize.c.inc
74
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
18
}
75
}
19
}
76
}
20
77
21
-/* The architectural value of PC. */
78
-/*----------------------------------------------------------------------------
22
-static uint32_t read_pc(DisasContext *s)
79
-| Select which NaN to propagate for a three-input operation.
80
-| For the moment we assume that no CPU needs the 'larger significand'
81
-| information.
82
-| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
83
-*----------------------------------------------------------------------------*/
84
-static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
85
- bool infzero, bool have_snan, float_status *status)
23
-{
86
-{
24
- return s->pc_curr + (s->thumb ? 4 : 8);
87
- FloatClass cls[3] = { a_cls, b_cls, c_cls };
88
- Float3NaNPropRule rule = status->float_3nan_prop_rule;
89
- int which;
90
-
91
- /*
92
- * We guarantee not to require the target to tell us how to
93
- * pick a NaN if we're always returning the default NaN.
94
- * But if we're not in default-NaN mode then the target must
95
- * specify.
96
- */
97
- assert(!status->default_nan_mode);
98
-
99
- if (infzero) {
100
- /*
101
- * Inf * 0 + NaN -- some implementations return the default NaN here,
102
- * and some return the input NaN.
103
- */
104
- switch (status->float_infzeronan_rule) {
105
- case float_infzeronan_dnan_never:
106
- return 2;
107
- case float_infzeronan_dnan_always:
108
- return 3;
109
- case float_infzeronan_dnan_if_qnan:
110
- return is_qnan(c_cls) ? 3 : 2;
111
- default:
112
- g_assert_not_reached();
113
- }
114
- }
115
-
116
- assert(rule != float_3nan_prop_none);
117
- if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
118
- /* We have at least one SNaN input and should prefer it */
119
- do {
120
- which = rule & R_3NAN_1ST_MASK;
121
- rule >>= R_3NAN_1ST_LENGTH;
122
- } while (!is_snan(cls[which]));
123
- } else {
124
- do {
125
- which = rule & R_3NAN_1ST_MASK;
126
- rule >>= R_3NAN_1ST_LENGTH;
127
- } while (!is_nan(cls[which]));
128
- }
129
- return which;
25
-}
130
-}
26
-
131
-
27
/* The pc_curr difference for an architectural jump. */
132
/*----------------------------------------------------------------------------
28
static target_long jmp_diff(DisasContext *s, target_long diff)
133
| Returns 1 if the double-precision floating-point value `a' is a quiet
29
{
134
| NaN; otherwise returns 0.
30
return diff + (s->thumb ? 4 : 8);
31
}
32
33
+static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
34
+{
35
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
36
+}
37
+
38
/* Set a variable to the value of a CPU register. */
39
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
40
{
41
if (reg == 15) {
42
- tcg_gen_movi_i32(var, read_pc(s));
43
+ gen_pc_plus_diff(s, var, jmp_diff(s, 0));
44
} else {
45
tcg_gen_mov_i32(var, cpu_R[reg]);
46
}
47
@@ -XXX,XX +XXX,XX @@ TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
48
TCGv_i32 tmp = tcg_temp_new_i32();
49
50
if (reg == 15) {
51
- tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
52
+ /*
53
+ * This address is computed from an aligned PC:
54
+ * subtract off the low bits.
55
+ */
56
+ gen_pc_plus_diff(s, tmp, jmp_diff(s, ofs - (s->pc_curr & 3)));
57
} else {
58
tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
59
}
60
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
61
/* Force a TB lookup after an instruction that changes the CPU state. */
62
void gen_lookup_tb(DisasContext *s)
63
{
64
- tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
65
+ gen_pc_plus_diff(s, cpu_R[15], curr_insn_len(s));
66
s->base.is_jmp = DISAS_EXIT;
67
}
68
69
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_r(DisasContext *s, arg_BLX_r *a)
70
return false;
71
}
72
tmp = load_reg(s, a->rm);
73
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
74
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
75
gen_bx(s, tmp);
76
return true;
77
}
78
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
79
80
static bool trans_BL(DisasContext *s, arg_i *a)
81
{
82
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
83
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
84
gen_jmp(s, jmp_diff(s, a->imm));
85
return true;
86
}
87
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
88
if (s->thumb && (a->imm & 2)) {
89
return false;
90
}
91
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
92
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
93
store_cpu_field_constant(!s->thumb, thumb);
94
/* This jump is computed from an aligned PC: subtract off the low bits. */
95
gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
96
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
97
static bool trans_BL_BLX_prefix(DisasContext *s, arg_BL_BLX_prefix *a)
98
{
99
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
100
- tcg_gen_movi_i32(cpu_R[14], read_pc(s) + (a->imm << 12));
101
+ gen_pc_plus_diff(s, cpu_R[14], jmp_diff(s, a->imm << 12));
102
return true;
103
}
104
105
@@ -XXX,XX +XXX,XX @@ static bool trans_BL_suffix(DisasContext *s, arg_BL_suffix *a)
106
107
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
108
tcg_gen_addi_i32(tmp, cpu_R[14], (a->imm << 1) | 1);
109
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
110
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
111
gen_bx(s, tmp);
112
return true;
113
}
114
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
115
tmp = tcg_temp_new_i32();
116
tcg_gen_addi_i32(tmp, cpu_R[14], a->imm << 1);
117
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
118
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
119
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
120
gen_bx(s, tmp);
121
return true;
122
}
123
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
124
tcg_gen_add_i32(addr, addr, tmp);
125
126
gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
127
- tcg_temp_free_i32(addr);
128
129
tcg_gen_add_i32(tmp, tmp, tmp);
130
- tcg_gen_addi_i32(tmp, tmp, read_pc(s));
131
+ gen_pc_plus_diff(s, addr, jmp_diff(s, 0));
132
+ tcg_gen_add_i32(tmp, tmp, addr);
133
+ tcg_temp_free_i32(addr);
134
store_reg(s, 15, tmp);
135
return true;
136
}
137
--
135
--
138
2.25.1
136
2.34.1
139
137
140
138
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
3
Remove "3" as a special case for which and simply
4
branch to return the desired value.
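That is, instead of encoding "use the default NaN" as the magic index 3,
the rewritten code branches to a label; roughly (abridged sketch, not the
full diff):

    if (s->default_nan_mode) {
        goto default_nan;            /* previously: which = 3; */
    }
    /* ... otherwise pick one of a/b/c as before ... */
 default_nan:
    parts_default_nan(a, s);
    return a;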
4
5
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-4-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate.h | 5 +++--
11
fpu/softfloat-parts.c.inc | 20 ++++++++++----------
11
target/arm/translate-a64.c | 28 ++++++++++-------------
12
1 file changed, 10 insertions(+), 10 deletions(-)
12
target/arm/translate-m-nocp.c | 6 ++---
13
target/arm/translate-mve.c | 2 +-
14
target/arm/translate-vfp.c | 6 ++---
15
target/arm/translate.c | 42 +++++++++++++++++------------------
16
6 files changed, 43 insertions(+), 46 deletions(-)
17
13
18
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.h
16
--- a/fpu/softfloat-parts.c.inc
21
+++ b/target/arm/translate.h
17
+++ b/fpu/softfloat-parts.c.inc
22
@@ -XXX,XX +XXX,XX @@ void arm_jump_cc(DisasCompare *cmp, TCGLabel *label);
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
23
void arm_gen_test_cc(int cc, TCGLabel *label);
19
* But if we're not in default-NaN mode then the target must
24
MemOp pow2_align(unsigned i);
20
* specify.
25
void unallocated_encoding(DisasContext *s);
21
*/
26
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
22
- which = 3;
27
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
23
+ goto default_nan;
28
uint32_t syn, uint32_t target_el);
24
} else if (infzero) {
29
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn);
25
/*
30
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
26
* Inf * 0 + NaN -- some implementations return the
31
+ int excp, uint32_t syn);
27
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
32
28
*/
33
/* Return state of Alternate Half-precision flag, caller frees result */
29
switch (s->float_infzeronan_rule) {
34
static inline TCGv_i32 get_ahp_flag(void)
30
case float_infzeronan_dnan_never:
35
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
31
- which = 2;
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/translate-a64.c
38
+++ b/target/arm/translate-a64.c
39
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
40
assert(!s->fp_access_checked);
41
s->fp_access_checked = true;
42
43
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
44
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
45
syn_fp_access_trap(1, 0xe, false, 0),
46
s->fp_excp_el);
47
return false;
48
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
49
return false;
50
}
51
if (s->sme_trap_nonstreaming && s->is_nonstreaming) {
52
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
53
+ gen_exception_insn(s, 0, EXCP_UDEF,
54
syn_smetrap(SME_ET_Streaming, false));
55
return false;
56
}
57
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
58
goto fail_exit;
59
}
60
} else if (s->sve_excp_el) {
61
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
62
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
63
syn_sve_access_trap(), s->sve_excp_el);
64
goto fail_exit;
65
}
66
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
67
static bool sme_access_check(DisasContext *s)
68
{
69
if (s->sme_excp_el) {
70
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
71
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
72
syn_smetrap(SME_ET_AccessTrap, false),
73
s->sme_excp_el);
74
return false;
75
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
76
return false;
77
}
78
if (FIELD_EX64(req, SVCR, SM) && !s->pstate_sm) {
79
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
80
+ gen_exception_insn(s, 0, EXCP_UDEF,
81
syn_smetrap(SME_ET_NotStreaming, false));
82
return false;
83
}
84
if (FIELD_EX64(req, SVCR, ZA) && !s->pstate_za) {
85
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
86
+ gen_exception_insn(s, 0, EXCP_UDEF,
87
syn_smetrap(SME_ET_InactiveZA, false));
88
return false;
89
}
90
@@ -XXX,XX +XXX,XX @@ static void gen_sysreg_undef(DisasContext *s, bool isread,
91
} else {
92
syndrome = syn_uncategorized();
93
}
94
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome);
95
+ gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
96
}
97
98
/* MRS - move from system register
99
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
100
switch (op2_ll) {
101
case 1: /* SVC */
102
gen_ss_advance(s);
103
- gen_exception_insn(s, s->base.pc_next, EXCP_SWI,
104
- syn_aa64_svc(imm16));
105
+ gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
106
break;
32
break;
107
case 2: /* HVC */
33
case float_infzeronan_dnan_always:
108
if (s->current_el == 0) {
34
- which = 3;
109
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
35
- break;
110
gen_a64_update_pc(s, 0);
36
+ goto default_nan;
111
gen_helper_pre_hvc(cpu_env);
37
case float_infzeronan_dnan_if_qnan:
112
gen_ss_advance(s);
38
- which = is_qnan(c->cls) ? 3 : 2;
113
- gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
39
+ if (is_qnan(c->cls)) {
114
- syn_aa64_hvc(imm16), 2);
40
+ goto default_nan;
115
+ gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(imm16), 2);
41
+ }
116
break;
117
case 3: /* SMC */
118
if (s->current_el == 0) {
119
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
120
gen_a64_update_pc(s, 0);
121
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
122
gen_ss_advance(s);
123
- gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
124
- syn_aa64_smc(imm16), 3);
125
+ gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(imm16), 3);
126
break;
42
break;
127
default:
43
default:
128
unallocated_encoding(s);
44
g_assert_not_reached();
129
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
45
}
130
* Illegal execution state. This has priority over BTI
46
+ which = 2;
131
* exceptions, but comes after instruction abort exceptions.
47
} else {
132
*/
48
FloatClass cls[3] = { a->cls, b->cls, c->cls };
133
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
49
Float3NaNPropRule rule = s->float_3nan_prop_rule;
134
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
50
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
135
return;
51
}
136
}
52
}
137
53
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
54
- if (which == 3) {
139
if (s->btype != 0
55
- parts_default_nan(a, s);
140
&& s->guarded_page
56
- return a;
141
&& !btype_destination_ok(insn, s->bt, s->btype)) {
57
- }
142
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
58
-
143
- syn_btitrap(s->btype));
59
switch (which) {
144
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_btitrap(s->btype));
60
case 0:
145
return;
61
break;
146
}
62
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
147
} else {
63
parts_silence_nan(a, s);
148
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/target/arm/translate-m-nocp.c
151
+++ b/target/arm/translate-m-nocp.c
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
153
tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
154
155
if (s->fp_excp_el != 0) {
156
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
157
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
158
syn_uncategorized(), s->fp_excp_el);
159
return true;
160
}
64
}
161
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_nocp *a)
65
return a;
162
}
66
+
163
67
+ default_nan:
164
if (a->cp != 10) {
68
+ parts_default_nan(a, s);
165
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized());
69
+ return a;
166
+ gen_exception_insn(s, 0, EXCP_NOCP, syn_uncategorized());
167
return true;
168
}
169
170
if (s->fp_excp_el != 0) {
171
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
172
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
173
syn_uncategorized(), s->fp_excp_el);
174
return true;
175
}
176
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
177
index XXXXXXX..XXXXXXX 100644
178
--- a/target/arm/translate-mve.c
179
+++ b/target/arm/translate-mve.c
180
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
181
return true;
182
default:
183
/* Reserved value: INVSTATE UsageFault */
184
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
185
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
186
return false;
187
}
188
}
70
}
189
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
71
190
index XXXXXXX..XXXXXXX 100644
72
/*
191
--- a/target/arm/translate-vfp.c
192
+++ b/target/arm/translate-vfp.c
193
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
194
int coproc = arm_dc_feature(s, ARM_FEATURE_V8) ? 0 : 0xa;
195
uint32_t syn = syn_fp_access_trap(1, 0xe, false, coproc);
196
197
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF, syn, s->fp_excp_el);
198
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn, s->fp_excp_el);
199
return false;
200
}
201
202
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
203
* appear to be any insns which touch VFP which are allowed.
204
*/
205
if (s->sme_trap_nonstreaming) {
206
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
207
+ gen_exception_insn(s, 0, EXCP_UDEF,
208
syn_smetrap(SME_ET_Streaming,
209
curr_insn_len(s) == 2));
210
return false;
211
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
212
* the encoding space handled by the patterns in m-nocp.decode,
213
* and for them we may need to raise NOCP here.
214
*/
215
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
216
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
217
syn_uncategorized(), s->fp_excp_el);
218
return false;
219
}
220
diff --git a/target/arm/translate.c b/target/arm/translate.c
221
index XXXXXXX..XXXXXXX 100644
222
--- a/target/arm/translate.c
223
+++ b/target/arm/translate.c
224
@@ -XXX,XX +XXX,XX @@ static void gen_exception(int excp, uint32_t syndrome)
225
tcg_constant_i32(syndrome));
226
}
227
228
-static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
229
- uint32_t syn, TCGv_i32 tcg_el)
230
+static void gen_exception_insn_el_v(DisasContext *s, target_long pc_diff,
231
+ int excp, uint32_t syn, TCGv_i32 tcg_el)
232
{
233
if (s->aarch64) {
234
- gen_a64_update_pc(s, pc - s->pc_curr);
235
+ gen_a64_update_pc(s, pc_diff);
236
} else {
237
gen_set_condexec(s);
238
- gen_update_pc(s, pc - s->pc_curr);
239
+ gen_update_pc(s, pc_diff);
240
}
241
gen_exception_el_v(excp, syn, tcg_el);
242
s->base.is_jmp = DISAS_NORETURN;
243
}
244
245
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
246
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
247
uint32_t syn, uint32_t target_el)
248
{
249
- gen_exception_insn_el_v(s, pc, excp, syn, tcg_constant_i32(target_el));
250
+ gen_exception_insn_el_v(s, pc_diff, excp, syn,
251
+ tcg_constant_i32(target_el));
252
}
253
254
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
255
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
256
+ int excp, uint32_t syn)
257
{
258
if (s->aarch64) {
259
- gen_a64_update_pc(s, pc - s->pc_curr);
260
+ gen_a64_update_pc(s, pc_diff);
261
} else {
262
gen_set_condexec(s);
263
- gen_update_pc(s, pc - s->pc_curr);
264
+ gen_update_pc(s, pc_diff);
265
}
266
gen_exception(excp, syn);
267
s->base.is_jmp = DISAS_NORETURN;
268
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
269
void unallocated_encoding(DisasContext *s)
270
{
271
/* Unallocated and reserved encodings are uncategorized */
272
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
273
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
274
}
275
276
/* Force a TB lookup after an instruction that changes the CPU state. */
277
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
278
tcg_el = tcg_constant_i32(3);
279
}
280
281
- gen_exception_insn_el_v(s, s->pc_curr, EXCP_UDEF,
282
+ gen_exception_insn_el_v(s, 0, EXCP_UDEF,
283
syn_uncategorized(), tcg_el);
284
tcg_temp_free_i32(tcg_el);
285
return false;
286
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
287
288
undef:
289
/* If we get here then some access check did not pass */
290
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
291
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
292
return false;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
296
* For the UNPREDICTABLE cases we choose to UNDEF.
297
*/
298
if (s->current_el == 1 && !s->ns && mode == ARM_CPU_MODE_MON) {
299
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
300
- syn_uncategorized(), 3);
301
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_uncategorized(), 3);
302
return;
303
}
304
305
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
306
* Do the check-and-raise-exception by hand.
307
*/
308
if (s->fp_excp_el) {
309
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
310
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
311
syn_uncategorized(), s->fp_excp_el);
312
return true;
313
}
314
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
315
tmp = load_cpu_field(v7m.ltpsize);
316
tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
317
tcg_temp_free_i32(tmp);
318
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
319
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
320
gen_set_label(skipexc);
321
}
322
323
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
324
* UsageFault exception.
325
*/
326
if (arm_dc_feature(s, ARM_FEATURE_M)) {
327
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
328
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
329
return;
330
}
331
332
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
333
* Illegal execution state. This has priority over BTI
334
* exceptions, but comes after instruction abort exceptions.
335
*/
336
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
337
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
338
return;
339
}
340
341
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
342
* Illegal execution state. This has priority over BTI
343
* exceptions, but comes after instruction abort exceptions.
344
*/
345
- gen_exception_insn(dc, dc->pc_curr, EXCP_UDEF, syn_illegalstate());
346
+ gen_exception_insn(dc, 0, EXCP_UDEF, syn_illegalstate());
347
return;
348
}
349
350
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
351
*/
352
tcg_remove_ops_after(dc->insn_eci_rewind);
353
dc->condjmp = 0;
354
- gen_exception_insn(dc, dc->pc_curr, EXCP_INVSTATE,
355
- syn_uncategorized());
356
+ gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
357
}
358
359
arm_post_translate_insn(dc);
360
--
73
--
361
2.25.1
74
2.34.1
362
75
363
76
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
A simple helper to retrieve the length of the current insn.
3
Assign the pointer return value to 'a' directly,
4
rather than going through an intermediary index.
4
5
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-5-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate.h | 5 +++++
11
fpu/softfloat-parts.c.inc | 32 ++++++++++----------------------
11
target/arm/translate-vfp.c | 2 +-
12
1 file changed, 10 insertions(+), 22 deletions(-)
12
target/arm/translate.c | 5 ++---
13
3 files changed, 8 insertions(+), 4 deletions(-)
14
13
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
16
--- a/fpu/softfloat-parts.c.inc
18
+++ b/target/arm/translate.h
17
+++ b/fpu/softfloat-parts.c.inc
19
@@ -XXX,XX +XXX,XX @@ static inline void disas_set_insn_syndrome(DisasContext *s, uint32_t syn)
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
20
s->insn_start = NULL;
19
FloatPartsN *c, float_status *s,
21
}
20
int ab_mask, int abc_mask)
22
21
{
23
+static inline int curr_insn_len(DisasContext *s)
22
- int which;
24
+{
23
bool infzero = (ab_mask == float_cmask_infzero);
25
+ return s->base.pc_next - s->pc_curr;
24
bool have_snan = (abc_mask & float_cmask_snan);
26
+}
25
+ FloatPartsN *ret;
27
+
26
28
/* is_jmp field values */
27
if (unlikely(have_snan)) {
29
#define DISAS_JUMP DISAS_TARGET_0 /* only pc was modified dynamically */
28
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
30
/* CPU state was modified dynamically; exit to main loop for interrupts. */
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
31
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
30
default:
32
index XXXXXXX..XXXXXXX 100644
31
g_assert_not_reached();
33
--- a/target/arm/translate-vfp.c
32
}
34
+++ b/target/arm/translate-vfp.c
33
- which = 2;
35
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
34
+ ret = c;
36
if (s->sme_trap_nonstreaming) {
35
} else {
37
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
36
- FloatClass cls[3] = { a->cls, b->cls, c->cls };
38
syn_smetrap(SME_ET_Streaming,
37
+ FloatPartsN *val[3] = { a, b, c };
39
- s->base.pc_next - s->pc_curr == 2));
38
Float3NaNPropRule rule = s->float_3nan_prop_rule;
40
+ curr_insn_len(s) == 2));
39
41
return false;
40
assert(rule != float_3nan_prop_none);
41
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
42
/* We have at least one SNaN input and should prefer it */
43
do {
44
- which = rule & R_3NAN_1ST_MASK;
45
+ ret = val[rule & R_3NAN_1ST_MASK];
46
rule >>= R_3NAN_1ST_LENGTH;
47
- } while (!is_snan(cls[which]));
48
+ } while (!is_snan(ret->cls));
49
} else {
50
do {
51
- which = rule & R_3NAN_1ST_MASK;
52
+ ret = val[rule & R_3NAN_1ST_MASK];
53
rule >>= R_3NAN_1ST_LENGTH;
54
- } while (!is_nan(cls[which]));
55
+ } while (!is_nan(ret->cls));
56
}
42
}
57
}
43
58
44
diff --git a/target/arm/translate.c b/target/arm/translate.c
59
- switch (which) {
45
index XXXXXXX..XXXXXXX 100644
60
- case 0:
46
--- a/target/arm/translate.c
61
- break;
47
+++ b/target/arm/translate.c
62
- case 1:
48
@@ -XXX,XX +XXX,XX @@ static ISSInfo make_issinfo(DisasContext *s, int rd, bool p, bool w)
63
- a = b;
49
/* ISS not valid if writeback */
64
- break;
50
if (p && !w) {
65
- case 2:
51
ret = rd;
66
- a = c;
52
- if (s->base.pc_next - s->pc_curr == 2) {
67
- break;
53
+ if (curr_insn_len(s) == 2) {
68
- default:
54
ret |= ISSIs16Bit;
69
- g_assert_not_reached();
55
}
70
+ if (is_snan(ret->cls)) {
56
} else {
71
+ parts_silence_nan(ret, s);
57
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
72
}
58
/* nothing more to generate */
73
- if (is_snan(a->cls)) {
59
break;
74
- parts_silence_nan(a, s);
60
case DISAS_WFI:
75
- }
61
- gen_helper_wfi(cpu_env,
76
- return a;
62
- tcg_constant_i32(dc->base.pc_next - dc->pc_curr));
77
+ return ret;
63
+ gen_helper_wfi(cpu_env, tcg_constant_i32(curr_insn_len(dc)));
78
64
/*
79
default_nan:
65
* The helper doesn't necessarily throw an exception, but we
80
parts_default_nan(a, s);
66
* must go back to the main loop to check for interrupts anyway.
67
--
81
--
68
2.25.1
82
2.34.1
69
83
70
84
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
3
While all indices into val[] should be in [0-2], the mask
4
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.
4
applied is two bits. To help static analysis see there is
5
no possibility of a read beyond the end of the array, pad the
6
array to 4 entries, with the final being (implicitly) NULL.
5
7
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20241203203949.483774-6-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
target/arm/translate-a64.c | 6 +++---
13
fpu/softfloat-parts.c.inc | 2 +-
12
target/arm/translate.c | 10 +++++-----
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
2 files changed, 8 insertions(+), 8 deletions(-)
14
15
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
18
--- a/fpu/softfloat-parts.c.inc
18
+++ b/target/arm/translate-a64.c
19
+++ b/fpu/softfloat-parts.c.inc
19
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
20
gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
21
}
22
23
-static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
24
+static void gen_exception_internal_insn(DisasContext *s, int excp)
25
{
26
- gen_a64_update_pc(s, pc - s->pc_curr);
27
+ gen_a64_update_pc(s, 0);
28
gen_exception_internal(excp);
29
s->base.is_jmp = DISAS_NORETURN;
30
}
31
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
32
* Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
33
*/
34
if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
35
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
36
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
37
} else {
38
unallocated_encoding(s);
39
}
21
}
40
diff --git a/target/arm/translate.c b/target/arm/translate.c
22
ret = c;
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/translate.c
43
+++ b/target/arm/translate.c
44
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
45
s->base.is_jmp = DISAS_SMC;
46
}
47
48
-static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
49
+static void gen_exception_internal_insn(DisasContext *s, int excp)
50
{
51
gen_set_condexec(s);
52
- gen_update_pc(s, pc - s->pc_curr);
53
+ gen_update_pc(s, 0);
54
gen_exception_internal(excp);
55
s->base.is_jmp = DISAS_NORETURN;
56
}
57
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
58
*/
59
if (semihosting_enabled(s->current_el != 0) &&
60
(imm == (s->thumb ? 0x3c : 0xf000))) {
61
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
62
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
63
return;
64
}
65
66
@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
67
if (arm_dc_feature(s, ARM_FEATURE_M) &&
68
semihosting_enabled(s->current_el == 0) &&
69
(a->imm == 0xab)) {
70
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
71
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
72
} else {
23
} else {
73
gen_exception_bkpt_insn(s, syn_aa32_bkpt(a->imm, false));
24
- FloatPartsN *val[3] = { a, b, c };
74
}
25
+ FloatPartsN *val[R_3NAN_1ST_MASK + 1] = { a, b, c };
75
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
26
Float3NaNPropRule rule = s->float_3nan_prop_rule;
76
if (!arm_dc_feature(s, ARM_FEATURE_M) &&
27
77
semihosting_enabled(s->current_el == 0) &&
28
assert(rule != float_3nan_prop_none);
78
(a->imm == semihost_imm)) {
79
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
80
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
81
} else {
82
gen_update_pc(s, curr_insn_len(s));
83
s->svc_imm = a->imm;
84
--
29
--
85
2.25.1
30
2.34.1
86
31
87
32
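As a minimal standalone sketch of the array-padding idea in the commit
message above (not part of the patch; the names below are invented for
illustration, the real code indexes val[] with the two-bit
R_3NAN_1ST_MASK field of the packed rule):

    /*
     * Illustration only: pad a lookup array to the full range of the
     * index mask, so a bounds checker can see that "val[x & 3]" never
     * reads past the end even though the encoded rules only ever
     * produce indices 0..2.
     */
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    #define IDX_MASK 3   /* two-bit field extracted from a packed rule */

    static const char *pick(const char *a, const char *b, const char *c,
                            unsigned rule)
    {
        /* 4 entries, not 3: index 3 is unreachable by construction,
         * but the padding makes the access obviously in bounds. */
        const char *val[IDX_MASK + 1] = { a, b, c };
        const char *ret = val[rule & IDX_MASK];

        assert(ret != NULL);   /* would only trip if a rule encoded 3 */
        return ret;
    }

    int main(void)
    {
        printf("%s\n", pick("a", "b", "c", 2));
        return 0;
    }

The unreachable fourth slot costs one pointer and lets static analysis
reason locally about "x & 3" instead of tracking which rule encodings
are actually possible.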
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This function is part of the public interface and
4
is not "specialized" to any target in any way.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241203203949.483774-7-richard.henderson@linaro.org
5
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/ptw.c | 191 +++++++++++++++++++++++++----------------------
11
fpu/softfloat.c | 52 ++++++++++++++++++++++++++++++++++
9
1 file changed, 100 insertions(+), 91 deletions(-)
12
fpu/softfloat-specialize.c.inc | 52 ----------------------------------
13
2 files changed, 52 insertions(+), 52 deletions(-)
10
14
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
12
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/ptw.c
17
--- a/fpu/softfloat.c
14
+++ b/target/arm/ptw.c
18
+++ b/fpu/softfloat.c
15
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
16
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
20
*zExpPtr = 1 - shiftCount;
17
__attribute__((nonnull));
21
}
18
22
19
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
23
+/*----------------------------------------------------------------------------
20
+ target_ulong address,
24
+| Takes two extended double-precision floating-point values `a' and `b', one
21
+ MMUAccessType access_type,
25
+| of which is a NaN, and returns the appropriate NaN result. If either `a' or
22
+ GetPhysAddrResult *result,
26
+| `b' is a signaling NaN, the invalid exception is raised.
23
+ ARMMMUFaultInfo *fi)
27
+*----------------------------------------------------------------------------*/
24
+ __attribute__((nonnull));
25
+
28
+
26
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
29
+floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
27
static const uint8_t pamax_map[] = {
28
[0] = 32,
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
30
return 0;
31
}
32
33
+static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
34
+ target_ulong address,
35
+ MMUAccessType access_type,
36
+ GetPhysAddrResult *result,
37
+ ARMMMUFaultInfo *fi)
38
+{
30
+{
39
+ hwaddr ipa;
31
+ bool aIsLargerSignificand;
40
+ int s1_prot;
32
+ FloatClass a_cls, b_cls;
41
+ int ret;
42
+ bool is_secure = ptw->in_secure;
43
+ bool ipa_secure, s2walk_secure;
44
+ ARMCacheAttrs cacheattrs1;
45
+ bool is_el0;
46
+ uint64_t hcr;
47
+
33
+
48
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type, result, fi);
34
+ /* This is not complete, but is good enough for pickNaN. */
35
+ a_cls = (!floatx80_is_any_nan(a)
36
+ ? float_class_normal
37
+ : floatx80_is_signaling_nan(a, status)
38
+ ? float_class_snan
39
+ : float_class_qnan);
40
+ b_cls = (!floatx80_is_any_nan(b)
41
+ ? float_class_normal
42
+ : floatx80_is_signaling_nan(b, status)
43
+ ? float_class_snan
44
+ : float_class_qnan);
49
+
45
+
50
+ /* If S1 fails or S2 is disabled, return early. */
46
+ if (is_snan(a_cls) || is_snan(b_cls)) {
51
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
47
+ float_raise(float_flag_invalid, status);
52
+ return ret;
53
+ }
48
+ }
54
+
49
+
55
+ ipa = result->f.phys_addr;
50
+ if (status->default_nan_mode) {
56
+ ipa_secure = result->f.attrs.secure;
51
+ return floatx80_default_nan(status);
57
+ if (is_secure) {
58
+ /* Select TCR based on the NS bit from the S1 walk. */
59
+ s2walk_secure = !(ipa_secure
60
+ ? env->cp15.vstcr_el2 & VSTCR_SW
61
+ : env->cp15.vtcr_el2 & VTCR_NSW);
62
+ } else {
63
+ assert(!ipa_secure);
64
+ s2walk_secure = false;
65
+ }
52
+ }
66
+
53
+
67
+ is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
54
+ if (a.low < b.low) {
68
+ ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
55
+ aIsLargerSignificand = 0;
69
+ ptw->in_secure = s2walk_secure;
56
+ } else if (b.low < a.low) {
70
+
57
+ aIsLargerSignificand = 1;
71
+ /*
58
+ } else {
72
+ * S1 is done, now do S2 translation.
59
+ aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
73
+ * Save the stage1 results so that we may merge prot and cacheattrs later.
74
+ */
75
+ s1_prot = result->f.prot;
76
+ cacheattrs1 = result->cacheattrs;
77
+ memset(result, 0, sizeof(*result));
78
+
79
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
80
+ fi->s2addr = ipa;
81
+
82
+ /* Combine the S1 and S2 perms. */
83
+ result->f.prot &= s1_prot;
84
+
85
+ /* If S2 fails, return early. */
86
+ if (ret) {
87
+ return ret;
88
+ }
60
+ }
89
+
61
+
90
+ /* Combine the S1 and S2 cache attributes. */
62
+ if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
91
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
63
+ if (is_snan(b_cls)) {
92
+ if (hcr & HCR_DC) {
64
+ return floatx80_silence_nan(b, status);
93
+ /*
94
+ * HCR.DC forces the first stage attributes to
95
+ * Normal Non-Shareable,
96
+ * Inner Write-Back Read-Allocate Write-Allocate,
97
+ * Outer Write-Back Read-Allocate Write-Allocate.
98
+ * Do not overwrite Tagged within attrs.
99
+ */
100
+ if (cacheattrs1.attrs != 0xf0) {
101
+ cacheattrs1.attrs = 0xff;
102
+ }
65
+ }
103
+ cacheattrs1.shareability = 0;
66
+ return b;
67
+ } else {
68
+ if (is_snan(a_cls)) {
69
+ return floatx80_silence_nan(a, status);
70
+ }
71
+ return a;
104
+ }
72
+ }
105
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
106
+ result->cacheattrs);
107
+
108
+ /*
109
+ * Check if IPA translates to secure or non-secure PA space.
110
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
111
+ */
112
+ result->f.attrs.secure =
113
+ (is_secure
114
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
115
+ && (ipa_secure
116
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
117
+
118
+ return 0;
119
+}
73
+}
120
+
74
+
121
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
75
/*----------------------------------------------------------------------------
122
target_ulong address,
76
| Takes an abstract floating-point value having sign `zSign', exponent `zExp',
123
MMUAccessType access_type,
77
| and extended significand formed by the concatenation of `zSig0' and `zSig1',
124
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
78
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
125
if (mmu_idx != s1_mmu_idx) {
79
index XXXXXXX..XXXXXXX 100644
126
/*
80
--- a/fpu/softfloat-specialize.c.inc
127
* Call ourselves recursively to do the stage 1 and then stage 2
81
+++ b/fpu/softfloat-specialize.c.inc
128
- * translations if mmu_idx is a two-stage regime.
82
@@ -XXX,XX +XXX,XX @@ floatx80 floatx80_silence_nan(floatx80 a, float_status *status)
129
+ * translations if mmu_idx is a two-stage regime, and EL2 present.
83
return a;
130
+ * Otherwise, a stage1+stage2 translation is just stage 1.
84
}
131
*/
85
132
+ ptw->in_mmu_idx = mmu_idx = s1_mmu_idx;
86
-/*----------------------------------------------------------------------------
133
if (arm_feature(env, ARM_FEATURE_EL2)) {
87
-| Takes two extended double-precision floating-point values `a' and `b', one
134
- hwaddr ipa;
88
-| of which is a NaN, and returns the appropriate NaN result. If either `a' or
135
- int s1_prot;
89
-| `b' is a signaling NaN, the invalid exception is raised.
136
- int ret;
90
-*----------------------------------------------------------------------------*/
137
- bool ipa_secure, s2walk_secure;
138
- ARMCacheAttrs cacheattrs1;
139
- bool is_el0;
140
- uint64_t hcr;
141
-
91
-
142
- ptw->in_mmu_idx = s1_mmu_idx;
92
-floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
143
- ret = get_phys_addr_with_struct(env, ptw, address, access_type,
93
-{
144
- result, fi);
94
- bool aIsLargerSignificand;
95
- FloatClass a_cls, b_cls;
145
-
96
-
146
- /* If S1 fails or S2 is disabled, return early. */
97
- /* This is not complete, but is good enough for pickNaN. */
147
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
98
- a_cls = (!floatx80_is_any_nan(a)
148
- is_secure)) {
99
- ? float_class_normal
149
- return ret;
100
- : floatx80_is_signaling_nan(a, status)
150
- }
101
- ? float_class_snan
102
- : float_class_qnan);
103
- b_cls = (!floatx80_is_any_nan(b)
104
- ? float_class_normal
105
- : floatx80_is_signaling_nan(b, status)
106
- ? float_class_snan
107
- : float_class_qnan);
151
-
108
-
152
- ipa = result->f.phys_addr;
109
- if (is_snan(a_cls) || is_snan(b_cls)) {
153
- ipa_secure = result->f.attrs.secure;
110
- float_raise(float_flag_invalid, status);
154
- if (is_secure) {
111
- }
155
- /* Select TCR based on the NS bit from the S1 walk. */
156
- s2walk_secure = !(ipa_secure
157
- ? env->cp15.vstcr_el2 & VSTCR_SW
158
- : env->cp15.vtcr_el2 & VTCR_NSW);
159
- } else {
160
- assert(!ipa_secure);
161
- s2walk_secure = false;
162
- }
163
-
112
-
164
- ptw->in_mmu_idx =
113
- if (status->default_nan_mode) {
165
- s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
114
- return floatx80_default_nan(status);
166
- ptw->in_secure = s2walk_secure;
115
- }
167
- is_el0 = mmu_idx == ARMMMUIdx_E10_0;
168
-
116
-
169
- /*
117
- if (a.low < b.low) {
170
- * S1 is done, now do S2 translation.
118
- aIsLargerSignificand = 0;
171
- * Save the stage1 results so that we may merge
119
- } else if (b.low < a.low) {
172
- * prot and cacheattrs later.
120
- aIsLargerSignificand = 1;
173
- */
121
- } else {
174
- s1_prot = result->f.prot;
122
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
175
- cacheattrs1 = result->cacheattrs;
123
- }
176
- memset(result, 0, sizeof(*result));
177
-
124
-
178
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
125
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
179
- is_el0, result, fi);
126
- if (is_snan(b_cls)) {
180
- fi->s2addr = ipa;
127
- return floatx80_silence_nan(b, status);
128
- }
129
- return b;
130
- } else {
131
- if (is_snan(a_cls)) {
132
- return floatx80_silence_nan(a, status);
133
- }
134
- return a;
135
- }
136
-}
181
-
137
-
182
- /* Combine the S1 and S2 perms. */
138
/*----------------------------------------------------------------------------
183
- result->f.prot &= s1_prot;
139
| Returns 1 if the quadruple-precision floating-point value `a' is a quiet
184
-
140
| NaN; otherwise returns 0.
185
- /* If S2 fails, return early. */
186
- if (ret) {
187
- return ret;
188
- }
189
-
190
- /* Combine the S1 and S2 cache attributes. */
191
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
192
- if (hcr & HCR_DC) {
193
- /*
194
- * HCR.DC forces the first stage attributes to
195
- * Normal Non-Shareable,
196
- * Inner Write-Back Read-Allocate Write-Allocate,
197
- * Outer Write-Back Read-Allocate Write-Allocate.
198
- * Do not overwrite Tagged within attrs.
199
- */
200
- if (cacheattrs1.attrs != 0xf0) {
201
- cacheattrs1.attrs = 0xff;
202
- }
203
- cacheattrs1.shareability = 0;
204
- }
205
- result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
206
- result->cacheattrs);
207
-
208
- /*
209
- * Check if IPA translates to secure or non-secure PA space.
210
- * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
211
- */
212
- result->f.attrs.secure =
213
- (is_secure
214
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
215
- && (ipa_secure
216
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
217
-
218
- return 0;
219
- } else {
220
- /*
221
- * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
222
- */
223
- mmu_idx = stage_1_mmu_idx(mmu_idx);
224
+ return get_phys_addr_twostage(env, ptw, address, access_type,
225
+ result, fi);
226
}
227
}
228
229
--
141
--
230
2.25.1
142
2.34.1
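A small standalone sketch of the "larger significand" tie-break that the
propagateFloatx80NaN code quoted above feeds into pickNaN (illustration
only; struct fx80 is a stand-in for the real floatx80 type, which keeps
the significand in .low and sign+exponent in .high):

    #include <stdbool.h>
    #include <stdint.h>

    struct fx80 { uint64_t low; uint16_t high; };

    static bool a_is_larger_significand(struct fx80 a, struct fx80 b)
    {
        if (a.low < b.low) {
            return false;
        }
        if (b.low < a.low) {
            return true;
        }
        /*
         * Equal significands: for two NaNs the exponents are also equal,
         * so a.high < b.high exactly when a is positive and b is
         * negative, and the positive operand wins the tie.
         */
        return a.high < b.high;
    }

    int main(void)
    {
        struct fx80 pos = { 0xc000000000000000ULL, 0x7fff };
        struct fx80 neg = { 0xc000000000000000ULL, 0xffff };
        return a_is_larger_significand(pos, neg) ? 0 : 1;
    }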
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The CPUTLBEntryFull structure now stores the original pte attributes, as
3
Unpacking and repacking the parts may be slightly more work
4
well as the physical address. Therefore, we no longer need a separate
4
than we did before, but we get to reuse more code. For a
5
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.
5
code path handling exceptional values, this is an improvement.
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241203203949.483774-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/cpu.h | 1 -
12
fpu/softfloat.c | 43 +++++--------------------------------------
13
target/arm/sve_ldst_internal.h | 1 +
13
1 file changed, 5 insertions(+), 38 deletions(-)
14
target/arm/mte_helper.c | 62 ++++++++++------------------------
15
target/arm/sve_helper.c | 54 ++++++++++-------------------
16
target/arm/tlb_helper.c | 4 ---
17
5 files changed, 36 insertions(+), 86 deletions(-)
18
14
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
17
--- a/fpu/softfloat.c
22
+++ b/target/arm/cpu.h
18
+++ b/fpu/softfloat.c
23
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
24
* generic target bits directly.
20
25
*/
21
floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
26
#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
22
{
27
-#define arm_tlb_mte_tagged(x) (typecheck_memtxattrs(x)->target_tlb_bit1)
23
- bool aIsLargerSignificand;
28
24
- FloatClass a_cls, b_cls;
29
/*
25
+ FloatParts128 pa, pb, *pr;
30
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
26
31
diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
27
- /* This is not complete, but is good enough for pickNaN. */
32
index XXXXXXX..XXXXXXX 100644
28
- a_cls = (!floatx80_is_any_nan(a)
33
--- a/target/arm/sve_ldst_internal.h
29
- ? float_class_normal
34
+++ b/target/arm/sve_ldst_internal.h
30
- : floatx80_is_signaling_nan(a, status)
35
@@ -XXX,XX +XXX,XX @@ typedef struct {
31
- ? float_class_snan
36
void *host;
32
- : float_class_qnan);
37
int flags;
33
- b_cls = (!floatx80_is_any_nan(b)
38
MemTxAttrs attrs;
34
- ? float_class_normal
39
+ bool tagged;
35
- : floatx80_is_signaling_nan(b, status)
40
} SVEHostPage;
36
- ? float_class_snan
41
37
- : float_class_qnan);
42
bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
38
-
43
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
39
- if (is_snan(a_cls) || is_snan(b_cls)) {
44
index XXXXXXX..XXXXXXX 100644
40
- float_raise(float_flag_invalid, status);
45
--- a/target/arm/mte_helper.c
46
+++ b/target/arm/mte_helper.c
47
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
48
TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
49
return tags + index;
50
#else
51
- uintptr_t index;
52
CPUTLBEntryFull *full;
53
+ MemTxAttrs attrs;
54
int in_page, flags;
55
- ram_addr_t ptr_ra;
56
hwaddr ptr_paddr, tag_paddr, xlat;
57
MemoryRegion *mr;
58
ARMASIdx tag_asi;
59
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
60
* valid. Indicate to probe_access_flags no-fault, then assert that
61
* we received a valid page.
62
*/
63
- flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx,
64
- ra == 0, &host, ra);
65
+ flags = probe_access_full(env, ptr, ptr_access, ptr_mmu_idx,
66
+ ra == 0, &host, &full, ra);
67
assert(!(flags & TLB_INVALID_MASK));
68
69
- /*
70
- * Find the CPUTLBEntryFull for ptr. This *must* be present in the TLB
71
- * because we just found the mapping.
72
- * TODO: Perhaps there should be a cputlb helper that returns a
73
- * matching tlb entry + iotlb entry.
74
- */
75
- index = tlb_index(env, ptr_mmu_idx, ptr);
76
-# ifdef CONFIG_DEBUG_TCG
77
- {
78
- CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr);
79
- target_ulong comparator = (ptr_access == MMU_DATA_LOAD
80
- ? entry->addr_read
81
- : tlb_addr_write(entry));
82
- g_assert(tlb_hit(comparator, ptr));
83
- }
41
- }
84
-# endif
85
- full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
86
-
42
-
87
/* If the virtual page MemAttr != Tagged, access unchecked. */
43
- if (status->default_nan_mode) {
88
- if (!arm_tlb_mte_tagged(&full->attrs)) {
44
+ if (!floatx80_unpack_canonical(&pa, a, status) ||
89
+ if (full->pte_attrs != 0xf0) {
45
+ !floatx80_unpack_canonical(&pb, b, status)) {
90
return NULL;
46
return floatx80_default_nan(status);
91
}
47
}
92
48
93
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
49
- if (a.low < b.low) {
94
return NULL;
50
- aIsLargerSignificand = 0;
95
}
51
- } else if (b.low < a.low) {
96
52
- aIsLargerSignificand = 1;
97
+ /*
53
- } else {
98
+ * Remember these values across the second lookup below,
54
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
99
+ * which may invalidate this pointer via tlb resize.
55
- }
100
+ */
101
+ ptr_paddr = full->phys_addr;
102
+ attrs = full->attrs;
103
+ full = NULL;
104
+
105
/*
106
* The Normal memory access can extend to the next page. E.g. a single
107
* 8-byte access to the last byte of a page will check only the last
108
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
109
*/
110
in_page = -(ptr | TARGET_PAGE_MASK);
111
if (unlikely(ptr_size > in_page)) {
112
- void *ignore;
113
- flags |= probe_access_flags(env, ptr + in_page, ptr_access,
114
- ptr_mmu_idx, ra == 0, &ignore, ra);
115
+ flags |= probe_access_full(env, ptr + in_page, ptr_access,
116
+ ptr_mmu_idx, ra == 0, &host, &full, ra);
117
assert(!(flags & TLB_INVALID_MASK));
118
}
119
120
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
121
if (unlikely(flags & TLB_WATCHPOINT)) {
122
int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
123
assert(ra != 0);
124
- cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
125
- full->attrs, wp, ra);
126
+ cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
127
}
128
129
- /*
130
- * Find the physical address within the normal mem space.
131
- * The memory region lookup must succeed because TLB_MMIO was
132
- * not set in the cputlb lookup above.
133
- */
134
- mr = memory_region_from_host(host, &ptr_ra);
135
- tcg_debug_assert(mr != NULL);
136
- tcg_debug_assert(memory_region_is_ram(mr));
137
- ptr_paddr = ptr_ra;
138
- do {
139
- ptr_paddr += mr->addr;
140
- mr = mr->container;
141
- } while (mr);
142
-
56
-
143
/* Convert to the physical address in tag space. */
57
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
144
tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);
58
- if (is_snan(b_cls)) {
145
59
- return floatx80_silence_nan(b, status);
146
/* Look up the address in tag space. */
60
- }
147
- tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
61
- return b;
148
+ tag_asi = attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
62
- } else {
149
tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
63
- if (is_snan(a_cls)) {
150
mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
64
- return floatx80_silence_nan(a, status);
151
- tag_access == MMU_DATA_STORE,
65
- }
152
- full->attrs);
66
- return a;
153
+ tag_access == MMU_DATA_STORE, attrs);
154
155
/*
156
* Note that @mr will never be NULL. If there is nothing in the address
157
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/sve_helper.c
160
+++ b/target/arm/sve_helper.c
161
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
162
*/
163
addr = useronly_clean_ptr(addr);
164
165
+#ifdef CONFIG_USER_ONLY
166
flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
167
&info->host, retaddr);
168
+ memset(&info->attrs, 0, sizeof(info->attrs));
169
+ /* Require both ANON and MTE; see allocation_tag_mem(). */
170
+ info->tagged = (flags & PAGE_ANON) && (flags & PAGE_MTE);
171
+#else
172
+ CPUTLBEntryFull *full;
173
+ flags = probe_access_full(env, addr, access_type, mmu_idx, nofault,
174
+ &info->host, &full, retaddr);
175
+ info->attrs = full->attrs;
176
+ info->tagged = full->pte_attrs == 0xf0;
177
+#endif
178
info->flags = flags;
179
180
if (flags & TLB_INVALID_MASK) {
181
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
182
183
/* Ensure that info->host[] is relative to addr, not addr + mem_off. */
184
info->host -= mem_off;
185
-
186
-#ifdef CONFIG_USER_ONLY
187
- memset(&info->attrs, 0, sizeof(info->attrs));
188
- /* Require both MAP_ANON and PROT_MTE -- see allocation_tag_mem. */
189
- arm_tlb_mte_tagged(&info->attrs) =
190
- (flags & PAGE_ANON) && (flags & PAGE_MTE);
191
-#else
192
- /*
193
- * Find the iotlbentry for addr and return the transaction attributes.
194
- * This *must* be present in the TLB because we just found the mapping.
195
- */
196
- {
197
- uintptr_t index = tlb_index(env, mmu_idx, addr);
198
-
199
-# ifdef CONFIG_DEBUG_TCG
200
- CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
201
- target_ulong comparator = (access_type == MMU_DATA_LOAD
202
- ? entry->addr_read
203
- : tlb_addr_write(entry));
204
- g_assert(tlb_hit(comparator, addr));
205
-# endif
206
-
207
- CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
208
- info->attrs = full->attrs;
209
- }
67
- }
210
-#endif
68
+ pr = parts_pick_nan(&pa, &pb, status);
211
-
69
+ return floatx80_round_pack_canonical(pr, status);
212
return true;
213
}
70
}
214
71
215
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
72
/*----------------------------------------------------------------------------
216
intptr_t mem_off, reg_off, reg_last;
217
218
/* Process the page only if MemAttr == Tagged. */
219
- if (arm_tlb_mte_tagged(&info->page[0].attrs)) {
220
+ if (info->page[0].tagged) {
221
mem_off = info->mem_off_first[0];
222
reg_off = info->reg_off_first[0];
223
reg_last = info->reg_off_split;
224
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
225
}
226
227
mem_off = info->mem_off_first[1];
228
- if (mem_off >= 0 && arm_tlb_mte_tagged(&info->page[1].attrs)) {
229
+ if (mem_off >= 0 && info->page[1].tagged) {
230
reg_off = info->reg_off_first[1];
231
reg_last = info->reg_off_last[1];
232
233
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
234
* Disable MTE checking if the Tagged bit is not set. Since TBI must
235
* be set within MTEDESC for MTE, !mtedesc => !mte_active.
236
*/
237
- if (!arm_tlb_mte_tagged(&info.page[0].attrs)) {
238
+ if (!info.page[0].tagged) {
239
mtedesc = 0;
240
}
241
242
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
243
cpu_check_watchpoint(env_cpu(env), addr, msize,
244
info.attrs, BP_MEM_READ, retaddr);
245
}
246
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
247
+ if (mtedesc && info.tagged) {
248
mte_check(env, mtedesc, addr, retaddr);
249
}
250
if (unlikely(info.flags & TLB_MMIO)) {
251
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
252
msize, info.attrs,
253
BP_MEM_READ, retaddr);
254
}
255
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
256
+ if (mtedesc && info.tagged) {
257
mte_check(env, mtedesc, addr, retaddr);
258
}
259
tlb_fn(env, &scratch, reg_off, addr, retaddr);
260
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
261
(env_cpu(env), addr, msize) & BP_MEM_READ)) {
262
goto fault;
263
}
264
- if (mtedesc &&
265
- arm_tlb_mte_tagged(&info.attrs) &&
266
- !mte_probe(env, mtedesc, addr)) {
267
+ if (mtedesc && info.tagged && !mte_probe(env, mtedesc, addr)) {
268
goto fault;
269
}
270
271
@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
272
info.attrs, BP_MEM_WRITE, retaddr);
273
}
274
275
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
276
+ if (mtedesc && info.tagged) {
277
mte_check(env, mtedesc, addr, retaddr);
278
}
279
}
280
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
281
index XXXXXXX..XXXXXXX 100644
282
--- a/target/arm/tlb_helper.c
283
+++ b/target/arm/tlb_helper.c
284
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
285
res.f.phys_addr &= TARGET_PAGE_MASK;
286
address &= TARGET_PAGE_MASK;
287
}
288
- /* Notice and record tagged memory. */
289
- if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
290
- arm_tlb_mte_tagged(&res.f.attrs) = true;
291
- }
292
293
res.f.pte_attrs = res.cacheattrs.attrs;
294
res.f.shareability = res.cacheattrs.shareability;
295
--
73
--
296
2.25.1
74
2.34.1
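For readers new to the softfloat internals, the change above follows the
usual unpack -> operate -> repack shape: decode the value into the
canonical "parts" representation, run the shared helper, re-encode. A
toy sketch of that shape (illustration only; the 16-bit format, field
names and pick_larger_frac() are invented for the example):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool sign;
        int32_t exp;
        uint64_t frac;
    } Parts;

    /* Toy 16-bit format: 1 sign, 5 exponent, 10 fraction bits. */
    static Parts unpack16(uint16_t f)
    {
        Parts p = {
            .sign = f >> 15,
            .exp  = (f >> 10) & 0x1f,
            .frac = f & 0x3ff,
        };
        return p;
    }

    static uint16_t pack16(Parts p)
    {
        return (uint16_t)((p.sign << 15) | ((p.exp & 0x1f) << 10)
                          | (p.frac & 0x3ff));
    }

    /* Shared helper that works only on the canonical representation. */
    static Parts *pick_larger_frac(Parts *a, Parts *b)
    {
        return a->frac >= b->frac ? a : b;
    }

    static uint16_t toy_pick(uint16_t x, uint16_t y)
    {
        Parts pa = unpack16(x), pb = unpack16(y);
        return pack16(*pick_larger_frac(&pa, &pb));
    }

    int main(void)
    {
        return toy_pick(0x3c00, 0x3e00) == 0x3e00 ? 0 : 1;
    }

Once a value is in the canonical struct, every float width can share the
same picking and rounding helpers, which is the code reuse the commit
message refers to.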
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We had been marking this ARM_MMU_IDX_NOTLB; move it to a real TLB.
3
Inline pickNaN into its only caller. This makes one assert
4
Flush the tlb when invalidating stage 1+2 translations. Re-use
4
redundant with the immediately preceding IF.
5
alle1_tlbmask() for other instances of EL1&0 + Stage2.
6
5
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
8
Message-id: 20241203203949.483774-9-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/cpu-param.h | 2 +-
11
fpu/softfloat-parts.c.inc | 82 +++++++++++++++++++++++++----
13
target/arm/cpu.h | 23 ++++---
12
fpu/softfloat-specialize.c.inc | 96 ----------------------------------
14
target/arm/helper.c | 151 ++++++++++++++++++++++++++++++-----------
13
2 files changed, 73 insertions(+), 105 deletions(-)
15
3 files changed, 127 insertions(+), 49 deletions(-)
14
16
15
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
17
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu-param.h
17
--- a/fpu/softfloat-parts.c.inc
20
+++ b/target/arm/cpu-param.h
18
+++ b/fpu/softfloat-parts.c.inc
21
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
22
bool guarded;
20
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
23
#endif
21
float_status *s)
24
22
{
25
-#define NB_MMU_MODES 10
23
+ int cmp, which;
26
+#define NB_MMU_MODES 12
24
+
27
25
if (is_snan(a->cls) || is_snan(b->cls)) {
28
#endif
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
27
}
28
29
if (s->default_nan_mode) {
30
parts_default_nan(a, s);
31
- } else {
32
- int cmp = frac_cmp(a, b);
33
- if (cmp == 0) {
34
- cmp = a->sign < b->sign;
35
- }
36
+ return a;
37
+ }
38
39
- if (pickNaN(a->cls, b->cls, cmp > 0, s)) {
40
- a = b;
41
- }
42
+ cmp = frac_cmp(a, b);
43
+ if (cmp == 0) {
44
+ cmp = a->sign < b->sign;
45
+ }
46
+
47
+ switch (s->float_2nan_prop_rule) {
48
+ case float_2nan_prop_s_ab:
49
if (is_snan(a->cls)) {
50
- parts_silence_nan(a, s);
51
+ which = 0;
52
+ } else if (is_snan(b->cls)) {
53
+ which = 1;
54
+ } else if (is_qnan(a->cls)) {
55
+ which = 0;
56
+ } else {
57
+ which = 1;
58
}
59
+ break;
60
+ case float_2nan_prop_s_ba:
61
+ if (is_snan(b->cls)) {
62
+ which = 1;
63
+ } else if (is_snan(a->cls)) {
64
+ which = 0;
65
+ } else if (is_qnan(b->cls)) {
66
+ which = 1;
67
+ } else {
68
+ which = 0;
69
+ }
70
+ break;
71
+ case float_2nan_prop_ab:
72
+ which = is_nan(a->cls) ? 0 : 1;
73
+ break;
74
+ case float_2nan_prop_ba:
75
+ which = is_nan(b->cls) ? 1 : 0;
76
+ break;
77
+ case float_2nan_prop_x87:
78
+ /*
79
+ * This implements x87 NaN propagation rules:
80
+ * SNaN + QNaN => return the QNaN
81
+ * two SNaNs => return the one with the larger significand, silenced
82
+ * two QNaNs => return the one with the larger significand
83
+ * SNaN and a non-NaN => return the SNaN, silenced
84
+ * QNaN and a non-NaN => return the QNaN
85
+ *
86
+ * If we get down to comparing significands and they are the same,
87
+ * return the NaN with the positive sign bit (if any).
88
+ */
89
+ if (is_snan(a->cls)) {
90
+ if (is_snan(b->cls)) {
91
+ which = cmp > 0 ? 0 : 1;
92
+ } else {
93
+ which = is_qnan(b->cls) ? 1 : 0;
94
+ }
95
+ } else if (is_qnan(a->cls)) {
96
+ if (is_snan(b->cls) || !is_qnan(b->cls)) {
97
+ which = 0;
98
+ } else {
99
+ which = cmp > 0 ? 0 : 1;
100
+ }
101
+ } else {
102
+ which = 1;
103
+ }
104
+ break;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
108
+
109
+ if (which) {
110
+ a = b;
111
+ }
112
+ if (is_snan(a->cls)) {
113
+ parts_silence_nan(a, s);
114
}
115
return a;
116
}
117
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
118
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/cpu.h
119
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/target/arm/cpu.h
120
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
121
@@ -XXX,XX +XXX,XX @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
34
* EL2 (aka NS PL2)
35
* EL3 (aka S PL1)
36
* Physical (NS & S)
37
+ * Stage2 (NS & S)
38
*
39
- * for a total of 10 different mmu_idx.
40
+ * for a total of 12 different mmu_idx.
41
*
42
* R profile CPUs have an MPU, but can use the same set of MMU indexes
43
* as A profile. They only need to distinguish EL0 and EL1 (and
44
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
45
ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
46
ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
47
48
+ /*
49
+ * Used for second stage of an S12 page table walk, or for descriptor
50
+ * loads during first stage of an S1 page table walk. Note that both
51
+ * are in use simultaneously for SecureEL2: the security state for
52
+ * the S2 ptw is selected by the NS bit from the S1 ptw.
53
+ */
54
+ ARMMMUIdx_Stage2 = 10 | ARM_MMU_IDX_A,
55
+ ARMMMUIdx_Stage2_S = 11 | ARM_MMU_IDX_A,
56
+
57
/*
58
* These are not allocated TLBs and are used only for AT system
59
* instructions or for the first stage of an S12 page table walk.
60
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
61
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
62
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
63
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
64
- /*
65
- * Not allocated a TLB: used only for second stage of an S12 page
66
- * table walk, or for descriptor loads during first stage of an S1
67
- * page table walk. Note that if we ever want to have a TLB for this
68
- * then various TLB flush insns which currently are no-ops or flush
69
- * only stage 1 MMU indexes will need to change to flush stage 2.
70
- */
71
- ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
72
- ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
73
74
/*
75
* M-profile.
76
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
77
TO_CORE_BIT(E20_2),
78
TO_CORE_BIT(E20_2_PAN),
79
TO_CORE_BIT(E3),
80
+ TO_CORE_BIT(Stage2),
81
+ TO_CORE_BIT(Stage2_S),
82
83
TO_CORE_BIT(MUser),
84
TO_CORE_BIT(MPriv),
85
diff --git a/target/arm/helper.c b/target/arm/helper.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/arm/helper.c
88
+++ b/target/arm/helper.c
89
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
raw_write(env, ri, value);
91
}
92
93
+static int alle1_tlbmask(CPUARMState *env)
94
+{
95
+ /*
96
+ * Note that the 'ALL' scope must invalidate both stage 1 and
97
+ * stage 2 translations, whereas most other scopes only invalidate
98
+ * stage 1 translations.
99
+ */
100
+ return (ARMMMUIdxBit_E10_1 |
101
+ ARMMMUIdxBit_E10_1_PAN |
102
+ ARMMMUIdxBit_E10_0 |
103
+ ARMMMUIdxBit_Stage2 |
104
+ ARMMMUIdxBit_Stage2_S);
105
+}
106
+
107
+
108
/* IS variants of TLB operations must affect all cores */
109
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
uint64_t value)
111
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
{
113
CPUState *cs = env_cpu(env);
114
115
- tlb_flush_by_mmuidx(cs,
116
- ARMMMUIdxBit_E10_1 |
117
- ARMMMUIdxBit_E10_1_PAN |
118
- ARMMMUIdxBit_E10_0);
119
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
120
}
121
122
static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
{
125
CPUState *cs = env_cpu(env);
126
127
- tlb_flush_by_mmuidx_all_cpus_synced(cs,
128
- ARMMMUIdxBit_E10_1 |
129
- ARMMMUIdxBit_E10_1_PAN |
130
- ARMMMUIdxBit_E10_0);
131
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
132
}
133
134
135
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
136
ARMMMUIdxBit_E2);
137
}
138
139
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
140
+ uint64_t value)
141
+{
142
+ CPUState *cs = env_cpu(env);
143
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
144
+
145
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
146
+}
147
+
148
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
149
+ uint64_t value)
150
+{
151
+ CPUState *cs = env_cpu(env);
152
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
153
+
154
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
155
+}
156
+
157
static const ARMCPRegInfo cp_reginfo[] = {
158
/* Define the secure and non-secure FCSE identifier CP registers
159
* separately because there is no secure bank in V8 (no _EL3). This allows
160
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
162
/*
163
* A change in VMID to the stage2 page table (Stage2) invalidates
164
- * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
165
+ * the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
166
*/
167
if (raw_read(env, ri) != value) {
168
- uint16_t mask = ARMMMUIdxBit_E10_1 |
169
- ARMMMUIdxBit_E10_1_PAN |
170
- ARMMMUIdxBit_E10_0;
171
- tlb_flush_by_mmuidx(cs, mask);
172
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
173
raw_write(env, ri, value);
174
}
122
}
175
}
123
}
176
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
177
}
125
-/*----------------------------------------------------------------------------
178
}
126
-| Select which NaN to propagate for a two-input operation.
179
127
-| IEEE754 doesn't specify all the details of this, so the
180
-static int alle1_tlbmask(CPUARMState *env)
128
-| algorithm is target-specific.
129
-| The routine is passed various bits of information about the
130
-| two NaNs and should return 0 to select NaN a and 1 for NaN b.
131
-| Note that signalling NaNs are always squashed to quiet NaNs
132
-| by the caller, by calling floatXX_silence_nan() before
133
-| returning them.
134
-|
135
-| aIsLargerSignificand is only valid if both a and b are NaNs
136
-| of some kind, and is true if a has the larger significand,
137
-| or if both a and b have the same significand but a is
138
-| positive but b is negative. It is only needed for the x87
139
-| tie-break rule.
140
-*----------------------------------------------------------------------------*/
141
-
142
-static int pickNaN(FloatClass a_cls, FloatClass b_cls,
143
- bool aIsLargerSignificand, float_status *status)
181
-{
144
-{
182
- /*
145
- /*
183
- * Note that the 'ALL' scope must invalidate both stage 1 and
146
- * We guarantee not to require the target to tell us how to
184
- * stage 2 translations, whereas most other scopes only invalidate
147
- * pick a NaN if we're always returning the default NaN.
185
- * stage 1 translations.
148
- * But if we're not in default-NaN mode then the target must
149
- * specify via set_float_2nan_prop_rule().
186
- */
150
- */
187
- return (ARMMMUIdxBit_E10_1 |
151
- assert(!status->default_nan_mode);
188
- ARMMMUIdxBit_E10_1_PAN |
152
-
189
- ARMMMUIdxBit_E10_0);
153
- switch (status->float_2nan_prop_rule) {
154
- case float_2nan_prop_s_ab:
155
- if (is_snan(a_cls)) {
156
- return 0;
157
- } else if (is_snan(b_cls)) {
158
- return 1;
159
- } else if (is_qnan(a_cls)) {
160
- return 0;
161
- } else {
162
- return 1;
163
- }
164
- break;
165
- case float_2nan_prop_s_ba:
166
- if (is_snan(b_cls)) {
167
- return 1;
168
- } else if (is_snan(a_cls)) {
169
- return 0;
170
- } else if (is_qnan(b_cls)) {
171
- return 1;
172
- } else {
173
- return 0;
174
- }
175
- break;
176
- case float_2nan_prop_ab:
177
- if (is_nan(a_cls)) {
178
- return 0;
179
- } else {
180
- return 1;
181
- }
182
- break;
183
- case float_2nan_prop_ba:
184
- if (is_nan(b_cls)) {
185
- return 1;
186
- } else {
187
- return 0;
188
- }
189
- break;
190
- case float_2nan_prop_x87:
191
- /*
192
- * This implements x87 NaN propagation rules:
193
- * SNaN + QNaN => return the QNaN
194
- * two SNaNs => return the one with the larger significand, silenced
195
- * two QNaNs => return the one with the larger significand
196
- * SNaN and a non-NaN => return the SNaN, silenced
197
- * QNaN and a non-NaN => return the QNaN
198
- *
199
- * If we get down to comparing significands and they are the same,
200
- * return the NaN with the positive sign bit (if any).
201
- */
202
- if (is_snan(a_cls)) {
203
- if (is_snan(b_cls)) {
204
- return aIsLargerSignificand ? 0 : 1;
205
- }
206
- return is_qnan(b_cls) ? 1 : 0;
207
- } else if (is_qnan(a_cls)) {
208
- if (is_snan(b_cls) || !is_qnan(b_cls)) {
209
- return 0;
210
- } else {
211
- return aIsLargerSignificand ? 0 : 1;
212
- }
213
- } else {
214
- return 1;
215
- }
216
- default:
217
- g_assert_not_reached();
218
- }
190
-}
219
-}
191
-
220
-
192
static int e2_tlbmask(CPUARMState *env)
221
/*----------------------------------------------------------------------------
193
{
222
| Returns 1 if the double-precision floating-point value `a' is a quiet
194
return (ARMMMUIdxBit_E20_0 |
223
| NaN; otherwise returns 0.
195
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
ARMMMUIdxBit_E3, bits);
197
}
198
199
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
200
+{
201
+ /*
202
+ * The MSB of value is the NS field, which only applies if SEL2
203
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
204
+ */
205
+ return (value >= 0
206
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
207
+ && arm_is_secure_below_el3(env)
208
+ ? ARMMMUIdxBit_Stage2_S
209
+ : ARMMMUIdxBit_Stage2);
210
+}
211
+
212
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
+ uint64_t value)
214
+{
215
+ CPUState *cs = env_cpu(env);
216
+ int mask = ipas2e1_tlbmask(env, value);
217
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
218
+
219
+ if (tlb_force_broadcast(env)) {
220
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
221
+ } else {
222
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
223
+ }
224
+}
225
+
226
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
227
+ uint64_t value)
228
+{
229
+ CPUState *cs = env_cpu(env);
230
+ int mask = ipas2e1_tlbmask(env, value);
231
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
232
+
233
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
234
+}
235
+
236
#ifdef TARGET_AARCH64
237
typedef struct {
238
uint64_t base;
239
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
240
241
do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
242
}
243
+
244
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
245
+ uint64_t value)
246
+{
247
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
248
+ tlb_force_broadcast(env));
249
+}
250
+
251
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
252
+ const ARMCPRegInfo *ri,
253
+ uint64_t value)
254
+{
255
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
256
+}
257
#endif
258
259
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
260
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
261
.writefn = tlbi_aa64_vae1_write },
262
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
263
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
264
- .access = PL2_W, .type = ARM_CP_NOP },
265
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
266
+ .writefn = tlbi_aa64_ipas2e1is_write },
267
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
268
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
269
- .access = PL2_W, .type = ARM_CP_NOP },
270
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
271
+ .writefn = tlbi_aa64_ipas2e1is_write },
272
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
273
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
274
.access = PL2_W, .type = ARM_CP_NO_RAW,
275
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
276
.writefn = tlbi_aa64_alle1is_write },
277
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
278
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
279
- .access = PL2_W, .type = ARM_CP_NOP },
280
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
281
+ .writefn = tlbi_aa64_ipas2e1_write },
282
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
283
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
284
- .access = PL2_W, .type = ARM_CP_NOP },
285
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
286
+ .writefn = tlbi_aa64_ipas2e1_write },
287
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
288
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
289
.access = PL2_W, .type = ARM_CP_NO_RAW,
290
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
291
.writefn = tlbimva_hyp_is_write },
292
{ .name = "TLBIIPAS2",
293
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
294
- .type = ARM_CP_NOP, .access = PL2_W },
295
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
296
+ .writefn = tlbiipas2_hyp_write },
297
{ .name = "TLBIIPAS2IS",
298
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
299
- .type = ARM_CP_NOP, .access = PL2_W },
300
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
301
+ .writefn = tlbiipas2is_hyp_write },
302
{ .name = "TLBIIPAS2L",
303
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
304
- .type = ARM_CP_NOP, .access = PL2_W },
305
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
306
+ .writefn = tlbiipas2_hyp_write },
307
{ .name = "TLBIIPAS2LIS",
308
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
309
- .type = ARM_CP_NOP, .access = PL2_W },
310
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
311
+ .writefn = tlbiipas2is_hyp_write },
312
/* 32 bit cache operations */
313
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
314
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
315
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
316
.writefn = tlbi_aa64_rvae1_write },
317
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
318
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
319
- .access = PL2_W, .type = ARM_CP_NOP },
320
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
321
+ .writefn = tlbi_aa64_ripas2e1is_write },
322
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
323
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
324
- .access = PL2_W, .type = ARM_CP_NOP },
325
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
326
+ .writefn = tlbi_aa64_ripas2e1is_write },
327
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
328
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
329
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
330
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
331
.writefn = tlbi_aa64_rvae2is_write },
332
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
333
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
334
- .access = PL2_W, .type = ARM_CP_NOP },
335
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
336
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
337
+ .writefn = tlbi_aa64_ripas2e1_write },
338
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
339
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
340
- .access = PL2_W, .type = ARM_CP_NOP },
341
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
342
+ .writefn = tlbi_aa64_ripas2e1_write },
343
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
344
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
345
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
346
--
224
--
347
2.25.1
225
2.34.1
226
227
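The inlined switch above is easier to follow with a standalone version
of the two-operand NaN propagation rules. The sketch below is
illustration only: Cls, Rule, pick_2nan() and the cmp argument (which
stands in for frac_cmp() plus the sign tie-break) are invented names,
while the real code operates on FloatParts/FloatClass in
fpu/softfloat-parts.c.inc. It returns 0 to keep operand a and 1 to pick
operand b:

    #include <assert.h>
    #include <stdbool.h>

    typedef enum { CLS_NORMAL, CLS_QNAN, CLS_SNAN } Cls;
    typedef enum { PROP_S_AB, PROP_S_BA, PROP_AB, PROP_BA, PROP_X87 } Rule;

    static bool is_nan(Cls c)  { return c != CLS_NORMAL; }
    static bool is_snan(Cls c) { return c == CLS_SNAN; }
    static bool is_qnan(Cls c) { return c == CLS_QNAN; }

    static int pick_2nan(Cls a, Cls b, Rule rule, int cmp)
    {
        switch (rule) {
        case PROP_S_AB:   /* prefer SNaN, then QNaN, a before b */
            if (is_snan(a)) return 0;
            if (is_snan(b)) return 1;
            return is_qnan(a) ? 0 : 1;
        case PROP_S_BA:   /* prefer SNaN, then QNaN, b before a */
            if (is_snan(b)) return 1;
            if (is_snan(a)) return 0;
            return is_qnan(b) ? 1 : 0;
        case PROP_AB:     /* first NaN in operand order a, b */
            return is_nan(a) ? 0 : 1;
        case PROP_BA:     /* first NaN in operand order b, a */
            return is_nan(b) ? 1 : 0;
        case PROP_X87:    /* QNaN beats SNaN; ties go to larger significand */
            if (is_snan(a)) {
                if (!is_snan(b)) return is_qnan(b) ? 1 : 0;
                return cmp > 0 ? 0 : 1;
            }
            if (is_qnan(a)) {
                if (is_snan(b) || !is_qnan(b)) return 0;
                return cmp > 0 ? 0 : 1;
            }
            return 1;
        }
        assert(0);
        return 0;
    }

    int main(void)
    {
        /* x87: SNaN + QNaN => keep the QNaN (operand a here). */
        assert(pick_2nan(CLS_QNAN, CLS_SNAN, PROP_X87, 0) == 0);
        return 0;
    }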
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Compare only the VMID field when considering whether we need to flush.
3
Remember if there was an SNaN, and use that to simplify
4
float_2nan_prop_s_{ab,ba} to only the SNaN component.
5
Then, fall through to the corresponding
6
float_2nan_prop_{ab,ba} case to handle any remaining
7
NaNs, which must be quiet.
4
8
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
11
Message-id: 20241203203949.483774-10-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
13
---
10
target/arm/helper.c | 4 ++--
14
fpu/softfloat-parts.c.inc | 32 ++++++++++++--------------------
11
1 file changed, 2 insertions(+), 2 deletions(-)
15
1 file changed, 12 insertions(+), 20 deletions(-)
12
16
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
19
--- a/fpu/softfloat-parts.c.inc
16
+++ b/target/arm/helper.c
20
+++ b/fpu/softfloat-parts.c.inc
17
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
21
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
18
* A change in VMID to the stage2 page table (Stage2) invalidates
22
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
19
* the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
23
float_status *s)
20
*/
24
{
21
- if (raw_read(env, ri) != value) {
25
+ bool have_snan = false;
22
+ if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
26
int cmp, which;
23
tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
27
24
- raw_write(env, ri, value);
28
if (is_snan(a->cls) || is_snan(b->cls)) {
29
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
30
+ have_snan = true;
25
}
31
}
26
+ raw_write(env, ri, value);
32
27
}
33
if (s->default_nan_mode) {
28
34
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
29
static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
35
36
switch (s->float_2nan_prop_rule) {
37
case float_2nan_prop_s_ab:
38
- if (is_snan(a->cls)) {
39
- which = 0;
40
- } else if (is_snan(b->cls)) {
41
- which = 1;
42
- } else if (is_qnan(a->cls)) {
43
- which = 0;
44
- } else {
45
- which = 1;
46
+ if (have_snan) {
47
+ which = is_snan(a->cls) ? 0 : 1;
48
+ break;
49
}
50
- break;
51
- case float_2nan_prop_s_ba:
52
- if (is_snan(b->cls)) {
53
- which = 1;
54
- } else if (is_snan(a->cls)) {
55
- which = 0;
56
- } else if (is_qnan(b->cls)) {
57
- which = 1;
58
- } else {
59
- which = 0;
60
- }
61
- break;
62
+ /* fall through */
63
case float_2nan_prop_ab:
64
which = is_nan(a->cls) ? 0 : 1;
65
break;
66
+ case float_2nan_prop_s_ba:
67
+ if (have_snan) {
68
+ which = is_snan(b->cls) ? 1 : 0;
69
+ break;
70
+ }
71
+ /* fall through */
72
case float_2nan_prop_ba:
73
which = is_nan(b->cls) ? 1 : 0;
74
break;
30
--
75
--
31
2.25.1
76
2.34.1
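The VTTBR change above compares only one field of the register by
XOR-ing the old and new values and extracting the 16-bit VMID at bit 48.
A minimal sketch of that idiom (illustration only; vmid_changed() is an
invented name and extract64() is re-implemented locally as a plain
shift-and-mask rather than taken from QEMU's bitops):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    static bool vmid_changed(uint64_t old_vttbr, uint64_t new_vttbr)
    {
        /* Only a change in VMID[63:48] requires a TLB flush;
         * differences in the other fields do not. */
        return extract64(old_vttbr ^ new_vttbr, 48, 16) != 0;
    }

    int main(void)
    {
        printf("%d\n", vmid_changed(0x0001000000001000ULL,
                                    0x0002000000001000ULL)); /* 1 */
        printf("%d\n", vmid_changed(0x0001000000001000ULL,
                                    0x0001000000002000ULL)); /* 0 */
        return 0;
    }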
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Not yet used, but add mmu indexes for 1-1 mapping
3
Move the fractional comparison to the end of the
4
to physical addresses.
4
float_2nan_prop_x87 case. This is not required for
5
any other 2nan propagation rule. Reorganize the
6
x87 case itself to break out of the switch when the
7
fractional comparison is not required.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
         return a;
     }
 
-    cmp = frac_cmp(a, b);
-    if (cmp == 0) {
-        cmp = a->sign < b->sign;
-    }
-
     switch (s->float_2nan_prop_rule) {
     case float_2nan_prop_s_ab:
         if (have_snan) {
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
          * return the NaN with the positive sign bit (if any).
          */
         if (is_snan(a->cls)) {
-            if (is_snan(b->cls)) {
-                which = cmp > 0 ? 0 : 1;
-            } else {
+            if (!is_snan(b->cls)) {
                 which = is_qnan(b->cls) ? 1 : 0;
+                break;
             }
         } else if (is_qnan(a->cls)) {
             if (is_snan(b->cls) || !is_qnan(b->cls)) {
                 which = 0;
-            } else {
-                which = cmp > 0 ? 0 : 1;
+                break;
             }
         } else {
             which = 1;
+            break;
         }
+        cmp = frac_cmp(a, b);
+        if (cmp == 0) {
+            cmp = a->sign < b->sign;
+        }
+        which = cmp > 0 ? 0 : 1;
         break;
     default:
         g_assert_not_reached();
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Replace the "index" selecting between A and B with a result variable
of the proper type. This improves clarity within the function.
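
As a standalone illustration of the pattern (example names invented, not
taken from the patch), carrying the chosen operand in a variable of the
proper type removes the 0/1 indexing and the final fix-up step:

    typedef struct { int payload; } Part;

    /* Before: track a 0/1 index and apply it at the end. */
    Part *pick_by_index(Part *a, Part *b, int prefer_b)
    {
        int which = prefer_b ? 1 : 0;
        if (which) {
            a = b;
        }
        return a;
    }

    /* After: track the result itself; it is usable immediately. */
    Part *pick_by_pointer(Part *a, Part *b, int prefer_b)
    {
        Part *ret = prefer_b ? b : a;
        return ret;
    }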

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
                                      float_status *s)
 {
     bool have_snan = false;
-    int cmp, which;
+    FloatPartsN *ret;
+    int cmp;
 
     if (is_snan(a->cls) || is_snan(b->cls)) {
         float_raise(float_flag_invalid | float_flag_invalid_snan, s);
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
     switch (s->float_2nan_prop_rule) {
     case float_2nan_prop_s_ab:
         if (have_snan) {
-            which = is_snan(a->cls) ? 0 : 1;
+            ret = is_snan(a->cls) ? a : b;
             break;
         }
         /* fall through */
     case float_2nan_prop_ab:
-        which = is_nan(a->cls) ? 0 : 1;
+        ret = is_nan(a->cls) ? a : b;
         break;
     case float_2nan_prop_s_ba:
         if (have_snan) {
-            which = is_snan(b->cls) ? 1 : 0;
+            ret = is_snan(b->cls) ? b : a;
             break;
         }
         /* fall through */
     case float_2nan_prop_ba:
-        which = is_nan(b->cls) ? 1 : 0;
+        ret = is_nan(b->cls) ? b : a;
         break;
     case float_2nan_prop_x87:
         /*
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
          */
         if (is_snan(a->cls)) {
             if (!is_snan(b->cls)) {
-                which = is_qnan(b->cls) ? 1 : 0;
+                ret = is_qnan(b->cls) ? b : a;
                 break;
             }
         } else if (is_qnan(a->cls)) {
             if (is_snan(b->cls) || !is_qnan(b->cls)) {
-                which = 0;
+                ret = a;
                 break;
             }
         } else {
-            which = 1;
+            ret = b;
             break;
         }
         cmp = frac_cmp(a, b);
         if (cmp == 0) {
             cmp = a->sign < b->sign;
         }
-        which = cmp > 0 ? 0 : 1;
+        ret = cmp > 0 ? a : b;
         break;
     default:
         g_assert_not_reached();
     }
 
-    if (which) {
-        a = b;
+    if (is_snan(ret->cls)) {
+        parts_silence_nan(ret, s);
     }
-    if (is_snan(a->cls)) {
-        parts_silence_nan(a, s);
-    }
-    return a;
+    return ret;
 }
 
 static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
--
2.34.1

From: Leif Lindholm <quic_llindhol@quicinc.com>

I'm migrating to Qualcomm's new open source email infrastructure, so
update my email address, and update the mailmap to match.

Signed-off-by: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241205114047.1125842-1-leif.lindholm@oss.qualcomm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 +-
 .mailmap    | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
 SBSA-REF
 M: Radoslaw Biernacki <rad@semihalf.com>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <quic_llindhol@quicinc.com>
+R: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
 R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
 James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
 Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
 Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
 Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
 Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
--
2.34.1

From: Vikram Garhwal <vikram.garhwal@bytedance.com>

Previously, maintainer role was paused due to inactive email id. Commit id:
c009d715721861984c4987bcc78b7ee183e86d75.

Signed-off-by: Vikram Garhwal <vikram.garhwal@bytedance.com>
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Message-id: 20241204184205.12952-1-vikram.garhwal@bytedance.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/fuzz-sb16-test.c
 
 Xilinx CAN
 M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
 S: Maintained
 F: hw/net/can/xlnx-*
 F: include/hw/net/xlnx-*
@@ -XXX,XX +XXX,XX @@ F: include/hw/rx/
 CAN bus subsystem and hardware
 M: Pavel Pisa <pisa@cmp.felk.cvut.cz>
 M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
 S: Maintained
 W: https://canbus.pages.fel.cvut.cz/
 F: net/can/*
--
2.34.1