arm queue: big stuff here is my MVE codegen optimisation,
and Alex's Apple Silicon hvf support.

thanks
-- PMM

The following changes since commit 7adb961995a3744f51396502b33ad04a56a317c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20210916' into staging (2021-09-19 18:53:29 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210920

for you to fetch changes up to 1dc5a60bfe406bc1122d68cbdefda38d23134b27:

  target/arm: Optimize MVE 1op-immediate insns (2021-09-20 14:18:01 +0100)

----------------------------------------------------------------
target-arm queue:
 * Optimize codegen for MVE when predication not active
 * hvf: Add Apple Silicon support
 * hw/intc: Set GIC maintenance interrupt level to only 0 or 1
 * Fix mishandling of MVE FPSCR.LTPSIZE reset for usermode emulator
 * elf2dmp: Fix coverity nits

----------------------------------------------------------------
Alexander Graf (7):
      arm: Move PMC register definitions to internals.h
      hvf: Add execute to dirty log permission bitmap
      hvf: Introduce hvf_arch_init() callback
      hvf: Add Apple Silicon support
      hvf: arm: Implement PSCI handling
      arm: Add Hypervisor.framework build target
      hvf: arm: Add rudimentary PMC support

Peter Collingbourne (1):
      arm/hvf: Add a WFI handler

Peter Maydell (18):
      elf2dmp: Check curl_easy_setopt() return value
      elf2dmp: Fail cleanly if PDB file specifies zero block_size
      target/arm: Don't skip M-profile reset entirely in user mode
      target/arm: Always clear exclusive monitor on reset
      target/arm: Consolidate ifdef blocks in reset
      hvf: arm: Implement -cpu host
      target/arm: Avoid goto_tb if we're trying to exit to the main loop
      target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
      target/arm: Add TB flag for "MVE insns not predicated"
      target/arm: Optimize MVE logic ops
      target/arm: Optimize MVE arithmetic ops
      target/arm: Optimize MVE VNEG, VABS
      target/arm: Optimize MVE VDUP
      target/arm: Optimize MVE VMVN
      target/arm: Optimize MVE VSHL, VSHR immediate forms
      target/arm: Optimize MVE VSHLL and VMOVL
      target/arm: Optimize MVE VSLI and VSRI
      target/arm: Optimize MVE 1op-immediate insns

Shashi Mallela (1):
      hw/intc: Set GIC maintenance interrupt level to only 0 or 1

 meson.build | 8 +
 include/sysemu/hvf_int.h | 12 +-
 target/arm/cpu.h | 6 +-
 target/arm/hvf_arm.h | 18 +
 target/arm/internals.h | 44 ++
 target/arm/kvm_arm.h | 2 -
 target/arm/translate.h | 2 +
 accel/hvf/hvf-accel-ops.c | 21 +-
 contrib/elf2dmp/download.c | 22 +-
 contrib/elf2dmp/pdb.c | 4 +
 hw/intc/arm_gicv3_cpuif.c | 5 +-
 target/arm/cpu.c | 56 +-
 target/arm/helper.c | 77 ++-
 target/arm/hvf/hvf.c | 1278 +++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c | 13 +
 target/arm/translate-m-nocp.c | 8 +-
 target/arm/translate-mve.c | 310 +++++++---
 target/arm/translate-vfp.c | 33 +-
 target/arm/translate.c | 42 +-
 target/i386/hvf/hvf.c | 10 +
 MAINTAINERS | 5 +
 target/arm/hvf/meson.build | 3 +
 target/arm/hvf/trace-events | 11 +
 target/arm/meson.build | 2 +
 24 files changed, 1824 insertions(+), 168 deletions(-)
 create mode 100644 target/arm/hvf_arm.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events


First arm pullreq of the cycle; this is mostly my softfloat NaN
handling series. (Lots more in my to-review queue, but I don't
like pullreqs growing too close to a hundred patches at a time :-))

thanks
-- PMM

The following changes since commit 97f2796a3736ed37a1b85dc1c76a6c45b829dd17:

  Open 10.0 development tree (2024-12-10 17:41:17 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241211

for you to fetch changes up to 1abe28d519239eea5cf9620bb13149423e5665f8:

  MAINTAINERS: Add correct email address for Vikram Garhwal (2024-12-11 15:31:09 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/net/lan9118: Extract PHY model, reuse with imx_fec, fix bugs
 * fpu: Make muladd NaN handling runtime-selected, not compile-time
 * fpu: Make default NaN pattern runtime-selected, not compile-time
 * fpu: Minor NaN-related cleanups
 * MAINTAINERS: email address updates

----------------------------------------------------------------
Bernhard Beschow (5):
      hw/net/lan9118: Extract lan9118_phy
      hw/net/lan9118_phy: Reuse in imx_fec and consolidate implementations
      hw/net/lan9118_phy: Fix off-by-one error in MII_ANLPAR register
      hw/net/lan9118_phy: Reuse MII constants
      hw/net/lan9118_phy: Add missing 100 mbps full duplex advertisement

Leif Lindholm (1):
      MAINTAINERS: update email address for Leif Lindholm

Peter Maydell (54):
      fpu: handle raising Invalid for infzero in pick_nan_muladd
      fpu: Check for default_nan_mode before calling pickNaNMulAdd
      softfloat: Allow runtime choice of inf * 0 + NaN result
      tests/fp: Explicitly set inf-zero-nan rule
      target/arm: Set FloatInfZeroNaNRule explicitly
      target/s390: Set FloatInfZeroNaNRule explicitly
      target/ppc: Set FloatInfZeroNaNRule explicitly
      target/mips: Set FloatInfZeroNaNRule explicitly
      target/sparc: Set FloatInfZeroNaNRule explicitly
      target/xtensa: Set FloatInfZeroNaNRule explicitly
      target/x86: Set FloatInfZeroNaNRule explicitly
      target/loongarch: Set FloatInfZeroNaNRule explicitly
      target/hppa: Set FloatInfZeroNaNRule explicitly
      softfloat: Pass have_snan to pickNaNMulAdd
      softfloat: Allow runtime choice of NaN propagation for muladd
      tests/fp: Explicitly set 3-NaN propagation rule
      target/arm: Set Float3NaNPropRule explicitly
      target/loongarch: Set Float3NaNPropRule explicitly
      target/ppc: Set Float3NaNPropRule explicitly
      target/s390x: Set Float3NaNPropRule explicitly
      target/sparc: Set Float3NaNPropRule explicitly
      target/mips: Set Float3NaNPropRule explicitly
      target/xtensa: Set Float3NaNPropRule explicitly
      target/i386: Set Float3NaNPropRule explicitly
      target/hppa: Set Float3NaNPropRule explicitly
      fpu: Remove use_first_nan field from float_status
      target/m68k: Don't pass NULL float_status to floatx80_default_nan()
      softfloat: Create floatx80 default NaN from parts64_default_nan
      target/loongarch: Use normal float_status in fclass_s and fclass_d helpers
      target/m68k: In frem helper, initialize local float_status from env->fp_status
      target/m68k: Init local float_status from env fp_status in gdb get/set reg
      target/sparc: Initialize local scratch float_status from env->fp_status
      target/ppc: Use env->fp_status in helper_compute_fprf functions
      fpu: Allow runtime choice of default NaN value
      tests/fp: Set default NaN pattern explicitly
      target/microblaze: Set default NaN pattern explicitly
      target/i386: Set default NaN pattern explicitly
      target/hppa: Set default NaN pattern explicitly
      target/alpha: Set default NaN pattern explicitly
      target/arm: Set default NaN pattern explicitly
      target/loongarch: Set default NaN pattern explicitly
      target/m68k: Set default NaN pattern explicitly
      target/mips: Set default NaN pattern explicitly
      target/openrisc: Set default NaN pattern explicitly
      target/ppc: Set default NaN pattern explicitly
      target/sh4: Set default NaN pattern explicitly
      target/rx: Set default NaN pattern explicitly
      target/s390x: Set default NaN pattern explicitly
      target/sparc: Set default NaN pattern explicitly
      target/xtensa: Set default NaN pattern explicitly
      target/hexagon: Set default NaN pattern explicitly
      target/riscv: Set default NaN pattern explicitly
      target/tricore: Set default NaN pattern explicitly
      fpu: Remove default handling for dnan_pattern

Richard Henderson (11):
      target/arm: Copy entire float_status in is_ebf
      softfloat: Inline pickNaNMulAdd
      softfloat: Use goto for default nan case in pick_nan_muladd
      softfloat: Remove which from parts_pick_nan_muladd
      softfloat: Pad array size in pick_nan_muladd
      softfloat: Move propagateFloatx80NaN to softfloat.c
      softfloat: Use parts_pick_nan in propagateFloatx80NaN
      softfloat: Inline pickNaN
      softfloat: Share code between parts_pick_nan cases
      softfloat: Sink frac_cmp in parts_pick_nan until needed
      softfloat: Replace WHICH with RET in parts_pick_nan

Vikram Garhwal (1):
      MAINTAINERS: Add correct email address for Vikram Garhwal

 MAINTAINERS | 4 +-
 include/fpu/softfloat-helpers.h | 38 +++-
 include/fpu/softfloat-types.h | 89 +++++++-
 include/hw/net/imx_fec.h | 9 +-
 include/hw/net/lan9118_phy.h | 37 ++++
 include/hw/net/mii.h | 6 +
 target/mips/fpu_helper.h | 20 ++
 target/sparc/helper.h | 4 +-
 fpu/softfloat.c | 19 ++
 hw/net/imx_fec.c | 146 ++------------
 hw/net/lan9118.c | 137 ++-----------
 hw/net/lan9118_phy.c | 222 ++++++++++++++++++++
 linux-user/arm/nwfpe/fpa11.c | 5 +
 target/alpha/cpu.c | 2 +
 target/arm/cpu.c | 10 +
 target/arm/tcg/vec_helper.c | 20 +-
 target/hexagon/cpu.c | 2 +
 target/hppa/fpu_helper.c | 12 ++
 target/i386/tcg/fpu_helper.c | 12 ++
 target/loongarch/tcg/fpu_helper.c | 14 +-
 target/m68k/cpu.c | 14 +-
 target/m68k/fpu_helper.c | 6 +-
 target/m68k/helper.c | 6 +-
 target/microblaze/cpu.c | 2 +
 target/mips/msa.c | 10 +
 target/openrisc/cpu.c | 2 +
 target/ppc/cpu_init.c | 19 ++
 target/ppc/fpu_helper.c | 3 +-
 target/riscv/cpu.c | 2 +
 target/rx/cpu.c | 2 +
 target/s390x/cpu.c | 5 +
 target/sh4/cpu.c | 2 +
 target/sparc/cpu.c | 6 +
 target/sparc/fop_helper.c | 8 +-
 target/sparc/translate.c | 4 +-
 target/tricore/helper.c | 2 +
 target/xtensa/cpu.c | 4 +
 target/xtensa/fpu_helper.c | 3 +-
 tests/fp/fp-bench.c | 7 +
 tests/fp/fp-test-log2.c | 1 +
 tests/fp/fp-test.c | 7 +
 fpu/softfloat-parts.c.inc | 152 +++++++++++---
 fpu/softfloat-specialize.c.inc | 412 ++------------------------------------
 .mailmap | 5 +-
 hw/net/Kconfig | 5 +
 hw/net/meson.build | 1 +
 hw/net/trace-events | 10 +-
 47 files changed, 778 insertions(+), 730 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c
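For readers new to the softfloat changes above: the series moves the muladd
inf*0+NaN rule, the three-operand NaN propagation rule and the default NaN
bit pattern out of compile-time ifdefs and into per-target runtime settings
on float_status. The sketch below is purely illustrative and not part of the
series; the helper and enum names are assumptions inferred from the commit
subjects, and the authoritative definitions are in
include/fpu/softfloat-helpers.h and include/fpu/softfloat-types.h as modified
by these patches.

    /*
     * Illustrative sketch only: how a target's reset code would opt in
     * to the runtime-selected NaN behaviour for one float_status.
     * Helper and enum names below are assumed, not quoted from the series.
     */
    #include "qemu/osdep.h"
    #include "fpu/softfloat-helpers.h"

    static void example_init_fp_status(float_status *s)
    {
        /* What inf * 0 + NaN should produce for this target */
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
        /* Which of the three muladd NaN operands gets propagated */
        set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
        /* Bit pattern (sign bit + top fraction bits) of the default NaN */
        set_float_default_nan_pattern(0b01000000, s);
    }
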
From: Alexander Graf <agraf@csgraf.de>

With Apple Silicon available to the masses, it's a good time to add support
for driving its virtualization extensions from QEMU.

This patch adds all necessary architecture specific code to get basic VMs
working, including save/restore.

Known limitations:

- WFI handling is missing (follows in later patch)
- No watchpoint/breakpoint support

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-5-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 meson.build | 1 +
 include/sysemu/hvf_int.h | 10 +-
 accel/hvf/hvf-accel-ops.c | 9 +
 target/arm/hvf/hvf.c | 794 ++++++++++++++++++++++++++++++++++++
 target/i386/hvf/hvf.c | 5 +
 MAINTAINERS | 5 +
 target/arm/hvf/trace-events | 10 +
 7 files changed, 833 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/trace-events


From: Bernhard Beschow <shentey@gmail.com>

A very similar implementation of the same device exists in imx_fec. Prepare for
a common implementation by extracting a device model into its own files.

Some migration state has been moved into the new device model which breaks
migration compatibility for the following machines:
* smdkc210
* realview-*
* vexpress-*
* kzm
* mps2-*

While breaking migration ABI, fix the size of the MII registers to be 16 bit,
as defined by IEEE 802.3u.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-2-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/lan9118_phy.h | 37 ++++++++
 hw/net/lan9118.c | 137 +++++-----------------------
 hw/net/lan9118_phy.c | 169 +++++++++++++++++++++++++++++++++++
 hw/net/Kconfig | 4 +
 hw/net/meson.build | 1 +
 5 files changed, 233 insertions(+), 115 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c
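For reference, the embedding pattern the lan9118 MAC model adopts for the
extracted PHY in the hunks further down boils down to the snippet below.
This is only the existing pattern from this patch pulled out for readability,
not additional code; imx_fec is converted the same way in a later patch of
the series.

    /* Realize-time wiring in the MAC model: embed the PHY as a child
     * device and feed its interrupt back into the MAC's IRQ logic. */
    qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
    object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
    if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
        return;
    }
    qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);

    /* MII register accesses from the MAC then go through the PHY device: */
    lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
    s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
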
diff --git a/meson.build b/meson.build
32
diff --git a/include/hw/net/lan9118_phy.h b/include/hw/net/lan9118_phy.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/meson.build
35
+++ b/meson.build
36
@@ -XXX,XX +XXX,XX @@ if have_system or have_user
37
'accel/tcg',
38
'hw/core',
39
'target/arm',
40
+ 'target/arm/hvf',
41
'target/hppa',
42
'target/i386',
43
'target/i386/kvm',
44
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/include/sysemu/hvf_int.h
47
+++ b/include/sysemu/hvf_int.h
48
@@ -XXX,XX +XXX,XX @@
49
#ifndef HVF_INT_H
50
#define HVF_INT_H
51
52
+#ifdef __aarch64__
53
+#include <Hypervisor/Hypervisor.h>
54
+#else
55
#include <Hypervisor/hv.h>
56
+#endif
57
58
/* hvf_slot flags */
59
#define HVF_SLOT_LOG (1 << 0)
60
@@ -XXX,XX +XXX,XX @@ struct HVFState {
61
int num_slots;
62
63
hvf_vcpu_caps *hvf_caps;
64
+ uint64_t vtimer_offset;
65
};
66
extern HVFState *hvf_state;
67
68
struct hvf_vcpu_state {
69
- int fd;
70
+ uint64_t fd;
71
+ void *exit;
72
+ bool vtimer_masked;
73
};
74
75
void assert_hvf_ok(hv_return_t ret);
76
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *);
77
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
78
int hvf_put_registers(CPUState *);
79
int hvf_get_registers(CPUState *);
80
+void hvf_kick_vcpu_thread(CPUState *cpu);
81
82
#endif
83
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/accel/hvf/hvf-accel-ops.c
86
+++ b/accel/hvf/hvf-accel-ops.c
87
@@ -XXX,XX +XXX,XX @@
88
89
HVFState *hvf_state;
90
91
+#ifdef __aarch64__
92
+#define HV_VM_DEFAULT NULL
93
+#endif
94
+
95
/* Memory slots */
96
97
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
98
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
99
pthread_sigmask(SIG_BLOCK, NULL, &set);
100
sigdelset(&set, SIG_IPI);
101
102
+#ifdef __aarch64__
103
+ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
104
+#else
105
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
106
+#endif
107
cpu->vcpu_dirty = 1;
108
assert_hvf_ok(r);
109
110
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
111
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
112
113
ops->create_vcpu_thread = hvf_start_vcpu_thread;
114
+ ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
115
116
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
117
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
118
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
119
new file mode 100644
33
new file mode 100644
120
index XXXXXXX..XXXXXXX
34
index XXXXXXX..XXXXXXX
121
--- /dev/null
35
--- /dev/null
122
+++ b/target/arm/hvf/hvf.c
36
+++ b/include/hw/net/lan9118_phy.h
123
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@
124
+/*
38
+/*
125
+ * QEMU Hypervisor.framework support for Apple Silicon
39
+ * SMSC LAN9118 PHY emulation
126
+
40
+ *
127
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
41
+ * Copyright (c) 2009 CodeSourcery, LLC.
42
+ * Written by Paul Brook
128
+ *
43
+ *
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
130
+ * See the COPYING file in the top-level directory.
45
+ * See the COPYING file in the top-level directory.
131
+ *
132
+ */
46
+ */
133
+
47
+
134
+#include "qemu/osdep.h"
48
+#ifndef HW_NET_LAN9118_PHY_H
135
+#include "qemu-common.h"
49
+#define HW_NET_LAN9118_PHY_H
136
+#include "qemu/error-report.h"
50
+
137
+
51
+#include "qom/object.h"
138
+#include "sysemu/runstate.h"
52
+#include "hw/sysbus.h"
139
+#include "sysemu/hvf.h"
53
+
140
+#include "sysemu/hvf_int.h"
54
+#define TYPE_LAN9118_PHY "lan9118-phy"
141
+#include "sysemu/hw_accel.h"
55
+OBJECT_DECLARE_SIMPLE_TYPE(Lan9118PhyState, LAN9118_PHY)
142
+
56
+
143
+#include <mach/mach_time.h>
57
+typedef struct Lan9118PhyState {
144
+
58
+ SysBusDevice parent_obj;
145
+#include "exec/address-spaces.h"
59
+
146
+#include "hw/irq.h"
60
+ uint16_t status;
147
+#include "qemu/main-loop.h"
61
+ uint16_t control;
148
+#include "sysemu/cpus.h"
62
+ uint16_t advertise;
149
+#include "target/arm/cpu.h"
63
+ uint16_t ints;
150
+#include "target/arm/internals.h"
64
+ uint16_t int_mask;
151
+#include "trace/trace-target_arm_hvf.h"
65
+ qemu_irq irq;
152
+#include "migration/vmstate.h"
66
+ bool link_down;
153
+
67
+} Lan9118PhyState;
154
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
68
+
155
+ ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
69
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down);
156
+#define PL1_WRITE_MASK 0x4
70
+void lan9118_phy_reset(Lan9118PhyState *s);
157
+
71
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg);
158
+#define SYSREG(op0, op1, crn, crm, op2) \
72
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val);
159
+ ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
73
+
160
+#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
161
+#define SYSREG_OSLAR_EL1 SYSREG(2, 0, 1, 0, 4)
162
+#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
163
+#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
164
+#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
165
+
166
+#define WFX_IS_WFE (1 << 0)
167
+
168
+#define TMR_CTL_ENABLE (1 << 0)
169
+#define TMR_CTL_IMASK (1 << 1)
170
+#define TMR_CTL_ISTATUS (1 << 2)
171
+
172
+typedef struct HVFVTimer {
173
+ /* Vtimer value during migration and paused state */
174
+ uint64_t vtimer_val;
175
+} HVFVTimer;
176
+
177
+static HVFVTimer vtimer;
178
+
179
+struct hvf_reg_match {
180
+ int reg;
181
+ uint64_t offset;
182
+};
183
+
184
+static const struct hvf_reg_match hvf_reg_match[] = {
185
+ { HV_REG_X0, offsetof(CPUARMState, xregs[0]) },
186
+ { HV_REG_X1, offsetof(CPUARMState, xregs[1]) },
187
+ { HV_REG_X2, offsetof(CPUARMState, xregs[2]) },
188
+ { HV_REG_X3, offsetof(CPUARMState, xregs[3]) },
189
+ { HV_REG_X4, offsetof(CPUARMState, xregs[4]) },
190
+ { HV_REG_X5, offsetof(CPUARMState, xregs[5]) },
191
+ { HV_REG_X6, offsetof(CPUARMState, xregs[6]) },
192
+ { HV_REG_X7, offsetof(CPUARMState, xregs[7]) },
193
+ { HV_REG_X8, offsetof(CPUARMState, xregs[8]) },
194
+ { HV_REG_X9, offsetof(CPUARMState, xregs[9]) },
195
+ { HV_REG_X10, offsetof(CPUARMState, xregs[10]) },
196
+ { HV_REG_X11, offsetof(CPUARMState, xregs[11]) },
197
+ { HV_REG_X12, offsetof(CPUARMState, xregs[12]) },
198
+ { HV_REG_X13, offsetof(CPUARMState, xregs[13]) },
199
+ { HV_REG_X14, offsetof(CPUARMState, xregs[14]) },
200
+ { HV_REG_X15, offsetof(CPUARMState, xregs[15]) },
201
+ { HV_REG_X16, offsetof(CPUARMState, xregs[16]) },
202
+ { HV_REG_X17, offsetof(CPUARMState, xregs[17]) },
203
+ { HV_REG_X18, offsetof(CPUARMState, xregs[18]) },
204
+ { HV_REG_X19, offsetof(CPUARMState, xregs[19]) },
205
+ { HV_REG_X20, offsetof(CPUARMState, xregs[20]) },
206
+ { HV_REG_X21, offsetof(CPUARMState, xregs[21]) },
207
+ { HV_REG_X22, offsetof(CPUARMState, xregs[22]) },
208
+ { HV_REG_X23, offsetof(CPUARMState, xregs[23]) },
209
+ { HV_REG_X24, offsetof(CPUARMState, xregs[24]) },
210
+ { HV_REG_X25, offsetof(CPUARMState, xregs[25]) },
211
+ { HV_REG_X26, offsetof(CPUARMState, xregs[26]) },
212
+ { HV_REG_X27, offsetof(CPUARMState, xregs[27]) },
213
+ { HV_REG_X28, offsetof(CPUARMState, xregs[28]) },
214
+ { HV_REG_X29, offsetof(CPUARMState, xregs[29]) },
215
+ { HV_REG_X30, offsetof(CPUARMState, xregs[30]) },
216
+ { HV_REG_PC, offsetof(CPUARMState, pc) },
217
+};
218
+
219
+static const struct hvf_reg_match hvf_fpreg_match[] = {
220
+ { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) },
221
+ { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) },
222
+ { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) },
223
+ { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) },
224
+ { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) },
225
+ { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) },
226
+ { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) },
227
+ { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) },
228
+ { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) },
229
+ { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) },
230
+ { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
231
+ { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
232
+ { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
233
+ { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
234
+ { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
235
+ { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
236
+ { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
237
+ { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
238
+ { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
239
+ { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
240
+ { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
241
+ { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
242
+ { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
243
+ { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
244
+ { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
245
+ { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
246
+ { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
247
+ { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
248
+ { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
249
+ { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
250
+ { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
251
+ { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
252
+};
253
+
254
+struct hvf_sreg_match {
255
+ int reg;
256
+ uint32_t key;
257
+ uint32_t cp_idx;
258
+};
259
+
260
+static struct hvf_sreg_match hvf_sreg_match[] = {
261
+ { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
262
+ { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
263
+ { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
264
+ { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
265
+
266
+ { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
267
+ { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
268
+ { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
269
+ { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
270
+
271
+ { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
272
+ { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
273
+ { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
274
+ { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
275
+
276
+ { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
277
+ { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
278
+ { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
279
+ { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
280
+
281
+ { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
282
+ { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
283
+ { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
284
+ { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
285
+
286
+ { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
287
+ { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
288
+ { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
289
+ { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
290
+
291
+ { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
292
+ { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
293
+ { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
294
+ { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
295
+
296
+ { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
297
+ { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
298
+ { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
299
+ { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
300
+
301
+ { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
302
+ { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
303
+ { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
304
+ { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
305
+
306
+ { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
307
+ { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
308
+ { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
309
+ { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
310
+
311
+ { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
312
+ { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
313
+ { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
314
+ { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
315
+
316
+ { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
317
+ { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
318
+ { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
319
+ { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
320
+
321
+ { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
322
+ { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
323
+ { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
324
+ { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
325
+
326
+ { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
327
+ { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
328
+ { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
329
+ { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
330
+
331
+ { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
332
+ { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
333
+ { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
334
+ { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
335
+
336
+ { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
337
+ { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
338
+ { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
339
+ { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
340
+
341
+#ifdef SYNC_NO_RAW_REGS
342
+ /*
343
+ * The registers below are manually synced on init because they are
344
+ * marked as NO_RAW. We still list them to make number space sync easier.
345
+ */
346
+ { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
347
+ { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
348
+ { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
349
+ { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
350
+#endif
74
+#endif
351
+ { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
75
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
352
+ { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
76
index XXXXXXX..XXXXXXX 100644
353
+ { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
77
--- a/hw/net/lan9118.c
354
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
78
+++ b/hw/net/lan9118.c
355
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
79
@@ -XXX,XX +XXX,XX @@
356
+#ifdef SYNC_NO_MMFR0
80
#include "net/net.h"
357
+ /* We keep the hardware MMFR0 around. HW limits are there anyway */
81
#include "net/eth.h"
358
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
82
#include "hw/irq.h"
359
+#endif
83
+#include "hw/net/lan9118_phy.h"
360
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
84
#include "hw/net/lan9118.h"
361
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
85
#include "hw/ptimer.h"
362
+
86
#include "hw/qdev-properties.h"
363
+ { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
87
@@ -XXX,XX +XXX,XX @@ do { printf("lan9118: " fmt , ## __VA_ARGS__); } while (0)
364
+ { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
88
#define MAC_CR_RXEN 0x00000004
365
+ { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
89
#define MAC_CR_RESERVED 0x7f404213
366
+ { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
90
367
+ { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
91
-#define PHY_INT_ENERGYON 0x80
368
+ { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
92
-#define PHY_INT_AUTONEG_COMPLETE 0x40
369
+
93
-#define PHY_INT_FAULT 0x20
370
+ { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
94
-#define PHY_INT_DOWN 0x10
371
+ { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
95
-#define PHY_INT_AUTONEG_LP 0x08
372
+ { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
96
-#define PHY_INT_PARFAULT 0x04
373
+ { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
97
-#define PHY_INT_AUTONEG_PAGE 0x02
374
+ { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
98
-
375
+ { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
99
#define GPT_TIMER_EN 0x20000000
376
+ { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
100
377
+ { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
101
/*
378
+ { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
102
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
379
+ { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
103
uint32_t mac_mii_data;
380
+
104
uint32_t mac_flow;
381
+ { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
105
382
+ { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
106
- uint32_t phy_status;
383
+ { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
107
- uint32_t phy_control;
384
+ { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
108
- uint32_t phy_advertise;
385
+ { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
109
- uint32_t phy_int;
386
+ { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
110
- uint32_t phy_int_mask;
387
+ { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
111
+ Lan9118PhyState mii;
388
+ { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
112
+ IRQState mii_irq;
389
+ { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
113
390
+ { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
114
int32_t eeprom_writable;
391
+ { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
115
uint8_t eeprom[128];
392
+ { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
116
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
393
+ { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
117
394
+ { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
118
static const VMStateDescription vmstate_lan9118 = {
395
+ { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
119
.name = "lan9118",
396
+ { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
120
- .version_id = 2,
397
+ { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
121
- .minimum_version_id = 1,
398
+ { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
122
+ .version_id = 3,
399
+ { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
123
+ .minimum_version_id = 3,
400
+ { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
124
.fields = (const VMStateField[]) {
401
+};
125
VMSTATE_PTIMER(timer, lan9118_state),
402
+
126
VMSTATE_UINT32(irq_cfg, lan9118_state),
403
+int hvf_get_registers(CPUState *cpu)
127
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118 = {
404
+{
128
VMSTATE_UINT32(mac_mii_acc, lan9118_state),
405
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
129
VMSTATE_UINT32(mac_mii_data, lan9118_state),
406
+ CPUARMState *env = &arm_cpu->env;
130
VMSTATE_UINT32(mac_flow, lan9118_state),
407
+ hv_return_t ret;
131
- VMSTATE_UINT32(phy_status, lan9118_state),
408
+ uint64_t val;
132
- VMSTATE_UINT32(phy_control, lan9118_state),
409
+ hv_simd_fp_uchar16_t fpval;
133
- VMSTATE_UINT32(phy_advertise, lan9118_state),
410
+ int i;
134
- VMSTATE_UINT32(phy_int, lan9118_state),
411
+
135
- VMSTATE_UINT32(phy_int_mask, lan9118_state),
412
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
136
VMSTATE_INT32(eeprom_writable, lan9118_state),
413
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
137
VMSTATE_UINT8_ARRAY(eeprom, lan9118_state, 128),
414
+ *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
138
VMSTATE_INT32(tx_fifo_size, lan9118_state),
415
+ assert_hvf_ok(ret);
139
@@ -XXX,XX +XXX,XX @@ static void lan9118_reload_eeprom(lan9118_state *s)
416
+ }
140
lan9118_mac_changed(s);
417
+
141
}
418
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
142
419
+ ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
143
-static void phy_update_irq(lan9118_state *s)
420
+ &fpval);
144
+static void lan9118_update_irq(void *opaque, int n, int level)
421
+ memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
145
{
422
+ assert_hvf_ok(ret);
146
- if (s->phy_int & s->phy_int_mask) {
423
+ }
147
+ lan9118_state *s = opaque;
424
+
148
+
425
+ val = 0;
149
+ if (level) {
426
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
150
s->int_sts |= PHY_INT;
427
+ assert_hvf_ok(ret);
151
} else {
428
+ vfp_set_fpcr(env, val);
152
s->int_sts &= ~PHY_INT;
429
+
153
@@ -XXX,XX +XXX,XX @@ static void phy_update_irq(lan9118_state *s)
430
+ val = 0;
154
lan9118_update(s);
431
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
155
}
432
+ assert_hvf_ok(ret);
156
433
+ vfp_set_fpsr(env, val);
157
-static void phy_update_link(lan9118_state *s)
434
+
158
-{
435
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
159
- /* Autonegotiation status mirrors link status. */
436
+ assert_hvf_ok(ret);
160
- if (qemu_get_queue(s->nic)->link_down) {
437
+ pstate_write(env, val);
161
- s->phy_status &= ~0x0024;
438
+
162
- s->phy_int |= PHY_INT_DOWN;
439
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
163
- } else {
440
+ if (hvf_sreg_match[i].cp_idx == -1) {
164
- s->phy_status |= 0x0024;
441
+ continue;
165
- s->phy_int |= PHY_INT_ENERGYON;
442
+ }
166
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
443
+
167
- }
444
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
168
- phy_update_irq(s);
445
+ assert_hvf_ok(ret);
169
-}
446
+
170
-
447
+ arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
171
static void lan9118_set_link(NetClientState *nc)
448
+ }
172
{
449
+ assert(write_list_to_cpustate(arm_cpu));
173
- phy_update_link(qemu_get_nic_opaque(nc));
450
+
174
-}
451
+ aarch64_restore_sp(env, arm_current_el(env));
175
-
452
+
176
-static void phy_reset(lan9118_state *s)
453
+ return 0;
177
-{
454
+}
178
- s->phy_status = 0x7809;
455
+
179
- s->phy_control = 0x3000;
456
+int hvf_put_registers(CPUState *cpu)
180
- s->phy_advertise = 0x01e1;
457
+{
181
- s->phy_int_mask = 0;
458
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
182
- s->phy_int = 0;
459
+ CPUARMState *env = &arm_cpu->env;
183
- phy_update_link(s);
460
+ hv_return_t ret;
184
+ lan9118_phy_update_link(&LAN9118(qemu_get_nic_opaque(nc))->mii,
461
+ uint64_t val;
185
+ nc->link_down);
462
+ hv_simd_fp_uchar16_t fpval;
186
}
463
+ int i;
187
464
+
188
static void lan9118_reset(DeviceState *d)
465
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
189
@@ -XXX,XX +XXX,XX @@ static void lan9118_reset(DeviceState *d)
466
+ val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
190
s->read_word_n = 0;
467
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
191
s->write_word_n = 0;
468
+ assert_hvf_ok(ret);
192
469
+ }
193
- phy_reset(s);
470
+
194
-
471
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
195
s->eeprom_writable = 0;
472
+ memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
196
lan9118_reload_eeprom(s);
473
+ ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
197
}
474
+ fpval);
198
@@ -XXX,XX +XXX,XX @@ static void do_tx_packet(lan9118_state *s)
475
+ assert_hvf_ok(ret);
199
uint32_t status;
476
+ }
200
477
+
201
/* FIXME: Honor TX disable, and allow queueing of packets. */
478
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
202
- if (s->phy_control & 0x4000) {
479
+ assert_hvf_ok(ret);
203
+ if (s->mii.control & 0x4000) {
480
+
204
/* This assumes the receive routine doesn't touch the VLANClient. */
481
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
205
qemu_receive_packet(qemu_get_queue(s->nic), s->txp->data, s->txp->len);
482
+ assert_hvf_ok(ret);
206
} else {
483
+
207
@@ -XXX,XX +XXX,XX @@ static void tx_fifo_push(lan9118_state *s, uint32_t val)
484
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
208
}
485
+ assert_hvf_ok(ret);
209
}
486
+
210
487
+ aarch64_save_sp(env, arm_current_el(env));
211
-static uint32_t do_phy_read(lan9118_state *s, int reg)
488
+
212
-{
489
+ assert(write_cpustate_to_list(arm_cpu, false));
213
- uint32_t val;
490
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
214
-
491
+ if (hvf_sreg_match[i].cp_idx == -1) {
215
- switch (reg) {
492
+ continue;
216
- case 0: /* Basic Control */
493
+ }
217
- return s->phy_control;
494
+
218
- case 1: /* Basic Status */
495
+ val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
219
- return s->phy_status;
496
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
220
- case 2: /* ID1 */
497
+ assert_hvf_ok(ret);
221
- return 0x0007;
498
+ }
222
- case 3: /* ID2 */
499
+
223
- return 0xc0d1;
500
+ ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
224
- case 4: /* Auto-neg advertisement */
501
+ assert_hvf_ok(ret);
225
- return s->phy_advertise;
502
+
226
- case 5: /* Auto-neg Link Partner Ability */
503
+ return 0;
227
- return 0x0f71;
504
+}
228
- case 6: /* Auto-neg Expansion */
505
+
229
- return 1;
506
+static void flush_cpu_state(CPUState *cpu)
230
- /* TODO 17, 18, 27, 29, 30, 31 */
507
+{
231
- case 29: /* Interrupt source. */
508
+ if (cpu->vcpu_dirty) {
232
- val = s->phy_int;
509
+ hvf_put_registers(cpu);
233
- s->phy_int = 0;
510
+ cpu->vcpu_dirty = false;
234
- phy_update_irq(s);
511
+ }
235
- return val;
512
+}
236
- case 30: /* Interrupt mask */
513
+
237
- return s->phy_int_mask;
514
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
238
- default:
515
+{
239
- qemu_log_mask(LOG_GUEST_ERROR,
516
+ hv_return_t r;
240
- "do_phy_read: PHY read reg %d\n", reg);
517
+
241
- return 0;
518
+ flush_cpu_state(cpu);
242
- }
519
+
243
-}
520
+ if (rt < 31) {
244
-
521
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
245
-static void do_phy_write(lan9118_state *s, int reg, uint32_t val)
522
+ assert_hvf_ok(r);
246
-{
523
+ }
247
- switch (reg) {
524
+}
248
- case 0: /* Basic Control */
525
+
249
- if (val & 0x8000) {
526
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
250
- phy_reset(s);
527
+{
251
- break;
528
+ uint64_t val = 0;
252
- }
529
+ hv_return_t r;
253
- s->phy_control = val & 0x7980;
530
+
254
- /* Complete autonegotiation immediately. */
531
+ flush_cpu_state(cpu);
255
- if (val & 0x1000) {
532
+
256
- s->phy_status |= 0x0020;
533
+ if (rt < 31) {
257
- }
534
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
258
- break;
535
+ assert_hvf_ok(r);
259
- case 4: /* Auto-neg advertisement */
536
+ }
260
- s->phy_advertise = (val & 0x2d7f) | 0x80;
537
+
261
- break;
538
+ return val;
262
- /* TODO 17, 18, 27, 31 */
539
+}
263
- case 30: /* Interrupt mask */
540
+
264
- s->phy_int_mask = val & 0xff;
541
+void hvf_arch_vcpu_destroy(CPUState *cpu)
265
- phy_update_irq(s);
542
+{
266
- break;
543
+}
267
- default:
544
+
268
- qemu_log_mask(LOG_GUEST_ERROR,
545
+int hvf_arch_init_vcpu(CPUState *cpu)
269
- "do_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
546
+{
270
- }
547
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
271
-}
548
+ CPUARMState *env = &arm_cpu->env;
272
-
549
+ uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
273
static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
550
+ uint32_t sregs_cnt = 0;
274
{
551
+ uint64_t pfr;
275
switch (reg) {
552
+ hv_return_t ret;
276
@@ -XXX,XX +XXX,XX @@ static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
553
+ int i;
277
if (val & 2) {
554
+
278
DPRINTF("PHY write %d = 0x%04x\n",
555
+ env->aarch64 = 1;
279
(val >> 6) & 0x1f, s->mac_mii_data);
556
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
280
- do_phy_write(s, (val >> 6) & 0x1f, s->mac_mii_data);
557
+
281
+ lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
558
+ /* Allocate enough space for our sysreg sync */
282
} else {
559
+ arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
283
- s->mac_mii_data = do_phy_read(s, (val >> 6) & 0x1f);
560
+ sregs_match_len);
284
+ s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
561
+ arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
285
DPRINTF("PHY read %d = 0x%04x\n",
562
+ sregs_match_len);
286
(val >> 6) & 0x1f, s->mac_mii_data);
563
+ arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
287
}
564
+ arm_cpu->cpreg_vmstate_indexes,
288
@@ -XXX,XX +XXX,XX @@ static void lan9118_writel(void *opaque, hwaddr offset,
565
+ sregs_match_len);
289
break;
566
+ arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
290
case CSR_PMT_CTRL:
567
+ arm_cpu->cpreg_vmstate_values,
291
if (val & 0x400) {
568
+ sregs_match_len);
292
- phy_reset(s);
569
+
293
+ lan9118_phy_reset(&s->mii);
570
+ memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
294
}
571
+
295
s->pmt_ctrl &= ~0x34e;
572
+ /* Populate cp list for all known sysregs */
296
s->pmt_ctrl |= (val & 0x34e);
573
+ for (i = 0; i < sregs_match_len; i++) {
297
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
574
+ const ARMCPRegInfo *ri;
298
const MemoryRegionOps *mem_ops =
575
+ uint32_t key = hvf_sreg_match[i].key;
299
s->mode_16bit ? &lan9118_16bit_mem_ops : &lan9118_mem_ops;
576
+
300
577
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
301
+ qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
578
+ if (ri) {
302
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
579
+ assert(!(ri->type & ARM_CP_NO_RAW));
303
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
580
+ hvf_sreg_match[i].cp_idx = sregs_cnt;
581
+ arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
582
+ } else {
583
+ hvf_sreg_match[i].cp_idx = -1;
584
+ }
585
+ }
586
+ arm_cpu->cpreg_array_len = sregs_cnt;
587
+ arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
588
+
589
+ assert(write_cpustate_to_list(arm_cpu, false));
590
+
591
+ /* Set CP_NO_RAW system registers on init */
592
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
593
+ arm_cpu->midr);
594
+ assert_hvf_ok(ret);
595
+
596
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
597
+ arm_cpu->mp_affinity);
598
+ assert_hvf_ok(ret);
599
+
600
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
601
+ assert_hvf_ok(ret);
602
+ pfr |= env->gicv3state ? (1 << 24) : 0;
603
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
604
+ assert_hvf_ok(ret);
605
+
606
+ /* We're limited to underlying hardware caps, override internal versions */
607
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
608
+ &arm_cpu->isar.id_aa64mmfr0);
609
+ assert_hvf_ok(ret);
610
+
611
+ return 0;
612
+}
613
+
614
+void hvf_kick_vcpu_thread(CPUState *cpu)
615
+{
616
+ hv_vcpus_exit(&cpu->hvf->fd, 1);
617
+}
618
+
619
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
620
+ uint32_t syndrome)
621
+{
622
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
623
+ CPUARMState *env = &arm_cpu->env;
624
+
625
+ cpu->exception_index = excp;
626
+ env->exception.target_el = 1;
627
+ env->exception.syndrome = syndrome;
628
+
629
+ arm_cpu_do_interrupt(cpu);
630
+}
631
+
632
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
633
+{
634
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
635
+ CPUARMState *env = &arm_cpu->env;
636
+ uint64_t val = 0;
637
+
638
+ switch (reg) {
639
+ case SYSREG_CNTPCT_EL0:
640
+ val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
641
+ gt_cntfrq_period_ns(arm_cpu);
642
+ break;
643
+ case SYSREG_OSLSR_EL1:
644
+ val = env->cp15.oslsr_el1;
645
+ break;
646
+ case SYSREG_OSDLR_EL1:
647
+ /* Dummy register */
648
+ break;
649
+ default:
650
+ cpu_synchronize_state(cpu);
651
+ trace_hvf_unhandled_sysreg_read(env->pc, reg,
652
+ (reg >> 20) & 0x3,
653
+ (reg >> 14) & 0x7,
654
+ (reg >> 10) & 0xf,
655
+ (reg >> 1) & 0xf,
656
+ (reg >> 17) & 0x7);
657
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
658
+ return 1;
659
+ }
660
+
661
+ trace_hvf_sysreg_read(reg,
662
+ (reg >> 20) & 0x3,
663
+ (reg >> 14) & 0x7,
664
+ (reg >> 10) & 0xf,
665
+ (reg >> 1) & 0xf,
666
+ (reg >> 17) & 0x7,
667
+ val);
668
+ hvf_set_reg(cpu, rt, val);
669
+
670
+ return 0;
671
+}
672
+
673
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
674
+{
675
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
676
+ CPUARMState *env = &arm_cpu->env;
677
+
678
+ trace_hvf_sysreg_write(reg,
679
+ (reg >> 20) & 0x3,
680
+ (reg >> 14) & 0x7,
681
+ (reg >> 10) & 0xf,
682
+ (reg >> 1) & 0xf,
683
+ (reg >> 17) & 0x7,
684
+ val);
685
+
686
+ switch (reg) {
687
+ case SYSREG_OSLAR_EL1:
688
+ env->cp15.oslsr_el1 = val & 1;
689
+ break;
690
+ case SYSREG_OSDLR_EL1:
691
+ /* Dummy register */
692
+ break;
693
+ default:
694
+ cpu_synchronize_state(cpu);
695
+ trace_hvf_unhandled_sysreg_write(env->pc, reg,
696
+ (reg >> 20) & 0x3,
697
+ (reg >> 14) & 0x7,
698
+ (reg >> 10) & 0xf,
699
+ (reg >> 1) & 0xf,
700
+ (reg >> 17) & 0x7);
701
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
702
+ return 1;
703
+ }
704
+
705
+ return 0;
706
+}
707
+
708
+static int hvf_inject_interrupts(CPUState *cpu)
709
+{
710
+ if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
711
+ trace_hvf_inject_fiq();
712
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
713
+ true);
714
+ }
715
+
716
+ if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
717
+ trace_hvf_inject_irq();
718
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
719
+ true);
720
+ }
721
+
722
+ return 0;
723
+}
724
+
725
+static uint64_t hvf_vtimer_val_raw(void)
726
+{
727
+ /*
728
+ * mach_absolute_time() returns the vtimer value without the VM
729
+ * offset that we define. Add our own offset on top.
730
+ */
731
+ return mach_absolute_time() - hvf_state->vtimer_offset;
732
+}
733
+
734
+static void hvf_sync_vtimer(CPUState *cpu)
735
+{
736
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
737
+ hv_return_t r;
738
+ uint64_t ctl;
739
+ bool irq_state;
740
+
741
+ if (!cpu->hvf->vtimer_masked) {
742
+ /* We will get notified on vtimer changes by hvf, nothing to do */
743
+ return;
304
+ return;
744
+ }
305
+ }
745
+
306
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
746
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
307
+
747
+ assert_hvf_ok(r);
308
memory_region_init_io(&s->mmio, OBJECT(dev), mem_ops, s,
748
+
309
"lan9118-mmio", 0x100);
749
+ irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
310
sysbus_init_mmio(sbd, &s->mmio);
750
+ (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
311
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
751
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
752
+
753
+ if (!irq_state) {
754
+ /* Timer no longer asserting, we can unmask it */
755
+ hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
756
+ cpu->hvf->vtimer_masked = false;
757
+ }
758
+}
759
+
760
+int hvf_vcpu_exec(CPUState *cpu)
761
+{
762
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
763
+ CPUARMState *env = &arm_cpu->env;
764
+ hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
765
+ hv_return_t r;
766
+ bool advance_pc = false;
767
+
768
+ if (hvf_inject_interrupts(cpu)) {
769
+ return EXCP_INTERRUPT;
770
+ }
771
+
772
+ if (cpu->halted) {
773
+ return EXCP_HLT;
774
+ }
775
+
776
+ flush_cpu_state(cpu);
777
+
778
+ qemu_mutex_unlock_iothread();
779
+ assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
780
+
781
+ /* handle VMEXIT */
782
+ uint64_t exit_reason = hvf_exit->reason;
783
+ uint64_t syndrome = hvf_exit->exception.syndrome;
784
+ uint32_t ec = syn_get_ec(syndrome);
785
+
786
+ qemu_mutex_lock_iothread();
787
+ switch (exit_reason) {
788
+ case HV_EXIT_REASON_EXCEPTION:
789
+ /* This is the main one, handle below. */
790
+ break;
791
+ case HV_EXIT_REASON_VTIMER_ACTIVATED:
792
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
793
+ cpu->hvf->vtimer_masked = true;
794
+ return 0;
795
+ case HV_EXIT_REASON_CANCELED:
796
+ /* we got kicked, no exit to process */
797
+ return 0;
798
+ default:
799
+ assert(0);
800
+ }
801
+
802
+ hvf_sync_vtimer(cpu);
803
+
804
+ switch (ec) {
805
+ case EC_DATAABORT: {
806
+ bool isv = syndrome & ARM_EL_ISV;
807
+ bool iswrite = (syndrome >> 6) & 1;
808
+ bool s1ptw = (syndrome >> 7) & 1;
809
+ uint32_t sas = (syndrome >> 22) & 3;
810
+ uint32_t len = 1 << sas;
811
+ uint32_t srt = (syndrome >> 16) & 0x1f;
812
+ uint64_t val = 0;
813
+
814
+ trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
815
+ hvf_exit->exception.physical_address, isv,
816
+ iswrite, s1ptw, len, srt);
817
+
818
+ assert(isv);
819
+
820
+ if (iswrite) {
821
+ val = hvf_get_reg(cpu, srt);
822
+ address_space_write(&address_space_memory,
823
+ hvf_exit->exception.physical_address,
824
+ MEMTXATTRS_UNSPECIFIED, &val, len);
825
+ } else {
826
+ address_space_read(&address_space_memory,
827
+ hvf_exit->exception.physical_address,
828
+ MEMTXATTRS_UNSPECIFIED, &val, len);
829
+ hvf_set_reg(cpu, srt, val);
830
+ }
831
+
832
+ advance_pc = true;
833
+ break;
834
+ }
835
+ case EC_SYSTEMREGISTERTRAP: {
836
+ bool isread = (syndrome >> 0) & 1;
837
+ uint32_t rt = (syndrome >> 5) & 0x1f;
838
+ uint32_t reg = syndrome & SYSREG_MASK;
839
+ uint64_t val;
840
+ int ret = 0;
841
+
842
+ if (isread) {
843
+ ret = hvf_sysreg_read(cpu, reg, rt);
844
+ } else {
845
+ val = hvf_get_reg(cpu, rt);
846
+ ret = hvf_sysreg_write(cpu, reg, val);
847
+ }
848
+
849
+ advance_pc = !ret;
850
+ break;
851
+ }
852
+ case EC_WFX_TRAP:
853
+ advance_pc = true;
854
+ break;
855
+ case EC_AA64_HVC:
856
+ cpu_synchronize_state(cpu);
857
+ trace_hvf_unknown_hvc(env->xregs[0]);
858
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
859
+ env->xregs[0] = -1;
860
+ break;
861
+ case EC_AA64_SMC:
862
+ cpu_synchronize_state(cpu);
863
+ trace_hvf_unknown_smc(env->xregs[0]);
864
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
865
+ break;
866
+ default:
867
+ cpu_synchronize_state(cpu);
868
+ trace_hvf_exit(syndrome, ec, env->pc);
869
+ error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
870
+ }
871
+
872
+ if (advance_pc) {
873
+ uint64_t pc;
874
+
875
+ flush_cpu_state(cpu);
876
+
877
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
878
+ assert_hvf_ok(r);
879
+ pc += 4;
880
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
881
+ assert_hvf_ok(r);
882
+ }
883
+
884
+ return 0;
885
+}
886
+
887
+static const VMStateDescription vmstate_hvf_vtimer = {
888
+ .name = "hvf-vtimer",
889
+ .version_id = 1,
890
+ .minimum_version_id = 1,
891
+ .fields = (VMStateField[]) {
892
+ VMSTATE_UINT64(vtimer_val, HVFVTimer),
893
+ VMSTATE_END_OF_LIST()
894
+ },
895
+};
896
+
897
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
898
+{
899
+ HVFVTimer *s = opaque;
900
+
901
+ if (running) {
902
+ /* Update vtimer offset on all CPUs */
903
+ hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
904
+ cpu_synchronize_all_states();
905
+ } else {
906
+ /* Remember vtimer value on every pause */
907
+ s->vtimer_val = hvf_vtimer_val_raw();
908
+ }
909
+}
910
+
911
+int hvf_arch_init(void)
912
+{
913
+ hvf_state->vtimer_offset = mach_absolute_time();
914
+ vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
915
+ qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
916
+ return 0;
917
+}
918
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
919
index XXXXXXX..XXXXXXX 100644
920
--- a/target/i386/hvf/hvf.c
921
+++ b/target/i386/hvf/hvf.c
922
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
923
return env->apic_bus_freq != 0;
924
}
925
926
+void hvf_kick_vcpu_thread(CPUState *cpu)
927
+{
928
+ cpus_kick_thread(cpu);
929
+}
930
+
931
int hvf_arch_init(void)
932
{
933
return 0;
934
diff --git a/MAINTAINERS b/MAINTAINERS
935
index XXXXXXX..XXXXXXX 100644
936
--- a/MAINTAINERS
937
+++ b/MAINTAINERS
938
@@ -XXX,XX +XXX,XX @@ F: accel/accel-*.c
939
F: accel/Makefile.objs
940
F: accel/stubs/Makefile.objs
941
942
+Apple Silicon HVF CPUs
943
+M: Alexander Graf <agraf@csgraf.de>
944
+S: Maintained
945
+F: target/arm/hvf/
946
+
947
X86 HVF CPUs
948
M: Cameron Esfahani <dirty@apple.com>
949
M: Roman Bolshakov <r.bolshakov@yadro.com>
950
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
951
new file mode 100644
312
new file mode 100644
952
index XXXXXXX..XXXXXXX
313
index XXXXXXX..XXXXXXX
953
--- /dev/null
314
--- /dev/null
954
+++ b/target/arm/hvf/trace-events
315
+++ b/hw/net/lan9118_phy.c
955
@@ -XXX,XX +XXX,XX @@
316
@@ -XXX,XX +XXX,XX @@
956
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
317
+/*
957
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
318
+ * SMSC LAN9118 PHY emulation
958
+hvf_inject_fiq(void) "injecting FIQ"
319
+ *
959
+hvf_inject_irq(void) "injecting IRQ"
320
+ * Copyright (c) 2009 CodeSourcery, LLC.
960
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
321
+ * Written by Paul Brook
961
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
322
+ *
962
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
323
+ * This code is licensed under the GNU GPL v2
963
+hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
324
+ *
964
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
325
+ * Contributions after 2012-01-13 are licensed under the terms of the
965
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
326
+ * GNU GPL, version 2 or (at your option) any later version.
327
+ */
328
+
329
+#include "qemu/osdep.h"
330
+#include "hw/net/lan9118_phy.h"
331
+#include "hw/irq.h"
332
+#include "hw/resettable.h"
333
+#include "migration/vmstate.h"
334
+#include "qemu/log.h"
335
+
336
+#define PHY_INT_ENERGYON (1 << 7)
337
+#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
338
+#define PHY_INT_FAULT (1 << 5)
339
+#define PHY_INT_DOWN (1 << 4)
340
+#define PHY_INT_AUTONEG_LP (1 << 3)
341
+#define PHY_INT_PARFAULT (1 << 2)
342
+#define PHY_INT_AUTONEG_PAGE (1 << 1)
343
+
344
+static void lan9118_phy_update_irq(Lan9118PhyState *s)
345
+{
346
+ qemu_set_irq(s->irq, !!(s->ints & s->int_mask));
347
+}
348
+
349
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
350
+{
351
+ uint16_t val;
352
+
353
+ switch (reg) {
354
+ case 0: /* Basic Control */
355
+ return s->control;
356
+ case 1: /* Basic Status */
357
+ return s->status;
358
+ case 2: /* ID1 */
359
+ return 0x0007;
360
+ case 3: /* ID2 */
361
+ return 0xc0d1;
362
+ case 4: /* Auto-neg advertisement */
363
+ return s->advertise;
364
+ case 5: /* Auto-neg Link Partner Ability */
365
+ return 0x0f71;
366
+ case 6: /* Auto-neg Expansion */
367
+ return 1;
368
+ /* TODO 17, 18, 27, 29, 30, 31 */
369
+ case 29: /* Interrupt source. */
370
+ val = s->ints;
371
+ s->ints = 0;
372
+ lan9118_phy_update_irq(s);
373
+ return val;
374
+ case 30: /* Interrupt mask */
375
+ return s->int_mask;
376
+ default:
377
+ qemu_log_mask(LOG_GUEST_ERROR,
378
+ "lan9118_phy_read: PHY read reg %d\n", reg);
379
+ return 0;
380
+ }
381
+}
382
+
383
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
384
+{
385
+ switch (reg) {
386
+ case 0: /* Basic Control */
387
+ if (val & 0x8000) {
388
+ lan9118_phy_reset(s);
389
+ break;
390
+ }
391
+ s->control = val & 0x7980;
392
+ /* Complete autonegotiation immediately. */
393
+ if (val & 0x1000) {
394
+ s->status |= 0x0020;
395
+ }
396
+ break;
397
+ case 4: /* Auto-neg advertisement */
398
+ s->advertise = (val & 0x2d7f) | 0x80;
399
+ break;
400
+ /* TODO 17, 18, 27, 31 */
401
+ case 30: /* Interrupt mask */
402
+ s->int_mask = val & 0xff;
403
+ lan9118_phy_update_irq(s);
404
+ break;
405
+ default:
406
+ qemu_log_mask(LOG_GUEST_ERROR,
407
+ "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
408
+ }
409
+}
410
+
411
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
412
+{
413
+ s->link_down = link_down;
414
+
415
+ /* Autonegotiation status mirrors link status. */
416
+ if (link_down) {
417
+ s->status &= ~0x0024;
418
+ s->ints |= PHY_INT_DOWN;
419
+ } else {
420
+ s->status |= 0x0024;
421
+ s->ints |= PHY_INT_ENERGYON;
422
+ s->ints |= PHY_INT_AUTONEG_COMPLETE;
423
+ }
424
+ lan9118_phy_update_irq(s);
425
+}
426
+
427
+void lan9118_phy_reset(Lan9118PhyState *s)
428
+{
429
+ s->control = 0x3000;
430
+ s->status = 0x7809;
431
+ s->advertise = 0x01e1;
432
+ s->int_mask = 0;
433
+ s->ints = 0;
434
+ lan9118_phy_update_link(s, s->link_down);
435
+}
436
+
437
+static void lan9118_phy_reset_hold(Object *obj, ResetType type)
438
+{
439
+ Lan9118PhyState *s = LAN9118_PHY(obj);
440
+
441
+ lan9118_phy_reset(s);
442
+}
443
+
444
+static void lan9118_phy_init(Object *obj)
445
+{
446
+ Lan9118PhyState *s = LAN9118_PHY(obj);
447
+
448
+ qdev_init_gpio_out(DEVICE(s), &s->irq, 1);
449
+}
450
+
451
+static const VMStateDescription vmstate_lan9118_phy = {
452
+ .name = "lan9118-phy",
453
+ .version_id = 1,
454
+ .minimum_version_id = 1,
455
+ .fields = (const VMStateField[]) {
456
+ VMSTATE_UINT16(control, Lan9118PhyState),
457
+ VMSTATE_UINT16(status, Lan9118PhyState),
458
+ VMSTATE_UINT16(advertise, Lan9118PhyState),
459
+ VMSTATE_UINT16(ints, Lan9118PhyState),
460
+ VMSTATE_UINT16(int_mask, Lan9118PhyState),
461
+ VMSTATE_BOOL(link_down, Lan9118PhyState),
462
+ VMSTATE_END_OF_LIST()
463
+ }
464
+};
465
+
466
+static void lan9118_phy_class_init(ObjectClass *klass, void *data)
467
+{
468
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
469
+ DeviceClass *dc = DEVICE_CLASS(klass);
470
+
471
+ rc->phases.hold = lan9118_phy_reset_hold;
472
+ dc->vmsd = &vmstate_lan9118_phy;
473
+}
474
+
475
+static const TypeInfo types[] = {
476
+ {
477
+ .name = TYPE_LAN9118_PHY,
478
+ .parent = TYPE_SYS_BUS_DEVICE,
479
+ .instance_size = sizeof(Lan9118PhyState),
480
+ .instance_init = lan9118_phy_init,
481
+ .class_init = lan9118_phy_class_init,
482
+ }
483
+};
484
+
485
+DEFINE_TYPES(types)
486
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
487
index XXXXXXX..XXXXXXX 100644
488
--- a/hw/net/Kconfig
489
+++ b/hw/net/Kconfig
490
@@ -XXX,XX +XXX,XX @@ config VMXNET3_PCI
491
config SMC91C111
492
bool
493
494
+config LAN9118_PHY
495
+ bool
496
+
497
config LAN9118
498
bool
499
+ select LAN9118_PHY
500
select PTIMER
501
502
config NE2000_ISA
503
diff --git a/hw/net/meson.build b/hw/net/meson.build
504
index XXXXXXX..XXXXXXX 100644
505
--- a/hw/net/meson.build
506
+++ b/hw/net/meson.build
507
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMXNET3_PCI', if_true: files('vmxnet3.c'))
508
509
system_ss.add(when: 'CONFIG_SMC91C111', if_true: files('smc91c111.c'))
510
system_ss.add(when: 'CONFIG_LAN9118', if_true: files('lan9118.c'))
511
+system_ss.add(when: 'CONFIG_LAN9118_PHY', if_true: files('lan9118_phy.c'))
512
system_ss.add(when: 'CONFIG_NE2000_ISA', if_true: files('ne2000-isa.c'))
513
system_ss.add(when: 'CONFIG_OPENCORES_ETH', if_true: files('opencores_eth.c'))
514
system_ss.add(when: 'CONFIG_XGMAC', if_true: files('xgmac.c'))
--
2.20.1
--
2.34.1
diff view generated by jsdifflib
1
From: Peter Collingbourne <pcc@google.com>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Sleep on WFI until the VTIMER is due but allow ourselves to be woken
3
imx_fec models the same PHY as lan9118_phy. The code is almost the same with
4
up on IPI.
4
imx_fec having more logging and tracing. Merge these improvements into
5
lan9118_phy and reuse in imx_fec to fix the code duplication.
5
6
6
In this implementation IPI is blocked on the CPU thread at startup and
7
Some migration state now resides in the new device model, which breaks migration
7
pselect() is used to atomically unblock the signal and begin sleeping.
8
compatibility for the following machines:
8
The signal is sent unconditionally so there's no need to worry about
9
* imx25-pdk
9
races between actually sleeping and the "we think we're sleeping"
10
* sabrelite
10
state. It may lead to an extra wakeup but that's better than missing
11
* mcimx7d-sabre
11
it entirely.
12
* mcimx6ul-evk
12
13
13
Signed-off-by: Peter Collingbourne <pcc@google.com>
14
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
15
Tested-by: Guenter Roeck <linux@roeck-us.net>
15
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
17
Message-id: 20241102125724.532843-3-shentey@gmail.com
17
Message-id: 20210916155404.86958-6-agraf@csgraf.de
18
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
19
support vm stop / continue operations and cntv offsets]
20
Signed-off-by: Alexander Graf <agraf@csgraf.de>
21
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
22
Reviewed-by: Sergio Lopez <slp@redhat.com>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
19
---
25
include/sysemu/hvf_int.h | 1 +
20
include/hw/net/imx_fec.h | 9 ++-
26
accel/hvf/hvf-accel-ops.c | 5 +--
21
hw/net/imx_fec.c | 146 ++++-----------------------------------
27
target/arm/hvf/hvf.c | 79 +++++++++++++++++++++++++++++++++++++++
22
hw/net/lan9118_phy.c | 82 ++++++++++++++++------
28
3 files changed, 82 insertions(+), 3 deletions(-)
23
hw/net/Kconfig | 1 +
24
hw/net/trace-events | 10 +--
25
5 files changed, 85 insertions(+), 163 deletions(-)
29
26
30
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
27
diff --git a/include/hw/net/imx_fec.h b/include/hw/net/imx_fec.h
31
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
32
--- a/include/sysemu/hvf_int.h
29
--- a/include/hw/net/imx_fec.h
33
+++ b/include/sysemu/hvf_int.h
30
+++ b/include/hw/net/imx_fec.h
34
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
31
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(IMXFECState, IMX_FEC)
35
uint64_t fd;
32
#define TYPE_IMX_ENET "imx.enet"
36
void *exit;
33
37
bool vtimer_masked;
34
#include "hw/sysbus.h"
38
+ sigset_t unblock_ipi_mask;
35
+#include "hw/net/lan9118_phy.h"
36
+#include "hw/irq.h"
37
#include "net/net.h"
38
39
#define ENET_EIR 1
40
@@ -XXX,XX +XXX,XX @@ struct IMXFECState {
41
uint32_t tx_descriptor[ENET_TX_RING_NUM];
42
uint32_t tx_ring_num;
43
44
- uint32_t phy_status;
45
- uint32_t phy_control;
46
- uint32_t phy_advertise;
47
- uint32_t phy_int;
48
- uint32_t phy_int_mask;
49
+ Lan9118PhyState mii;
50
+ IRQState mii_irq;
51
uint32_t phy_num;
52
bool phy_connected;
53
struct IMXFECState *phy_consumer;
54
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/net/imx_fec.c
57
+++ b/hw/net/imx_fec.c
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth_txdescs = {
59
60
static const VMStateDescription vmstate_imx_eth = {
61
.name = TYPE_IMX_FEC,
62
- .version_id = 2,
63
- .minimum_version_id = 2,
64
+ .version_id = 3,
65
+ .minimum_version_id = 3,
66
.fields = (const VMStateField[]) {
67
VMSTATE_UINT32_ARRAY(regs, IMXFECState, ENET_MAX),
68
VMSTATE_UINT32(rx_descriptor, IMXFECState),
69
VMSTATE_UINT32(tx_descriptor[0], IMXFECState),
70
- VMSTATE_UINT32(phy_status, IMXFECState),
71
- VMSTATE_UINT32(phy_control, IMXFECState),
72
- VMSTATE_UINT32(phy_advertise, IMXFECState),
73
- VMSTATE_UINT32(phy_int, IMXFECState),
74
- VMSTATE_UINT32(phy_int_mask, IMXFECState),
75
VMSTATE_END_OF_LIST()
76
},
77
.subsections = (const VMStateDescription * const []) {
78
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth = {
79
},
39
};
80
};
40
81
41
void assert_hvf_ok(hv_return_t ret);
82
-#define PHY_INT_ENERGYON (1 << 7)
42
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
83
-#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
43
index XXXXXXX..XXXXXXX 100644
84
-#define PHY_INT_FAULT (1 << 5)
44
--- a/accel/hvf/hvf-accel-ops.c
85
-#define PHY_INT_DOWN (1 << 4)
45
+++ b/accel/hvf/hvf-accel-ops.c
86
-#define PHY_INT_AUTONEG_LP (1 << 3)
46
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
87
-#define PHY_INT_PARFAULT (1 << 2)
47
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
88
-#define PHY_INT_AUTONEG_PAGE (1 << 1)
48
89
-
49
/* init cpu signals */
90
static void imx_eth_update(IMXFECState *s);
50
- sigset_t set;
91
51
struct sigaction sigact;
92
/*
52
93
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
53
memset(&sigact, 0, sizeof(sigact));
94
* For now we don't handle any GPIO/interrupt line, so the OS will
54
sigact.sa_handler = dummy_signal;
95
* have to poll for the PHY status.
55
sigaction(SIG_IPI, &sigact, NULL);
96
*/
56
97
-static void imx_phy_update_irq(IMXFECState *s)
57
- pthread_sigmask(SIG_BLOCK, NULL, &set);
98
+static void imx_phy_update_irq(void *opaque, int n, int level)
58
- sigdelset(&set, SIG_IPI);
59
+ pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
60
+ sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
61
62
#ifdef __aarch64__
63
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
64
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/hvf/hvf.c
67
+++ b/target/arm/hvf/hvf.c
68
@@ -XXX,XX +XXX,XX @@
69
* QEMU Hypervisor.framework support for Apple Silicon
70
71
* Copyright 2020 Alexander Graf <agraf@csgraf.de>
72
+ * Copyright 2020 Google LLC
73
*
74
* This work is licensed under the terms of the GNU GPL, version 2 or later.
75
* See the COPYING file in the top-level directory.
76
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
77
78
void hvf_kick_vcpu_thread(CPUState *cpu)
79
{
99
{
80
+ cpus_kick_thread(cpu);
100
- imx_eth_update(s);
81
hv_vcpus_exit(&cpu->hvf->fd, 1);
101
-}
82
}
102
-
83
103
-static void imx_phy_update_link(IMXFECState *s)
84
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_vtimer_val_raw(void)
104
-{
85
return mach_absolute_time() - hvf_state->vtimer_offset;
105
- /* Autonegotiation status mirrors link status. */
86
}
106
- if (qemu_get_queue(s->nic)->link_down) {
87
107
- trace_imx_phy_update_link("down");
88
+static uint64_t hvf_vtimer_val(void)
108
- s->phy_status &= ~0x0024;
89
+{
109
- s->phy_int |= PHY_INT_DOWN;
90
+ if (!runstate_is_running()) {
110
- } else {
91
+ /* VM is paused, the vtimer value is in vtimer.vtimer_val */
111
- trace_imx_phy_update_link("up");
92
+ return vtimer.vtimer_val;
112
- s->phy_status |= 0x0024;
93
+ }
113
- s->phy_int |= PHY_INT_ENERGYON;
94
+
114
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
95
+ return hvf_vtimer_val_raw();
115
- }
96
+}
116
- imx_phy_update_irq(s);
97
+
117
+ imx_eth_update(opaque);
98
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
118
}
99
+{
119
100
+ /*
120
static void imx_eth_set_link(NetClientState *nc)
101
+ * Use pselect to sleep so that other threads can IPI us while we're
121
{
102
+ * sleeping.
122
- imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
103
+ */
123
-}
104
+ qatomic_mb_set(&cpu->thread_kicked, false);
124
-
105
+ qemu_mutex_unlock_iothread();
125
-static void imx_phy_reset(IMXFECState *s)
106
+ pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
126
-{
107
+ qemu_mutex_lock_iothread();
127
- trace_imx_phy_reset();
108
+}
128
-
109
+
129
- s->phy_status = 0x7809;
110
+static void hvf_wfi(CPUState *cpu)
130
- s->phy_control = 0x3000;
111
+{
131
- s->phy_advertise = 0x01e1;
112
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
132
- s->phy_int_mask = 0;
113
+ struct timespec ts;
133
- s->phy_int = 0;
114
+ hv_return_t r;
134
- imx_phy_update_link(s);
115
+ uint64_t ctl;
135
+ lan9118_phy_update_link(&IMX_FEC(qemu_get_nic_opaque(nc))->mii,
116
+ uint64_t cval;
136
+ nc->link_down);
117
+ int64_t ticks_to_sleep;
137
}
118
+ uint64_t seconds;
138
119
+ uint64_t nanos;
139
static uint32_t imx_phy_read(IMXFECState *s, int reg)
120
+ uint32_t cntfrq;
140
{
121
+
141
- uint32_t val;
122
+ if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
142
uint32_t phy = reg / 32;
123
+ /* Interrupt pending, no need to wait */
143
144
if (!s->phy_connected) {
145
@@ -XXX,XX +XXX,XX @@ static uint32_t imx_phy_read(IMXFECState *s, int reg)
146
147
reg %= 32;
148
149
- switch (reg) {
150
- case 0: /* Basic Control */
151
- val = s->phy_control;
152
- break;
153
- case 1: /* Basic Status */
154
- val = s->phy_status;
155
- break;
156
- case 2: /* ID1 */
157
- val = 0x0007;
158
- break;
159
- case 3: /* ID2 */
160
- val = 0xc0d1;
161
- break;
162
- case 4: /* Auto-neg advertisement */
163
- val = s->phy_advertise;
164
- break;
165
- case 5: /* Auto-neg Link Partner Ability */
166
- val = 0x0f71;
167
- break;
168
- case 6: /* Auto-neg Expansion */
169
- val = 1;
170
- break;
171
- case 29: /* Interrupt source. */
172
- val = s->phy_int;
173
- s->phy_int = 0;
174
- imx_phy_update_irq(s);
175
- break;
176
- case 30: /* Interrupt mask */
177
- val = s->phy_int_mask;
178
- break;
179
- case 17:
180
- case 18:
181
- case 27:
182
- case 31:
183
- qemu_log_mask(LOG_UNIMP, "[%s.phy]%s: reg %d not implemented\n",
184
- TYPE_IMX_FEC, __func__, reg);
185
- val = 0;
186
- break;
187
- default:
188
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
189
- TYPE_IMX_FEC, __func__, reg);
190
- val = 0;
191
- break;
192
- }
193
-
194
- trace_imx_phy_read(val, phy, reg);
195
-
196
- return val;
197
+ return lan9118_phy_read(&s->mii, reg);
198
}
199
200
static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
201
@@ -XXX,XX +XXX,XX @@ static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
202
203
reg %= 32;
204
205
- trace_imx_phy_write(val, phy, reg);
206
-
207
- switch (reg) {
208
- case 0: /* Basic Control */
209
- if (val & 0x8000) {
210
- imx_phy_reset(s);
211
- } else {
212
- s->phy_control = val & 0x7980;
213
- /* Complete autonegotiation immediately. */
214
- if (val & 0x1000) {
215
- s->phy_status |= 0x0020;
216
- }
217
- }
218
- break;
219
- case 4: /* Auto-neg advertisement */
220
- s->phy_advertise = (val & 0x2d7f) | 0x80;
221
- break;
222
- case 30: /* Interrupt mask */
223
- s->phy_int_mask = val & 0xff;
224
- imx_phy_update_irq(s);
225
- break;
226
- case 17:
227
- case 18:
228
- case 27:
229
- case 31:
230
- qemu_log_mask(LOG_UNIMP, "[%s.phy)%s: reg %d not implemented\n",
231
- TYPE_IMX_FEC, __func__, reg);
232
- break;
233
- default:
234
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
235
- TYPE_IMX_FEC, __func__, reg);
236
- break;
237
- }
238
+ lan9118_phy_write(&s->mii, reg, val);
239
}
240
241
static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
242
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
243
244
s->rx_descriptor = 0;
245
memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
246
-
247
- /* We also reset the PHY */
248
- imx_phy_reset(s);
249
}
250
251
static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
252
@@ -XXX,XX +XXX,XX @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
253
sysbus_init_irq(sbd, &s->irq[0]);
254
sysbus_init_irq(sbd, &s->irq[1]);
255
256
+ qemu_init_irq(&s->mii_irq, imx_phy_update_irq, s, 0);
257
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
258
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
124
+ return;
259
+ return;
125
+ }
260
+ }
126
+
261
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
127
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
262
+
128
+ assert_hvf_ok(r);
263
qemu_macaddr_default_if_unset(&s->conf.macaddr);
129
+
264
130
+ if (!(ctl & 1) || (ctl & 2)) {
265
s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
131
+ /* Timer disabled or masked, just wait for an IPI. */
266
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
132
+ hvf_wait_for_ipi(cpu, NULL);
267
index XXXXXXX..XXXXXXX 100644
133
+ return;
268
--- a/hw/net/lan9118_phy.c
134
+ }
269
+++ b/hw/net/lan9118_phy.c
135
+
270
@@ -XXX,XX +XXX,XX @@
136
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
271
* Copyright (c) 2009 CodeSourcery, LLC.
137
+ assert_hvf_ok(r);
272
* Written by Paul Brook
138
+
273
*
139
+ ticks_to_sleep = cval - hvf_vtimer_val();
274
+ * Copyright (c) 2013 Jean-Christophe Dubois. <jcd@tribudubois.net>
140
+ if (ticks_to_sleep < 0) {
275
+ *
141
+ return;
276
* This code is licensed under the GNU GPL v2
142
+ }
277
*
143
+
278
* Contributions after 2012-01-13 are licensed under the terms of the
144
+ cntfrq = gt_cntfrq_period_ns(arm_cpu);
279
@@ -XXX,XX +XXX,XX @@
145
+ seconds = muldiv64(ticks_to_sleep, cntfrq, NANOSECONDS_PER_SECOND);
280
#include "hw/resettable.h"
146
+ ticks_to_sleep -= muldiv64(seconds, NANOSECONDS_PER_SECOND, cntfrq);
281
#include "migration/vmstate.h"
147
+ nanos = ticks_to_sleep * cntfrq;
282
#include "qemu/log.h"
148
+
283
+#include "trace.h"
149
+ /*
284
150
+ * Don't sleep for less than the time a context switch would take,
285
#define PHY_INT_ENERGYON (1 << 7)
151
+ * so that we can satisfy fast timer requests on the same CPU.
286
#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
152
+ * Measurements on M1 show the sweet spot to be ~2ms.
287
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
153
+ */
288
154
+ if (!seconds && nanos < (2 * SCALE_MS)) {
289
switch (reg) {
155
+ return;
290
case 0: /* Basic Control */
156
+ }
291
- return s->control;
157
+
292
+ val = s->control;
158
+ ts = (struct timespec) { seconds, nanos };
293
+ break;
159
+ hvf_wait_for_ipi(cpu, &ts);
294
case 1: /* Basic Status */
160
+}
295
- return s->status;
161
+
296
+ val = s->status;
162
static void hvf_sync_vtimer(CPUState *cpu)
297
+ break;
298
case 2: /* ID1 */
299
- return 0x0007;
300
+ val = 0x0007;
301
+ break;
302
case 3: /* ID2 */
303
- return 0xc0d1;
304
+ val = 0xc0d1;
305
+ break;
306
case 4: /* Auto-neg advertisement */
307
- return s->advertise;
308
+ val = s->advertise;
309
+ break;
310
case 5: /* Auto-neg Link Partner Ability */
311
- return 0x0f71;
312
+ val = 0x0f71;
313
+ break;
314
case 6: /* Auto-neg Expansion */
315
- return 1;
316
- /* TODO 17, 18, 27, 29, 30, 31 */
317
+ val = 1;
318
+ break;
319
case 29: /* Interrupt source. */
320
val = s->ints;
321
s->ints = 0;
322
lan9118_phy_update_irq(s);
323
- return val;
324
+ break;
325
case 30: /* Interrupt mask */
326
- return s->int_mask;
327
+ val = s->int_mask;
328
+ break;
329
+ case 17:
330
+ case 18:
331
+ case 27:
332
+ case 31:
333
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
334
+ __func__, reg);
335
+ val = 0;
336
+ break;
337
default:
338
- qemu_log_mask(LOG_GUEST_ERROR,
339
- "lan9118_phy_read: PHY read reg %d\n", reg);
340
- return 0;
341
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
342
+ __func__, reg);
343
+ val = 0;
344
+ break;
345
}
346
+
347
+ trace_lan9118_phy_read(val, reg);
348
+
349
+ return val;
350
}
351
352
void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
163
{
353
{
164
ARMCPU *arm_cpu = ARM_CPU(cpu);
354
+ trace_lan9118_phy_write(val, reg);
165
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
355
+
356
switch (reg) {
357
case 0: /* Basic Control */
358
if (val & 0x8000) {
359
lan9118_phy_reset(s);
360
- break;
361
- }
362
- s->control = val & 0x7980;
363
- /* Complete autonegotiation immediately. */
364
- if (val & 0x1000) {
365
- s->status |= 0x0020;
366
+ } else {
367
+ s->control = val & 0x7980;
368
+ /* Complete autonegotiation immediately. */
369
+ if (val & 0x1000) {
370
+ s->status |= 0x0020;
371
+ }
372
}
373
break;
374
case 4: /* Auto-neg advertisement */
375
s->advertise = (val & 0x2d7f) | 0x80;
376
break;
377
- /* TODO 17, 18, 27, 31 */
378
case 30: /* Interrupt mask */
379
s->int_mask = val & 0xff;
380
lan9118_phy_update_irq(s);
381
break;
382
+ case 17:
383
+ case 18:
384
+ case 27:
385
+ case 31:
386
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
387
+ __func__, reg);
388
+ break;
389
default:
390
- qemu_log_mask(LOG_GUEST_ERROR,
391
- "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
392
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
393
+ __func__, reg);
394
+ break;
166
}
395
}
167
case EC_WFX_TRAP:
396
}
168
advance_pc = true;
397
169
+ if (!(syndrome & WFX_IS_WFE)) {
398
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
170
+ hvf_wfi(cpu);
399
171
+ }
400
/* Autonegotiation status mirrors link status. */
172
break;
401
if (link_down) {
173
case EC_AA64_HVC:
402
+ trace_lan9118_phy_update_link("down");
174
cpu_synchronize_state(cpu);
403
s->status &= ~0x0024;
404
s->ints |= PHY_INT_DOWN;
405
} else {
406
+ trace_lan9118_phy_update_link("up");
407
s->status |= 0x0024;
408
s->ints |= PHY_INT_ENERGYON;
409
s->ints |= PHY_INT_AUTONEG_COMPLETE;
410
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
411
412
void lan9118_phy_reset(Lan9118PhyState *s)
413
{
414
+ trace_lan9118_phy_reset();
415
+
416
s->control = 0x3000;
417
s->status = 0x7809;
418
s->advertise = 0x01e1;
419
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_phy = {
420
.version_id = 1,
421
.minimum_version_id = 1,
422
.fields = (const VMStateField[]) {
423
- VMSTATE_UINT16(control, Lan9118PhyState),
424
VMSTATE_UINT16(status, Lan9118PhyState),
425
+ VMSTATE_UINT16(control, Lan9118PhyState),
426
VMSTATE_UINT16(advertise, Lan9118PhyState),
427
VMSTATE_UINT16(ints, Lan9118PhyState),
428
VMSTATE_UINT16(int_mask, Lan9118PhyState),
429
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
430
index XXXXXXX..XXXXXXX 100644
431
--- a/hw/net/Kconfig
432
+++ b/hw/net/Kconfig
433
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_SUN8I_EMAC
434
435
config IMX_FEC
436
bool
437
+ select LAN9118_PHY
438
439
config CADENCE
440
bool
441
diff --git a/hw/net/trace-events b/hw/net/trace-events
442
index XXXXXXX..XXXXXXX 100644
443
--- a/hw/net/trace-events
444
+++ b/hw/net/trace-events
445
@@ -XXX,XX +XXX,XX @@ allwinner_sun8i_emac_set_link(bool active) "Set link: active=%u"
446
allwinner_sun8i_emac_read(uint64_t offset, uint64_t val) "MMIO read: offset=0x%" PRIx64 " value=0x%" PRIx64
447
allwinner_sun8i_emac_write(uint64_t offset, uint64_t val) "MMIO write: offset=0x%" PRIx64 " value=0x%" PRIx64
448
449
+# lan9118_phy.c
450
+lan9118_phy_read(uint16_t val, int reg) "[0x%02x] -> 0x%04" PRIx16
451
+lan9118_phy_write(uint16_t val, int reg) "[0x%02x] <- 0x%04" PRIx16
452
+lan9118_phy_update_link(const char *s) "%s"
453
+lan9118_phy_reset(void) ""
454
+
455
# lance.c
456
lance_mem_readw(uint64_t addr, uint32_t ret) "addr=0x%"PRIx64"val=0x%04x"
457
lance_mem_writew(uint64_t addr, uint32_t val) "addr=0x%"PRIx64"val=0x%04x"
458
@@ -XXX,XX +XXX,XX @@ i82596_set_multicast(uint16_t count) "Added %d multicast entries"
459
i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"
460
461
# imx_fec.c
462
-imx_phy_read(uint32_t val, int phy, int reg) "0x%04"PRIx32" <= phy[%d].reg[%d]"
463
imx_phy_read_num(int phy, int configured) "read request from unconfigured phy %d (configured %d)"
464
-imx_phy_write(uint32_t val, int phy, int reg) "0x%04"PRIx32" => phy[%d].reg[%d]"
465
imx_phy_write_num(int phy, int configured) "write request to unconfigured phy %d (configured %d)"
466
-imx_phy_update_link(const char *s) "%s"
467
-imx_phy_reset(void) ""
468
imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
469
imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
470
imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
--
2.20.1
--
2.34.1
diff view generated by jsdifflib
New patch
From: Bernhard Beschow <shentey@gmail.com>

Turns 0x70 into 0xe0 (== 0x70 << 1), which adds the missing MII_ANLPAR_TX and
fixes the MSB of the selector field to be zero, as specified in the datasheet.

Fixes: 2a424990170b "LAN9118 emulation"
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-4-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
         val = s->advertise;
         break;
     case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0f71;
+        val = 0x0fe1;
         break;
     case 6: /* Auto-neg Expansion */
         val = 1;
--
2.34.1
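For reference, the old and new Link Partner Ability values decompose as follows
against the standard ANLPAR bit layout that include/hw/net/mii.h mirrors. This
is a standalone check, not part of the patch; the bit values are restated
locally so it compiles outside the QEMU tree:

/* Sanity-check the old and new ANLPAR constants against the standard
 * IEEE 802.3 ANLPAR bit positions (restated here for illustration).
 */
#include <assert.h>

#define ANLPAR_PAUSEASY (1 << 11)
#define ANLPAR_PAUSE    (1 << 10)
#define ANLPAR_T4       (1 << 9)
#define ANLPAR_TXFD     (1 << 8)
#define ANLPAR_TX       (1 << 7)   /* 100BASE-TX half duplex */
#define ANLPAR_10FD     (1 << 6)
#define ANLPAR_10       (1 << 5)
#define ANLPAR_CSMACD   (1 << 0)   /* selector field = 802.3 */

int main(void)
{
    /* New value: all abilities plus a valid selector field */
    assert(0x0fe1 == (ANLPAR_PAUSEASY | ANLPAR_PAUSE | ANLPAR_T4 |
                      ANLPAR_TXFD | ANLPAR_TX | ANLPAR_10FD |
                      ANLPAR_10 | ANLPAR_CSMACD));
    /* Old value: bit 7 (TX) missing, stray bit 4 set in the selector */
    assert((0x0f71 & ANLPAR_TX) == 0);
    assert((0x0f71 & (1 << 4)) != 0);
    return 0;
}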
diff view generated by jsdifflib
New patch
1
From: Bernhard Beschow <shentey@gmail.com>
1
2
3
Prefer named constants over magic values for better readability.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
7
Tested-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20241102125724.532843-5-shentey@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/net/mii.h | 6 +++++
12
hw/net/lan9118_phy.c | 63 ++++++++++++++++++++++++++++----------------
13
2 files changed, 46 insertions(+), 23 deletions(-)
14
15
diff --git a/include/hw/net/mii.h b/include/hw/net/mii.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/net/mii.h
18
+++ b/include/hw/net/mii.h
19
@@ -XXX,XX +XXX,XX @@
20
#define MII_BMSR_JABBER (1 << 1) /* Jabber detected */
21
#define MII_BMSR_EXTCAP (1 << 0) /* Ext-reg capability */
22
23
+#define MII_ANAR_RFAULT (1 << 13) /* Say we can detect faults */
24
#define MII_ANAR_PAUSE_ASYM (1 << 11) /* Try for asymmetric pause */
25
#define MII_ANAR_PAUSE (1 << 10) /* Try for pause */
26
#define MII_ANAR_TXFD (1 << 8)
27
@@ -XXX,XX +XXX,XX @@
28
#define MII_ANAR_10FD (1 << 6)
29
#define MII_ANAR_10 (1 << 5)
30
#define MII_ANAR_CSMACD (1 << 0)
31
+#define MII_ANAR_SELECT (0x001f) /* Selector bits */
32
33
#define MII_ANLPAR_ACK (1 << 14)
34
#define MII_ANLPAR_PAUSEASY (1 << 11) /* can pause asymmetrically */
35
@@ -XXX,XX +XXX,XX @@
36
#define RTL8201CP_PHYID1 0x0000
37
#define RTL8201CP_PHYID2 0x8201
38
39
+/* SMSC LAN9118 */
40
+#define SMSCLAN9118_PHYID1 0x0007
41
+#define SMSCLAN9118_PHYID2 0xc0d1
42
+
43
/* RealTek 8211E */
44
#define RTL8211E_PHYID1 0x001c
45
#define RTL8211E_PHYID2 0xc915
46
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/net/lan9118_phy.c
49
+++ b/hw/net/lan9118_phy.c
50
@@ -XXX,XX +XXX,XX @@
51
52
#include "qemu/osdep.h"
53
#include "hw/net/lan9118_phy.h"
54
+#include "hw/net/mii.h"
55
#include "hw/irq.h"
56
#include "hw/resettable.h"
57
#include "migration/vmstate.h"
58
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
59
uint16_t val;
60
61
switch (reg) {
62
- case 0: /* Basic Control */
63
+ case MII_BMCR:
64
val = s->control;
65
break;
66
- case 1: /* Basic Status */
67
+ case MII_BMSR:
68
val = s->status;
69
break;
70
- case 2: /* ID1 */
71
- val = 0x0007;
72
+ case MII_PHYID1:
73
+ val = SMSCLAN9118_PHYID1;
74
break;
75
- case 3: /* ID2 */
76
- val = 0xc0d1;
77
+ case MII_PHYID2:
78
+ val = SMSCLAN9118_PHYID2;
79
break;
80
- case 4: /* Auto-neg advertisement */
81
+ case MII_ANAR:
82
val = s->advertise;
83
break;
84
- case 5: /* Auto-neg Link Partner Ability */
85
- val = 0x0fe1;
86
+ case MII_ANLPAR:
87
+ val = MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
88
+ MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
89
+ MII_ANLPAR_10 | MII_ANLPAR_CSMACD;
90
break;
91
- case 6: /* Auto-neg Expansion */
92
- val = 1;
93
+ case MII_ANER:
94
+ val = MII_ANER_NWAY;
95
break;
96
case 29: /* Interrupt source. */
97
val = s->ints;
98
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
99
trace_lan9118_phy_write(val, reg);
100
101
switch (reg) {
102
- case 0: /* Basic Control */
103
- if (val & 0x8000) {
104
+ case MII_BMCR:
105
+ if (val & MII_BMCR_RESET) {
106
lan9118_phy_reset(s);
107
} else {
108
- s->control = val & 0x7980;
109
+ s->control = val & (MII_BMCR_LOOPBACK | MII_BMCR_SPEED100 |
110
+ MII_BMCR_AUTOEN | MII_BMCR_PDOWN | MII_BMCR_FD |
111
+ MII_BMCR_CTST);
112
/* Complete autonegotiation immediately. */
113
- if (val & 0x1000) {
114
- s->status |= 0x0020;
115
+ if (val & MII_BMCR_AUTOEN) {
116
+ s->status |= MII_BMSR_AN_COMP;
117
}
118
}
119
break;
120
- case 4: /* Auto-neg advertisement */
121
- s->advertise = (val & 0x2d7f) | 0x80;
122
+ case MII_ANAR:
123
+ s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
124
+ MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
125
+ MII_ANAR_SELECT))
126
+ | MII_ANAR_TX;
127
break;
128
case 30: /* Interrupt mask */
129
s->int_mask = val & 0xff;
130
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
131
/* Autonegotiation status mirrors link status. */
132
if (link_down) {
133
trace_lan9118_phy_update_link("down");
134
- s->status &= ~0x0024;
135
+ s->status &= ~(MII_BMSR_AN_COMP | MII_BMSR_LINK_ST);
136
s->ints |= PHY_INT_DOWN;
137
} else {
138
trace_lan9118_phy_update_link("up");
139
- s->status |= 0x0024;
140
+ s->status |= MII_BMSR_AN_COMP | MII_BMSR_LINK_ST;
141
s->ints |= PHY_INT_ENERGYON;
142
s->ints |= PHY_INT_AUTONEG_COMPLETE;
143
}
144
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_reset(Lan9118PhyState *s)
145
{
146
trace_lan9118_phy_reset();
147
148
- s->control = 0x3000;
149
- s->status = 0x7809;
150
- s->advertise = 0x01e1;
151
+ s->control = MII_BMCR_AUTOEN | MII_BMCR_SPEED100;
152
+ s->status = MII_BMSR_100TX_FD
153
+ | MII_BMSR_100TX_HD
154
+ | MII_BMSR_10T_FD
155
+ | MII_BMSR_10T_HD
156
+ | MII_BMSR_AUTONEG
157
+ | MII_BMSR_EXTCAP;
158
+ s->advertise = MII_ANAR_TXFD
159
+ | MII_ANAR_TX
160
+ | MII_ANAR_10FD
161
+ | MII_ANAR_10
162
+ | MII_ANAR_CSMACD;
163
s->int_mask = 0;
164
s->ints = 0;
165
lan9118_phy_update_link(s, s->link_down);
166
--
167
2.34.1
diff view generated by jsdifflib
New patch
From: Bernhard Beschow <shentey@gmail.com>

The real device advertises this mode and the device model already advertises
100 mbps half duplex and 10 mbps full+half duplex. So advertise this mode to
make the model more realistic.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20241102125724.532843-6-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
         break;
     case MII_ANAR:
         s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
-                               MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
-                               MII_ANAR_SELECT))
+                               MII_ANAR_PAUSE | MII_ANAR_TXFD | MII_ANAR_10FD |
+                               MII_ANAR_10 | MII_ANAR_SELECT))
            | MII_ANAR_TX;
         break;
     case 30: /* Interrupt mask */
--
2.34.1
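Worth noting: with MII_ANAR_TXFD included, the writable mask spelled out in
named constants is bit-for-bit the 0x2d7f magic number the pre-conversion
LAN9118 code masked with, and the always-set MII_ANAR_TX is the old "| 0x80".
A standalone check, with the bit values restated locally for illustration
(mirroring include/hw/net/mii.h):

#include <assert.h>

#define ANAR_RFAULT     (1 << 13)
#define ANAR_PAUSE_ASYM (1 << 11)
#define ANAR_PAUSE      (1 << 10)
#define ANAR_TXFD       (1 << 8)
#define ANAR_TX         (1 << 7)
#define ANAR_10FD       (1 << 6)
#define ANAR_10         (1 << 5)
#define ANAR_SELECT     0x001f

int main(void)
{
    /* Writable-bit mask now equals the original magic value... */
    assert((ANAR_RFAULT | ANAR_PAUSE_ASYM | ANAR_PAUSE | ANAR_TXFD |
            ANAR_10FD | ANAR_10 | ANAR_SELECT) == 0x2d7f);
    /* ...and the forced-on bit equals the original "| 0x80". */
    assert(ANAR_TX == 0x80);
    return 0;
}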
diff view generated by jsdifflib
New patch
1
For IEEE fused multiply-add, the (0 * inf) + NaN case should raise
2
Invalid for the multiplication of 0 by infinity. Currently we handle
3
this in the per-architecture ifdef ladder in pickNaNMulAdd().
4
However, since this isn't really architecture specific we can hoist
5
it up to the generic code.
1
6
7
For the cases where the infzero test in pickNaNMulAdd was
8
returning 2, we can delete the check entirely and allow the
9
code to fall into the normal pick-a-NaN handling, because this
10
will return 2 anyway (input 'c' being the only NaN in this case).
11
For the cases where infzero was returning 3 to indicate "return
12
the default NaN", we must retain that "return 3".
13
14
For Arm, this looks like it might be a behaviour change because we
15
used to set float_flag_invalid | float_flag_invalid_imz only if C is
16
a quiet NaN. However, it is not, because Arm target code never looks
17
at float_flag_invalid_imz, and for the (0 * inf) + SNaN case we
18
already raised float_flag_invalid via the "abc_mask &
19
float_cmask_snan" check in pick_nan_muladd.
20
21
For any target architecture using the "default implementation" at the
22
bottom of the ifdef, this is a behaviour change but will be fixing a
23
bug (where we failed to raise the Invalid exception for (0 * inf +
24
QNaN). The architectures using the default case are:
25
* hppa
26
* i386
27
* sh4
28
* tricore
29
30
The x86, Tricore and SH4 CPU architecture manuals are clear that this
31
should have raised Invalid; HPPA is a bit vaguer but still seems
32
clear enough.
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20241202131347.498124-2-peter.maydell@linaro.org
37
---
38
fpu/softfloat-parts.c.inc | 13 +++++++------
39
fpu/softfloat-specialize.c.inc | 29 +----------------------------
40
2 files changed, 8 insertions(+), 34 deletions(-)
41
42
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
43
index XXXXXXX..XXXXXXX 100644
44
--- a/fpu/softfloat-parts.c.inc
45
+++ b/fpu/softfloat-parts.c.inc
46
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
47
int ab_mask, int abc_mask)
48
{
49
int which;
50
+ bool infzero = (ab_mask == float_cmask_infzero);
51
52
if (unlikely(abc_mask & float_cmask_snan)) {
53
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
54
}
55
56
- which = pickNaNMulAdd(a->cls, b->cls, c->cls,
57
- ab_mask == float_cmask_infzero, s);
58
+ if (infzero) {
59
+ /* This is (0 * inf) + NaN or (inf * 0) + NaN */
60
+ float_raise(float_flag_invalid | float_flag_invalid_imz, s);
61
+ }
62
+
63
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
64
65
if (s->default_nan_mode || which == 3) {
66
- /*
67
- * Note that this check is after pickNaNMulAdd so that function
68
- * has an opportunity to set the Invalid flag for infzero.
69
- */
70
parts_default_nan(a, s);
71
return a;
72
}
73
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
74
index XXXXXXX..XXXXXXX 100644
75
--- a/fpu/softfloat-specialize.c.inc
76
+++ b/fpu/softfloat-specialize.c.inc
77
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
78
* the default NaN
79
*/
80
if (infzero && is_qnan(c_cls)) {
81
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
82
return 3;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
86
* case sets InvalidOp and returns the default NaN
87
*/
88
if (infzero) {
89
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
90
return 3;
91
}
92
/* Prefer sNaN over qNaN, in the a, b, c order. */
93
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
94
* For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
95
* case sets InvalidOp and returns the input value 'c'
96
*/
97
- if (infzero) {
98
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
99
- return 2;
100
- }
101
/* Prefer sNaN over qNaN, in the c, a, b order. */
102
if (is_snan(c_cls)) {
103
return 2;
104
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
105
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
106
* case sets InvalidOp and returns the input value 'c'
107
*/
108
- if (infzero) {
109
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
110
- return 2;
111
- }
112
+
113
/* Prefer sNaN over qNaN, in the c, a, b order. */
114
if (is_snan(c_cls)) {
115
return 2;
116
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
117
* to return an input NaN if we have one (ie c) rather than generating
118
* a default NaN
119
*/
120
- if (infzero) {
121
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
122
- return 2;
123
- }
124
125
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
126
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
127
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
128
return 1;
129
}
130
#elif defined(TARGET_RISCV)
131
- /* For RISC-V, InvalidOp is set when multiplicands are Inf and zero */
132
- if (infzero) {
133
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
134
- }
135
return 3; /* default NaN */
136
#elif defined(TARGET_S390X)
137
if (infzero) {
138
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
139
return 3;
140
}
141
142
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
143
return 2;
144
}
145
#elif defined(TARGET_SPARC)
146
- /* For (inf,0,nan) return c. */
147
- if (infzero) {
148
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
149
- return 2;
150
- }
151
/* Prefer SNaN over QNaN, order C, B, A. */
152
if (is_snan(c_cls)) {
153
return 2;
154
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
155
* For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
156
* an input NaN if we have one (ie c).
157
*/
158
- if (infzero) {
159
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
160
- return 2;
161
- }
162
if (status->use_first_nan) {
163
if (is_nan(a_cls)) {
164
return 0;
165
--
166
2.34.1
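Whether a given FPU raises Invalid for 0 * Inf + qNaN is exactly the point on
which architectures diverge here. A standalone host probe (not part of the
series, plain C99 <fenv.h>/<math.h>) just reports what the host does, without
asserting either answer:

/* Probe the host's fused multiply-add behaviour for 0 * Inf + qNaN. */
#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);
    volatile double r = fma(0.0, INFINITY, NAN);
    printf("result: %f, Invalid raised: %s\n", r,
           fetestexcept(FE_INVALID) ? "yes" : "no");
    return 0;
}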
diff view generated by jsdifflib
New patch
If the target sets default_nan_mode then we're always going to return
the default NaN, and pickNaNMulAdd() no longer has any side effects.
For consistency with pickNaN(), check for default_nan_mode before
calling pickNaNMulAdd().

When we convert pickNaNMulAdd() to allow runtime selection of the NaN
propagation rule, this means we won't have to make the targets which
use default_nan_mode also set a propagation rule.

Since RISC-V always uses default_nan_mode, this allows us to remove
its ifdef case from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-3-peter.maydell@linaro.org
---
 fpu/softfloat-parts.c.inc      | 8 ++++++--
 fpu/softfloat-specialize.c.inc | 9 +++++++--
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
         float_raise(float_flag_invalid | float_flag_invalid_imz, s);
     }

-    which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+    if (s->default_nan_mode) {
+        which = 3;
+    } else {
+        which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+    }

-    if (s->default_nan_mode || which == 3) {
+    if (which == 3) {
         parts_default_nan(a, s);
         return a;
     }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
 static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
                          bool infzero, float_status *status)
 {
+    /*
+     * We guarantee not to require the target to tell us how to
+     * pick a NaN if we're always returning the default NaN.
+     * But if we're not in default-NaN mode then the target must
+     * specify.
+     */
+    assert(!status->default_nan_mode);
 #if defined(TARGET_ARM)
     /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
      * the default NaN
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     } else {
         return 1;
     }
-#elif defined(TARGET_RISCV)
-    return 3; /* default NaN */
 #elif defined(TARGET_S390X)
     if (infzero) {
         return 3;
--
2.34.1
diff view generated by jsdifflib
New patch
1
1
IEEE 754 does not define a fixed rule for what NaN to return in
2
the case of a fused multiply-add of inf * 0 + NaN. Different
3
architectures thus do different things:
4
* some return the default NaN
5
* some return the input NaN
6
* Arm returns the default NaN if the input NaN is quiet,
7
and the input NaN if it is signalling
8
9
We want to make this logic be runtime selected rather than
10
hardcoded into the binary, because:
11
* this will let us have multiple targets in one QEMU binary
12
* the Arm FEAT_AFP architectural feature includes letting
13
the guest select a NaN propagation rule at runtime
14
15
In this commit we add an enum for the propagation rule, the field in
16
float_status, and the corresponding getters and setters. We change
17
pickNaNMulAdd to honour this, but because all targets still leave
18
this field at its default 0 value, the fallback logic will pick the
19
rule type with the old ifdef ladder.
20
21
Note that four architectures both use the muladd softfloat functions
22
and did not have a branch of the ifdef ladder to specify their
23
behaviour (and so were ending up with the "default" case, probably
24
wrongly): i386, HPPA, SH4 and Tricore. SH4 and Tricore both set
25
default_nan_mode, and so will never get into pickNaNMulAdd(). For
26
HPPA and i386 we retain the same behaviour as the old default-case,
27
which is to not ever return the default NaN. This might not be
28
correct but it is not a behaviour change.
29
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
32
Message-id: 20241202131347.498124-4-peter.maydell@linaro.org
33
---
34
include/fpu/softfloat-helpers.h | 11 ++++
35
include/fpu/softfloat-types.h | 23 +++++++++
36
fpu/softfloat-specialize.c.inc | 91 ++++++++++++++++++++++-----------
37
3 files changed, 95 insertions(+), 30 deletions(-)
38
39
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
40
index XXXXXXX..XXXXXXX 100644
41
--- a/include/fpu/softfloat-helpers.h
42
+++ b/include/fpu/softfloat-helpers.h
43
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
44
status->float_2nan_prop_rule = rule;
45
}
46
47
+static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
+ float_status *status)
49
+{
50
+ status->float_infzeronan_rule = rule;
51
+}
52
+
53
static inline void set_flush_to_zero(bool val, float_status *status)
54
{
55
status->flush_to_zero = val;
56
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
57
return status->float_2nan_prop_rule;
58
}
59
60
+static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
61
+{
62
+ return status->float_infzeronan_rule;
63
+}
64
+
65
static inline bool get_flush_to_zero(float_status *status)
66
{
67
return status->flush_to_zero;
68
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
69
index XXXXXXX..XXXXXXX 100644
70
--- a/include/fpu/softfloat-types.h
71
+++ b/include/fpu/softfloat-types.h
72
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
73
float_2nan_prop_x87,
74
} Float2NaNPropRule;
75
76
+/*
77
+ * Rule for result of fused multiply-add 0 * Inf + NaN.
78
+ * This must be a NaN, but implementations differ on whether this
79
+ * is the input NaN or the default NaN.
80
+ *
81
+ * You don't need to set this if default_nan_mode is enabled.
82
+ * When not in default-NaN mode, it is an error for the target
83
+ * not to set the rule in float_status if it uses muladd, and we
84
+ * will assert if we need to handle an input NaN and no rule was
85
+ * selected.
86
+ */
87
+typedef enum __attribute__((__packed__)) {
88
+ /* No propagation rule specified */
89
+ float_infzeronan_none = 0,
90
+ /* Result is never the default NaN (so always the input NaN) */
91
+ float_infzeronan_dnan_never,
92
+ /* Result is always the default NaN */
93
+ float_infzeronan_dnan_always,
94
+ /* Result is the default NaN if the input NaN is quiet */
95
+ float_infzeronan_dnan_if_qnan,
96
+} FloatInfZeroNaNRule;
97
+
98
/*
99
* Floating Point Status. Individual architectures may maintain
100
* several versions of float_status for different functions. The
101
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
102
FloatRoundMode float_rounding_mode;
103
FloatX80RoundPrec floatx80_rounding_precision;
104
Float2NaNPropRule float_2nan_prop_rule;
105
+ FloatInfZeroNaNRule float_infzeronan_rule;
106
bool tininess_before_rounding;
107
/* should denormalised results go to zero and set the inexact flag? */
108
bool flush_to_zero;
109
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
110
index XXXXXXX..XXXXXXX 100644
111
--- a/fpu/softfloat-specialize.c.inc
112
+++ b/fpu/softfloat-specialize.c.inc
113
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
114
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
115
bool infzero, float_status *status)
116
{
117
+ FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
118
+
119
/*
120
* We guarantee not to require the target to tell us how to
121
* pick a NaN if we're always returning the default NaN.
122
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
123
* specify.
124
*/
125
assert(!status->default_nan_mode);
126
+
127
+ if (rule == float_infzeronan_none) {
128
+ /*
129
+ * Temporarily fall back to ifdef ladder
130
+ */
131
#if defined(TARGET_ARM)
132
- /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
133
- * the default NaN
134
- */
135
- if (infzero && is_qnan(c_cls)) {
136
- return 3;
137
+ /*
138
+ * For ARM, the (inf,zero,qnan) case returns the default NaN,
139
+ * but (inf,zero,snan) returns the input NaN.
140
+ */
141
+ rule = float_infzeronan_dnan_if_qnan;
142
+#elif defined(TARGET_MIPS)
143
+ if (snan_bit_is_one(status)) {
144
+ /*
145
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
146
+ * case sets InvalidOp and returns the default NaN
147
+ */
148
+ rule = float_infzeronan_dnan_always;
149
+ } else {
150
+ /*
151
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
152
+ * case sets InvalidOp and returns the input value 'c'
153
+ */
154
+ rule = float_infzeronan_dnan_never;
155
+ }
156
+#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
157
+ defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
158
+ defined(TARGET_I386) || defined(TARGET_LOONGARCH)
159
+ /*
160
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
161
+ * case sets InvalidOp and returns the input value 'c'
162
+ */
163
+ /*
164
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
165
+ * to return an input NaN if we have one (ie c) rather than generating
166
+ * a default NaN
167
+ */
168
+ rule = float_infzeronan_dnan_never;
169
+#elif defined(TARGET_S390X)
170
+ rule = float_infzeronan_dnan_always;
171
+#endif
172
}
173
174
+ if (infzero) {
175
+ /*
176
+ * Inf * 0 + NaN -- some implementations return the default NaN here,
177
+ * and some return the input NaN.
178
+ */
179
+ switch (rule) {
180
+ case float_infzeronan_dnan_never:
181
+ return 2;
182
+ case float_infzeronan_dnan_always:
183
+ return 3;
184
+ case float_infzeronan_dnan_if_qnan:
185
+ return is_qnan(c_cls) ? 3 : 2;
186
+ default:
187
+ g_assert_not_reached();
188
+ }
189
+ }
190
+
191
+#if defined(TARGET_ARM)
192
+
193
/* This looks different from the ARM ARM pseudocode, because the ARM ARM
194
* puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
195
*/
196
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
197
}
198
#elif defined(TARGET_MIPS)
199
if (snan_bit_is_one(status)) {
200
- /*
201
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
202
- * case sets InvalidOp and returns the default NaN
203
- */
204
- if (infzero) {
205
- return 3;
206
- }
207
/* Prefer sNaN over qNaN, in the a, b, c order. */
208
if (is_snan(a_cls)) {
209
return 0;
210
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
211
return 2;
212
}
213
} else {
214
- /*
215
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
216
- * case sets InvalidOp and returns the input value 'c'
217
- */
218
/* Prefer sNaN over qNaN, in the c, a, b order. */
219
if (is_snan(c_cls)) {
220
return 2;
221
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
222
}
223
}
224
#elif defined(TARGET_LOONGARCH64)
225
- /*
226
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
227
- * case sets InvalidOp and returns the input value 'c'
228
- */
229
-
230
/* Prefer sNaN over qNaN, in the c, a, b order. */
231
if (is_snan(c_cls)) {
232
return 2;
233
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
234
return 1;
235
}
236
#elif defined(TARGET_PPC)
237
- /* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
238
- * to return an input NaN if we have one (ie c) rather than generating
239
- * a default NaN
240
- */
241
-
242
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
243
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
244
*/
245
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
246
return 1;
247
}
248
#elif defined(TARGET_S390X)
249
- if (infzero) {
250
- return 3;
251
- }
252
-
253
if (is_snan(a_cls)) {
254
return 0;
255
} else if (is_snan(b_cls)) {
256
--
257
2.34.1
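To make the three rules concrete, the sketch below restates the enum and the
new selection switch as a standalone model; the 2/3 return values follow
pickNaNMulAdd()'s existing convention ("use the input NaN c" vs "use the
default NaN"). Illustrative only; the real logic lives in
fpu/softfloat-specialize.c.inc:

#include <stdbool.h>
#include <stdio.h>

typedef enum {
    float_infzeronan_none = 0,
    float_infzeronan_dnan_never,   /* result is the input NaN (operand c) */
    float_infzeronan_dnan_always,  /* result is the default NaN */
    float_infzeronan_dnan_if_qnan, /* default NaN only if c is a quiet NaN */
} FloatInfZeroNaNRule;

/* Model of the switch added to pickNaNMulAdd() for the infzero case. */
static int pick_infzero_result(FloatInfZeroNaNRule rule, bool c_is_qnan)
{
    switch (rule) {
    case float_infzeronan_dnan_never:
        return 2;
    case float_infzeronan_dnan_always:
        return 3;
    case float_infzeronan_dnan_if_qnan:
        return c_is_qnan ? 3 : 2;
    default:
        return -1; /* no rule selected: the real code asserts here */
    }
}

int main(void)
{
    printf("Arm, c = qNaN:   %d\n",
           pick_infzero_result(float_infzeronan_dnan_if_qnan, true));  /* 3 */
    printf("Arm, c = sNaN:   %d\n",
           pick_infzero_result(float_infzeronan_dnan_if_qnan, false)); /* 2 */
    printf("s390x, c = qNaN: %d\n",
           pick_infzero_result(float_infzeronan_dnan_always, true));   /* 3 */
    printf("PPC, c = sNaN:   %d\n",
           pick_infzero_result(float_infzeronan_dnan_never, false));   /* 2 */
    return 0;
}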
diff view generated by jsdifflib
New patch
Explicitly set a rule in the softfloat tests for the inf-zero-nan
muladd special case. In meson.build we put -DTARGET_ARM in fpcflags,
and so we should select here the Arm rule of
float_infzeronan_dnan_if_qnan.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241202131347.498124-5-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c | 5 +++++
 tests/fp/fp-test.c  | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
 {
     bench_func_t f;

+    /*
+     * These implementation-defined choices for various things IEEE
+     * doesn't specify match those used by the Arm architecture.
+     */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);

     f = bench_funcs[operation][precision];
     g_assert(f);
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
 {
     unsigned int i;

+    /*
+     * These implementation-defined choices for various things IEEE
+     * doesn't specify match those used by the Arm architecture.
+     */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);

     genCases_setLevel(test_level);
     verCases_maxErrorCount = n_max_errors;
--
2.34.1
diff view generated by jsdifflib
1
Currently all of the M-profile specific code in arm_cpu_reset() is
1
Set the FloatInfZeroNaNRule explicitly for the Arm target,
2
inside a !defined(CONFIG_USER_ONLY) ifdef block. This is
2
so we can remove the ifdef from pickNaNMulAdd().
3
unintentional: it happened because originally the only
4
M-profile-specific handling was the setup of the initial SP and PC
5
from the vector table, which is system-emulation only. But then we
6
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
7
code block without noticing that it was all inside a not-user-mode
8
ifdef. This has generally been harmless, but with the addition of
9
v8.1M low-overhead-loop support we ran into a problem: the reset of
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
if a user-mode guest tried to execute the LE instruction it would
incorrectly take a UsageFault.

Adjust the ifdefs so only the really system-emulation specific parts
are covered. Because this means we now run some reset code that sets
up initial values in the FPCCR and similar FPU related registers,
explicitly set up the registers controlling FPU context handling in
user-emulation mode so that the FPU works by design and not by
chance.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
---
target/arm/cpu.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->uncached_cpsr = ARM_CPU_MODE_SVC;
}
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
+#endif

if (arm_feature(env, ARM_FEATURE_M)) {
+#ifndef CONFIG_USER_ONLY
uint32_t initial_msp; /* Loaded from 0x0 */
uint32_t initial_pc; /* Loaded from 0x4 */
uint8_t *rom;
uint32_t vecbase;
+#endif

if (cpu_isar_feature(aa32_lob, cpu)) {
/*
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
}
+
+#ifndef CONFIG_USER_ONLY
/* Unlike A/R profile, M profile defines the reset LR value */
env->regs[14] = 0xffffffff;

@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->regs[13] = initial_msp & 0xFFFFFFFC;
env->regs[15] = initial_pc & ~1;
env->thumb = initial_pc & 1;
+#else
+ /*
+ * For user mode we run non-secure and with access to the FPU.
+ * The FPU context is active (ie does not need further setup)
+ * and is owned by non-secure.
+ */
+ env->v7m.secure = false;
+ env->v7m.nsacr = 0xcff;
+ env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
+ env->v7m.fpccr[M_REG_S] &=
+ ~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
+#endif
}

+#ifndef CONFIG_USER_ONLY
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
* executing as AArch32 then check if highvecs are enabled and
* adjust the PC accordingly.
--
2.20.1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
---
target/arm/cpu.c | 3 +++
fpu/softfloat-specialize.c.inc | 8 +-------
2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
* * tininess-before-rounding
* * 2-input NaN propagation prefers SNaN over QNaN, and then
* operand A over operand B (see FPProcessNaNs() pseudocode)
+ * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
+ * and the input NaN if it is signalling
*/
static void arm_set_default_fp_behaviours(float_status *s)
{
set_float_detect_tininess(float_tininess_before_rounding, s);
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
}

static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_ARM)
- /*
- * For ARM, the (inf,zero,qnan) case returns the default NaN,
- * but (inf,zero,snan) returns the input NaN.
- */
- rule = float_infzeronan_dnan_if_qnan;
-#elif defined(TARGET_MIPS)
+#if defined(TARGET_MIPS)
if (snan_bit_is_one(status)) {
/*
* For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
--
2.34.1
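To make the FloatInfZeroNaNRule selected above concrete, here is a small illustrative sketch (not part of the series; the function name example_infzeronan() and the locally initialised float_status are assumptions for the example) showing what the Arm choice means for 0 * Inf + NaN, and how a target with the "never return the default NaN" behaviour differs:

    /* Illustrative sketch only, not part of these patches. */
    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"
    #include "fpu/softfloat-helpers.h"

    static void example_infzeronan(void)
    {
        float_status st = { };
        float32 qnan = float32_default_nan(&st);
        float32 r;

        /*
         * Arm behaviour: 0 * Inf + QNaN returns the default NaN;
         * 0 * Inf + SNaN returns the (quietened) input NaN instead.
         */
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);
        r = float32_muladd(float32_zero, float32_infinity, qnan, 0, &st);
        /* r is the default NaN; the operation also raises the Invalid flag */

        /*
         * Targets such as PPC or SPARC instead select dnan_never, which
         * propagates the input NaN (operand c) for this case.
         */
        set_float_infzeronan_rule(float_infzeronan_dnan_never, &st);
        r = float32_muladd(float32_zero, float32_infinity, qnan, 0, &st);
        /* r is now the input QNaN */
        (void)r;
    }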
Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so
it can be merged with another earlier one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
---
target/arm/cpu.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)

Set the FloatInfZeroNaNRule explicitly for s390, so we
can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-7-peter.maydell@linaro.org
---
target/s390x/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 --
2 files changed, 2 insertions(+), 2 deletions(-)
10
11
11
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/cpu.c
14
--- a/target/s390x/cpu.c
14
+++ b/target/arm/cpu.c
15
+++ b/target/s390x/cpu.c
15
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
16
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
16
env->uncached_cpsr = ARM_CPU_MODE_SVC;
17
set_float_detect_tininess(float_tininess_before_rounding,
17
}
18
&env->fpu_status);
18
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
19
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
19
+
20
+ set_float_infzeronan_rule(float_infzeronan_dnan_always,
20
+ /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
21
+ &env->fpu_status);
21
+ * executing as AArch32 then check if highvecs are enabled and
22
/* fall through */
22
+ * adjust the PC accordingly.
23
case RESET_TYPE_S390_CPU_NORMAL:
23
+ */
24
env->psw.mask &= ~PSW_MASK_RI;
24
+ if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
+ env->regs[15] = 0xFFFF0000;
26
index XXXXXXX..XXXXXXX 100644
26
+ }
27
--- a/fpu/softfloat-specialize.c.inc
27
+
28
+++ b/fpu/softfloat-specialize.c.inc
28
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
29
#endif
30
* a default NaN
30
31
*/
31
if (arm_feature(env, ARM_FEATURE_M)) {
32
rule = float_infzeronan_dnan_never;
32
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
33
-#elif defined(TARGET_S390X)
34
- rule = float_infzeronan_dnan_always;
33
#endif
35
#endif
34
}
36
}
35
37
36
-#ifndef CONFIG_USER_ONLY
37
- /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
38
- * executing as AArch32 then check if highvecs are enabled and
39
- * adjust the PC accordingly.
40
- */
41
- if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
42
- env->regs[15] = 0xFFFF0000;
43
- }
44
-
45
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
46
-#endif
47
-
48
/* M profile requires that reset clears the exclusive monitor;
49
* A profile does not, but clearing it makes more sense than having it
50
* set with an exclusive access on address zero.
51
--
38
--
52
2.20.1
39
2.34.1
53
54
Set the FloatInfZeroNaNRule explicitly for the PPC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-8-peter.maydell@linaro.org
---
target/ppc/cpu_init.c | 7 +++++++
fpu/softfloat-specialize.c.inc | 7 +------
2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
*/
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+ /*
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
+ * to return an input NaN if we have one (ie c) rather than generating
+ * a default NaN
+ */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);

for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
ppc_spr_t *spr = &env->spr_cb[i];
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
*/
rule = float_infzeronan_dnan_never;
}
-#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
+#elif defined(TARGET_SPARC) || \
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
* case sets InvalidOp and returns the input value 'c'
*/
- /*
- * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
- * to return an input NaN if we have one (ie c) rather than generating
- * a default NaN
- */
rule = float_infzeronan_dnan_never;
#endif
}
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the MIPS target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-9-peter.maydell@linaro.org
---
target/mips/fpu_helper.h | 9 +++++++++
target/mips/msa.c | 4 ++++
fpu/softfloat-specialize.c.inc | 16 +---------------
3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_flush_mode(CPUMIPSState *env)
static inline void restore_snan_bit_mode(CPUMIPSState *env)
{
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
+ FloatInfZeroNaNRule izn_rule;

/*
* With nan2008, SNaNs are silenced in the usual way.
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
*/
set_snan_bit_is_one(!nan2008, &env->active_fpu.fp_status);
set_default_nan_mode(!nan2008, &env->active_fpu.fp_status);
+ /*
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
+ * case sets InvalidOp and returns the default NaN.
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
+ * case sets InvalidOp and returns the input value 'c'.
+ */
+ izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
+ set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
}

static inline void restore_fp_status(CPUMIPSState *env)
diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)

/* set proper signanling bit meaning ("1" means "quiet") */
set_snan_bit_is_one(0, &env->active_tc.msa_fp_status);
+
+ /* Inf * 0 + NaN returns the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never,
+ &env->active_tc.msa_fp_status);
}
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_MIPS)
- if (snan_bit_is_one(status)) {
- /*
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
- * case sets InvalidOp and returns the default NaN
- */
- rule = float_infzeronan_dnan_always;
- } else {
- /*
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
- * case sets InvalidOp and returns the input value 'c'
- */
- rule = float_infzeronan_dnan_never;
- }
-#elif defined(TARGET_SPARC) || \
+#if defined(TARGET_SPARC) || \
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the SPARC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-10-peter.maydell@linaro.org
---
target/sparc/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 +--
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
* the CPU state struct so it won't get zeroed on reset.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);

cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_SPARC) || \
- defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the xtensa target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-11-peter.maydell@linaro.org
---
target/xtensa/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
reset_mmu(env);
cs->halted = env->runstall;
#endif
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
set_no_signaling_nans(!dfpu, &env->fp_status);
xtensa_use_first_nan(env, !dfpu);
}
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the x86 target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-12-peter.maydell@linaro.org
6
---
7
target/i386/tcg/fpu_helper.c | 7 +++++++
8
fpu/softfloat-specialize.c.inc | 2 +-
9
2 files changed, 8 insertions(+), 1 deletion(-)
10
11
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/i386/tcg/fpu_helper.c
14
+++ b/target/i386/tcg/fpu_helper.c
15
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
16
*/
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->mmx_status);
18
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->sse_status);
19
+ /*
20
+ * Only SSE has multiply-add instructions. In the SDM Section 14.5.2
21
+ * "Fused-Multiply-ADD (FMA) Numeric Behavior" the NaN handling is
22
+ * specified -- for 0 * inf + NaN the input NaN is selected, and if
23
+ * there are multiple input NaNs they are selected in the order a, b, c.
24
+ */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
26
}
27
28
static inline uint8_t save_exception_flags(CPUX86State *env)
29
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
31
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
34
* Temporarily fall back to ifdef ladder
35
*/
36
#if defined(TARGET_HPPA) || \
37
- defined(TARGET_I386) || defined(TARGET_LOONGARCH)
38
+ defined(TARGET_LOONGARCH)
39
/*
40
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
41
* case sets InvalidOp and returns the input value 'c'
42
--
43
2.34.1
diff view generated by jsdifflib
Set the FloatInfZeroNaNRule explicitly for the loongarch target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-13-peter.maydell@linaro.org
---
target/loongarch/tcg/fpu_helper.c | 5 +++++
fpu/softfloat-specialize.c.inc | 7 +------
2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
&env->fp_status);
set_flush_to_zero(0, &env->fp_status);
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+ /*
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
+ * case sets InvalidOp and returns the input value 'c'
+ */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
}

int ieee_ex_to_loongarch(int xcpt)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_HPPA) || \
- defined(TARGET_LOONGARCH)
- /*
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
- * case sets InvalidOp and returns the input value 'c'
- */
+#if defined(TARGET_HPPA)
rule = float_infzeronan_dnan_never;
#endif
}
--
2.34.1
Set the FloatInfZeroNaNRule explicitly for the HPPA target,
so we can remove the ifdef from pickNaNMulAdd().

As this is the last target to be converted to explicitly setting
the rule, we can remove the fallback code in pickNaNMulAdd()
entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-14-peter.maydell@linaro.org
---
target/hppa/fpu_helper.c | 2 ++
fpu/softfloat-specialize.c.inc | 13 +------------
2 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
* HPPA does note implement a CPU reset method at all...
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
}

void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
bool infzero, float_status *status)
{
- FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
-
/*
* We guarantee not to require the target to tell us how to
* pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
*/
assert(!status->default_nan_mode);

- if (rule == float_infzeronan_none) {
- /*
- * Temporarily fall back to ifdef ladder
- */
-#if defined(TARGET_HPPA)
- rule = float_infzeronan_dnan_never;
-#endif
- }
-
if (infzero) {
/*
* Inf * 0 + NaN -- some implementations return the default NaN here,
* and some return the input NaN.
*/
- switch (rule) {
+ switch (status->float_infzeronan_rule) {
case float_infzeronan_dnan_never:
return 2;
case float_infzeronan_dnan_always:
--
2.34.1
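With the fallback ladder gone, the infzero case in pickNaNMulAdd() is driven purely by the per-target rule. For reference, a simplified sketch of that dispatch (mirroring the switch visible in the diff above, with the dnan_if_qnan case filled in from the rule's stated semantics; the helper name infzero_result() is invented for the example):

    /*
     * Simplified sketch of the rule-driven infzero handling; return values
     * follow pickNaNMulAdd()'s convention (2 = operand c, 3 = default NaN).
     */
    static int infzero_result(FloatClass c_cls, float_status *status)
    {
        switch (status->float_infzeronan_rule) {
        case float_infzeronan_dnan_never:
            return 2;                      /* propagate the input NaN (c) */
        case float_infzeronan_dnan_always:
            return 3;                      /* always return the default NaN */
        case float_infzeronan_dnan_if_qnan:
            return is_qnan(c_cls) ? 3 : 2; /* Arm: default NaN only for a quiet input NaN */
        default:
            g_assert_not_reached();
        }
    }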
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
with zero shift count".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 67 +++++++++++++++++++++++++++++-----
1 file changed, 59 insertions(+), 8 deletions(-)

The new implementation of pickNaNMulAdd() will find it convenient
to know whether at least one of the three arguments to the muladd
was a signaling NaN. We already calculate that in the caller,
so pass it in as a new bool have_snan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-15-peter.maydell@linaro.org
---
fpu/softfloat-parts.c.inc | 5 +++--
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)
11
13
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
16
--- a/fpu/softfloat-parts.c.inc
15
+++ b/target/arm/translate-mve.c
17
+++ b/fpu/softfloat-parts.c.inc
16
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
17
DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
19
{
18
DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
20
int which;
19
21
bool infzero = (ab_mask == float_cmask_infzero);
20
-#define DO_VSHLL(INSN, FN) \
22
+ bool have_snan = (abc_mask & float_cmask_snan);
21
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
23
22
- { \
24
- if (unlikely(abc_mask & float_cmask_snan)) {
23
- static MVEGenTwoOpShiftFn * const fns[] = { \
25
+ if (unlikely(have_snan)) {
24
- gen_helper_mve_##FN##b, \
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
25
- gen_helper_mve_##FN##h, \
26
- }; \
27
- return do_2shift(s, a, fns[a->size], false); \
28
+#define DO_VSHLL(INSN, FN) \
29
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
30
+ { \
31
+ static MVEGenTwoOpShiftFn * const fns[] = { \
32
+ gen_helper_mve_##FN##b, \
33
+ gen_helper_mve_##FN##h, \
34
+ }; \
35
+ return do_2shift_vec(s, a, fns[a->size], false, do_gvec_##FN); \
36
}
27
}
37
28
38
+/*
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
39
+ * For the VSHLL vector helpers, the vece is the size of the input
30
if (s->default_nan_mode) {
40
+ * (ie MO_8 or MO_16); the helpers want to work in the output size.
31
which = 3;
41
+ * The shift count can be 0..<input size>, inclusive. (0 is VMOVL.)
32
} else {
42
+ */
33
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
43
+static void do_gvec_vshllbs(unsigned vece, uint32_t dofs, uint32_t aofs,
34
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
44
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
35
}
45
+{
36
46
+ unsigned ovece = vece + 1;
37
if (which == 3) {
47
+ unsigned ibits = vece == MO_8 ? 8 : 16;
38
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
48
+ tcg_gen_gvec_shli(ovece, dofs, aofs, ibits, oprsz, maxsz);
39
index XXXXXXX..XXXXXXX 100644
49
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
40
--- a/fpu/softfloat-specialize.c.inc
50
+}
41
+++ b/fpu/softfloat-specialize.c.inc
51
+
42
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
52
+static void do_gvec_vshllbu(unsigned vece, uint32_t dofs, uint32_t aofs,
43
| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
53
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
44
*----------------------------------------------------------------------------*/
54
+{
45
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
55
+ unsigned ovece = vece + 1;
46
- bool infzero, float_status *status)
56
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
47
+ bool infzero, bool have_snan, float_status *status)
57
+ ovece == MO_16 ? 0xff : 0xffff, oprsz, maxsz);
48
{
58
+ tcg_gen_gvec_shli(ovece, dofs, dofs, shift, oprsz, maxsz);
49
/*
59
+}
50
* We guarantee not to require the target to tell us how to
60
+
61
+static void do_gvec_vshllts(unsigned vece, uint32_t dofs, uint32_t aofs,
62
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
63
+{
64
+ unsigned ovece = vece + 1;
65
+ unsigned ibits = vece == MO_8 ? 8 : 16;
66
+ if (shift == 0) {
67
+ tcg_gen_gvec_sari(ovece, dofs, aofs, ibits, oprsz, maxsz);
68
+ } else {
69
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
70
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
71
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
72
+ }
73
+}
74
+
75
+static void do_gvec_vshlltu(unsigned vece, uint32_t dofs, uint32_t aofs,
76
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
77
+{
78
+ unsigned ovece = vece + 1;
79
+ unsigned ibits = vece == MO_8 ? 8 : 16;
80
+ if (shift == 0) {
81
+ tcg_gen_gvec_shri(ovece, dofs, aofs, ibits, oprsz, maxsz);
82
+ } else {
83
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
84
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
85
+ tcg_gen_gvec_shri(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
86
+ }
87
+}
88
+
89
DO_VSHLL(VSHLL_BS, vshllbs)
90
DO_VSHLL(VSHLL_BU, vshllbu)
91
DO_VSHLL(VSHLL_TS, vshllts)
92
--
51
--
93
2.20.1
52
2.34.1
94
95
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 83 +++++++++++++++++++++++++++++---------
1 file changed, 63 insertions(+), 20 deletions(-)

IEEE 754 does not define a fixed rule for which NaN to pick as the
result if both operands of a 3-operand fused multiply-add operation
are NaNs. As a result different architectures have ended up with
different rules for propagating NaNs.

QEMU currently hardcodes the NaN propagation logic into the binary
because pickNaNMulAdd() has an ifdef ladder for different targets.
We want to make the propagation rule instead be selectable at
runtime, because:
* this will let us have multiple targets in one QEMU binary
* the Arm FEAT_AFP architectural feature includes letting
the guest select a NaN propagation rule at runtime

In this commit we add an enum for the propagation rule, the field in
float_status, and the corresponding getters and setters. We change
pickNaNMulAdd to honour this, but because all targets still leave
this field at its default 0 value, the fallback logic will pick the
rule type with the old ifdef ladder.

It's valid not to set a propagation rule if default_nan_mode is
enabled, because in that case there's no need to pick a NaN; all the
callers of pickNaNMulAdd() catch this case and skip calling it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-16-peter.maydell@linaro.org
---
include/fpu/softfloat-helpers.h | 11 +++
include/fpu/softfloat-types.h | 55 +++++++++++
fpu/softfloat-specialize.c.inc | 167 ++++++++------------------------
3 files changed, 107 insertions(+), 126 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
32
33
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
12
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
35
--- a/include/fpu/softfloat-helpers.h
14
+++ b/target/arm/translate-mve.c
36
+++ b/include/fpu/softfloat-helpers.h
15
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
37
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
16
return do_1imm(s, a, fn);
38
status->float_2nan_prop_rule = rule;
17
}
39
}
18
40
19
-static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
41
+static inline void set_float_3nan_prop_rule(Float3NaNPropRule rule,
20
- bool negateshift)
42
+ float_status *status)
21
+static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
43
+{
22
+ bool negateshift, GVecGen2iFn vecfn)
44
+ status->float_3nan_prop_rule = rule;
45
+}
46
+
47
static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
float_status *status)
23
{
49
{
24
TCGv_ptr qd, qm;
50
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
25
int shift = a->shift;
51
return status->float_2nan_prop_rule;
26
@@ -XXX,XX +XXX,XX @@ static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
52
}
27
shift = -shift;
53
54
+static inline Float3NaNPropRule get_float_3nan_prop_rule(float_status *status)
55
+{
56
+ return status->float_3nan_prop_rule;
57
+}
58
+
59
static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
60
{
61
return status->float_infzeronan_rule;
62
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
63
index XXXXXXX..XXXXXXX 100644
64
--- a/include/fpu/softfloat-types.h
65
+++ b/include/fpu/softfloat-types.h
66
@@ -XXX,XX +XXX,XX @@ this code that are retained.
67
#ifndef SOFTFLOAT_TYPES_H
68
#define SOFTFLOAT_TYPES_H
69
70
+#include "hw/registerfields.h"
71
+
72
/*
73
* Software IEC/IEEE floating-point types.
74
*/
75
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
76
float_2nan_prop_x87,
77
} Float2NaNPropRule;
78
79
+/*
80
+ * 3-input NaN propagation rule, for fused multiply-add. Individual
81
+ * architectures have different rules for which input NaN is
82
+ * propagated to the output when there is more than one NaN on the
83
+ * input.
84
+ *
85
+ * If default_nan_mode is enabled then it is valid not to set a NaN
86
+ * propagation rule, because the softfloat code guarantees not to try
87
+ * to pick a NaN to propagate in default NaN mode. When not in
88
+ * default-NaN mode, it is an error for the target not to set the rule
89
+ * in float_status if it uses a muladd, and we will assert if we need
90
+ * to handle an input NaN and no rule was selected.
91
+ *
92
+ * The naming scheme for Float3NaNPropRule values is:
93
+ * float_3nan_prop_s_abc:
94
+ * = "Prefer SNaN over QNaN, then operand A over B over C"
95
+ * float_3nan_prop_abc:
96
+ * = "Prefer A over B over C regardless of SNaN vs QNAN"
97
+ *
98
+ * For QEMU, the multiply-add operation is A * B + C.
99
+ */
100
+
101
+/*
102
+ * We set the Float3NaNPropRule enum values up so we can select the
103
+ * right value in pickNaNMulAdd in a data driven way.
104
+ */
105
+FIELD(3NAN, 1ST, 0, 2) /* which operand is most preferred ? */
106
+FIELD(3NAN, 2ND, 2, 2) /* which operand is next most preferred ? */
107
+FIELD(3NAN, 3RD, 4, 2) /* which operand is least preferred ? */
108
+FIELD(3NAN, SNAN, 6, 1) /* do we prefer SNaN over QNaN ? */
109
+
110
+#define PROPRULE(X, Y, Z) \
111
+ ((X << R_3NAN_1ST_SHIFT) | (Y << R_3NAN_2ND_SHIFT) | (Z << R_3NAN_3RD_SHIFT))
112
+
113
+typedef enum __attribute__((__packed__)) {
114
+ float_3nan_prop_none = 0, /* No propagation rule specified */
115
+ float_3nan_prop_abc = PROPRULE(0, 1, 2),
116
+ float_3nan_prop_acb = PROPRULE(0, 2, 1),
117
+ float_3nan_prop_bac = PROPRULE(1, 0, 2),
118
+ float_3nan_prop_bca = PROPRULE(1, 2, 0),
119
+ float_3nan_prop_cab = PROPRULE(2, 0, 1),
120
+ float_3nan_prop_cba = PROPRULE(2, 1, 0),
121
+ float_3nan_prop_s_abc = float_3nan_prop_abc | R_3NAN_SNAN_MASK,
122
+ float_3nan_prop_s_acb = float_3nan_prop_acb | R_3NAN_SNAN_MASK,
123
+ float_3nan_prop_s_bac = float_3nan_prop_bac | R_3NAN_SNAN_MASK,
124
+ float_3nan_prop_s_bca = float_3nan_prop_bca | R_3NAN_SNAN_MASK,
125
+ float_3nan_prop_s_cab = float_3nan_prop_cab | R_3NAN_SNAN_MASK,
126
+ float_3nan_prop_s_cba = float_3nan_prop_cba | R_3NAN_SNAN_MASK,
127
+} Float3NaNPropRule;
128
+
129
+#undef PROPRULE
130
+
131
/*
132
* Rule for result of fused multiply-add 0 * Inf + NaN.
133
* This must be a NaN, but implementations differ on whether this
134
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
135
FloatRoundMode float_rounding_mode;
136
FloatX80RoundPrec floatx80_rounding_precision;
137
Float2NaNPropRule float_2nan_prop_rule;
138
+ Float3NaNPropRule float_3nan_prop_rule;
139
FloatInfZeroNaNRule float_infzeronan_rule;
140
bool tininess_before_rounding;
141
/* should denormalised results go to zero and set the inexact flag? */
142
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
143
index XXXXXXX..XXXXXXX 100644
144
--- a/fpu/softfloat-specialize.c.inc
145
+++ b/fpu/softfloat-specialize.c.inc
146
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
147
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
148
bool infzero, bool have_snan, float_status *status)
149
{
150
+ FloatClass cls[3] = { a_cls, b_cls, c_cls };
151
+ Float3NaNPropRule rule = status->float_3nan_prop_rule;
152
+ int which;
153
+
154
/*
155
* We guarantee not to require the target to tell us how to
156
* pick a NaN if we're always returning the default NaN.
157
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
158
}
28
}
159
}
29
160
30
- qd = mve_qreg_ptr(a->qd);
161
+ if (rule == float_3nan_prop_none) {
31
- qm = mve_qreg_ptr(a->qm);
162
#if defined(TARGET_ARM)
32
- fn(cpu_env, qd, qm, tcg_constant_i32(shift));
163
-
33
- tcg_temp_free_ptr(qd);
164
- /* This looks different from the ARM ARM pseudocode, because the ARM ARM
34
- tcg_temp_free_ptr(qm);
165
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
35
+ if (vecfn && mve_no_predication(s)) {
166
- */
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm),
167
- if (is_snan(c_cls)) {
37
+ shift, 16, 16);
168
- return 2;
169
- } else if (is_snan(a_cls)) {
170
- return 0;
171
- } else if (is_snan(b_cls)) {
172
- return 1;
173
- } else if (is_qnan(c_cls)) {
174
- return 2;
175
- } else if (is_qnan(a_cls)) {
176
- return 0;
177
- } else {
178
- return 1;
179
- }
180
+ /*
181
+ * This looks different from the ARM ARM pseudocode, because the ARM ARM
182
+ * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
183
+ */
184
+ rule = float_3nan_prop_s_cab;
185
#elif defined(TARGET_MIPS)
186
- if (snan_bit_is_one(status)) {
187
- /* Prefer sNaN over qNaN, in the a, b, c order. */
188
- if (is_snan(a_cls)) {
189
- return 0;
190
- } else if (is_snan(b_cls)) {
191
- return 1;
192
- } else if (is_snan(c_cls)) {
193
- return 2;
194
- } else if (is_qnan(a_cls)) {
195
- return 0;
196
- } else if (is_qnan(b_cls)) {
197
- return 1;
198
+ if (snan_bit_is_one(status)) {
199
+ rule = float_3nan_prop_s_abc;
200
} else {
201
- return 2;
202
+ rule = float_3nan_prop_s_cab;
203
}
204
- } else {
205
- /* Prefer sNaN over qNaN, in the c, a, b order. */
206
- if (is_snan(c_cls)) {
207
- return 2;
208
- } else if (is_snan(a_cls)) {
209
- return 0;
210
- } else if (is_snan(b_cls)) {
211
- return 1;
212
- } else if (is_qnan(c_cls)) {
213
- return 2;
214
- } else if (is_qnan(a_cls)) {
215
- return 0;
216
- } else {
217
- return 1;
218
- }
219
- }
220
#elif defined(TARGET_LOONGARCH64)
221
- /* Prefer sNaN over qNaN, in the c, a, b order. */
222
- if (is_snan(c_cls)) {
223
- return 2;
224
- } else if (is_snan(a_cls)) {
225
- return 0;
226
- } else if (is_snan(b_cls)) {
227
- return 1;
228
- } else if (is_qnan(c_cls)) {
229
- return 2;
230
- } else if (is_qnan(a_cls)) {
231
- return 0;
232
- } else {
233
- return 1;
234
- }
235
+ rule = float_3nan_prop_s_cab;
236
#elif defined(TARGET_PPC)
237
- /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
238
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
239
- */
240
- if (is_nan(a_cls)) {
241
- return 0;
242
- } else if (is_nan(c_cls)) {
243
- return 2;
244
- } else {
245
- return 1;
246
- }
247
+ /*
248
+ * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
249
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
250
+ */
251
+ rule = float_3nan_prop_acb;
252
#elif defined(TARGET_S390X)
253
- if (is_snan(a_cls)) {
254
- return 0;
255
- } else if (is_snan(b_cls)) {
256
- return 1;
257
- } else if (is_snan(c_cls)) {
258
- return 2;
259
- } else if (is_qnan(a_cls)) {
260
- return 0;
261
- } else if (is_qnan(b_cls)) {
262
- return 1;
263
- } else {
264
- return 2;
265
- }
266
+ rule = float_3nan_prop_s_abc;
267
#elif defined(TARGET_SPARC)
268
- /* Prefer SNaN over QNaN, order C, B, A. */
269
- if (is_snan(c_cls)) {
270
- return 2;
271
- } else if (is_snan(b_cls)) {
272
- return 1;
273
- } else if (is_snan(a_cls)) {
274
- return 0;
275
- } else if (is_qnan(c_cls)) {
276
- return 2;
277
- } else if (is_qnan(b_cls)) {
278
- return 1;
279
- } else {
280
- return 0;
281
- }
282
+ rule = float_3nan_prop_s_cba;
283
#elif defined(TARGET_XTENSA)
284
- /*
285
- * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
286
- * an input NaN if we have one (ie c).
287
- */
288
- if (status->use_first_nan) {
289
- if (is_nan(a_cls)) {
290
- return 0;
291
- } else if (is_nan(b_cls)) {
292
- return 1;
293
+ if (status->use_first_nan) {
294
+ rule = float_3nan_prop_abc;
295
} else {
296
- return 2;
297
+ rule = float_3nan_prop_cba;
298
}
299
- } else {
300
- if (is_nan(c_cls)) {
301
- return 2;
302
- } else if (is_nan(b_cls)) {
303
- return 1;
304
- } else {
305
- return 0;
306
- }
307
- }
308
#else
309
- /* A default implementation: prefer a to b to c.
310
- * This is unlikely to actually match any real implementation.
311
- */
312
- if (is_nan(a_cls)) {
313
- return 0;
314
- } else if (is_nan(b_cls)) {
315
- return 1;
316
- } else {
317
- return 2;
318
- }
319
+ rule = float_3nan_prop_abc;
320
#endif
321
+ }
322
+
323
+ assert(rule != float_3nan_prop_none);
324
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
325
+ /* We have at least one SNaN input and should prefer it */
326
+ do {
327
+ which = rule & R_3NAN_1ST_MASK;
328
+ rule >>= R_3NAN_1ST_LENGTH;
329
+ } while (!is_snan(cls[which]));
38
+ } else {
330
+ } else {
39
+ qd = mve_qreg_ptr(a->qd);
331
+ do {
40
+ qm = mve_qreg_ptr(a->qm);
332
+ which = rule & R_3NAN_1ST_MASK;
41
+ fn(cpu_env, qd, qm, tcg_constant_i32(shift));
333
+ rule >>= R_3NAN_1ST_LENGTH;
42
+ tcg_temp_free_ptr(qd);
334
+ } while (!is_nan(cls[which]));
43
+ tcg_temp_free_ptr(qm);
44
+ }
335
+ }
45
mve_update_eci(s);
336
+ return which;
46
return true;
47
}
337
}
48
338
49
-#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
339
/*----------------------------------------------------------------------------
50
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
51
- { \
52
- static MVEGenTwoOpShiftFn * const fns[] = { \
53
- gen_helper_mve_##FN##b, \
54
- gen_helper_mve_##FN##h, \
55
- gen_helper_mve_##FN##w, \
56
- NULL, \
57
- }; \
58
- return do_2shift(s, a, fns[a->size], NEGATESHIFT); \
59
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
60
+ bool negateshift)
61
+{
62
+ return do_2shift_vec(s, a, fn, negateshift, NULL);
63
+}
64
+
65
+#define DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, VECFN) \
66
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
67
+ { \
68
+ static MVEGenTwoOpShiftFn * const fns[] = { \
69
+ gen_helper_mve_##FN##b, \
70
+ gen_helper_mve_##FN##h, \
71
+ gen_helper_mve_##FN##w, \
72
+ NULL, \
73
+ }; \
74
+ return do_2shift_vec(s, a, fns[a->size], NEGATESHIFT, VECFN); \
75
}
76
77
-DO_2SHIFT(VSHLI, vshli_u, false)
78
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
79
+ DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, NULL)
80
+
81
+static void do_gvec_shri_s(unsigned vece, uint32_t dofs, uint32_t aofs,
82
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
83
+{
84
+ /*
85
+ * We get here with a negated shift count, and we must handle
86
+ * shifts by the element size, which tcg_gen_gvec_sari() does not do.
87
+ */
88
+ shift = -shift;
89
+ if (shift == (8 << vece)) {
90
+ shift--;
91
+ }
92
+ tcg_gen_gvec_sari(vece, dofs, aofs, shift, oprsz, maxsz);
93
+}
94
+
95
+static void do_gvec_shri_u(unsigned vece, uint32_t dofs, uint32_t aofs,
96
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
97
+{
98
+ /*
99
+ * We get here with a negated shift count, and we must handle
100
+ * shifts by the element size, which tcg_gen_gvec_shri() does not do.
101
+ */
102
+ shift = -shift;
103
+ if (shift == (8 << vece)) {
104
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, 0);
105
+ } else {
106
+ tcg_gen_gvec_shri(vece, dofs, aofs, shift, oprsz, maxsz);
107
+ }
108
+}
109
+
110
+DO_2SHIFT_VEC(VSHLI, vshli_u, false, tcg_gen_gvec_shli)
111
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
112
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
113
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
114
/* These right shifts use a left-shift helper with negated shift count */
115
-DO_2SHIFT(VSHRI_S, vshli_s, true)
116
-DO_2SHIFT(VSHRI_U, vshli_u, true)
117
+DO_2SHIFT_VEC(VSHRI_S, vshli_s, true, do_gvec_shri_s)
118
+DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
119
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
120
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
121
122
--
340
--
123
2.20.1
341
2.34.1
124
125
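As an illustration of the packed Float3NaNPropRule encoding described in the commit message above (a standalone sketch using plain shifts instead of the hw/registerfields.h macros; the helper name decode_3nan_order() and the printf output are invented for the example), float_3nan_prop_s_cab unpacks to the preference order c, a, b with SNaNs preferred first:

    #include <stdio.h>
    #include <stdbool.h>

    /*
     * Illustrative decode of a Float3NaNPropRule value: each 2-bit field
     * names an operand (0 = a, 1 = b, 2 = c), most-preferred first, and
     * bit 6 says whether SNaNs are preferred over QNaNs.
     */
    static void decode_3nan_order(unsigned rule)
    {
        static const char ops[3] = { 'a', 'b', 'c' };
        bool prefer_snan = rule & 0x40;      /* R_3NAN_SNAN_MASK */
        int i;

        printf("prefer SNaN: %s, order:", prefer_snan ? "yes" : "no");
        for (i = 0; i < 3; i++) {
            printf(" %c", ops[rule & 3]);    /* R_3NAN_1ST_MASK */
            rule >>= 2;                      /* R_3NAN_1ST_LENGTH */
        }
        printf("\n");
    }

    /* decode_3nan_order(0x52), i.e. float_3nan_prop_s_cab, prints:
     * prefer SNaN: yes, order: c a b
     */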
Explicitly set a rule in the softfloat tests for propagating NaNs in
the muladd case. In meson.build we put -DTARGET_ARM in fpcflags, and
so we should select here the Arm rule of float_3nan_prop_s_cab.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-17-peter.maydell@linaro.org
---
tests/fp/fp-bench.c | 1 +
tests/fp/fp-test.c | 1 +
2 files changed, 2 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
* doesn't specify match those used by the Arm architecture.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);

f = bench_funcs[operation][precision];
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
* doesn't specify match those used by the Arm architecture.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);

genCases_setLevel(test_level);
--
2.34.1
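Putting the pieces of the series together, a softfloat user that wants Arm-like NaN behaviour now selects all of it at runtime on its float_status. A minimal sketch (mirroring what fp-bench/fp-test and arm_set_default_fp_behaviours() do; the function name example_arm_like_status() is an assumption for the example):

    #include "qemu/osdep.h"
    #include "fpu/softfloat-helpers.h"

    /* Illustrative only: Arm-flavoured NaN handling selected at runtime. */
    static void example_arm_like_status(float_status *s)
    {
        set_float_detect_tininess(float_tininess_before_rounding, s);
        /* 2-operand NaN propagation: prefer SNaN over QNaN, then A over B */
        set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
        /* muladd NaN propagation: prefer SNaN over QNaN, then C over A over B */
        set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
        /* 0 * Inf + NaN: default NaN unless the input NaN is signalling */
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
    }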
There's no particular reason why the exclusive monitor should
be only cleared on reset in system emulation mode. It doesn't
hurt if it isn't cleared in user mode, but we might as well
reduce the amount of code we have that's inside an ifdef.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
---
target/arm/cpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

Set the Float3NaNPropRule explicitly for Arm, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
---
target/arm/cpu.c | 5 +++++
fpu/softfloat-specialize.c.inc | 8 +-------
2 files changed, 6 insertions(+), 7 deletions(-)
12
11
13
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.c
14
--- a/target/arm/cpu.c
16
+++ b/target/arm/cpu.c
15
+++ b/target/arm/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
16
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
18
env->regs[15] = 0xFFFF0000;
17
* * tininess-before-rounding
18
* * 2-input NaN propagation prefers SNaN over QNaN, and then
19
* operand A over operand B (see FPProcessNaNs() pseudocode)
20
+ * * 3-input NaN propagation prefers SNaN over QNaN, and then
21
+ * operand C over A over B (see FPProcessNaNs3() pseudocode,
22
+ * but note that for QEMU muladd is a * b + c, whereas for
23
+ * the pseudocode function the arguments are in the order c, a, b.
24
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
25
* and the input NaN if it is signalling
26
*/
27
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
28
{
29
set_float_detect_tininess(float_tininess_before_rounding, s);
30
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
31
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
32
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
33
}
34
35
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
36
index XXXXXXX..XXXXXXX 100644
37
--- a/fpu/softfloat-specialize.c.inc
38
+++ b/fpu/softfloat-specialize.c.inc
39
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
19
}
40
}
20
41
21
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
42
if (rule == float_3nan_prop_none) {
22
+#endif
43
-#if defined(TARGET_ARM)
23
+
44
- /*
24
/* M profile requires that reset clears the exclusive monitor;
45
- * This looks different from the ARM ARM pseudocode, because the ARM ARM
25
* A profile does not, but clearing it makes more sense than having it
46
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
26
* set with an exclusive access on address zero.
47
- */
27
*/
48
- rule = float_3nan_prop_s_cab;
28
arm_clear_exclusive(env);
49
-#elif defined(TARGET_MIPS)
29
50
+#if defined(TARGET_MIPS)
30
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
51
if (snan_bit_is_one(status)) {
31
-#endif
52
rule = float_3nan_prop_s_abc;
32
-
53
} else {
33
if (arm_feature(env, ARM_FEATURE_PMSA)) {
34
if (cpu->pmsav7_dregion > 0) {
35
if (arm_feature(env, ARM_FEATURE_V8)) {
36
--
54
--
37
2.20.1
55
2.34.1
38
39
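To see what the Arm rule selected above means in practice, here is a hedged worked example (not part of the patch; the helper name and the locally constructed NaN payloads are assumptions): with float_3nan_prop_s_cab and no signalling NaN among the inputs, a muladd whose three operands are all quiet NaNs propagates operand c.

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"
    #include "fpu/softfloat-helpers.h"

    static void example_3nan_pick(void)
    {
        float_status st = { };
        /* Three distinguishable quiet NaNs (payloads 1, 2, 3) */
        float32 nan_a = make_float32(0x7fc00001);
        float32 nan_b = make_float32(0x7fc00002);
        float32 nan_c = make_float32(0x7fc00003);
        float32 r;

        set_float_2nan_prop_rule(float_2nan_prop_s_ab, &st);
        set_float_3nan_prop_rule(float_3nan_prop_s_cab, &st);
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);

        r = float32_muladd(nan_a, nan_b, nan_c, 0, &st);
        /*
         * No SNaN is involved, so the "s" part of s_cab does not apply and
         * the first NaN in the order c, a, b wins: r carries nan_c's payload.
         */
        (void)r;
    }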
Set the Float3NaNPropRule explicitly for loongarch, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-19-peter.maydell@linaro.org
---
target/loongarch/tcg/fpu_helper.c | 1 +
fpu/softfloat-specialize.c.inc | 2 --
2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
* case sets InvalidOp and returns the input value 'c'
*/
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
}

int ieee_ex_to_loongarch(int xcpt)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
} else {
rule = float_3nan_prop_s_cab;
}
-#elif defined(TARGET_LOONGARCH64)
- rule = float_3nan_prop_s_cab;
#elif defined(TARGET_PPC)
/*
* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
--
2.34.1
Set the Float3NaNPropRule explicitly for PPC, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-20-peter.maydell@linaro.org
---
target/ppc/cpu_init.c | 8 ++++++++
fpu/softfloat-specialize.c.inc | 6 ------
2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
*/
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+ /*
+ * NaN propagation for fused multiply-add:
+ * if fRA is a NaN return it; otherwise if fRB is a NaN return it;
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
+ * whereas QEMU labels the operands as (a * b) + c.
+ */
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->fp_status);
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->vec_status);
/*
* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
* to return an input NaN if we have one (ie c) rather than generating
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
} else {
rule = float_3nan_prop_s_cab;
}
-#elif defined(TARGET_PPC)
- /*
- * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
- */
- rule = float_3nan_prop_acb;
#elif defined(TARGET_S390X)
rule = float_3nan_prop_s_abc;
#elif defined(TARGET_SPARC)
--
2.34.1
Set the Float3NaNPropRule explicitly for s390x, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-21-peter.maydell@linaro.org
---
target/s390x/cpu.c | 1 +
fpu/softfloat-specialize.c.inc | 2 --
2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
set_float_detect_tininess(float_tininess_before_rounding,
&env->fpu_status);
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
+ set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
set_float_infzeronan_rule(float_infzeronan_dnan_always,
&env->fpu_status);
/* fall through */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
} else {
rule = float_3nan_prop_s_cab;
}
-#elif defined(TARGET_S390X)
- rule = float_3nan_prop_s_abc;
#elif defined(TARGET_SPARC)
rule = float_3nan_prop_s_cba;
#elif defined(TARGET_XTENSA)
--
2.34.1
Set the Float3NaNPropRule explicitly for SPARC, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-22-peter.maydell@linaro.org
---
target/sparc/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 --
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
* the CPU state struct so it won't get zeroed on reset.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+ /* For fused-multiply add, prefer SNaN over QNaN, then C->B->A */
+ set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
/* For inf * 0 + NaN, return the input NaN */
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);

diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
} else {
rule = float_3nan_prop_s_cab;
}
-#elif defined(TARGET_SPARC)
- rule = float_3nan_prop_s_cba;
#elif defined(TARGET_XTENSA)
if (status->use_first_nan) {
rule = float_3nan_prop_abc;
--
2.34.1
Optimize MVE arithmetic ops when we have a TCG
vector operation we can use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
---
target/arm/translate-mve.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)

Set the Float3NaNPropRule explicitly for MIPS, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-23-peter.maydell@linaro.org
---
target/mips/fpu_helper.h | 4 ++++
target/mips/msa.c | 3 +++
fpu/softfloat-specialize.c.inc | 8 +-------
3 files changed, 8 insertions(+), 7 deletions(-)
12
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-mve.c
15
--- a/target/mips/fpu_helper.h
15
+++ b/target/arm/translate-mve.c
16
+++ b/target/mips/fpu_helper.h
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
17
return do_2op(s, a, gen_helper_mve_vpsel);
18
{
19
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
20
FloatInfZeroNaNRule izn_rule;
21
+ Float3NaNPropRule nan3_rule;
22
23
/*
24
* With nan2008, SNaNs are silenced in the usual way.
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
26
*/
27
izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
28
set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
29
+ nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
30
+ set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
31
+
18
}
32
}
19
33
20
-#define DO_2OP(INSN, FN) \
34
static inline void restore_fp_status(CPUMIPSState *env)
21
+#define DO_2OP_VEC(INSN, FN, VECFN) \
35
diff --git a/target/mips/msa.c b/target/mips/msa.c
22
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
36
index XXXXXXX..XXXXXXX 100644
23
{ \
37
--- a/target/mips/msa.c
24
static MVEGenTwoOpFn * const fns[] = { \
38
+++ b/target/mips/msa.c
25
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
39
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
26
gen_helper_mve_##FN##w, \
40
set_float_2nan_prop_rule(float_2nan_prop_s_ab,
27
NULL, \
41
&env->active_tc.msa_fp_status);
28
}; \
42
29
- return do_2op(s, a, fns[a->size]); \
43
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab,
30
+ return do_2op_vec(s, a, fns[a->size], VECFN); \
44
+ &env->active_tc.msa_fp_status);
45
+
46
/* clear float_status exception flags */
47
set_float_exception_flags(0, &env->active_tc.msa_fp_status);
48
49
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
50
index XXXXXXX..XXXXXXX 100644
51
--- a/fpu/softfloat-specialize.c.inc
52
+++ b/fpu/softfloat-specialize.c.inc
53
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
31
}
54
}
32
55
33
-DO_2OP(VADD, vadd)
56
if (rule == float_3nan_prop_none) {
34
-DO_2OP(VSUB, vsub)
57
-#if defined(TARGET_MIPS)
35
-DO_2OP(VMUL, vmul)
58
- if (snan_bit_is_one(status)) {
36
+#define DO_2OP(INSN, FN) DO_2OP_VEC(INSN, FN, NULL)
59
- rule = float_3nan_prop_s_abc;
37
+
60
- } else {
38
+DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
61
- rule = float_3nan_prop_s_cab;
39
+DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
62
- }
40
+DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
63
-#elif defined(TARGET_XTENSA)
41
DO_2OP(VMULH_S, vmulhs)
64
+#if defined(TARGET_XTENSA)
42
DO_2OP(VMULH_U, vmulhu)
65
if (status->use_first_nan) {
43
DO_2OP(VRMULH_S, vrmulhs)
66
rule = float_3nan_prop_abc;
44
DO_2OP(VRMULH_U, vrmulhu)
67
} else {
45
-DO_2OP(VMAX_S, vmaxs)
46
-DO_2OP(VMAX_U, vmaxu)
47
-DO_2OP(VMIN_S, vmins)
48
-DO_2OP(VMIN_U, vminu)
49
+DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
50
+DO_2OP_VEC(VMAX_U, vmaxu, tcg_gen_gvec_umax)
51
+DO_2OP_VEC(VMIN_S, vmins, tcg_gen_gvec_smin)
52
+DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
53
DO_2OP(VABD_S, vabds)
54
DO_2OP(VABD_U, vabdu)
55
DO_2OP(VHADD_S, vhadds)
56
--
68
--
57
2.20.1
69
2.34.1
58
59
diff view generated by jsdifflib
Optimize the MVE VNEG and VABS insns by using TCG
vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
     return true;
 }
 
-static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+static bool do_1op_vec(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn,
+                       GVecGen2Fn vecfn)
 {
     TCGv_ptr qd, qm;
 
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
-    qm = mve_qreg_ptr(a->qm);
-    fn(cpu_env, qd, qm);
-    tcg_temp_free_ptr(qd);
-    tcg_temp_free_ptr(qm);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm), 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        qm = mve_qreg_ptr(a->qm);
+        fn(cpu_env, qd, qm);
+        tcg_temp_free_ptr(qd);
+        tcg_temp_free_ptr(qm);
+    }
     mve_update_eci(s);
     return true;
 }
 
-#define DO_1OP(INSN, FN) \
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+{
+    return do_1op_vec(s, a, fn, NULL);
+}
+
+#define DO_1OP_VEC(INSN, FN, VECFN) \
     static bool trans_##INSN(DisasContext *s, arg_1op *a) \
     { \
         static MVEGenOneOpFn * const fns[] = { \
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
             gen_helper_mve_##FN##w, \
             NULL, \
         }; \
-        return do_1op(s, a, fns[a->size]); \
+        return do_1op_vec(s, a, fns[a->size], VECFN); \
     }
 
+#define DO_1OP(INSN, FN) DO_1OP_VEC(INSN, FN, NULL)
+
 DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
-DO_1OP(VABS, vabs)
-DO_1OP(VNEG, vneg)
+DO_1OP_VEC(VABS, vabs, tcg_gen_gvec_abs)
+DO_1OP_VEC(VNEG, vneg, tcg_gen_gvec_neg)
 DO_1OP(VQABS, vqabs)
 DO_1OP(VQNEG, vqneg)
 DO_1OP(VMAXA, vmaxa)
-- 
2.20.1

Set the Float3NaNPropRule explicitly for xtensa, and remove the
ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-24-peter.maydell@linaro.org
---
 target/xtensa/fpu_helper.c     | 2 ++
 fpu/softfloat-specialize.c.inc | 8 --------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
     set_use_first_nan(use_first, &env->fp_status);
     set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
                              &env->fp_status);
+    set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
+                             &env->fp_status);
 }
 
 void HELPER(wur_fpu2k_fcr)(CPUXtensaState *env, uint32_t v)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
     }
 
     if (rule == float_3nan_prop_none) {
-#if defined(TARGET_XTENSA)
-        if (status->use_first_nan) {
-            rule = float_3nan_prop_abc;
-        } else {
-            rule = float_3nan_prop_cba;
-        }
-#else
         rule = float_3nan_prop_abc;
-#endif
     }
 
     assert(rule != float_3nan_prop_none);
-- 
2.34.1
Set the Float3NaNPropRule explicitly for i386. We had no
i386-specific behaviour in the old ifdef ladder, so we were using the
default "prefer a then b then c" fallback; this is actually the
correct per-the-spec handling for i386.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-25-peter.maydell@linaro.org
---
 target/i386/tcg/fpu_helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
      * there are multiple input NaNs they are selected in the order a, b, c.
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
+    set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
 }
 
 static inline uint8_t save_exception_flags(CPUX86State *env)
-- 
2.34.1
Optimize the MVE VDUP insns by using TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
     rt = load_reg(s, a->rt);
-    tcg_gen_dup_i32(a->size, rt, rt);
-    gen_helper_mve_vdup(cpu_env, qd, rt);
-    tcg_temp_free_ptr(qd);
+    if (mve_no_predication(s)) {
+        tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        tcg_gen_dup_i32(a->size, rt, rt);
+        gen_helper_mve_vdup(cpu_env, qd, rt);
+        tcg_temp_free_ptr(qd);
+    }
     tcg_temp_free_i32(rt);
     mve_update_eci(s);
     return true;
-- 
2.20.1

Set the Float3NaNPropRule explicitly for HPPA, and remove the
ifdef from pickNaNMulAdd().

HPPA is the only target that was using the default branch of the
ifdef ladder (other targets either do not use muladd or set
default_nan_mode), so we can remove the ifdef fallback entirely now
(allowing the "rule not set" case to fall into the default of the
switch statement and assert).

We add a TODO note that the HPPA rule is probably wrong; this is
not a behavioural change for this refactoring.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-26-peter.maydell@linaro.org
---
 target/hppa/fpu_helper.c       | 8 ++++++++
 fpu/softfloat-specialize.c.inc | 4 ----
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
      * HPPA does note implement a CPU reset method at all...
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+    /*
+     * TODO: The HPPA architecture reference only documents its NaN
+     * propagation rule for 2-operand operations. Testing on real hardware
+     * might be necessary to confirm whether this order for muladd is correct.
+     * Not preferring the SNaN is almost certainly incorrect as it diverges
+     * from the documented rules for 2-operand operations.
+     */
+    set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 }
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
         }
     }
 
-    if (rule == float_3nan_prop_none) {
-        rule = float_3nan_prop_abc;
-    }
-
     assert(rule != float_3nan_prop_none);
     if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
         /* We have at least one SNaN input and should prefer it */
-- 
2.34.1
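A note on the rule names used in the two patches above: the trailing
letters of a Float3NaNPropRule give the operand preference order
(abc = prefer a, then b, then c), and the "s_" prefix means a
signalling NaN is preferred over a quiet one before that order is
applied. The following is only a rough, generic sketch of such a
picker to illustrate the idea; it is not QEMU's actual
Float3NaNPropRule encoding or API, and the enum and function names
here are invented for the example.

  #include <stdbool.h>

  /* Invented names for illustration only. */
  typedef enum { PROP_ABC, PROP_CAB, PROP_CBA } NanOrder;

  /* Return the index (0 = a, 1 = b, 2 = c) of the NaN to propagate,
   * or -1 if no operand is a NaN.
   */
  int pick_nan3(NanOrder order, bool prefer_snan,
                const bool is_nan[3], const bool is_snan[3])
  {
      static const int orders[][3] = {
          [PROP_ABC] = {0, 1, 2},
          [PROP_CAB] = {2, 0, 1},
          [PROP_CBA] = {2, 1, 0},
      };
      if (prefer_snan) {
          for (int i = 0; i < 3; i++) {
              int op = orders[order][i];
              if (is_snan[op]) {
                  return op;   /* first signalling NaN in preference order */
              }
          }
      }
      for (int i = 0; i < 3; i++) {
          int op = orders[order][i];
          if (is_nan[op]) {
              return op;       /* first NaN in preference order */
          }
      }
      return -1;
  }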
The use_first_nan field in float_status was an xtensa-specific way to
select at runtime from two different NaN propagation rules.  Now that
xtensa is using the target-agnostic NaN propagation rule selection
that we've just added, we can remove use_first_nan, because there is
no longer any code that reads it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-27-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h | 5 -----
 include/fpu/softfloat-types.h   | 1 -
 target/xtensa/fpu_helper.c      | 1 -
 3 files changed, 7 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_snan_bit_is_one(bool val, float_status *status)
     status->snan_bit_is_one = val;
 }
 
-static inline void set_use_first_nan(bool val, float_status *status)
-{
-    status->use_first_nan = val;
-}
-
 static inline void set_no_signaling_nans(bool val, float_status *status)
 {
     status->no_signaling_nans = val;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
      * softfloat-specialize.inc.c)
      */
     bool snan_bit_is_one;
-    bool use_first_nan;
     bool no_signaling_nans;
     /* should overflowed results subtract re_bias to its exponent? */
     bool rebias_overflow;
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/fpu_helper.c
+++ b/target/xtensa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ static const struct {
 
 void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
 {
-    set_use_first_nan(use_first, &env->fp_status);
     set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
                              &env->fp_status);
     set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
-- 
2.34.1
Currently m68k_cpu_reset_hold() calls floatx80_default_nan(NULL)
to get the NaN bit pattern to reset the FPU registers.  This
works because it happens that our implementation of
floatx80_default_nan() doesn't actually look at the float_status
pointer except for TARGET_MIPS.  However, this isn't guaranteed,
and to be able to remove the ifdef in floatx80_default_nan()
we're going to need a real float_status here.

Rearrange m68k_cpu_reset_hold() so that we initialize env->fp_status
earlier, and thus can pass it to floatx80_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-28-peter.maydell@linaro.org
---
 target/m68k/cpu.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
     CPUState *cs = CPU(obj);
     M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
     CPUM68KState *env = cpu_env(cs);
-    floatx80 nan = floatx80_default_nan(NULL);
+    floatx80 nan;
     int i;
 
     if (mcc->parent_phases.hold) {
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
 #else
     cpu_m68k_set_sr(env, SR_S | SR_I);
 #endif
-    for (i = 0; i < 8; i++) {
-        env->fregs[i].d = nan;
-    }
-    cpu_m68k_set_fpcr(env, 0);
     /*
      * M68000 FAMILY PROGRAMMER'S REFERENCE MANUAL
      * 3.4 FLOATING-POINT INSTRUCTION DETAILS
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
      * preceding paragraph for nonsignaling NaNs.
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+
+    nan = floatx80_default_nan(&env->fp_status);
+    for (i = 0; i < 8; i++) {
+        env->fregs[i].d = nan;
+    }
+    cpu_m68k_set_fpcr(env, 0);
     env->fpsr = 0;
 
     /* TODO: We should set PC from the interrupt vector. */
-- 
2.34.1
We create our 128-bit default NaN by calling parts64_default_nan()
and then adjusting the result.  We can do the same trick for creating
the floatx80 default NaN, which lets us drop a target ifdef.

floatx80 is used only by:
 i386
 m68k
 arm nwfpe old floating-point emulation support
  (which is essentially dead, especially the parts involving floatx80)
 PPC (only in the xsrqpxp instruction, which just rounds an input
  value by converting to floatx80 and back, so will never generate
  the default NaN)

The floatx80 default NaN as currently implemented is:
 m68k: sign = 0, exp = 1...1, int = 1, frac = 1....1
 i386: sign = 1, exp = 1...1, int = 1, frac = 10...0

These are the same as the parts64_default_nan for these architectures.

This is technically a possible behaviour change for arm linux-user
nwfpe emulation, because the default NaN will now have the
sign bit clear.  But we were already generating a different floatx80
default NaN from the real kernel emulation we are supposedly
following, which appears to use an all-bits-1 value:
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L267

This won't affect the only "real" use of the nwfpe emulation, which
is ancient binaries that used it as part of the old floating point
calling convention; that only uses loads and stores of 32 and 64 bit
floats, not any of the floatx80 behaviour the original hardware had.
We also get the nwfpe float64 default NaN value wrong:
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L166
so if we ever cared about this obscure corner the right fix would be
to correct that so nwfpe used its own default-NaN setting rather
than the Arm VFP one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-29-peter.maydell@linaro.org
---
 fpu/softfloat-specialize.c.inc | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts128_silence_nan(FloatParts128 *p, float_status *status)
 floatx80 floatx80_default_nan(float_status *status)
 {
     floatx80 r;
+    /*
+     * Extrapolate from the choices made by parts64_default_nan to fill
+     * in the floatx80 format. We assume that floatx80's explicit
+     * integer bit is always set (this is true for i386 and m68k,
+     * which are the only real users of this format).
+     */
+    FloatParts64 p64;
+    parts64_default_nan(&p64, status);
 
-    /* None of the targets that have snan_bit_is_one use floatx80. */
-    assert(!snan_bit_is_one(status));
-#if defined(TARGET_M68K)
-    r.low = UINT64_C(0xFFFFFFFFFFFFFFFF);
-    r.high = 0x7FFF;
-#else
-    /* X86 */
-    r.low = UINT64_C(0xC000000000000000);
-    r.high = 0xFFFF;
-#endif
+    r.high = 0x7FFF | (p64.sign << 15);
+    r.low = (1ULL << DECOMPOSED_BINARY_POINT) | p64.frac;
     return r;
 }
 
-- 
2.34.1
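As a quick cross-check of the extrapolation described in the commit
message above, feeding the parts64 default-NaN values for m68k and
i386 through the new formula does reproduce the old hard-coded
floatx80 constants. The small standalone check below assumes
DECOMPOSED_BINARY_POINT is 63, as the new code implies; treat that as
an assumption of this sketch rather than a quoted fact.

  #include <assert.h>
  #include <stdint.h>

  #define DBP 63   /* assumed value of DECOMPOSED_BINARY_POINT */

  static uint64_t x80_low(uint64_t frac) { return (1ULL << DBP) | frac; }
  static uint16_t x80_high(int sign)     { return 0x7FFF | (sign << 15); }

  int main(void)
  {
      /* m68k parts64 default NaN: sign 0, all frac bits set */
      assert(x80_high(0) == 0x7FFF);
      assert(x80_low((1ULL << DBP) - 1) == 0xFFFFFFFFFFFFFFFFULL);
      /* i386 parts64 default NaN: sign 1, frac msb set */
      assert(x80_high(1) == 0xFFFF);
      assert(x80_low(1ULL << (DBP - 1)) == 0xC000000000000000ULL);
      return 0;
  }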
In target/loongarch's helper_fclass_s() and helper_fclass_d() we pass
a zero-initialized float_status struct to float32_is_quiet_nan() and
float64_is_quiet_nan(), with the cryptic comment "for
snan_bit_is_one".

This pattern appears to have been copied from target/riscv, where it
is used because the functions there do not have ready access to the
CPU state struct.  The comment presumably refers to the fact that the
main reason the is_quiet_nan() functions want the float_status is
because they want to know about the snan_bit_is_one config.

In the loongarch helpers, though, we have the CPU state struct
to hand.  Use the usual env->fp_status here.  This avoids our needing
to track that we need to update the initializer of the local
float_status structs when the core softfloat code adds new
options for targets to configure their behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-30-peter.maydell@linaro.org
---
 target/loongarch/tcg/fpu_helper.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_s(CPULoongArchState *env, uint64_t fj)
     } else if (float32_is_zero_or_denormal(f)) {
         return sign ? 1 << 4 : 1 << 8;
     } else if (float32_is_any_nan(f)) {
-        float_status s = { }; /* for snan_bit_is_one */
-        return float32_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
+        return float32_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
     } else {
         return sign ? 1 << 3 : 1 << 7;
     }
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_d(CPULoongArchState *env, uint64_t fj)
     } else if (float64_is_zero_or_denormal(f)) {
         return sign ? 1 << 4 : 1 << 8;
     } else if (float64_is_any_nan(f)) {
-        float_status s = { }; /* for snan_bit_is_one */
-        return float64_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
+        return float64_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
     } else {
         return sign ? 1 << 3 : 1 << 7;
     }
-- 
2.34.1
In the frem helper, we have a local float_status because we want to
execute the floatx80_div() with a custom rounding mode.  Instead of
zero-initializing the local float_status and then having to set it up
with the m68k standard behaviour (including the NaN propagation rule
and copying the rounding precision from env->fp_status), initialize
it as a complete copy of env->fp_status.  This will avoid our having
to add new code in this function for every new config knob we add
to fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-31-peter.maydell@linaro.org
---
 target/m68k/fpu_helper.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/m68k/fpu_helper.c b/target/m68k/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/fpu_helper.c
+++ b/target/m68k/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(frem)(CPUM68KState *env, FPReg *res, FPReg *val0, FPReg *val1)
 
     fp_rem = floatx80_rem(val1->d, val0->d, &env->fp_status);
     if (!floatx80_is_any_nan(fp_rem)) {
-        float_status fp_status = { };
+        /* Use local temporary fp_status to set different rounding mode */
+        float_status fp_status = env->fp_status;
         uint32_t quotient;
         int sign;
 
         /* Calculate quotient directly using round to nearest mode */
-        set_float_2nan_prop_rule(float_2nan_prop_ab, &fp_status);
         set_float_rounding_mode(float_round_nearest_even, &fp_status);
-        set_floatx80_rounding_precision(
-            get_floatx80_rounding_precision(&env->fp_status), &fp_status);
         fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);
 
         sign = extractFloatx80Sign(fp_quot.d);
-- 
2.34.1
Architecturally, for an M-profile CPU with the LOB feature the
LTPSIZE field in FPDSCR is always constant 4.  QEMU's implementation
enforces this everywhere, except that we don't check that it is true
in incoming migration data.

We're going to add code in gen_update_fp_context() which relies on
the "always 4" property.  Since this is TCG-only, we don't actually
need to be robust to bogus incoming migration data, and the effect of
it being wrong would be wrong code generation rather than a QEMU
crash; but if it did ever happen somehow it would be very difficult
to track down the cause.  Add a check so that we fail the inbound
migration if the FPDSCR.LTPSIZE value is incorrect.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
---
 target/arm/machine.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
     hw_breakpoint_update_all(cpu);
     hw_watchpoint_update_all(cpu);
 
+    /*
+     * TCG gen_update_fp_context() relies on the invariant that
+     * FPDSCR.LTPSIZE is constant 4 for M-profile with the LOB extension;
+     * forbid bogus incoming data with some other value.
+     */
+    if (arm_feature(env, ARM_FEATURE_M) && cpu_isar_feature(aa32_lob, cpu)) {
+        if (extract32(env->v7m.fpdscr[M_REG_NS],
+                      FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4 ||
+            extract32(env->v7m.fpdscr[M_REG_S],
+                      FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
+            return -1;
+        }
+    }
     if (!kvm_enabled()) {
         pmu_op_finish(&cpu->env);
     }
-- 
2.20.1

In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we do the conversion
from float64 to floatx80 using a scratch float_status, because we
don't want the conversion to affect the CPU's floating point exception
status.  Currently we use a zero-initialized float_status.  This will
get steadily more awkward as we add config knobs to float_status
that the target must initialize.  Avoid having to add any of that
configuration here by instead initializing our local float_status
from the env->fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-32-peter.maydell@linaro.org
---
 target/m68k/helper.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/m68k/helper.c b/target/m68k/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/helper.c
+++ b/target/m68k/helper.c
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_get_reg(CPUState *cs, GByteArray *mem_buf, int n)
     CPUM68KState *env = &cpu->env;
 
     if (n < 8) {
-        float_status s = {};
+        /* Use scratch float_status so any exceptions don't change CPU state */
+        float_status s = env->fp_status;
         return gdb_get_reg64(mem_buf, floatx80_to_float64(env->fregs[n].d, &s));
     }
     switch (n) {
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_set_reg(CPUState *cs, uint8_t *mem_buf, int n)
     CPUM68KState *env = &cpu->env;
 
     if (n < 8) {
-        float_status s = {};
+        /* Use scratch float_status so any exceptions don't change CPU state */
+        float_status s = env->fp_status;
         env->fregs[n].d = float64_to_floatx80(ldq_be_p(mem_buf), &s);
         return 8;
     }
-- 
2.34.1
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
use TCG vector ops when possible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
     return true;
 }
 
-static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn,
+                    GVecGen2iFn *vecfn)
 {
     TCGv_ptr qd;
     uint64_t imm;
@@ -XXX,XX +XXX,XX @@ static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
 
     imm = asimd_imm_const(a->imm, a->cmode, a->op);
 
-    qd = mve_qreg_ptr(a->qd);
-    fn(cpu_env, qd, tcg_constant_i64(imm));
-    tcg_temp_free_ptr(qd);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(MO_64, mve_qreg_offset(a->qd), mve_qreg_offset(a->qd),
+              imm, 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        fn(cpu_env, qd, tcg_constant_i64(imm));
+        tcg_temp_free_ptr(qd);
+    }
     mve_update_eci(s);
     return true;
 }
 
+static void gen_gvec_vmovi(unsigned vece, uint32_t dofs, uint32_t aofs,
+                           int64_t c, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, c);
+}
+
 static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
 {
     /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
     MVEGenOneOpImmFn *fn;
+    GVecGen2iFn *vecfn;
 
     if ((a->cmode & 1) && a->cmode < 12) {
         if (a->op) {
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
              * so the VBIC becomes a logical AND operation.
              */
             fn = gen_helper_mve_vandi;
+            vecfn = tcg_gen_gvec_andi;
         } else {
             fn = gen_helper_mve_vorri;
+            vecfn = tcg_gen_gvec_ori;
         }
     } else {
         /* There is one unallocated cmode/op combination in this space */
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
         }
         /* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
         fn = gen_helper_mve_vmovi;
+        vecfn = gen_gvec_vmovi;
     }
-    return do_1imm(s, a, fn);
+    return do_1imm(s, a, fn, vecfn);
 }
 
 static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
-- 
2.20.1

In the helper functions flcmps and flcmpd we use a scratch float_status
so that we don't change the CPU state if the comparison raises any
floating point exception flags.  Instead of zero-initializing this
scratch float_status, initialize it as a copy of env->fp_status.  This
avoids the need to explicitly initialize settings like the NaN
propagation rule or others we might add to softfloat in future.

To do this we need to pass the CPU env pointer in to the helper.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-33-peter.maydell@linaro.org
---
 target/sparc/helper.h     | 4 ++--
 target/sparc/fop_helper.c | 8 ++++----
 target/sparc/translate.c  | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/sparc/helper.h b/target/sparc/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/helper.h
+++ b/target/sparc/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(fcmpd, TCG_CALL_NO_WG, i32, env, f64, f64)
 DEF_HELPER_FLAGS_3(fcmped, TCG_CALL_NO_WG, i32, env, f64, f64)
 DEF_HELPER_FLAGS_3(fcmpq, TCG_CALL_NO_WG, i32, env, i128, i128)
 DEF_HELPER_FLAGS_3(fcmpeq, TCG_CALL_NO_WG, i32, env, i128, i128)
-DEF_HELPER_FLAGS_2(flcmps, TCG_CALL_NO_RWG_SE, i32, f32, f32)
-DEF_HELPER_FLAGS_2(flcmpd, TCG_CALL_NO_RWG_SE, i32, f64, f64)
+DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)
+DEF_HELPER_FLAGS_3(flcmpd, TCG_CALL_NO_RWG_SE, i32, env, f64, f64)
 DEF_HELPER_2(raise_exception, noreturn, env, int)
 
 DEF_HELPER_FLAGS_3(faddd, TCG_CALL_NO_WG, f64, env, f64, f64)
diff --git a/target/sparc/fop_helper.c b/target/sparc/fop_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/fop_helper.c
+++ b/target/sparc/fop_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t helper_fcmpeq(CPUSPARCState *env, Int128 src1, Int128 src2)
     return finish_fcmp(env, r, GETPC());
 }
 
-uint32_t helper_flcmps(float32 src1, float32 src2)
+uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2)
 {
     /*
      * FLCMP never raises an exception nor modifies any FSR fields.
      * Perform the comparison with a dummy fp environment.
      */
-    float_status discard = { };
+    float_status discard = env->fp_status;
     FloatRelation r;
 
     set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
@@ -XXX,XX +XXX,XX @@ uint32_t helper_flcmps(float32 src1, float32 src2)
     g_assert_not_reached();
 }
 
-uint32_t helper_flcmpd(float64 src1, float64 src2)
+uint32_t helper_flcmpd(CPUSPARCState *env, float64 src1, float64 src2)
 {
-    float_status discard = { };
+    float_status discard = env->fp_status;
     FloatRelation r;
 
     set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPs(DisasContext *dc, arg_FLCMPs *a)
 
     src1 = gen_load_fpr_F(dc, a->rs1);
     src2 = gen_load_fpr_F(dc, a->rs2);
-    gen_helper_flcmps(cpu_fcc[a->cc], src1, src2);
+    gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);
     return advance_pc(dc);
 }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPd(DisasContext *dc, arg_FLCMPd *a)
 
     src1 = gen_load_fpr_D(dc, a->rs1);
     src2 = gen_load_fpr_D(dc, a->rs2);
-    gen_helper_flcmpd(cpu_fcc[a->cc], src1, src2);
+    gen_helper_flcmpd(cpu_fcc[a->cc], tcg_env, src1, src2);
     return advance_pc(dc);
 }
 
-- 
2.34.1
In the helper_compute_fprf functions, we pass a dummy float_status
in to the is_signaling_nan() function.  This is unnecessary, because
we have convenient access to the CPU env pointer here and that
is already set up with the correct values for the snan_bit_is_one
and no_signaling_nans config settings.  is_signaling_nan() doesn't
ever update the fp_status with any exception flags, so there is
no reason not to use env->fp_status here.

Use env->fp_status instead of the dummy fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-34-peter.maydell@linaro.org
---
 target/ppc/fpu_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void helper_compute_fprf_##tp(CPUPPCState *env, tp arg) \
     } else if (tp##_is_infinity(arg)) { \
         fprf = neg ? 0x09 << FPSCR_FPRF : 0x05 << FPSCR_FPRF; \
     } else { \
-        float_status dummy = { }; /* snan_bit_is_one = 0 */ \
-        if (tp##_is_signaling_nan(arg, &dummy)) { \
+        if (tp##_is_signaling_nan(arg, &env->fp_status)) { \
             fprf = 0x00 << FPSCR_FPRF; \
         } else { \
             fprf = 0x11 << FPSCR_FPRF; \
-- 
2.34.1
From: Alexander Graf <agraf@csgraf.de>

Hvf's permission bitmap during and after dirty logging does not include
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
instruction faults once dirty logging was enabled.

Add the bit to make it work properly.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-3-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/hvf/hvf-accel-ops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
     if (on) {
         slot->flags |= HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ);
+                      HV_MEMORY_READ | HV_MEMORY_EXEC);
     /* stop tracking region*/
     } else {
         slot->flags &= ~HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ | HV_MEMORY_WRITE);
+                      HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
     }
 }
 
-- 
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Now that float_status has a bunch of fp parameters,
it is easier to copy an existing structure than create
one from scratch.  Begin by copying the structure that
corresponds to the FPSR and make only the adjustments
required for BFloat16 semantics.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/vec_helper.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ bool is_ebf(CPUARMState *env, float_status *statusp, float_status *oddstatusp)
      * no effect on AArch32 instructions.
      */
     bool ebf = is_a64(env) && env->vfp.fpcr & FPCR_EBF;
-    *statusp = (float_status){
-        .tininess_before_rounding = float_tininess_before_rounding,
-        .float_rounding_mode = float_round_to_odd_inf,
-        .flush_to_zero = true,
-        .flush_inputs_to_zero = true,
-        .default_nan_mode = true,
-    };
+
+    *statusp = env->vfp.fp_status;
+    set_default_nan_mode(true, statusp);
 
     if (ebf) {
-        float_status *fpst = &env->vfp.fp_status;
-        set_flush_to_zero(get_flush_to_zero(fpst), statusp);
-        set_flush_inputs_to_zero(get_flush_inputs_to_zero(fpst), statusp);
-        set_float_rounding_mode(get_float_rounding_mode(fpst), statusp);
-
         /* EBF=1 needs to do a step with round-to-odd semantics */
         *oddstatusp = *statusp;
         set_float_rounding_mode(float_round_to_odd, oddstatusp);
+    } else {
+        set_flush_to_zero(true, statusp);
+        set_flush_inputs_to_zero(true, statusp);
+        set_float_rounding_mode(float_round_to_odd_inf, statusp);
     }
-
     return ebf;
 }
 
-- 
2.34.1
When not predicating, implement the MVE bitwise logical insns
directly using TCG vector operations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
---
 target/arm/translate-mve.c | 51 +++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr mve_qreg_ptr(unsigned reg)
     return ret;
 }
 
+static bool mve_no_predication(DisasContext *s)
+{
+    /*
+     * Return true if we are executing the entire MVE instruction
+     * with no predication or partial-execution, and so we can safely
+     * use an inline TCG vector implementation.
+     */
+    return s->eci == 0 && s->mve_no_pred;
+}
+
 static bool mve_check_qreg_bank(DisasContext *s, int qmask)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
     return do_1op(s, a, fns[a->size]);
 }
 
-static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
+static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
+                       GVecGen3Fn *vecfn)
 {
     TCGv_ptr qd, qn, qm;
 
@@ -XXX,XX +XXX,XX @@ static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
         return true;
     }
 
-    qd = mve_qreg_ptr(a->qd);
-    qn = mve_qreg_ptr(a->qn);
-    qm = mve_qreg_ptr(a->qm);
-    fn(cpu_env, qd, qn, qm);
-    tcg_temp_free_ptr(qd);
-    tcg_temp_free_ptr(qn);
-    tcg_temp_free_ptr(qm);
+    if (vecfn && mve_no_predication(s)) {
+        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
+              mve_qreg_offset(a->qm), 16, 16);
+    } else {
+        qd = mve_qreg_ptr(a->qd);
+        qn = mve_qreg_ptr(a->qn);
+        qm = mve_qreg_ptr(a->qm);
+        fn(cpu_env, qd, qn, qm);
+        tcg_temp_free_ptr(qd);
+        tcg_temp_free_ptr(qn);
+        tcg_temp_free_ptr(qm);
+    }
     mve_update_eci(s);
     return true;
 }
 
-#define DO_LOGIC(INSN, HELPER) \
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn *fn)
+{
+    return do_2op_vec(s, a, fn, NULL);
+}
+
+#define DO_LOGIC(INSN, HELPER, VECFN) \
     static bool trans_##INSN(DisasContext *s, arg_2op *a) \
     { \
-        return do_2op(s, a, HELPER); \
+        return do_2op_vec(s, a, HELPER, VECFN); \
     }
 
-DO_LOGIC(VAND, gen_helper_mve_vand)
-DO_LOGIC(VBIC, gen_helper_mve_vbic)
-DO_LOGIC(VORR, gen_helper_mve_vorr)
-DO_LOGIC(VORN, gen_helper_mve_vorn)
-DO_LOGIC(VEOR, gen_helper_mve_veor)
+DO_LOGIC(VAND, gen_helper_mve_vand, tcg_gen_gvec_and)
+DO_LOGIC(VBIC, gen_helper_mve_vbic, tcg_gen_gvec_andc)
+DO_LOGIC(VORR, gen_helper_mve_vorr, tcg_gen_gvec_or)
+DO_LOGIC(VORN, gen_helper_mve_vorn, tcg_gen_gvec_orc)
+DO_LOGIC(VEOR, gen_helper_mve_veor, tcg_gen_gvec_xor)
 
 static bool trans_VPSEL(DisasContext *s, arg_2op *a)
 {
-- 
2.20.1

Currently we hardcode the default NaN value in parts64_default_nan()
using a compile-time ifdef ladder.  This is awkward for two cases:
 * for single-QEMU-binary we can't hard-code target-specifics like this
 * for Arm FEAT_AFP the default NaN value depends on FPCR.AH
   (specifically the sign bit is different)

Add a field to float_status to specify the default NaN value; fall
back to the old ifdef behaviour if these are not set.

The default NaN value is specified by setting a uint8_t to a
pattern corresponding to the sign and upper fraction parts of
the NaN; the lower bits of the fraction are set from bit 0 of
the pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-35-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h | 11 +++++++
 include/fpu/softfloat-types.h   | 10 ++++++
 fpu/softfloat-specialize.c.inc  | 55 ++++++++++++++++++++-------------
 3 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
     status->float_infzeronan_rule = rule;
 }
 
+static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
+                                                 float_status *status)
+{
+    status->default_nan_pattern = dnan_pattern;
+}
+
 static inline void set_flush_to_zero(bool val, float_status *status)
 {
     status->flush_to_zero = val;
@@ -XXX,XX +XXX,XX @@ static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status
     return status->float_infzeronan_rule;
 }
 
+static inline uint8_t get_float_default_nan_pattern(float_status *status)
+{
+    return status->default_nan_pattern;
+}
+
 static inline bool get_flush_to_zero(float_status *status)
 {
     return status->flush_to_zero;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
     /* should denormalised inputs go to zero and set the input_denormal flag? */
     bool flush_inputs_to_zero;
     bool default_nan_mode;
+    /*
+     * The pattern to use for the default NaN. Here the high bit specifies
+     * the default NaN's sign bit, and bits 6..0 specify the high bits of the
+     * fractional part. The low bits of the fractional part are copies of bit 0.
+     * The exponent of the default NaN is (as for any NaN) always all 1s.
+     * Note that a value of 0 here is not a valid NaN. The target must set
+     * this to the correct non-zero value, or we will assert when trying to
+     * create a default NaN.
+     */
+    uint8_t default_nan_pattern;
     /*
      * The flags below are not used on all specializations and may
      * constant fold away (see snan_bit_is_one()/no_signalling_nans() in
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 {
     bool sign = 0;
     uint64_t frac;
+    uint8_t dnan_pattern = status->default_nan_pattern;
 
+    if (dnan_pattern == 0) {
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
-    /* !snan_bit_is_one, set all bits */
-    frac = (1ULL << DECOMPOSED_BINARY_POINT) - 1;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
+        /* Sign bit clear, all frac bits set */
+        dnan_pattern = 0b01111111;
+#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
     || defined(TARGET_MICROBLAZE)
-    /* !snan_bit_is_one, set sign and msb */
-    frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
-    sign = 1;
+        /* Sign bit set, most significant frac bit set */
+        dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
-    /* snan_bit_is_one, set msb-1. */
-    frac = 1ULL << (DECOMPOSED_BINARY_POINT - 2);
+        /* Sign bit clear, msb-1 frac bit set */
+        dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
-    sign = 1;
-    frac = ~0ULL;
+        /* Sign bit set, all frac bits set. */
+        dnan_pattern = 0b11111111;
 #else
-    /*
-     * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
-     * S390, SH4, TriCore, and Xtensa. Our other supported targets
-     * do not have floating-point.
-     */
-    if (snan_bit_is_one(status)) {
-        /* set all bits other than msb */
-        frac = (1ULL << (DECOMPOSED_BINARY_POINT - 1)) - 1;
-    } else {
-        /* set msb */
-        frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
-    }
+        /*
+         * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
+         * S390, SH4, TriCore, and Xtensa. Our other supported targets
+         * do not have floating-point.
+         */
+        if (snan_bit_is_one(status)) {
+            /* sign bit clear, set all frac bits other than msb */
+            dnan_pattern = 0b00111111;
+        } else {
+            /* sign bit clear, set frac msb */
+            dnan_pattern = 0b01000000;
+        }
 #endif
+    }
+    assert(dnan_pattern != 0);
+
+    sign = dnan_pattern >> 7;
+    /*
+     * Place default_nan_pattern [6:0] into bits [62:56],
+     * and replecate bit [0] down into [55:0]
+     */
+    frac = deposit64(0, DECOMPOSED_BINARY_POINT - 7, 7, dnan_pattern);
+    frac = deposit64(frac, 0, DECOMPOSED_BINARY_POINT - 7, -(dnan_pattern & 1));
 
     *p = (FloatParts64) {
         .cls = float_class_qnan,
-- 
2.34.1
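The pattern encoding described in the commit message above maps onto
the IEEE double format in a straightforward way. The following is a
minimal standalone sketch (not QEMU code; it works directly on the
float64 bit pattern rather than going via FloatParts64) of how an
8-bit default-NaN pattern expands:

  #include <stdint.h>
  #include <stdio.h>

  /* Expand an 8-bit default-NaN pattern into a float64 bit pattern:
   * bit 7 = sign, bits 6..0 = top fraction bits, and the remaining
   * fraction bits are copies of bit 0 of the pattern.
   */
  static uint64_t dnan_pattern_to_float64(uint8_t pattern)
  {
      uint64_t sign = (uint64_t)(pattern >> 7) << 63;
      uint64_t exp  = 0x7ffULL << 52;                   /* all-ones exponent */
      uint64_t frac = (uint64_t)(pattern & 0x7f) << 45; /* pattern[6:0] -> frac[51:45] */
      if (pattern & 1) {
          frac |= (1ULL << 45) - 1;                     /* replicate bit 0 downwards */
      }
      return sign | exp | frac;
  }

  int main(void)
  {
      /* 0b01000000: the usual "sign clear, frac msb set" default NaN */
      printf("0x%016llx\n", (unsigned long long)dnan_pattern_to_float64(0x40));
      /* 0b01111111: the all-ones-fraction NaN used by SPARC and m68k */
      printf("0x%016llx\n", (unsigned long long)dnan_pattern_to_float64(0x7f));
      return 0;
  }

With this encoding, 0b01000000 produces 0x7ff8000000000000 and
0b01111111 produces 0x7fffffffffffffff, matching the comments in the
ifdef ladder being replaced.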
Set the default NaN pattern explicitly for the tests/fp code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-36-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c     | 1 +
 tests/fp/fp-test-log2.c | 1 +
 tests/fp/fp-test.c      | 1 +
 3 files changed, 3 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
+    set_float_default_nan_pattern(0b01000000, &soft_status);
 
     f = bench_funcs[operation][precision];
     g_assert(f);
diff --git a/tests/fp/fp-test-log2.c b/tests/fp/fp-test-log2.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test-log2.c
+++ b/tests/fp/fp-test-log2.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
     int i;
 
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_default_nan_pattern(0b01000000, &qsf);
     set_float_rounding_mode(float_round_nearest_even, &qsf);
 
     test.d = 0.0;
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
+    set_float_default_nan_pattern(0b01000000, &qsf);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
 
     genCases_setLevel(test_level);
-- 
2.34.1
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-37-peter.maydell@linaro.org
---
 target/microblaze/cpu.c        | 2 ++
 fpu/softfloat-specialize.c.inc | 3 +--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj, ResetType type)
      * this architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN: sign bit set, most significant frac bit set */
+    set_float_default_nan_pattern(0b11000000, &env->fp_status);
 
 #if defined(CONFIG_USER_ONLY)
     /* start in user mode with interrupts enabled. */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
-    || defined(TARGET_MICROBLAZE)
+#elif defined(TARGET_I386) || defined(TARGET_X86_64)
         /* Sign bit set, most significant frac bit set */
         dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
-- 
2.34.1
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-38-peter.maydell@linaro.org
---
 target/i386/tcg/fpu_helper.c   | 4 ++++
 fpu/softfloat-specialize.c.inc | 3 ---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
     set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
+    /* Default NaN: sign bit set, most significant frac bit set */
+    set_float_default_nan_pattern(0b11000000, &env->fp_status);
+    set_float_default_nan_pattern(0b11000000, &env->mmx_status);
+    set_float_default_nan_pattern(0b11000000, &env->sse_status);
 }
 
 static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64)
-        /* Sign bit set, most significant frac bit set */
-        dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
         /* Sign bit clear, msb-1 frac bit set */
         dnan_pattern = 0b00100000;
-- 
2.34.1
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-39-peter.maydell@linaro.org
---
 target/hppa/fpu_helper.c       | 2 ++
 fpu/softfloat-specialize.c.inc | 3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
     set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    /* Default NaN: sign bit clear, msb-1 frac bit set */
+    set_float_default_nan_pattern(0b00100000, &env->fp_status);
 }
 
 void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_HPPA)
-        /* Sign bit clear, msb-1 frac bit set */
-        dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
         /* Sign bit set, all frac bits set. */
         dnan_pattern = 0b11111111;
-- 
2.34.1
Set the default NaN pattern explicitly for the alpha target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-40-peter.maydell@linaro.org
---
 target/alpha/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_initfn(Object *obj)
      * operand in Fa.  That is float_2nan_prop_ba.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN: sign bit clear, msb frac bit set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 #if defined(CONFIG_USER_ONLY)
     env->flags = ENV_FLAG_PS_USER | ENV_FLAG_FEN;
     cpu_alpha_store_fpcr(env, (uint64_t)(FPCR_INVD | FPCR_DZED | FPCR_OVFD
-- 
2.34.1
Set the default NaN pattern explicitly for the arm target.
This includes setting it for the old linux-user nwfpe emulation.
For nwfpe, our default doesn't match the real kernel, but we
avoid making a behaviour change in this commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
---
 linux-user/arm/nwfpe/fpa11.c | 5 +++++
 target/arm/cpu.c             | 2 ++
 2 files changed, 7 insertions(+)

diff --git a/linux-user/arm/nwfpe/fpa11.c b/linux-user/arm/nwfpe/fpa11.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/nwfpe/fpa11.c
+++ b/linux-user/arm/nwfpe/fpa11.c
@@ -XXX,XX +XXX,XX @@ void resetFPA11(void)
      * this late date.
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &fpa11->fp_status);
+    /*
+     * Use the same default NaN value as Arm VFP. This doesn't match
+     * the Linux kernel's nwfpe emulation, which uses an all-1s value.
+     */
+    set_float_default_nan_pattern(0b01000000, &fpa11->fp_status);
 }
 
 void SetRoundingMode(const unsigned int opcode)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
  * the pseudocode function the arguments are in the order c, a, b.
  * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
  *   and the input NaN if it is signalling
+ * * Default NaN has sign bit clear, msb frac bit set
  */
 static void arm_set_default_fp_behaviours(float_status *s)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
+    set_float_default_nan_pattern(0b01000000, s);
 }
 
 static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
-- 
2.34.1

Now that we have working system register sync, we push more target CPU
properties into the virtual machine. That might be useful in some
situations, but is not the typical case that users want.

So let's add a -cpu host option that allows them to explicitly pass all
CPU capabilities of their host CPU into the guest.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-7-agraf@csgraf.de
[PMM: drop unnecessary #include line from .h file]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  2 +
 target/arm/hvf_arm.h | 18 +++++++++
 target/arm/kvm_arm.h |  2 -
 target/arm/cpu.c     | 13 ++++--
 target/arm/hvf/hvf.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 124 insertions(+), 6 deletions(-)
 create mode 100644 target/arm/hvf_arm.h

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_ARM_CPU
 
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
+
 #define cpu_signal_handler cpu_arm_signal_handler
 #define cpu_list arm_cpu_list
 
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hvf_arm.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
+ *
+ * Copyright (c) 2021 Alexander Graf
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef QEMU_HVF_ARM_H
+#define QEMU_HVF_ARM_H
+
+#include "cpu.h"
+
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
+
+#endif
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
  */
 void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
 
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
-
 /**
  * ARMHostCPUFeatures: information about the host CPU (identified
  * by asking the host kernel)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/tcg.h"
 #include "sysemu/hw_accel.h"
 #include "kvm_arm.h"
+#include "hvf_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
 
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
      * this is the first point where we can report it.
      */
     if (cpu->host_cpu_probe_failed) {
-        if (!kvm_enabled()) {
-            error_setg(errp, "The 'host' CPU type can only be used with KVM");
+        if (!kvm_enabled() && !hvf_enabled()) {
+            error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
         } else {
             error_setg(errp, "Failed to retrieve host CPU features");
         }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #endif /* CONFIG_TCG */
 }
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
 static void arm_host_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
 
+#ifdef CONFIG_KVM
     kvm_arm_set_cpu_features_from_host(cpu);
     if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
         aarch64_add_sve_properties(obj);
     }
+#else
+    hvf_arm_set_cpu_features_from_host(cpu);
+#endif
     arm_cpu_post_init(obj);
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
 {
     type_register_static(&arm_cpu_type_info);
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     type_register_static(&host_arm_cpu_type_info);
 #endif
 }
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
 #include "sysemu/hw_accel.h"
+#include "hvf_arm.h"
 
 #include <mach/mach_time.h>
 
@@ -XXX,XX +XXX,XX @@ typedef struct HVFVTimer {
 
 static HVFVTimer vtimer;
 
+typedef struct ARMHostCPUFeatures {
+    ARMISARegisters isar;
+    uint64_t features;
+    uint64_t midr;
+    uint32_t reset_sctlr;
+    const char *dtb_compatible;
+} ARMHostCPUFeatures;
+
+static ARMHostCPUFeatures arm_host_cpu_features;
+
 struct hvf_reg_match {
     int reg;
     uint64_t offset;
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
     return val;
 }
 
+static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
+{
+    ARMISARegisters host_isar = {};
+    const struct isar_regs {
+        int reg;
+        uint64_t *val;
+    } regs[] = {
+        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
168
+ { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
169
+ { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
170
+ { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
171
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
172
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
173
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
174
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
175
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
176
+ };
177
+ hv_vcpu_t fd;
178
+ hv_return_t r = HV_SUCCESS;
179
+ hv_vcpu_exit_t *exit;
180
+ int i;
181
+
182
+ ahcf->dtb_compatible = "arm,arm-v8";
183
+ ahcf->features = (1ULL << ARM_FEATURE_V8) |
184
+ (1ULL << ARM_FEATURE_NEON) |
185
+ (1ULL << ARM_FEATURE_AARCH64) |
186
+ (1ULL << ARM_FEATURE_PMU) |
187
+ (1ULL << ARM_FEATURE_GENERIC_TIMER);
188
+
189
+ /* We set up a small vcpu to extract host registers */
190
+
191
+ if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
192
+ return false;
193
+ }
194
+
195
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
196
+ r |= hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val);
197
+ }
198
+ r |= hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr);
199
+ r |= hv_vcpu_destroy(fd);
200
+
201
+ ahcf->isar = host_isar;
202
+
203
+ /*
204
+ * A scratch vCPU returns SCTLR 0, so let's fill our default with the M1
205
+ * boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97
206
+ */
207
+ ahcf->reset_sctlr = 0x30100180;
208
+ /*
209
+ * SPAN is disabled by default when SCTLR.SPAN=1. To improve compatibility,
210
+ * let's disable it on boot and then allow guest software to turn it on by
211
+ * setting it to 0.
212
+ */
213
+ ahcf->reset_sctlr |= 0x00800000;
214
+
215
+ /* Make sure we don't advertise AArch32 support for EL0/EL1 */
216
+ if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
217
+ return false;
218
+ }
219
+
220
+ return r == HV_SUCCESS;
221
+}
222
+
223
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
224
+{
225
+ if (!arm_host_cpu_features.dtb_compatible) {
226
+ if (!hvf_enabled() ||
227
+ !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
228
+ /*
229
+ * We can't report this error yet, so flag that we need to
230
+ * in arm_cpu_realizefn().
231
+ */
232
+ cpu->host_cpu_probe_failed = true;
233
+ return;
234
+ }
235
+ }
236
+
237
+ cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
238
+ cpu->isar = arm_host_cpu_features.isar;
239
+ cpu->env.features = arm_host_cpu_features.features;
240
+ cpu->midr = arm_host_cpu_features.midr;
241
+ cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
242
+}
243
+
244
void hvf_arch_vcpu_destroy(CPUState *cpu)
245
{
246
}
247
--
50
--
248
2.20.1
51
2.34.1
249
250
Set the default NaN pattern explicitly for loongarch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-42-peter.maydell@linaro.org
---
 target/loongarch/tcg/fpu_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
+    /* Default NaN: sign bit clear, msb frac bit set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 int ieee_ex_to_loongarch(int xcpt)
--
2.34.1
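
For readers new to this API: the 8-bit argument packs the default NaN's sign
bit and its top seven fraction bits, and the pattern's low bit is replicated
into the remaining fraction bits (this is how "all frac bits set" patterns
work). A rough sketch of the expansion for float64, following the
parts64_default_nan() logic elsewhere in this series; the helper name is made
up for illustration and is not code from this patch:

    static uint64_t dnan_pattern_to_float64_bits(uint8_t dnan_pattern)
    {
        uint64_t sign = dnan_pattern >> 7;
        /* pattern[6:0] become fraction bits [51:45]... */
        uint64_t frac = (uint64_t)(dnan_pattern & 0x7f) << (52 - 7);
        /* ...and pattern[0] is replicated into fraction bits [44:0] */
        if (dnan_pattern & 1) {
            frac |= (1ULL << (52 - 7)) - 1;
        }
        return (sign << 63) | (0x7ffULL << 52) | frac;
    }

So 0b01000000 produces 0x7ff8000000000000, the usual "quiet NaN, zero
payload" value that Arm-style targets expect.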
Set the default NaN pattern explicitly for m68k.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-43-peter.maydell@linaro.org
---
 target/m68k/cpu.c              | 2 ++
 fpu/softfloat-specialize.c.inc | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
      * preceding paragraph for nonsignaling NaNs.
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+    /* Default NaN: sign bit clear, all frac bits set */
+    set_float_default_nan_pattern(0b01111111, &env->fp_status);

     nan = floatx80_default_nan(&env->fp_status);
     for (i = 0; i < 8; i++) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint8_t dnan_pattern = status->default_nan_pattern;

     if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC) || defined(TARGET_M68K)
+#if defined(TARGET_SPARC)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
 #elif defined(TARGET_HEXAGON)
--
2.34.1
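
For reference, 0b01111111 is the "sign clear, all fraction bits set" layout
that the removed TARGET_M68K branch of the ifdef used to hard-code, so the
produced values should be unchanged; assuming the usual IEEE encodings they
come out as (illustrative, not taken from the patch):

    float32: 0x7fffffff
    float64: 0x7fffffffffffffff

The same pattern also drives floatx80_default_nan(), which the reset code
shown above uses to fill the eight FP data registers.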
Set the default NaN pattern explicitly for MIPS. Note that this
is our only target which currently changes the default NaN
at runtime (which it was previously doing indirectly when it
changed the snan_bit_is_one setting).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-44-peter.maydell@linaro.org
---
 target/mips/fpu_helper.h | 7 +++++++
 target/mips/msa.c        | 3 +++
 2 files changed, 10 insertions(+)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
     set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
     nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
     set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
+    /*
+     * With nan2008, the default NaN value has the sign bit clear and the
+     * frac msb set; with the older mode, the sign bit is clear, and all
+     * frac bits except the msb are set.
+     */
+    set_float_default_nan_pattern(nan2008 ? 0b01000000 : 0b00111111,
+                                  &env->active_fpu.fp_status);

 }

diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
     /* Inf * 0 + NaN returns the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never,
                               &env->active_tc.msa_fp_status);
+    /* Default NaN: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000,
+                                  &env->active_tc.msa_fp_status);
 }
--
2.34.1
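
Concretely, the two patterns selected at runtime correspond to the historical
MIPS default NaNs; assuming the usual IEEE single-precision encoding they
work out as (illustrative values, not from the patch):

    nan2008 == 1:  0b01000000  ->  float32 0x7fc00000  (frac msb set)
    nan2008 == 0:  0b00111111  ->  float32 0x7fbfffff  (all frac bits but msb)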
Set the default NaN pattern explicitly for openrisc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-45-peter.maydell@linaro.org
---
 target/openrisc/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &cpu->env.fp_status);

+    /* Default NaN: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &cpu->env.fp_status);

 #ifndef CONFIG_USER_ONLY
     cpu->env.picmr = 0x00000000;
--
2.34.1
Set the default NaN pattern explicitly for ppc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-46-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);

+    /* Default NaN: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
+    set_float_default_nan_pattern(0b01000000, &env->vec_status);
+
     for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
         ppc_spr_t *spr = &env->spr_cb[i];

--
2.34.1
Set the default NaN pattern explicitly for sh4. Note that sh4
is one of the only three targets (the others being HPPA and
sometimes MIPS) that has snan_bit_is_one set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-47-peter.maydell@linaro.org
---
 target/sh4/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_reset_hold(Object *obj, ResetType type)
     set_flush_to_zero(1, &env->fp_status);
 #endif
     set_default_nan_mode(1, &env->fp_status);
+    /* sign bit clear, set all frac bits other than msb */
+    set_float_default_nan_pattern(0b00111111, &env->fp_status);
 }

 static void superh_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
--
2.34.1
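
The msb-clear pattern follows from snan_bit_is_one: on sh4 a set fraction msb
marks a signalling NaN, so the default (quiet) NaN has to keep that bit
clear. Assuming the usual IEEE encodings, the resulting values are
(illustrative, not stated in the patch):

    pattern 0b00111111  ->  float32 0x7fbfffff, float64 0x7ff7ffffffffffff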
Set the default NaN pattern explicitly for rx.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-48-peter.maydell@linaro.org
---
 target/rx/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj, ResetType type)
      * then prefer dest over source", which is float_2nan_prop_s_ab.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN value: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 static ObjectClass *rx_cpu_class_by_name(const char *cpu_model)
--
2.34.1
Set the default NaN pattern explicitly for s390x.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-49-peter.maydell@linaro.org
---
 target/s390x/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
         set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
         set_float_infzeronan_rule(float_infzeronan_dnan_always,
                                   &env->fpu_status);
+        /* Default NaN value: sign bit clear, frac msb set */
+        set_float_default_nan_pattern(0b01000000, &env->fpu_status);
         /* fall through */
     case RESET_TYPE_S390_CPU_NORMAL:
         env->psw.mask &= ~PSW_MASK_RI;
--
2.34.1
Set the default NaN pattern explicitly for SPARC, and remove
the ifdef from parts64_default_nan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-50-peter.maydell@linaro.org
---
 target/sparc/cpu.c             | 2 ++
 fpu/softfloat-specialize.c.inc | 5 +----
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
     set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    /* Default NaN value: sign bit clear, all frac bits set */
+    set_float_default_nan_pattern(0b01111111, &env->fp_status);

     cpu_exec_realizefn(cs, &local_err);
     if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint8_t dnan_pattern = status->default_nan_pattern;

     if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC)
-        /* Sign bit clear, all frac bits set */
-        dnan_pattern = 0b01111111;
-#elif defined(TARGET_HEXAGON)
+#if defined(TARGET_HEXAGON)
         /* Sign bit set, all frac bits set. */
         dnan_pattern = 0b11111111;
 #else
--
2.34.1
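
Since this only moves SPARC's value out of the ifdef, a quick way to convince
yourself the behaviour is unchanged is to compare the generated default NaN
against the old hard-coded pattern. A hypothetical check, not part of the
series and assuming the float64_val() raw-bits accessor:

    float_status st = { 0 };
    set_float_default_nan_pattern(0b01111111, &st);
    assert(float64_val(float64_default_nan(&st)) == 0x7fffffffffffffffULL);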
1
Currently gen_jmp_tb() assumes that if it is called then the jump it
1
Set the default NaN pattern explicitly for xtensa.
2
is handling is the only reason that we might be trying to end the TB,
3
so it will use goto_tb if it can. This is usually the case: mostly
4
"we did something that means we must end the TB" happens on a
5
non-branch instruction. However, there are cases where we decide
6
early in handling an instruction that we need to end the TB and
7
return to the main loop, and then the insn is a complex one that
8
involves gen_jmp_tb(). For instance, for M-profile FP instructions,
9
in gen_preserve_fp_state() which is called from vfp_access_check() we
10
want to force an exit to the main loop if lazy state preservation is
11
active and we are in icount mode.
12
13
Make gen_jmp_tb() look at the current value of is_jmp, and only use
14
goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
15
2
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
5
Message-id: 20241202131347.498124-51-peter.maydell@linaro.org
19
---
6
---
20
target/arm/translate.c | 34 +++++++++++++++++++++++++++++++++-
7
target/xtensa/cpu.c | 2 ++
21
1 file changed, 33 insertions(+), 1 deletion(-)
8
1 file changed, 2 insertions(+)
22
9
23
diff --git a/target/arm/translate.c b/target/arm/translate.c
10
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
24
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/translate.c
12
--- a/target/xtensa/cpu.c
26
+++ b/target/arm/translate.c
13
+++ b/target/xtensa/cpu.c
27
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
14
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
28
/* An indirect jump so that we still trigger the debug exception. */
15
/* For inf * 0 + NaN, return the input NaN */
29
gen_set_pc_im(s, dest);
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
30
s->base.is_jmp = DISAS_JUMP;
17
set_no_signaling_nans(!dfpu, &env->fp_status);
31
- } else {
18
+ /* Default NaN value: sign bit clear, set frac msb */
32
+ return;
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
33
+ }
20
xtensa_use_first_nan(env, !dfpu);
34
+ switch (s->base.is_jmp) {
35
+ case DISAS_NEXT:
36
+ case DISAS_TOO_MANY:
37
+ case DISAS_NORETURN:
38
+ /*
39
+ * The normal case: just go to the destination TB.
40
+ * NB: NORETURN happens if we generate code like
41
+ * gen_brcondi(l);
42
+ * gen_jmp();
43
+ * gen_set_label(l);
44
+ * gen_jmp();
45
+ * on the second call to gen_jmp().
46
+ */
47
gen_goto_tb(s, tbno, dest);
48
+ break;
49
+ case DISAS_UPDATE_NOCHAIN:
50
+ case DISAS_UPDATE_EXIT:
51
+ /*
52
+ * We already decided we're leaving the TB for some other reason.
53
+ * Avoid using goto_tb so we really do exit back to the main loop
54
+ * and don't chain to another TB.
55
+ */
56
+ gen_set_pc_im(s, dest);
57
+ gen_goto_ptr();
58
+ s->base.is_jmp = DISAS_NORETURN;
59
+ break;
60
+ default:
61
+ /*
62
+ * We shouldn't be emitting code for a jump and also have
63
+ * is_jmp set to one of the special cases like DISAS_SWI.
64
+ */
65
+ g_assert_not_reached();
66
}
67
}
21
}
68
22
69
--
23
--
70
2.20.1
24
2.34.1
71
72
1
Optimize the MVE VMVN insn by using TCG vector ops when possible.
1
Set the default NaN pattern explicitly for hexagon.
2
Remove the ifdef from parts64_default_nan(); the only
3
remaining unconverted targets all use the default case.
2
4
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
7
Message-id: 20241202131347.498124-52-peter.maydell@linaro.org
6
---
8
---
7
target/arm/translate-mve.c | 2 +-
9
target/hexagon/cpu.c | 2 ++
8
1 file changed, 1 insertion(+), 1 deletion(-)
10
fpu/softfloat-specialize.c.inc | 5 -----
11
2 files changed, 2 insertions(+), 5 deletions(-)
9
12
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
13
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
11
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-mve.c
15
--- a/target/hexagon/cpu.c
13
+++ b/target/arm/translate-mve.c
16
+++ b/target/hexagon/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
17
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_reset_hold(Object *obj, ResetType type)
15
18
16
static bool trans_VMVN(DisasContext *s, arg_1op *a)
19
set_default_nan_mode(1, &env->fp_status);
17
{
20
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
18
- return do_1op(s, a, gen_helper_mve_vmvn);
21
+ /* Default NaN value: sign bit set, all frac bits set */
19
+ return do_1op_vec(s, a, gen_helper_mve_vmvn, tcg_gen_gvec_not);
22
+ set_float_default_nan_pattern(0b11111111, &env->fp_status);
20
}
23
}
21
24
22
static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
25
static void hexagon_cpu_disas_set_info(CPUState *s, disassemble_info *info)
26
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
27
index XXXXXXX..XXXXXXX 100644
28
--- a/fpu/softfloat-specialize.c.inc
29
+++ b/fpu/softfloat-specialize.c.inc
30
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
31
uint8_t dnan_pattern = status->default_nan_pattern;
32
33
if (dnan_pattern == 0) {
34
-#if defined(TARGET_HEXAGON)
35
- /* Sign bit set, all frac bits set. */
36
- dnan_pattern = 0b11111111;
37
-#else
38
/*
39
* This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
40
* S390, SH4, TriCore, and Xtensa. Our other supported targets
41
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
42
/* sign bit clear, set frac msb */
43
dnan_pattern = 0b01000000;
44
}
45
-#endif
46
}
47
assert(dnan_pattern != 0);
48
23
--
49
--
24
2.20.1
50
2.34.1
25
26
Set the default NaN pattern explicitly for riscv.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-53-peter.maydell@linaro.org
---
 target/riscv/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj, ResetType type)
     cs->exception_index = RISCV_EXCP_NONE;
     env->load_res = -1;
     set_default_nan_mode(1, &env->fp_status);
+    /* Default NaN value: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
     env->vill = true;

 #ifndef CONFIG_USER_ONLY
--
2.34.1
1
Our current codegen for MVE always calls out to helper functions,
1
Set the default NaN pattern explicitly for tricore.
2
because some byte lanes might be predicated. The common case is that
3
in fact there is no predication active and all lanes should be
4
updated together, so we can produce better code by detecting that and
5
using the TCG generic vector infrastructure.
6
7
Add a TB flag that is set when we can guarantee that there is no
8
active MVE predication, and a bool in the DisasContext. Subsequent
9
patches will use this flag to generate improved code for some
10
instructions.
11
12
In most cases when the predication state changes we simply end the TB
13
after that instruction. For the code called from vfp_access_check()
14
that handles lazy state preservation and creating a new FP context,
15
we can usually avoid having to try to end the TB because luckily the
16
new value of the flag following the register changes in those
17
sequences doesn't depend on any runtime decisions. We do have to end
18
the TB if the guest has enabled lazy FP state preservation but not
19
automatic state preservation, but this is an odd corner case that is
20
not going to be common in real-world code.
21
2
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
5
Message-id: 20241202131347.498124-54-peter.maydell@linaro.org
25
---
6
---
26
target/arm/cpu.h | 4 +++-
7
target/tricore/helper.c | 2 ++
27
target/arm/translate.h | 2 ++
8
1 file changed, 2 insertions(+)
28
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
29
target/arm/translate-m-nocp.c | 8 +++++++-
30
target/arm/translate-mve.c | 13 ++++++++++++-
31
target/arm/translate-vfp.c | 33 +++++++++++++++++++++++++++------
32
target/arm/translate.c | 8 ++++++++
33
7 files changed, 92 insertions(+), 9 deletions(-)
34
9
35
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
10
diff --git a/target/tricore/helper.c b/target/tricore/helper.c
36
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/cpu.h
12
--- a/target/tricore/helper.c
38
+++ b/target/arm/cpu.h
13
+++ b/target/tricore/helper.c
39
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
14
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env)
40
* | TBFLAG_AM32 | +-----+----------+
15
set_flush_to_zero(1, &env->fp_status);
41
* | | |TBFLAG_M32|
16
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
42
* +-------------+----------------+----------+
17
set_default_nan_mode(1, &env->fp_status);
43
- * 31 23 5 4 0
18
+ /* Default NaN pattern: sign bit clear, frac msb set */
44
+ * 31 23 6 5 0
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
45
*
46
* Unless otherwise noted, these bits are cached in env->hflags.
47
*/
48
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, LSPACT, 2, 1) /* Not cached. */
49
FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
50
/* Set if FPCCR.S does not match current security state */
51
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
52
+/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
53
+FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
54
55
/*
56
* Bit usage when in AArch64 state
57
diff --git a/target/arm/translate.h b/target/arm/translate.h
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate.h
60
+++ b/target/arm/translate.h
61
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
62
bool align_mem;
63
/* True if PSTATE.IL is set */
64
bool pstate_il;
65
+ /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
66
+ bool mve_no_pred;
67
/*
68
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
69
* < 0, set by the current instruction.
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/helper.c
73
+++ b/target/arm/helper.c
74
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
75
#endif
76
}
20
}
77
21
78
+static bool mve_no_pred(CPUARMState *env)
22
uint32_t psw_read(CPUTriCoreState *env)
79
+{
80
+ /*
81
+ * Return true if there is definitely no predication of MVE
82
+ * instructions by VPR or LTPSIZE. (Returning false even if there
83
+ * isn't any predication is OK; generated code will just be
84
+ * a little worse.)
85
+ * If the CPU does not implement MVE then this TB flag is always 0.
86
+ *
87
+ * NOTE: if you change this logic, the "recalculate s->mve_no_pred"
88
+ * logic in gen_update_fp_context() needs to be updated to match.
89
+ *
90
+ * We do not include the effect of the ECI bits here -- they are
91
+ * tracked in other TB flags. This simplifies the logic for
92
+ * "when did we emit code that changes the MVE_NO_PRED TB flag
93
+ * and thus need to end the TB?".
94
+ */
95
+ if (cpu_isar_feature(aa32_mve, env_archcpu(env))) {
96
+ return false;
97
+ }
98
+ if (env->v7m.vpr) {
99
+ return false;
100
+ }
101
+ if (env->v7m.ltpsize < 4) {
102
+ return false;
103
+ }
104
+ return true;
105
+}
106
+
107
void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
108
target_ulong *cs_base, uint32_t *pflags)
109
{
110
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
111
if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
112
DP_TBFLAG_M32(flags, LSPACT, 1);
113
}
114
+
115
+ if (mve_no_pred(env)) {
116
+ DP_TBFLAG_M32(flags, MVE_NO_PRED, 1);
117
+ }
118
} else {
119
/*
120
* Note that XSCALE_CPAR shares bits with VECSTRIDE.
121
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/translate-m-nocp.c
124
+++ b/target/arm/translate-m-nocp.c
125
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
126
127
clear_eci_state(s);
128
129
- /* End the TB, because we have updated FP control bits */
130
+ /*
131
+ * End the TB, because we have updated FP control bits,
132
+ * and possibly VPR or LTPSIZE.
133
+ */
134
s->base.is_jmp = DISAS_UPDATE_EXIT;
135
return true;
136
}
137
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
138
store_cpu_field(control, v7m.control[M_REG_S]);
139
tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
140
gen_helper_vfp_set_fpscr(cpu_env, tmp);
141
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
142
tcg_temp_free_i32(tmp);
143
tcg_temp_free_i32(sfpa);
144
break;
145
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
146
}
147
tmp = loadfn(s, opaque, true);
148
store_cpu_field(tmp, v7m.vpr);
149
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
150
break;
151
case ARM_VFP_P0:
152
{
153
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
154
tcg_gen_deposit_i32(vpr, vpr, tmp,
155
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
156
store_cpu_field(vpr, v7m.vpr);
157
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
158
tcg_temp_free_i32(tmp);
159
break;
160
}
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
166
DO_LOGIC(VORN, gen_helper_mve_vorn)
167
DO_LOGIC(VEOR, gen_helper_mve_veor)
168
169
-DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
170
+static bool trans_VPSEL(DisasContext *s, arg_2op *a)
171
+{
172
+ /* This insn updates predication bits */
173
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
174
+ return do_2op(s, a, gen_helper_mve_vpsel);
175
+}
176
177
#define DO_2OP(INSN, FN) \
178
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
179
@@ -XXX,XX +XXX,XX @@ static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
180
}
181
182
gen_helper_mve_vpnot(cpu_env);
183
+ /* This insn updates predication bits */
184
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
185
mve_update_eci(s);
186
return true;
187
}
188
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
189
/* VPT */
190
gen_vpst(s, a->mask);
191
}
192
+ /* This insn updates predication bits */
193
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
194
mve_update_eci(s);
195
return true;
196
}
197
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
198
/* VPT */
199
gen_vpst(s, a->mask);
200
}
201
+ /* This insn updates predication bits */
202
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
203
mve_update_eci(s);
204
return true;
205
}
206
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/translate-vfp.c
209
+++ b/target/arm/translate-vfp.c
210
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
211
* Generate code for M-profile lazy FP state preservation if needed;
212
* this corresponds to the pseudocode PreserveFPState() function.
213
*/
214
-static void gen_preserve_fp_state(DisasContext *s)
215
+static void gen_preserve_fp_state(DisasContext *s, bool skip_context_update)
216
{
217
if (s->v7m_lspact) {
218
/*
219
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
220
* any further FP insns in this TB.
221
*/
222
s->v7m_lspact = false;
223
+ /*
224
+ * The helper might have zeroed VPR, so we do not know the
225
+ * correct value for the MVE_NO_PRED TB flag any more.
226
+ * If we're about to create a new fp context then that
227
+ * will precisely determine the MVE_NO_PRED value (see
228
+ * gen_update_fp_context()). Otherwise, we must:
229
+ * - set s->mve_no_pred to false, so this instruction
230
+ * is generated to use helper functions
231
+ * - end the TB now, without chaining to the next TB
232
+ */
233
+ if (skip_context_update || !s->v7m_new_fp_ctxt_needed) {
234
+ s->mve_no_pred = false;
235
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
236
+ }
237
}
238
}
239
240
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
241
TCGv_i32 z32 = tcg_const_i32(0);
242
store_cpu_field(z32, v7m.vpr);
243
}
244
-
245
/*
246
- * We don't need to arrange to end the TB, because the only
247
- * parts of FPSCR which we cache in the TB flags are the VECLEN
248
- * and VECSTRIDE, and those don't exist for M-profile.
249
+ * We just updated the FPSCR and VPR. Some of this state is cached
250
+ * in the MVE_NO_PRED TB flag. We want to avoid having to end the
251
+ * TB here, which means we need the new value of the MVE_NO_PRED
252
+ * flag to be exactly known here and the same for all executions.
253
+ * Luckily FPDSCR.LTPSIZE is always constant 4 and the VPR is
254
+ * always set to 0, so the new MVE_NO_PRED flag is always 1
255
+ * if and only if we have MVE.
256
+ *
257
+ * (The other FPSCR state cached in TB flags is VECLEN and VECSTRIDE,
258
+ * but those do not exist for M-profile, so are not relevant here.)
259
*/
260
+ s->mve_no_pred = dc_isar_feature(aa32_mve, s);
261
262
if (s->v8m_secure) {
263
bits |= R_V7M_CONTROL_SFPA_MASK;
264
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
265
/* Handle M-profile lazy FP state mechanics */
266
267
/* Trigger lazy-state preservation if necessary */
268
- gen_preserve_fp_state(s);
269
+ gen_preserve_fp_state(s, skip_context_update);
270
271
if (!skip_context_update) {
272
/* Update ownership of FP context and create new FP context if needed */
273
diff --git a/target/arm/translate.c b/target/arm/translate.c
274
index XXXXXXX..XXXXXXX 100644
275
--- a/target/arm/translate.c
276
+++ b/target/arm/translate.c
277
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
278
/* DLSTP: set FPSCR.LTPSIZE */
279
tmp = tcg_const_i32(a->size);
280
store_cpu_field(tmp, v7m.ltpsize);
281
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
282
}
283
return true;
284
}
285
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
286
assert(ok);
287
tmp = tcg_const_i32(a->size);
288
store_cpu_field(tmp, v7m.ltpsize);
289
+ /*
290
+ * LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
291
+ * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
292
+ */
293
}
294
gen_jmp_tb(s, s->base.pc_next, 1);
295
296
@@ -XXX,XX +XXX,XX @@ static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
297
gen_helper_mve_vctp(cpu_env, masklen);
298
tcg_temp_free_i32(masklen);
299
tcg_temp_free_i32(rn_shifted);
300
+ /* This insn updates predication bits */
301
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
302
mve_update_eci(s);
303
return true;
304
}
305
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
306
dc->v7m_new_fp_ctxt_needed =
307
EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
308
dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
309
+ dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);
310
} else {
311
dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
312
dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
313
--
23
--
314
2.20.1
24
2.34.1
315
316
1
Optimize the MVE shift-and-insert insns by using TCG
1
Now that all our targets have been converted to explicitly specify
2
vector ops when possible.
2
their pattern for the default NaN value we can remove the remaining
3
fallback code in parts64_default_nan().
3
4
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
7
Message-id: 20241202131347.498124-55-peter.maydell@linaro.org
7
---
8
---
8
target/arm/translate-mve.c | 4 ++--
9
fpu/softfloat-specialize.c.inc | 14 --------------
9
1 file changed, 2 insertions(+), 2 deletions(-)
10
1 file changed, 14 deletions(-)
10
11
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
12
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
12
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-mve.c
14
--- a/fpu/softfloat-specialize.c.inc
14
+++ b/target/arm/translate-mve.c
15
+++ b/fpu/softfloat-specialize.c.inc
15
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
16
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
16
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
17
uint64_t frac;
17
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
18
uint8_t dnan_pattern = status->default_nan_pattern;
18
19
19
-DO_2SHIFT(VSRI, vsri, false)
20
- if (dnan_pattern == 0) {
20
-DO_2SHIFT(VSLI, vsli, false)
21
- /*
21
+DO_2SHIFT_VEC(VSRI, vsri, false, gen_gvec_sri)
22
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
22
+DO_2SHIFT_VEC(VSLI, vsli, false, gen_gvec_sli)
23
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
23
24
- * do not have floating-point.
24
#define DO_2SHIFT_FP(INSN, FN) \
25
- */
25
static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
26
- if (snan_bit_is_one(status)) {
27
- /* sign bit clear, set all frac bits other than msb */
28
- dnan_pattern = 0b00111111;
29
- } else {
30
- /* sign bit clear, set frac msb */
31
- dnan_pattern = 0b01000000;
32
- }
33
- }
34
assert(dnan_pattern != 0);
35
36
sign = dnan_pattern >> 7;
26
--
37
--
27
2.20.1
38
2.34.1
28
29
1
From: Alexander Graf <agraf@csgraf.de>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We need to handle PSCI calls. Most of the TCG code works for us,
3
Inline pickNaNMulAdd into its only caller. This makes
4
but we can simplify it to only handle aa64 mode and we need to
4
one assert redundant with the immediately preceding IF.
5
handle SUSPEND differently.
6
5
7
This patch takes the TCG code as template and duplicates it in HVF.
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
To tell the guest that we support PSCI 0.2 now, update the check in
8
Message-id: 20241203203949.483774-3-richard.henderson@linaro.org
10
arm_cpu_initfn() as well.
9
[PMM: keep comment from old code in new location]
11
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20210916155404.86958-8-agraf@csgraf.de
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
11
---
18
target/arm/cpu.c | 4 +-
12
fpu/softfloat-parts.c.inc | 41 +++++++++++++++++++++++++-
19
target/arm/hvf/hvf.c | 141 ++++++++++++++++++++++++++++++++++--
13
fpu/softfloat-specialize.c.inc | 54 ----------------------------------
20
target/arm/hvf/trace-events | 1 +
14
2 files changed, 40 insertions(+), 55 deletions(-)
21
3 files changed, 139 insertions(+), 7 deletions(-)
22
15
23
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/cpu.c
18
--- a/fpu/softfloat-parts.c.inc
26
+++ b/target/arm/cpu.c
19
+++ b/fpu/softfloat-parts.c.inc
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
28
cpu->psci_version = 1; /* By default assume PSCI v0.1 */
21
}
29
cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
22
30
23
if (s->default_nan_mode) {
31
- if (tcg_enabled()) {
24
+ /*
32
- cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
25
+ * We guarantee not to require the target to tell us how to
33
+ if (tcg_enabled() || hvf_enabled()) {
26
+ * pick a NaN if we're always returning the default NaN.
34
+ cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
27
+ * But if we're not in default-NaN mode then the target must
28
+ * specify.
29
+ */
30
which = 3;
31
+ } else if (infzero) {
32
+ /*
33
+ * Inf * 0 + NaN -- some implementations return the
34
+ * default NaN here, and some return the input NaN.
35
+ */
36
+ switch (s->float_infzeronan_rule) {
37
+ case float_infzeronan_dnan_never:
38
+ which = 2;
39
+ break;
40
+ case float_infzeronan_dnan_always:
41
+ which = 3;
42
+ break;
43
+ case float_infzeronan_dnan_if_qnan:
44
+ which = is_qnan(c->cls) ? 3 : 2;
45
+ break;
46
+ default:
47
+ g_assert_not_reached();
48
+ }
49
} else {
50
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
51
+ FloatClass cls[3] = { a->cls, b->cls, c->cls };
52
+ Float3NaNPropRule rule = s->float_3nan_prop_rule;
53
+
54
+ assert(rule != float_3nan_prop_none);
55
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
56
+ /* We have at least one SNaN input and should prefer it */
57
+ do {
58
+ which = rule & R_3NAN_1ST_MASK;
59
+ rule >>= R_3NAN_1ST_LENGTH;
60
+ } while (!is_snan(cls[which]));
61
+ } else {
62
+ do {
63
+ which = rule & R_3NAN_1ST_MASK;
64
+ rule >>= R_3NAN_1ST_LENGTH;
65
+ } while (!is_nan(cls[which]));
66
+ }
67
}
68
69
if (which == 3) {
70
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
71
index XXXXXXX..XXXXXXX 100644
72
--- a/fpu/softfloat-specialize.c.inc
73
+++ b/fpu/softfloat-specialize.c.inc
74
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
35
}
75
}
36
}
76
}
37
77
38
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
78
-/*----------------------------------------------------------------------------
39
index XXXXXXX..XXXXXXX 100644
79
-| Select which NaN to propagate for a three-input operation.
40
--- a/target/arm/hvf/hvf.c
80
-| For the moment we assume that no CPU needs the 'larger significand'
41
+++ b/target/arm/hvf/hvf.c
81
-| information.
42
@@ -XXX,XX +XXX,XX @@
82
-| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
43
#include "hw/irq.h"
83
-*----------------------------------------------------------------------------*/
44
#include "qemu/main-loop.h"
84
-static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
45
#include "sysemu/cpus.h"
85
- bool infzero, bool have_snan, float_status *status)
46
+#include "arm-powerctl.h"
86
-{
47
#include "target/arm/cpu.h"
87
- FloatClass cls[3] = { a_cls, b_cls, c_cls };
48
#include "target/arm/internals.h"
88
- Float3NaNPropRule rule = status->float_3nan_prop_rule;
49
#include "trace/trace-target_arm_hvf.h"
89
- int which;
50
@@ -XXX,XX +XXX,XX @@
90
-
51
#define TMR_CTL_IMASK (1 << 1)
91
- /*
52
#define TMR_CTL_ISTATUS (1 << 2)
92
- * We guarantee not to require the target to tell us how to
53
93
- * pick a NaN if we're always returning the default NaN.
54
+static void hvf_wfi(CPUState *cpu);
94
- * But if we're not in default-NaN mode then the target must
55
+
95
- * specify.
56
typedef struct HVFVTimer {
96
- */
57
/* Vtimer value during migration and paused state */
97
- assert(!status->default_nan_mode);
58
uint64_t vtimer_val;
98
-
59
@@ -XXX,XX +XXX,XX @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
99
- if (infzero) {
60
arm_cpu_do_interrupt(cpu);
100
- /*
61
}
101
- * Inf * 0 + NaN -- some implementations return the default NaN here,
62
102
- * and some return the input NaN.
63
+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
103
- */
64
+{
104
- switch (status->float_infzeronan_rule) {
65
+ int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
105
- case float_infzeronan_dnan_never:
66
+ assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
106
- return 2;
67
+}
107
- case float_infzeronan_dnan_always:
68
+
108
- return 3;
69
+/*
109
- case float_infzeronan_dnan_if_qnan:
70
+ * Handle a PSCI call.
110
- return is_qnan(c_cls) ? 3 : 2;
71
+ *
111
- default:
72
+ * Returns 0 on success
112
- g_assert_not_reached();
73
+ * -1 when the PSCI call is unknown,
113
- }
74
+ */
114
- }
75
+static bool hvf_handle_psci_call(CPUState *cpu)
115
-
76
+{
116
- assert(rule != float_3nan_prop_none);
77
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
117
- if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
78
+ CPUARMState *env = &arm_cpu->env;
118
- /* We have at least one SNaN input and should prefer it */
79
+ uint64_t param[4] = {
119
- do {
80
+ env->xregs[0],
120
- which = rule & R_3NAN_1ST_MASK;
81
+ env->xregs[1],
121
- rule >>= R_3NAN_1ST_LENGTH;
82
+ env->xregs[2],
122
- } while (!is_snan(cls[which]));
83
+ env->xregs[3]
123
- } else {
84
+ };
124
- do {
85
+ uint64_t context_id, mpidr;
125
- which = rule & R_3NAN_1ST_MASK;
86
+ bool target_aarch64 = true;
126
- rule >>= R_3NAN_1ST_LENGTH;
87
+ CPUState *target_cpu_state;
127
- } while (!is_nan(cls[which]));
88
+ ARMCPU *target_cpu;
128
- }
89
+ target_ulong entry;
129
- return which;
90
+ int target_el = 1;
130
-}
91
+ int32_t ret = 0;
131
-
92
+
132
/*----------------------------------------------------------------------------
93
+ trace_hvf_psci_call(param[0], param[1], param[2], param[3],
133
| Returns 1 if the double-precision floating-point value `a' is a quiet
94
+ arm_cpu->mp_affinity);
134
| NaN; otherwise returns 0.
95
+
96
+ switch (param[0]) {
97
+ case QEMU_PSCI_0_2_FN_PSCI_VERSION:
98
+ ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
99
+ break;
100
+ case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
101
+ ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
102
+ break;
103
+ case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
104
+ case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
105
+ mpidr = param[1];
106
+
107
+ switch (param[2]) {
108
+ case 0:
109
+ target_cpu_state = arm_get_cpu_by_id(mpidr);
110
+ if (!target_cpu_state) {
111
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
112
+ break;
113
+ }
114
+ target_cpu = ARM_CPU(target_cpu_state);
115
+
116
+ ret = target_cpu->power_state;
117
+ break;
118
+ default:
119
+ /* Everything above affinity level 0 is always on. */
120
+ ret = 0;
121
+ }
122
+ break;
123
+ case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
124
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
125
+ /*
126
+ * QEMU reset and shutdown are async requests, but PSCI
127
+ * mandates that we never return from the reset/shutdown
128
+ * call, so power the CPU off now so it doesn't execute
129
+ * anything further.
130
+ */
131
+ hvf_psci_cpu_off(arm_cpu);
132
+ break;
133
+ case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
134
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
135
+ hvf_psci_cpu_off(arm_cpu);
136
+ break;
137
+ case QEMU_PSCI_0_1_FN_CPU_ON:
138
+ case QEMU_PSCI_0_2_FN_CPU_ON:
139
+ case QEMU_PSCI_0_2_FN64_CPU_ON:
140
+ mpidr = param[1];
141
+ entry = param[2];
142
+ context_id = param[3];
143
+ ret = arm_set_cpu_on(mpidr, entry, context_id,
144
+ target_el, target_aarch64);
145
+ break;
146
+ case QEMU_PSCI_0_1_FN_CPU_OFF:
147
+ case QEMU_PSCI_0_2_FN_CPU_OFF:
148
+ hvf_psci_cpu_off(arm_cpu);
149
+ break;
150
+ case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
151
+ case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
152
+ case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
153
+ /* Affinity levels are not supported in QEMU */
154
+ if (param[1] & 0xfffe0000) {
155
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
156
+ break;
157
+ }
158
+ /* Powerdown is not supported, we always go into WFI */
159
+ env->xregs[0] = 0;
160
+ hvf_wfi(cpu);
161
+ break;
162
+ case QEMU_PSCI_0_1_FN_MIGRATE:
163
+ case QEMU_PSCI_0_2_FN_MIGRATE:
164
+ ret = QEMU_PSCI_RET_NOT_SUPPORTED;
165
+ break;
166
+ default:
167
+ return false;
168
+ }
169
+
170
+ env->xregs[0] = ret;
171
+ return true;
172
+}
173
+
174
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
175
{
176
ARMCPU *arm_cpu = ARM_CPU(cpu);
177
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
178
break;
179
case EC_AA64_HVC:
180
cpu_synchronize_state(cpu);
181
- trace_hvf_unknown_hvc(env->xregs[0]);
182
- /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
183
- env->xregs[0] = -1;
184
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_HVC) {
185
+ if (!hvf_handle_psci_call(cpu)) {
186
+ trace_hvf_unknown_hvc(env->xregs[0]);
187
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
188
+ env->xregs[0] = -1;
189
+ }
190
+ } else {
191
+ trace_hvf_unknown_hvc(env->xregs[0]);
192
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
193
+ }
194
break;
195
case EC_AA64_SMC:
196
cpu_synchronize_state(cpu);
197
- trace_hvf_unknown_smc(env->xregs[0]);
198
- hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
199
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_SMC) {
200
+ advance_pc = true;
201
+
202
+ if (!hvf_handle_psci_call(cpu)) {
203
+ trace_hvf_unknown_smc(env->xregs[0]);
204
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
205
+ env->xregs[0] = -1;
206
+ }
207
+ } else {
208
+ trace_hvf_unknown_smc(env->xregs[0]);
209
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
210
+ }
211
break;
212
default:
213
cpu_synchronize_state(cpu);
214
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
215
index XXXXXXX..XXXXXXX 100644
216
--- a/target/arm/hvf/trace-events
217
+++ b/target/arm/hvf/trace-events
218
@@ -XXX,XX +XXX,XX @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
219
hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
220
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
221
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
222
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
223
--
135
--
224
2.20.1
136
2.34.1
225
137
226
138
1
Coverity points out that we aren't checking the return value
1
From: Richard Henderson <richard.henderson@linaro.org>
2
from curl_easy_setopt().
3
2
4
Fixes: Coverity CID 1458895
3
Remove "3" as a special case for which and simply
5
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
4
branch to return the desired value.
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
7
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Message-id: 20210910170656.366592-2-philmd@redhat.com
8
Message-id: 20241203203949.483774-4-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
contrib/elf2dmp/download.c | 22 ++++++++++------------
11
fpu/softfloat-parts.c.inc | 20 ++++++++++----------
13
1 file changed, 10 insertions(+), 12 deletions(-)
12
1 file changed, 10 insertions(+), 10 deletions(-)
14
13
15
diff --git a/contrib/elf2dmp/download.c b/contrib/elf2dmp/download.c
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/contrib/elf2dmp/download.c
16
--- a/fpu/softfloat-parts.c.inc
18
+++ b/contrib/elf2dmp/download.c
17
+++ b/fpu/softfloat-parts.c.inc
19
@@ -XXX,XX +XXX,XX @@ int download_url(const char *name, const char *url)
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
20
goto out_curl;
19
* But if we're not in default-NaN mode then the target must
20
* specify.
21
*/
22
- which = 3;
23
+ goto default_nan;
24
} else if (infzero) {
25
/*
26
* Inf * 0 + NaN -- some implementations return the
27
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
28
*/
29
switch (s->float_infzeronan_rule) {
30
case float_infzeronan_dnan_never:
31
- which = 2;
32
break;
33
case float_infzeronan_dnan_always:
34
- which = 3;
35
- break;
36
+ goto default_nan;
37
case float_infzeronan_dnan_if_qnan:
38
- which = is_qnan(c->cls) ? 3 : 2;
39
+ if (is_qnan(c->cls)) {
40
+ goto default_nan;
41
+ }
42
break;
43
default:
44
g_assert_not_reached();
45
}
46
+ which = 2;
47
} else {
48
FloatClass cls[3] = { a->cls, b->cls, c->cls };
49
Float3NaNPropRule rule = s->float_3nan_prop_rule;
50
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
51
}
21
}
52
}
22
53
23
- curl_easy_setopt(curl, CURLOPT_URL, url);
54
- if (which == 3) {
24
- curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL);
55
- parts_default_nan(a, s);
25
- curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
56
- return a;
26
- curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
57
- }
27
- curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
28
-
58
-
29
- if (curl_easy_perform(curl) != CURLE_OK) {
59
switch (which) {
30
- err = 1;
60
case 0:
31
- fclose(file);
61
break;
32
+ if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
62
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
33
+ || curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL) != CURLE_OK
63
parts_silence_nan(a, s);
34
+ || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
35
+ || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
36
+ || curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0) != CURLE_OK
37
+ || curl_easy_perform(curl) != CURLE_OK) {
38
unlink(name);
39
- goto out_curl;
40
+ fclose(file);
41
+ err = 1;
42
+ } else {
43
+ err = fclose(file);
44
}
64
}
45
65
return a;
46
- err = fclose(file);
66
+
47
-
67
+ default_nan:
48
out_curl:
68
+ parts_default_nan(a, s);
49
curl_easy_cleanup(curl);
69
+ return a;
50
70
}
71
72
/*
51
--
73
--
52
2.20.1
74
2.34.1
53
75
54
76
From: Shashi Mallela <shashi.mallela@linaro.org>

During sbsa acs level 3 testing, it is seen that the GIC maintenance
interrupts are not triggered and the related test cases fail. This
is because we were incorrectly passing the value of the MISR register
(from maintenance_interrupt_state()) to qemu_set_irq() as the level
argument, whereas the device on the other end of this irq line
expects a 0/1 value.

Fix the logic to pass a 0/1 level indication, rather than a
0/not-0 value.

Fixes: c5fc89b36c0 ("hw/intc/arm_gicv3: Implement gicv3_cpuif_virt_update()")
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210915205809.59068-1-shashi.mallela@linaro.org
[PMM: tweaked commit message; collapsed nested if()s into one]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_cpuif.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
}
}

- if (cs->ich_hcr_el2 & ICH_HCR_EL2_EN) {
- maintlevel = maintenance_interrupt_state(cs);
+ if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) &&
+ maintenance_interrupt_state(cs) != 0) {
+ maintlevel = 1;
}

trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel,
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

Assign the pointer return value to 'a' directly,
rather than going through an intermediary index.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 32 ++++++++++----------------------
1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
FloatPartsN *c, float_status *s,
int ab_mask, int abc_mask)
{
- int which;
bool infzero = (ab_mask == float_cmask_infzero);
bool have_snan = (abc_mask & float_cmask_snan);
+ FloatPartsN *ret;

if (unlikely(have_snan)) {
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
default:
g_assert_not_reached();
}
- which = 2;
+ ret = c;
} else {
- FloatClass cls[3] = { a->cls, b->cls, c->cls };
+ FloatPartsN *val[3] = { a, b, c };
Float3NaNPropRule rule = s->float_3nan_prop_rule;

assert(rule != float_3nan_prop_none);
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
/* We have at least one SNaN input and should prefer it */
do {
- which = rule & R_3NAN_1ST_MASK;
+ ret = val[rule & R_3NAN_1ST_MASK];
rule >>= R_3NAN_1ST_LENGTH;
- } while (!is_snan(cls[which]));
+ } while (!is_snan(ret->cls));
} else {
do {
- which = rule & R_3NAN_1ST_MASK;
+ ret = val[rule & R_3NAN_1ST_MASK];
rule >>= R_3NAN_1ST_LENGTH;
- } while (!is_nan(cls[which]));
+ } while (!is_nan(ret->cls));
}
}

- switch (which) {
- case 0:
- break;
- case 1:
- a = b;
- break;
- case 2:
- a = c;
- break;
- default:
- g_assert_not_reached();
+ if (is_snan(ret->cls)) {
+ parts_silence_nan(ret, s);
}

- if (is_snan(a->cls)) {
- parts_silence_nan(a, s);
- }
- return a;
+ return ret;

default_nan:
parts_default_nan(a, s);
--
2.34.1

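The GIC fix boils down to normalising a multi-bit status value into the strict 0/1 level that the other end of an IRQ line expects. A small self-contained sketch of that pattern (all names here are invented for illustration; this is not the QEMU code):

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical device end of the IRQ line: it latches the level it is
     * given and treats it as a strict 0/1 value. */
    static int irq_level;

    static void set_irq(int level)
    {
        assert(level == 0 || level == 1);   /* anything else is a caller bug */
        irq_level = level;
    }

    /* Wrong: set_irq(misr); would pass a raw status register (e.g. 0x4) as a
     * level. Right: collapse "enabled and any status bit set" into 0/1 first. */
    static void update_maint_irq(int enabled, uint32_t misr)
    {
        set_irq((enabled && misr != 0) ? 1 : 0);
    }

    int main(void)
    {
        update_maint_irq(1, 0x4);
        assert(irq_level == 1);
        return 0;
    }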
From: Alexander Graf <agraf@csgraf.de>

Now that we have all logic in place that we need to handle Hypervisor.framework
on Apple Silicon systems, let's add CONFIG_HVF for aarch64 as well so that we
can build it.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210916155404.86958-9-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
meson.build | 7 +++++++
target/arm/hvf/meson.build | 3 +++
target/arm/meson.build | 2 ++
3 files changed, 12 insertions(+)
create mode 100644 target/arm/hvf/meson.build

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ else
endif

accelerator_targets = { 'CONFIG_KVM': kvm_targets }
+
+if cpu in ['aarch64']
+ accelerator_targets += {
+ 'CONFIG_HVF': ['aarch64-softmmu']
+ }
+endif
+
if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
# i386 emulator provides xenpv machine type for multiple architectures
accelerator_targets += {
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hvf/meson.build
@@ -XXX,XX +XXX,XX @@
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
+ 'hvf.c',
+))
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@ arm_softmmu_ss.add(files(
'psci.c',
))

+subdir('hvf')
+
target_arch += {'arm': arm_ss}
target_softmmu_arch += {'arm': arm_softmmu_ss}
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

While all indices into val[] should be in [0-2], the mask
applied is two bits. To help static analysis see there is
no possibility of read beyond the end of the array, pad the
array to 4 entries, with the final being (implicitly) NULL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
}
ret = c;
} else {
- FloatPartsN *val[3] = { a, b, c };
+ FloatPartsN *val[R_3NAN_1ST_MASK + 1] = { a, b, c };
Float3NaNPropRule rule = s->float_3nan_prop_rule;

assert(rule != float_3nan_prop_none);
--
2.34.1

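The padding trick in the softfloat patch above is a generic one: when an index is produced by masking with a two-bit mask, a static analyser cannot always prove it stays below 3, but sizing the array to cover the whole mask range makes every masked access trivially in bounds. A standalone sketch with hypothetical names:

    #include <assert.h>
    #include <stddef.h>

    #define SEL_MASK 3  /* two-bit selector, values 0..3 */

    /* Only three real choices exist, but the array is padded to SEL_MASK + 1
     * entries so that p[rule & SEL_MASK] is in bounds for every mask value;
     * the fourth slot is implicitly NULL. */
    static const char *pick(const char *a, const char *b, const char *c,
                            unsigned rule)
    {
        const char *p[SEL_MASK + 1] = { a, b, c };
        const char *ret = p[rule & SEL_MASK];

        assert(ret != NULL);  /* encodings are expected to stay in 0..2 */
        return ret;
    }

    int main(void)
    {
        return pick("a", "b", "c", 2)[0] == 'c' ? 0 : 1;
    }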
From: Alexander Graf <agraf@csgraf.de>

We can expose cycle counters on the PMU easily. To be as compatible as
possible, let's do so, but make sure we don't expose any other architectural
counters that we can not model yet.

This allows OSs to work that require PMU support.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-10-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 179 insertions(+)


From: Richard Henderson <richard.henderson@linaro.org>

This function is part of the public interface and
is not "specialized" to any target in any way.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat.c | 52 ++++++++++++++++++++++++++++++++++
fpu/softfloat-specialize.c.inc | 52 ----------------------------------
2 files changed, 52 insertions(+), 52 deletions(-)
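The PMC patch above takes the approach of exposing the cycle counter while advertising no other architected events, so guests that merely probe for a PMU keep working. A hypothetical sketch of that shape (simplified register handling, not the hvf code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical register IDs and state, for illustration only. */
    enum { REG_PMCCNTR, REG_PMCEID0, REG_PMCEID1 };

    struct pmu_state {
        uint64_t cycles;
    };

    /* Handle a guest read of a PMU register: the cycle counter is backed by
     * real state, while both common-event ID registers read as zero so the
     * guest cannot select any event counter we do not model. */
    static bool pmu_read(struct pmu_state *s, int reg, uint64_t *val)
    {
        switch (reg) {
        case REG_PMCCNTR:
            *val = s->cycles;
            return true;
        case REG_PMCEID0:
        case REG_PMCEID1:
            *val = 0;
            return true;
        default:
            return false; /* not a register we emulate */
        }
    }

    int main(void)
    {
        struct pmu_state s = { .cycles = 1234 };
        uint64_t val;

        return (pmu_read(&s, REG_PMCEID0, &val) && val == 0 &&
                pmu_read(&s, REG_PMCCNTR, &val) && val == 1234) ? 0 : 1;
    }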
16
14
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/hvf/hvf.c
17
--- a/fpu/softfloat.c
20
+++ b/target/arm/hvf/hvf.c
18
+++ b/fpu/softfloat.c
21
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
22
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
20
*zExpPtr = 1 - shiftCount;
23
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
24
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
25
+#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
26
+#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
27
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
28
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
29
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
30
+#define SYSREG_PMOVSCLR_EL0 SYSREG(3, 3, 9, 12, 3)
31
+#define SYSREG_PMSWINC_EL0 SYSREG(3, 3, 9, 12, 4)
32
+#define SYSREG_PMSELR_EL0 SYSREG(3, 3, 9, 12, 5)
33
+#define SYSREG_PMCEID0_EL0 SYSREG(3, 3, 9, 12, 6)
34
+#define SYSREG_PMCEID1_EL0 SYSREG(3, 3, 9, 12, 7)
35
+#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
36
+#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
37
38
#define WFX_IS_WFE (1 << 0)
39
40
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
41
val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
42
gt_cntfrq_period_ns(arm_cpu);
43
break;
44
+ case SYSREG_PMCR_EL0:
45
+ val = env->cp15.c9_pmcr;
46
+ break;
47
+ case SYSREG_PMCCNTR_EL0:
48
+ pmu_op_start(env);
49
+ val = env->cp15.c15_ccnt;
50
+ pmu_op_finish(env);
51
+ break;
52
+ case SYSREG_PMCNTENCLR_EL0:
53
+ val = env->cp15.c9_pmcnten;
54
+ break;
55
+ case SYSREG_PMOVSCLR_EL0:
56
+ val = env->cp15.c9_pmovsr;
57
+ break;
58
+ case SYSREG_PMSELR_EL0:
59
+ val = env->cp15.c9_pmselr;
60
+ break;
61
+ case SYSREG_PMINTENCLR_EL1:
62
+ val = env->cp15.c9_pminten;
63
+ break;
64
+ case SYSREG_PMCCFILTR_EL0:
65
+ val = env->cp15.pmccfiltr_el0;
66
+ break;
67
+ case SYSREG_PMCNTENSET_EL0:
68
+ val = env->cp15.c9_pmcnten;
69
+ break;
70
+ case SYSREG_PMUSERENR_EL0:
71
+ val = env->cp15.c9_pmuserenr;
72
+ break;
73
+ case SYSREG_PMCEID0_EL0:
74
+ case SYSREG_PMCEID1_EL0:
75
+ /* We can't really count anything yet, declare all events invalid */
76
+ val = 0;
77
+ break;
78
case SYSREG_OSLSR_EL1:
79
val = env->cp15.oslsr_el1;
80
break;
81
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
82
return 0;
83
}
21
}
84
22
85
+static void pmu_update_irq(CPUARMState *env)
23
+/*----------------------------------------------------------------------------
24
+| Takes two extended double-precision floating-point values `a' and `b', one
25
+| of which is a NaN, and returns the appropriate NaN result. If either `a' or
26
+| `b' is a signaling NaN, the invalid exception is raised.
27
+*----------------------------------------------------------------------------*/
28
+
29
+floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
86
+{
30
+{
87
+ ARMCPU *cpu = env_archcpu(env);
31
+ bool aIsLargerSignificand;
88
+ qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
32
+ FloatClass a_cls, b_cls;
89
+ (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
90
+}
91
+
33
+
92
+static bool pmu_event_supported(uint16_t number)
34
+ /* This is not complete, but is good enough for pickNaN. */
93
+{
35
+ a_cls = (!floatx80_is_any_nan(a)
94
+ return false;
36
+ ? float_class_normal
95
+}
37
+ : floatx80_is_signaling_nan(a, status)
38
+ ? float_class_snan
39
+ : float_class_qnan);
40
+ b_cls = (!floatx80_is_any_nan(b)
41
+ ? float_class_normal
42
+ : floatx80_is_signaling_nan(b, status)
43
+ ? float_class_snan
44
+ : float_class_qnan);
96
+
45
+
97
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
46
+ if (is_snan(a_cls) || is_snan(b_cls)) {
98
+ * the current EL, security state, and register configuration.
47
+ float_raise(float_flag_invalid, status);
99
+ */
100
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
101
+{
102
+ uint64_t filter;
103
+ bool enabled, filtered = true;
104
+ int el = arm_current_el(env);
105
+
106
+ enabled = (env->cp15.c9_pmcr & PMCRE) &&
107
+ (env->cp15.c9_pmcnten & (1 << counter));
108
+
109
+ if (counter == 31) {
110
+ filter = env->cp15.pmccfiltr_el0;
111
+ } else {
112
+ filter = env->cp15.c14_pmevtyper[counter];
113
+ }
48
+ }
114
+
49
+
115
+ if (el == 0) {
50
+ if (status->default_nan_mode) {
116
+ filtered = filter & PMXEVTYPER_U;
51
+ return floatx80_default_nan(status);
117
+ } else if (el == 1) {
118
+ filtered = filter & PMXEVTYPER_P;
119
+ }
52
+ }
120
+
53
+
121
+ if (counter != 31) {
54
+ if (a.low < b.low) {
122
+ /*
55
+ aIsLargerSignificand = 0;
123
+ * If not checking PMCCNTR, ensure the counter is setup to an event we
56
+ } else if (b.low < a.low) {
124
+ * support
57
+ aIsLargerSignificand = 1;
125
+ */
58
+ } else {
126
+ uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
59
+ aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
127
+ if (!pmu_event_supported(event)) {
128
+ return false;
129
+ }
130
+ }
60
+ }
131
+
61
+
132
+ return enabled && !filtered;
62
+ if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
133
+}
63
+ if (is_snan(b_cls)) {
134
+
64
+ return floatx80_silence_nan(b, status);
135
+static void pmswinc_write(CPUARMState *env, uint64_t value)
136
+{
137
+ unsigned int i;
138
+ for (i = 0; i < pmu_num_counters(env); i++) {
139
+ /* Increment a counter's count iff: */
140
+ if ((value & (1 << i)) && /* counter's bit is set */
141
+ /* counter is enabled and not filtered */
142
+ pmu_counter_enabled(env, i) &&
143
+ /* counter is SW_INCR */
144
+ (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
145
+ /*
146
+ * Detect if this write causes an overflow since we can't predict
147
+ * PMSWINC overflows like we can for other events
148
+ */
149
+ uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
150
+
151
+ if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
152
+ env->cp15.c9_pmovsr |= (1 << i);
153
+ pmu_update_irq(env);
154
+ }
155
+
156
+ env->cp15.c14_pmevcntr[i] = new_pmswinc;
157
+ }
65
+ }
66
+ return b;
67
+ } else {
68
+ if (is_snan(a_cls)) {
69
+ return floatx80_silence_nan(a, status);
70
+ }
71
+ return a;
158
+ }
72
+ }
159
+}
73
+}
160
+
74
+
161
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
75
/*----------------------------------------------------------------------------
162
{
76
| Takes an abstract floating-point value having sign `zSign', exponent `zExp',
163
ARMCPU *arm_cpu = ARM_CPU(cpu);
77
| and extended significand formed by the concatenation of `zSig0' and `zSig1',
164
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
78
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
165
val);
79
index XXXXXXX..XXXXXXX 100644
166
80
--- a/fpu/softfloat-specialize.c.inc
167
switch (reg) {
81
+++ b/fpu/softfloat-specialize.c.inc
168
+ case SYSREG_PMCCNTR_EL0:
82
@@ -XXX,XX +XXX,XX @@ floatx80 floatx80_silence_nan(floatx80 a, float_status *status)
169
+ pmu_op_start(env);
83
return a;
170
+ env->cp15.c15_ccnt = val;
84
}
171
+ pmu_op_finish(env);
85
172
+ break;
86
-/*----------------------------------------------------------------------------
173
+ case SYSREG_PMCR_EL0:
87
-| Takes two extended double-precision floating-point values `a' and `b', one
174
+ pmu_op_start(env);
88
-| of which is a NaN, and returns the appropriate NaN result. If either `a' or
175
+
89
-| `b' is a signaling NaN, the invalid exception is raised.
176
+ if (val & PMCRC) {
90
-*----------------------------------------------------------------------------*/
177
+ /* The counter has been reset */
91
-
178
+ env->cp15.c15_ccnt = 0;
92
-floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
179
+ }
93
-{
180
+
94
- bool aIsLargerSignificand;
181
+ if (val & PMCRP) {
95
- FloatClass a_cls, b_cls;
182
+ unsigned int i;
96
-
183
+ for (i = 0; i < pmu_num_counters(env); i++) {
97
- /* This is not complete, but is good enough for pickNaN. */
184
+ env->cp15.c14_pmevcntr[i] = 0;
98
- a_cls = (!floatx80_is_any_nan(a)
185
+ }
99
- ? float_class_normal
186
+ }
100
- : floatx80_is_signaling_nan(a, status)
187
+
101
- ? float_class_snan
188
+ env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
102
- : float_class_qnan);
189
+ env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
103
- b_cls = (!floatx80_is_any_nan(b)
190
+
104
- ? float_class_normal
191
+ pmu_op_finish(env);
105
- : floatx80_is_signaling_nan(b, status)
192
+ break;
106
- ? float_class_snan
193
+ case SYSREG_PMUSERENR_EL0:
107
- : float_class_qnan);
194
+ env->cp15.c9_pmuserenr = val & 0xf;
108
-
195
+ break;
109
- if (is_snan(a_cls) || is_snan(b_cls)) {
196
+ case SYSREG_PMCNTENSET_EL0:
110
- float_raise(float_flag_invalid, status);
197
+ env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
111
- }
198
+ break;
112
-
199
+ case SYSREG_PMCNTENCLR_EL0:
113
- if (status->default_nan_mode) {
200
+ env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
114
- return floatx80_default_nan(status);
201
+ break;
115
- }
202
+ case SYSREG_PMINTENCLR_EL1:
116
-
203
+ pmu_op_start(env);
117
- if (a.low < b.low) {
204
+ env->cp15.c9_pminten |= val;
118
- aIsLargerSignificand = 0;
205
+ pmu_op_finish(env);
119
- } else if (b.low < a.low) {
206
+ break;
120
- aIsLargerSignificand = 1;
207
+ case SYSREG_PMOVSCLR_EL0:
121
- } else {
208
+ pmu_op_start(env);
122
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
209
+ env->cp15.c9_pmovsr &= ~val;
123
- }
210
+ pmu_op_finish(env);
124
-
211
+ break;
125
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
212
+ case SYSREG_PMSWINC_EL0:
126
- if (is_snan(b_cls)) {
213
+ pmu_op_start(env);
127
- return floatx80_silence_nan(b, status);
214
+ pmswinc_write(env, val);
128
- }
215
+ pmu_op_finish(env);
129
- return b;
216
+ break;
130
- } else {
217
+ case SYSREG_PMSELR_EL0:
131
- if (is_snan(a_cls)) {
218
+ env->cp15.c9_pmselr = val & 0x1f;
132
- return floatx80_silence_nan(a, status);
219
+ break;
133
- }
220
+ case SYSREG_PMCCFILTR_EL0:
134
- return a;
221
+ pmu_op_start(env);
135
- }
222
+ env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
136
-}
223
+ pmu_op_finish(env);
137
-
224
+ break;
138
/*----------------------------------------------------------------------------
225
case SYSREG_OSLAR_EL1:
139
| Returns 1 if the quadruple-precision floating-point value `a' is a quiet
226
env->cp15.oslsr_el1 = val & 1;
140
| NaN; otherwise returns 0.
227
break;
228
--
141
--
229
2.20.1
142
2.34.1
230
231
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Unpacking and repacking the parts may be slightly more work
than we did before, but we get to reuse more code. For a
code path handling exceptional values, this is an improvement.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241203203949.483774-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat.c | 43 +++++--------------------------------------
1 file changed, 5 insertions(+), 38 deletions(-)

diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,

floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
{
- bool aIsLargerSignificand;
- FloatClass a_cls, b_cls;
+ FloatParts128 pa, pb, *pr;

- /* This is not complete, but is good enough for pickNaN. */
- a_cls = (!floatx80_is_any_nan(a)
- ? float_class_normal
- : floatx80_is_signaling_nan(a, status)
- ? float_class_snan
- : float_class_qnan);
- b_cls = (!floatx80_is_any_nan(b)
- ? float_class_normal
- : floatx80_is_signaling_nan(b, status)
- ? float_class_snan
- : float_class_qnan);
-
- if (is_snan(a_cls) || is_snan(b_cls)) {
- float_raise(float_flag_invalid, status);
- }
-
- if (status->default_nan_mode) {
+ if (!floatx80_unpack_canonical(&pa, a, status) ||
+ !floatx80_unpack_canonical(&pb, b, status)) {
return floatx80_default_nan(status);
}

- if (a.low < b.low) {
- aIsLargerSignificand = 0;
- } else if (b.low < a.low) {
- aIsLargerSignificand = 1;
- } else {
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
- }
-
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
- if (is_snan(b_cls)) {
- return floatx80_silence_nan(b, status);
- }
- return b;
- } else {
- if (is_snan(a_cls)) {
- return floatx80_silence_nan(a, status);
- }
- return a;
- }
+ pr = parts_pick_nan(&pa, &pb, status);
+ return floatx80_round_pack_canonical(pr, status);
}

/*----------------------------------------------------------------------------
--
2.34.1
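The shape of this rewrite - decode to a canonical form, call the shared helper, re-encode - is the usual way to avoid per-format special cases. A self-contained toy illustration of the same idea (a made-up 16-bit format, not the real floatx80 types):

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy packed format: top bit = "is a NaN", rest = payload. */
    typedef uint16_t packed_t;
    typedef struct { bool is_nan; uint16_t payload; } parts_t;

    static bool unpack(parts_t *p, packed_t v)      /* decode to canonical form */
    {
        p->is_nan = (v & 0x8000) != 0;
        p->payload = v & 0x7fff;
        return p->is_nan;
    }

    static packed_t repack(const parts_t *p)        /* re-encode */
    {
        return (packed_t)((p->is_nan ? 0x8000 : 0) | p->payload);
    }

    /* Shared, format-agnostic propagation rule: first NaN wins. */
    static const parts_t *pick_nan(const parts_t *a, const parts_t *b)
    {
        return a->is_nan ? a : b;
    }

    /* Format-specific wrapper: unpack both operands, reuse the common helper,
     * repack the winner; fall back to a default value if neither is a NaN. */
    static packed_t propagate_nan(packed_t a, packed_t b)
    {
        parts_t pa, pb;
        bool a_nan = unpack(&pa, a);
        bool b_nan = unpack(&pb, b);

        if (!a_nan && !b_nan) {
            return (packed_t)0xffff;                /* stand-in for "default NaN" */
        }
        return repack(pick_nan(&pa, &pb));
    }

    int main(void)
    {
        /* 0x8001 is a NaN with payload 1; 0x0002 is not a NaN. */
        return propagate_nan(0x8001, 0x0002) == 0x8001 ? 0 : 1;
    }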
From: Alexander Graf <agraf@csgraf.de>

We will need PMC register definitions in accel specific code later.
Move all constant definitions to common arm headers so we can reuse
them.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-2-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 44 ++++++++++++++++++++++++++++++++++++++
target/arm/helper.c | 44 ------------------------------------------
2 files changed, 44 insertions(+), 44 deletions(-)


From: Richard Henderson <richard.henderson@linaro.org>

Inline pickNaN into its only caller. This makes one assert
redundant with the immediately preceding IF.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 82 +++++++++++++++++++++++++----
fpu/softfloat-specialize.c.inc | 96 ----------------------------------
2 files changed, 73 insertions(+), 105 deletions(-)
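As background for the PMCR constants being moved above: PMCR_EL0.N, bits [15:11], encodes the number of event counters, which is what the PMCRN_MASK/PMCRN_SHIFT helpers extract. A standalone sketch using those constants (simplified signatures; the QEMU helpers take a CPUARMState pointer instead):

    #include <assert.h>
    #include <stdint.h>

    #define PMCRN_MASK  0xf800   /* PMCR_EL0.N, bits [15:11] */
    #define PMCRN_SHIFT 11

    static uint32_t pmu_num_counters(uint64_t pmcr)
    {
        return (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
    }

    /* Bits that may be set/cleared in PMCNTENSET/CLR: one bit per event
     * counter plus bit 31 for the cycle counter. */
    static uint64_t pmu_counter_mask(uint64_t pmcr)
    {
        return (1u << 31) | ((1u << pmu_num_counters(pmcr)) - 1);
    }

    int main(void)
    {
        uint64_t pmcr = 6u << PMCRN_SHIFT;          /* a PMU with 6 counters */

        assert(pmu_num_counters(pmcr) == 6);
        assert(pmu_counter_mask(pmcr) == (0x80000000u | 0x3f));
        return 0;
    }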
15
14
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
17
--- a/fpu/softfloat-parts.c.inc
19
+++ b/target/arm/internals.h
18
+++ b/fpu/softfloat-parts.c.inc
20
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
19
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
21
/* All other values reserved */
20
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
22
};
21
float_status *s)
23
22
{
24
+/* Definitions for the PMU registers */
23
+ int cmp, which;
25
+#define PMCRN_MASK 0xf800
26
+#define PMCRN_SHIFT 11
27
+#define PMCRLC 0x40
28
+#define PMCRDP 0x20
29
+#define PMCRX 0x10
30
+#define PMCRD 0x8
31
+#define PMCRC 0x4
32
+#define PMCRP 0x2
33
+#define PMCRE 0x1
34
+/*
35
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
36
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
37
+ */
38
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
39
+
24
+
40
+#define PMXEVTYPER_P 0x80000000
25
if (is_snan(a->cls) || is_snan(b->cls)) {
41
+#define PMXEVTYPER_U 0x40000000
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
42
+#define PMXEVTYPER_NSK 0x20000000
27
}
43
+#define PMXEVTYPER_NSU 0x10000000
28
44
+#define PMXEVTYPER_NSH 0x08000000
29
if (s->default_nan_mode) {
45
+#define PMXEVTYPER_M 0x04000000
30
parts_default_nan(a, s);
46
+#define PMXEVTYPER_MT 0x02000000
31
- } else {
47
+#define PMXEVTYPER_EVTCOUNT 0x0000ffff
32
- int cmp = frac_cmp(a, b);
48
+#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
33
- if (cmp == 0) {
49
+ PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
34
- cmp = a->sign < b->sign;
50
+ PMXEVTYPER_M | PMXEVTYPER_MT | \
35
- }
51
+ PMXEVTYPER_EVTCOUNT)
36
+ return a;
37
+ }
38
39
- if (pickNaN(a->cls, b->cls, cmp > 0, s)) {
40
- a = b;
41
- }
42
+ cmp = frac_cmp(a, b);
43
+ if (cmp == 0) {
44
+ cmp = a->sign < b->sign;
45
+ }
52
+
46
+
53
+#define PMCCFILTR 0xf8000000
47
+ switch (s->float_2nan_prop_rule) {
54
+#define PMCCFILTR_M PMXEVTYPER_M
48
+ case float_2nan_prop_s_ab:
55
+#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
49
if (is_snan(a->cls)) {
50
- parts_silence_nan(a, s);
51
+ which = 0;
52
+ } else if (is_snan(b->cls)) {
53
+ which = 1;
54
+ } else if (is_qnan(a->cls)) {
55
+ which = 0;
56
+ } else {
57
+ which = 1;
58
}
59
+ break;
60
+ case float_2nan_prop_s_ba:
61
+ if (is_snan(b->cls)) {
62
+ which = 1;
63
+ } else if (is_snan(a->cls)) {
64
+ which = 0;
65
+ } else if (is_qnan(b->cls)) {
66
+ which = 1;
67
+ } else {
68
+ which = 0;
69
+ }
70
+ break;
71
+ case float_2nan_prop_ab:
72
+ which = is_nan(a->cls) ? 0 : 1;
73
+ break;
74
+ case float_2nan_prop_ba:
75
+ which = is_nan(b->cls) ? 1 : 0;
76
+ break;
77
+ case float_2nan_prop_x87:
78
+ /*
79
+ * This implements x87 NaN propagation rules:
80
+ * SNaN + QNaN => return the QNaN
81
+ * two SNaNs => return the one with the larger significand, silenced
82
+ * two QNaNs => return the one with the larger significand
83
+ * SNaN and a non-NaN => return the SNaN, silenced
84
+ * QNaN and a non-NaN => return the QNaN
85
+ *
86
+ * If we get down to comparing significands and they are the same,
87
+ * return the NaN with the positive sign bit (if any).
88
+ */
89
+ if (is_snan(a->cls)) {
90
+ if (is_snan(b->cls)) {
91
+ which = cmp > 0 ? 0 : 1;
92
+ } else {
93
+ which = is_qnan(b->cls) ? 1 : 0;
94
+ }
95
+ } else if (is_qnan(a->cls)) {
96
+ if (is_snan(b->cls) || !is_qnan(b->cls)) {
97
+ which = 0;
98
+ } else {
99
+ which = cmp > 0 ? 0 : 1;
100
+ }
101
+ } else {
102
+ which = 1;
103
+ }
104
+ break;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
56
+
108
+
57
+static inline uint32_t pmu_num_counters(CPUARMState *env)
109
+ if (which) {
58
+{
110
+ a = b;
59
+ return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
111
+ }
60
+}
112
+ if (is_snan(a->cls)) {
61
+
113
+ parts_silence_nan(a, s);
62
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
114
}
63
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
115
return a;
64
+{
116
}
65
+ return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
117
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
66
+}
67
+
68
#endif
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
118
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
119
--- a/fpu/softfloat-specialize.c.inc
72
+++ b/target/arm/helper.c
120
+++ b/fpu/softfloat-specialize.c.inc
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
121
@@ -XXX,XX +XXX,XX @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
74
REGINFO_SENTINEL
122
}
75
};
123
}
76
124
77
-/* Definitions for the PMU registers */
125
-/*----------------------------------------------------------------------------
78
-#define PMCRN_MASK 0xf800
126
-| Select which NaN to propagate for a two-input operation.
79
-#define PMCRN_SHIFT 11
127
-| IEEE754 doesn't specify all the details of this, so the
80
-#define PMCRLC 0x40
128
-| algorithm is target-specific.
81
-#define PMCRDP 0x20
129
-| The routine is passed various bits of information about the
82
-#define PMCRX 0x10
130
-| two NaNs and should return 0 to select NaN a and 1 for NaN b.
83
-#define PMCRD 0x8
131
-| Note that signalling NaNs are always squashed to quiet NaNs
84
-#define PMCRC 0x4
132
-| by the caller, by calling floatXX_silence_nan() before
85
-#define PMCRP 0x2
133
-| returning them.
86
-#define PMCRE 0x1
134
-|
87
-/*
135
-| aIsLargerSignificand is only valid if both a and b are NaNs
88
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
136
-| of some kind, and is true if a has the larger significand,
89
- * which can be written as 1 to trigger behaviour but which stay RAZ).
137
-| or if both a and b have the same significand but a is
90
- */
138
-| positive but b is negative. It is only needed for the x87
91
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
139
-| tie-break rule.
140
-*----------------------------------------------------------------------------*/
92
-
141
-
93
-#define PMXEVTYPER_P 0x80000000
142
-static int pickNaN(FloatClass a_cls, FloatClass b_cls,
94
-#define PMXEVTYPER_U 0x40000000
143
- bool aIsLargerSignificand, float_status *status)
95
-#define PMXEVTYPER_NSK 0x20000000
144
-{
96
-#define PMXEVTYPER_NSU 0x10000000
145
- /*
97
-#define PMXEVTYPER_NSH 0x08000000
146
- * We guarantee not to require the target to tell us how to
98
-#define PMXEVTYPER_M 0x04000000
147
- * pick a NaN if we're always returning the default NaN.
99
-#define PMXEVTYPER_MT 0x02000000
148
- * But if we're not in default-NaN mode then the target must
100
-#define PMXEVTYPER_EVTCOUNT 0x0000ffff
149
- * specify via set_float_2nan_prop_rule().
101
-#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
150
- */
102
- PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
151
- assert(!status->default_nan_mode);
103
- PMXEVTYPER_M | PMXEVTYPER_MT | \
104
- PMXEVTYPER_EVTCOUNT)
105
-
152
-
106
-#define PMCCFILTR 0xf8000000
153
- switch (status->float_2nan_prop_rule) {
107
-#define PMCCFILTR_M PMXEVTYPER_M
154
- case float_2nan_prop_s_ab:
108
-#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
155
- if (is_snan(a_cls)) {
109
-
156
- return 0;
110
-static inline uint32_t pmu_num_counters(CPUARMState *env)
157
- } else if (is_snan(b_cls)) {
111
-{
158
- return 1;
112
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
159
- } else if (is_qnan(a_cls)) {
160
- return 0;
161
- } else {
162
- return 1;
163
- }
164
- break;
165
- case float_2nan_prop_s_ba:
166
- if (is_snan(b_cls)) {
167
- return 1;
168
- } else if (is_snan(a_cls)) {
169
- return 0;
170
- } else if (is_qnan(b_cls)) {
171
- return 1;
172
- } else {
173
- return 0;
174
- }
175
- break;
176
- case float_2nan_prop_ab:
177
- if (is_nan(a_cls)) {
178
- return 0;
179
- } else {
180
- return 1;
181
- }
182
- break;
183
- case float_2nan_prop_ba:
184
- if (is_nan(b_cls)) {
185
- return 1;
186
- } else {
187
- return 0;
188
- }
189
- break;
190
- case float_2nan_prop_x87:
191
- /*
192
- * This implements x87 NaN propagation rules:
193
- * SNaN + QNaN => return the QNaN
194
- * two SNaNs => return the one with the larger significand, silenced
195
- * two QNaNs => return the one with the larger significand
196
- * SNaN and a non-NaN => return the SNaN, silenced
197
- * QNaN and a non-NaN => return the QNaN
198
- *
199
- * If we get down to comparing significands and they are the same,
200
- * return the NaN with the positive sign bit (if any).
201
- */
202
- if (is_snan(a_cls)) {
203
- if (is_snan(b_cls)) {
204
- return aIsLargerSignificand ? 0 : 1;
205
- }
206
- return is_qnan(b_cls) ? 1 : 0;
207
- } else if (is_qnan(a_cls)) {
208
- if (is_snan(b_cls) || !is_qnan(b_cls)) {
209
- return 0;
210
- } else {
211
- return aIsLargerSignificand ? 0 : 1;
212
- }
213
- } else {
214
- return 1;
215
- }
216
- default:
217
- g_assert_not_reached();
218
- }
113
-}
219
-}
114
-
220
-
115
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
221
/*----------------------------------------------------------------------------
116
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
222
| Returns 1 if the double-precision floating-point value `a' is a quiet
117
-{
223
| NaN; otherwise returns 0.
118
- return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
119
-}
120
-
121
typedef struct pm_event {
122
uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
123
/* If the event is supported on this CPU (used to generate PMCEID[01]) */
124
--
224
--
125
2.20.1
225
2.34.1
126
226
127
227
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Remember if there was an SNaN, and use that to simplify
float_2nan_prop_s_{ab,ba} to only the snan component.
Then, fall through to the corresponding
float_2nan_prop_{ab,ba} case to handle any remaining
nans, which must be quiet.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 32 ++++++++++++--------------------
1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
float_status *s)
{
+ bool have_snan = false;
int cmp, which;

if (is_snan(a->cls) || is_snan(b->cls)) {
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
+ have_snan = true;
}

if (s->default_nan_mode) {
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,

switch (s->float_2nan_prop_rule) {
case float_2nan_prop_s_ab:
- if (is_snan(a->cls)) {
- which = 0;
- } else if (is_snan(b->cls)) {
- which = 1;
- } else if (is_qnan(a->cls)) {
- which = 0;
- } else {
- which = 1;
+ if (have_snan) {
+ which = is_snan(a->cls) ? 0 : 1;
+ break;
}
- break;
- case float_2nan_prop_s_ba:
- if (is_snan(b->cls)) {
- which = 1;
- } else if (is_snan(a->cls)) {
- which = 0;
- } else if (is_qnan(b->cls)) {
- which = 1;
- } else {
- which = 0;
- }
- break;
+ /* fall through */
case float_2nan_prop_ab:
which = is_nan(a->cls) ? 0 : 1;
break;
+ case float_2nan_prop_s_ba:
+ if (have_snan) {
+ which = is_snan(b->cls) ? 1 : 0;
+ break;
+ }
+ /* fall through */
case float_2nan_prop_ba:
which = is_nan(b->cls) ? 1 : 0;
break;
--
2.34.1
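The simplification rests on a small equivalence: once the SNaN case is handled, "prefer an SNaN from a, then any NaN from a" degenerates to "take the first NaN", so float_2nan_prop_s_ab can fall through to float_2nan_prop_ab (and likewise for the ba variants). A quick self-contained check of that claim, using simplified operand classes rather than the softfloat types:

    #include <assert.h>
    #include <stdbool.h>

    typedef enum { CLS_NORMAL, CLS_QNAN, CLS_SNAN } cls_t;

    static bool is_nan(cls_t c)  { return c != CLS_NORMAL; }
    static bool is_snan(cls_t c) { return c == CLS_SNAN; }

    /* Original spelling of float_2nan_prop_s_ab. */
    static int s_ab_old(cls_t a, cls_t b)
    {
        if (is_snan(a))    return 0;
        if (is_snan(b))    return 1;
        if (a == CLS_QNAN) return 0;
        return 1;
    }

    /* New spelling: handle the SNaN case, otherwise fall through to "ab". */
    static int s_ab_new(cls_t a, cls_t b)
    {
        bool have_snan = is_snan(a) || is_snan(b);

        if (have_snan) {
            return is_snan(a) ? 0 : 1;
        }
        return is_nan(a) ? 0 : 1;     /* the float_2nan_prop_ab case */
    }

    int main(void)
    {
        /* pick_nan is only reached when at least one operand is a NaN. */
        for (int a = CLS_NORMAL; a <= CLS_SNAN; a++) {
            for (int b = CLS_NORMAL; b <= CLS_SNAN; b++) {
                if (is_nan((cls_t)a) || is_nan((cls_t)b)) {
                    assert(s_ab_old((cls_t)a, (cls_t)b) ==
                           s_ab_new((cls_t)a, (cls_t)b));
                }
            }
        }
        return 0;
    }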
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Move the fractional comparison to the end of the
float_2nan_prop_x87 case. This is not required for
any other 2nan propagation rule. Reorganize the
x87 case itself to break out of the switch when the
fractional comparison is not required.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
return a;
}

- cmp = frac_cmp(a, b);
- if (cmp == 0) {
- cmp = a->sign < b->sign;
- }
-
switch (s->float_2nan_prop_rule) {
case float_2nan_prop_s_ab:
if (have_snan) {
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
* return the NaN with the positive sign bit (if any).
*/
if (is_snan(a->cls)) {
- if (is_snan(b->cls)) {
- which = cmp > 0 ? 0 : 1;
- } else {
+ if (!is_snan(b->cls)) {
which = is_qnan(b->cls) ? 1 : 0;
+ break;
}
} else if (is_qnan(a->cls)) {
if (is_snan(b->cls) || !is_qnan(b->cls)) {
which = 0;
- } else {
- which = cmp > 0 ? 0 : 1;
+ break;
}
} else {
which = 1;
+ break;
}
+ cmp = frac_cmp(a, b);
+ if (cmp == 0) {
+ cmp = a->sign < b->sign;
+ }
+ which = cmp > 0 ? 0 : 1;
break;
default:
g_assert_not_reached();
--
2.34.1
From: Alexander Graf <agraf@csgraf.de>

We will need to install a migration helper for the ARM hvf backend.
Let's introduce an arch callback for the overall hvf init chain to
do so.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-4-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/sysemu/hvf_int.h | 1 +
accel/hvf/hvf-accel-ops.c | 3 ++-
target/i386/hvf/hvf.c | 5 +++++
3 files changed, 8 insertions(+), 1 deletion(-)


From: Richard Henderson <richard.henderson@linaro.org>

Replace the "index" selecting between A and B with a result variable
of the proper type. This improves clarity within the function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat-parts.c.inc | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)
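The hvf change is the usual "per-architecture hook called from the common init path" pattern. A minimal sketch of that shape with hypothetical names (not the QEMU code):

    /* Each architecture provides its own implementation of this hook;
     * a trivial one just returns success. */
    int arch_accel_init(void)
    {
        return 0;
    }

    /* Common accelerator setup: do the generic work, then finish with the
     * per-architecture callback so its error code becomes our return value. */
    static int accel_init(void)
    {
        /* ... generic setup (memory listeners, state allocation, ...) ... */
        return arch_accel_init();
    }

    int main(void)
    {
        return accel_init() ? 1 : 0;
    }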
16
13
17
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/include/sysemu/hvf_int.h
16
--- a/fpu/softfloat-parts.c.inc
20
+++ b/include/sysemu/hvf_int.h
17
+++ b/fpu/softfloat-parts.c.inc
21
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
22
};
19
float_status *s)
23
20
{
24
void assert_hvf_ok(hv_return_t ret);
21
bool have_snan = false;
25
+int hvf_arch_init(void);
22
- int cmp, which;
26
int hvf_arch_init_vcpu(CPUState *cpu);
23
+ FloatPartsN *ret;
27
void hvf_arch_vcpu_destroy(CPUState *cpu);
24
+ int cmp;
28
int hvf_vcpu_exec(CPUState *);
25
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
26
if (is_snan(a->cls) || is_snan(b->cls)) {
30
index XXXXXXX..XXXXXXX 100644
27
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
31
--- a/accel/hvf/hvf-accel-ops.c
28
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
32
+++ b/accel/hvf/hvf-accel-ops.c
29
switch (s->float_2nan_prop_rule) {
33
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
30
case float_2nan_prop_s_ab:
34
31
if (have_snan) {
35
hvf_state = s;
32
- which = is_snan(a->cls) ? 0 : 1;
36
memory_listener_register(&hvf_memory_listener, &address_space_memory);
33
+ ret = is_snan(a->cls) ? a : b;
37
- return 0;
34
break;
38
+
35
}
39
+ return hvf_arch_init();
36
/* fall through */
37
case float_2nan_prop_ab:
38
- which = is_nan(a->cls) ? 0 : 1;
39
+ ret = is_nan(a->cls) ? a : b;
40
break;
41
case float_2nan_prop_s_ba:
42
if (have_snan) {
43
- which = is_snan(b->cls) ? 1 : 0;
44
+ ret = is_snan(b->cls) ? b : a;
45
break;
46
}
47
/* fall through */
48
case float_2nan_prop_ba:
49
- which = is_nan(b->cls) ? 1 : 0;
50
+ ret = is_nan(b->cls) ? b : a;
51
break;
52
case float_2nan_prop_x87:
53
/*
54
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
55
*/
56
if (is_snan(a->cls)) {
57
if (!is_snan(b->cls)) {
58
- which = is_qnan(b->cls) ? 1 : 0;
59
+ ret = is_qnan(b->cls) ? b : a;
60
break;
61
}
62
} else if (is_qnan(a->cls)) {
63
if (is_snan(b->cls) || !is_qnan(b->cls)) {
64
- which = 0;
65
+ ret = a;
66
break;
67
}
68
} else {
69
- which = 1;
70
+ ret = b;
71
break;
72
}
73
cmp = frac_cmp(a, b);
74
if (cmp == 0) {
75
cmp = a->sign < b->sign;
76
}
77
- which = cmp > 0 ? 0 : 1;
78
+ ret = cmp > 0 ? a : b;
79
break;
80
default:
81
g_assert_not_reached();
82
}
83
84
- if (which) {
85
- a = b;
86
+ if (is_snan(ret->cls)) {
87
+ parts_silence_nan(ret, s);
88
}
89
- if (is_snan(a->cls)) {
90
- parts_silence_nan(a, s);
91
- }
92
- return a;
93
+ return ret;
40
}
94
}
41
95
42
static void hvf_accel_class_init(ObjectClass *oc, void *data)
96
static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
43
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/i386/hvf/hvf.c
46
+++ b/target/i386/hvf/hvf.c
47
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
48
return env->apic_bus_freq != 0;
49
}
50
51
+int hvf_arch_init(void)
52
+{
53
+ return 0;
54
+}
55
+
56
int hvf_arch_init_vcpu(CPUState *cpu)
57
{
58
X86CPU *x86cpu = X86_CPU(cpu);
59
--
97
--
60
2.20.1
98
2.34.1
61
99
62
100
Coverity points out that if the PDB file we're trying to read
has a header specifying a block_size of zero then we will
end up trying to divide by zero in pdb_ds_read_file().
Check for this and fail cleanly instead.

Fixes: Coverity CID 1458869
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-3-philmd@redhat.com
Message-Id: <20210901143910.17112-3-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
contrib/elf2dmp/pdb.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/pdb.c
+++ b/contrib/elf2dmp/pdb.c
@@ -XXX,XX +XXX,XX @@ out_symbols:

static int pdb_reader_ds_init(struct pdb_reader *r, PDB_DS_HEADER *hdr)
{
+ if (hdr->block_size == 0) {
+ return 1;
+ }
+
memset(r->file_used, 0, sizeof(r->file_used));
r->ds.header = hdr;
r->ds.toc = pdb_ds_read(hdr, (uint32_t *)((uint8_t *)hdr +
--
2.20.1


From: Leif Lindholm <quic_llindhol@quicinc.com>

I'm migrating to Qualcomm's new open source email infrastructure, so
update my email address, and update the mailmap to match.

Signed-off-by: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241205114047.1125842-1-leif.lindholm@oss.qualcomm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
MAINTAINERS | 2 +-
.mailmap | 5 +++--
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h

SBSA-REF
M: Radoslaw Biernacki <rad@semihalf.com>
M: Peter Maydell <peter.maydell@linaro.org>
-R: Leif Lindholm <quic_llindhol@quicinc.com>
+R: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
L: qemu-arm@nongnu.org
S: Maintained
diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
-Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
--
2.34.1

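The pdb.c fix is the classic guard on an untrusted divisor read from a file header. A standalone sketch of the failure mode and the check, with a hypothetical header layout:

    #include <stdint.h>

    struct ds_header {
        uint32_t block_size;     /* from the (untrusted) PDB file */
        uint32_t file_size;
    };

    /* Returns the number of blocks needed for the file, or 0 on a malformed
     * header; without the block_size check the division below would trap on
     * a header that declares block_size == 0. */
    static uint32_t nr_blocks(const struct ds_header *hdr)
    {
        if (hdr->block_size == 0) {
            return 0;
        }
        return (hdr->file_size + hdr->block_size - 1) / hdr->block_size;
    }

    int main(void)
    {
        struct ds_header bad = { 0, 4096 }, good = { 512, 4096 };

        return (nr_blocks(&bad) == 0 && nr_blocks(&good) == 8) ? 0 : 1;
    }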
New patch

From: Vikram Garhwal <vikram.garhwal@bytedance.com>

Previously, maintainer role was paused due to inactive email id. Commit id:
c009d715721861984c4987bcc78b7ee183e86d75.

Signed-off-by: Vikram Garhwal <vikram.garhwal@bytedance.com>
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Message-id: 20241204184205.12952-1-vikram.garhwal@bytedance.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/fuzz-sb16-test.c

Xilinx CAN
M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
S: Maintained
F: hw/net/can/xlnx-*
F: include/hw/net/xlnx-*
@@ -XXX,XX +XXX,XX @@ F: include/hw/rx/
CAN bus subsystem and hardware
M: Pavel Pisa <pisa@cmp.felk.cvut.cz>
M: Francisco Iglesias <francisco.iglesias@amd.com>
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
S: Maintained
W: https://canbus.pages.fel.cvut.cz/
F: net/can/*
--
2.34.1