1
Hi; here's the first target-arm pullreq for the 9.0 cycle.
1
First arm pullreq of the cycle; this is mostly my softfloat NaN
2
The bulk of this is some cleanup/refactoring in the Arm
2
handling series. (Lots more in my to-review queue, but I don't
3
KVM code.
3
like pullreqs growing too close to a hundred patches at a time :-))
4
4
5
thanks
5
thanks
6
-- PMM
6
-- PMM
7
7
8
The following changes since commit bd00730ec0f621706d0179768436f82c39048499:
8
The following changes since commit 97f2796a3736ed37a1b85dc1c76a6c45b829dd17:
9
9
10
Open 9.0 development tree (2023-12-19 09:46:22 -0500)
10
Open 10.0 development tree (2024-12-10 17:41:17 +0000)
11
11
12
are available in the Git repository at:
12
are available in the Git repository at:
13
13
14
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20231219
14
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241211
15
15
16
for you to fetch changes up to 6f9c3aaa34e937d8deaab44671e7562e4027436b:
16
for you to fetch changes up to 1abe28d519239eea5cf9620bb13149423e5665f8:
17
17
18
fsl-imx: add simple RTC emulation for i.MX6 and i.MX7 boards (2023-12-19 18:03:32 +0000)
18
MAINTAINERS: Add correct email address for Vikram Garhwal (2024-12-11 15:31:09 +0000)
19
19
20
----------------------------------------------------------------
20
----------------------------------------------------------------
21
target-arm queue:
21
target-arm queue:
22
* arm/kvm: drop the split between "common KVM support" and
22
* hw/net/lan9118: Extract PHY model, reuse with imx_fec, fix bugs
23
"64-bit KVM support", since 32-bit Arm KVM no longer exists
23
* fpu: Make muladd NaN handling runtime-selected, not compile-time
24
* arm/kvm: clean up APIs to be consistent about CPU arguments
24
* fpu: Make default NaN pattern runtime-selected, not compile-time
25
* Don't implement *32_EL2 registers when EL1 is AArch64 only
25
* fpu: Minor NaN-related cleanups
26
* Restrict DC CVAP & DC CVADP instructions to TCG accel
26
* MAINTAINERS: email address updates
27
* Restrict TCG specific helpers
28
* Propagate MDCR_EL2.HPMN into PMCR_EL0.N
29
* Include missing 'exec/exec-all.h' header
30
* fsl-imx: add simple RTC emulation for i.MX6 and i.MX7 boards
31
27
32
----------------------------------------------------------------
28
----------------------------------------------------------------
33
Chao Du (1):
29
Bernhard Beschow (5):
34
target/arm: kvm64: remove a redundant KVM_CAP_SET_GUEST_DEBUG probe
30
hw/net/lan9118: Extract lan9118_phy
31
hw/net/lan9118_phy: Reuse in imx_fec and consolidate implementations
32
hw/net/lan9118_phy: Fix off-by-one error in MII_ANLPAR register
33
hw/net/lan9118_phy: Reuse MII constants
34
hw/net/lan9118_phy: Add missing 100 mbps full duplex advertisement
35
35
36
Jean-Philippe Brucker (1):
36
Leif Lindholm (1):
37
target/arm/helper: Propagate MDCR_EL2.HPMN into PMCR_EL0.N
37
MAINTAINERS: update email address for Leif Lindholm
38
38
39
Nikita Ostrenkov (1):
39
Peter Maydell (54):
40
fsl-imx: add simple RTC emulation for i.MX6 and i.MX7 boards
40
fpu: handle raising Invalid for infzero in pick_nan_muladd
41
fpu: Check for default_nan_mode before calling pickNaNMulAdd
42
softfloat: Allow runtime choice of inf * 0 + NaN result
43
tests/fp: Explicitly set inf-zero-nan rule
44
target/arm: Set FloatInfZeroNaNRule explicitly
45
target/s390: Set FloatInfZeroNaNRule explicitly
46
target/ppc: Set FloatInfZeroNaNRule explicitly
47
target/mips: Set FloatInfZeroNaNRule explicitly
48
target/sparc: Set FloatInfZeroNaNRule explicitly
49
target/xtensa: Set FloatInfZeroNaNRule explicitly
50
target/x86: Set FloatInfZeroNaNRule explicitly
51
target/loongarch: Set FloatInfZeroNaNRule explicitly
52
target/hppa: Set FloatInfZeroNaNRule explicitly
53
softfloat: Pass have_snan to pickNaNMulAdd
54
softfloat: Allow runtime choice of NaN propagation for muladd
55
tests/fp: Explicitly set 3-NaN propagation rule
56
target/arm: Set Float3NaNPropRule explicitly
57
target/loongarch: Set Float3NaNPropRule explicitly
58
target/ppc: Set Float3NaNPropRule explicitly
59
target/s390x: Set Float3NaNPropRule explicitly
60
target/sparc: Set Float3NaNPropRule explicitly
61
target/mips: Set Float3NaNPropRule explicitly
62
target/xtensa: Set Float3NaNPropRule explicitly
63
target/i386: Set Float3NaNPropRule explicitly
64
target/hppa: Set Float3NaNPropRule explicitly
65
fpu: Remove use_first_nan field from float_status
66
target/m68k: Don't pass NULL float_status to floatx80_default_nan()
67
softfloat: Create floatx80 default NaN from parts64_default_nan
68
target/loongarch: Use normal float_status in fclass_s and fclass_d helpers
69
target/m68k: In frem helper, initialize local float_status from env->fp_status
70
target/m68k: Init local float_status from env fp_status in gdb get/set reg
71
target/sparc: Initialize local scratch float_status from env->fp_status
72
target/ppc: Use env->fp_status in helper_compute_fprf functions
73
fpu: Allow runtime choice of default NaN value
74
tests/fp: Set default NaN pattern explicitly
75
target/microblaze: Set default NaN pattern explicitly
76
target/i386: Set default NaN pattern explicitly
77
target/hppa: Set default NaN pattern explicitly
78
target/alpha: Set default NaN pattern explicitly
79
target/arm: Set default NaN pattern explicitly
80
target/loongarch: Set default NaN pattern explicitly
81
target/m68k: Set default NaN pattern explicitly
82
target/mips: Set default NaN pattern explicitly
83
target/openrisc: Set default NaN pattern explicitly
84
target/ppc: Set default NaN pattern explicitly
85
target/sh4: Set default NaN pattern explicitly
86
target/rx: Set default NaN pattern explicitly
87
target/s390x: Set default NaN pattern explicitly
88
target/sparc: Set default NaN pattern explicitly
89
target/xtensa: Set default NaN pattern explicitly
90
target/hexagon: Set default NaN pattern explicitly
91
target/riscv: Set default NaN pattern explicitly
92
target/tricore: Set default NaN pattern explicitly
93
fpu: Remove default handling for dnan_pattern
41
94
42
Peter Maydell (1):
95
Richard Henderson (11):
43
target/arm: Don't implement *32_EL2 registers when EL1 is AArch64 only
96
target/arm: Copy entire float_status in is_ebf
97
softfloat: Inline pickNaNMulAdd
98
softfloat: Use goto for default nan case in pick_nan_muladd
99
softfloat: Remove which from parts_pick_nan_muladd
100
softfloat: Pad array size in pick_nan_muladd
101
softfloat: Move propagateFloatx80NaN to softfloat.c
102
softfloat: Use parts_pick_nan in propagateFloatx80NaN
103
softfloat: Inline pickNaN
104
softfloat: Share code between parts_pick_nan cases
105
softfloat: Sink frac_cmp in parts_pick_nan until needed
106
softfloat: Replace WHICH with RET in parts_pick_nan
44
107
45
Philippe Mathieu-Daudé (19):
108
Vikram Garhwal (1):
46
hw/intc/arm_gicv3: Include missing 'qemu/error-report.h' header
109
MAINTAINERS: Add correct email address for Vikram Garhwal
47
target/arm/kvm: Remove unused includes
48
target/arm/kvm: Have kvm_arm_add_vcpu_properties take a ARMCPU argument
49
target/arm/kvm: Have kvm_arm_sve_set_vls take a ARMCPU argument
50
target/arm/kvm: Have kvm_arm_sve_get_vls take a ARMCPU argument
51
target/arm/kvm: Have kvm_arm_set_device_attr take a ARMCPU argument
52
target/arm/kvm: Have kvm_arm_pvtime_init take a ARMCPU argument
53
target/arm/kvm: Have kvm_arm_pmu_init take a ARMCPU argument
54
target/arm/kvm: Have kvm_arm_pmu_set_irq take a ARMCPU argument
55
target/arm/kvm: Have kvm_arm_vcpu_init take a ARMCPU argument
56
target/arm/kvm: Have kvm_arm_vcpu_finalize take a ARMCPU argument
57
target/arm/kvm: Have kvm_arm_[get|put]_virtual_time take ARMCPU argument
58
target/arm/kvm: Have kvm_arm_verify_ext_dabt_pending take a ARMCPU arg
59
target/arm/kvm: Have kvm_arm_handle_dabt_nisv take a ARMCPU argument
60
target/arm/kvm: Have kvm_arm_handle_debug take a ARMCPU argument
61
target/arm/kvm: Have kvm_arm_hw_debug_active take a ARMCPU argument
62
target/arm: Restrict TCG specific helpers
63
target/arm: Restrict DC CVAP & DC CVADP instructions to TCG accel
64
target/arm/tcg: Including missing 'exec/exec-all.h' header
65
110
66
Richard Henderson (20):
111
MAINTAINERS | 4 +-
67
accel/kvm: Make kvm_has_guest_debug static
112
include/fpu/softfloat-helpers.h | 38 +++-
68
target/arm/kvm: Merge kvm_arm_init_debug into kvm_arch_init
113
include/fpu/softfloat-types.h | 89 +++++++-
69
target/arm/kvm: Move kvm_arm_verify_ext_dabt_pending and unexport
114
include/hw/net/imx_fec.h | 9 +-
70
target/arm/kvm: Move kvm_arm_copy_hw_debug_data and unexport
115
include/hw/net/lan9118_phy.h | 37 ++++
71
target/arm/kvm: Move kvm_arm_hw_debug_active and unexport
116
include/hw/net/mii.h | 6 +
72
target/arm/kvm: Move kvm_arm_handle_debug and unexport
117
target/mips/fpu_helper.h | 20 ++
73
target/arm/kvm: Unexport kvm_arm_{get, put}_virtual_time
118
target/sparc/helper.h | 4 +-
74
target/arm/kvm: Inline kvm_arm_steal_time_supported
119
fpu/softfloat.c | 19 ++
75
target/arm/kvm: Move kvm_arm_get_host_cpu_features and unexport
120
hw/net/imx_fec.c | 146 ++------------
76
target/arm/kvm: Use a switch for kvm_arm_cpreg_level
121
hw/net/lan9118.c | 137 ++-----------
77
target/arm/kvm: Move kvm_arm_cpreg_level and unexport
122
hw/net/lan9118_phy.c | 222 ++++++++++++++++++++
78
target/arm/kvm: Move kvm_arm_reg_syncs_via_cpreg_list and unexport
123
linux-user/arm/nwfpe/fpa11.c | 5 +
79
target/arm/kvm: Merge kvm64.c into kvm.c
124
target/alpha/cpu.c | 2 +
80
target/arm/kvm: Unexport kvm_arm_vcpu_init
125
target/arm/cpu.c | 10 +
81
target/arm/kvm: Unexport kvm_arm_vcpu_finalize
126
target/arm/tcg/vec_helper.c | 20 +-
82
target/arm/kvm: Unexport kvm_arm_init_cpreg_list
127
target/hexagon/cpu.c | 2 +
83
target/arm/kvm: Init cap_has_inject_serror_esr in kvm_arch_init
128
target/hppa/fpu_helper.c | 12 ++
84
target/arm/kvm: Unexport kvm_{get,put}_vcpu_events
129
target/i386/tcg/fpu_helper.c | 12 ++
85
target/arm/kvm: Unexport and tidy kvm_arm_sync_mpstate_to_{kvm, qemu}
130
target/loongarch/tcg/fpu_helper.c | 14 +-
86
target/arm/kvm: Unexport kvm_arm_vm_state_change
131
target/m68k/cpu.c | 14 +-
87
132
target/m68k/fpu_helper.c | 6 +-
88
include/hw/misc/imx7_snvs.h | 7 +-
133
target/m68k/helper.c | 6 +-
89
target/arm/kvm_arm.h | 231 +------
134
target/microblaze/cpu.c | 2 +
90
accel/kvm/kvm-all.c | 2 +-
135
target/mips/msa.c | 10 +
91
hw/arm/virt.c | 9 +-
136
target/openrisc/cpu.c | 2 +
92
hw/intc/arm_gicv3_its_kvm.c | 1 +
137
target/ppc/cpu_init.c | 19 ++
93
hw/misc/imx7_snvs.c | 93 ++-
138
target/ppc/fpu_helper.c | 3 +-
94
target/arm/cpu.c | 2 +-
139
target/riscv/cpu.c | 2 +
95
target/arm/cpu64.c | 2 +-
140
target/rx/cpu.c | 2 +
96
target/arm/debug_helper.c | 23 +-
141
target/s390x/cpu.c | 5 +
97
target/arm/helper.c | 117 ++--
142
target/sh4/cpu.c | 2 +
98
target/arm/kvm.c | 1409 ++++++++++++++++++++++++++++++++++++++--
143
target/sparc/cpu.c | 6 +
99
target/arm/kvm64.c | 1290 ------------------------------------
144
target/sparc/fop_helper.c | 8 +-
100
target/arm/tcg/op_helper.c | 55 ++
145
target/sparc/translate.c | 4 +-
101
target/arm/tcg/translate-a64.c | 1 +
146
target/tricore/helper.c | 2 +
102
hw/misc/trace-events | 4 +-
147
target/xtensa/cpu.c | 4 +
103
target/arm/meson.build | 2 +-
148
target/xtensa/fpu_helper.c | 3 +-
104
16 files changed, 1592 insertions(+), 1656 deletions(-)
149
tests/fp/fp-bench.c | 7 +
105
delete mode 100644 target/arm/kvm64.c
150
tests/fp/fp-test-log2.c | 1 +
106
151
tests/fp/fp-test.c | 7 +
152
fpu/softfloat-parts.c.inc | 152 +++++++++++---
153
fpu/softfloat-specialize.c.inc | 412 ++------------------------------------
154
.mailmap | 5 +-
155
hw/net/Kconfig | 5 +
156
hw/net/meson.build | 1 +
157
hw/net/trace-events | 10 +-
158
47 files changed, 778 insertions(+), 730 deletions(-)
159
create mode 100644 include/hw/net/lan9118_phy.h
160
create mode 100644 hw/net/lan9118_phy.c
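(For context: the fpu entries above move the NaN choices from compile-time ifdefs to
per-target runtime configuration. A minimal sketch of what a target's float_status
setup might look like follows; the helper and enum names are inferred from the commit
titles and the softfloat-helpers.h diffstat, so treat them as assumptions rather than
a quote of the exact API.)

/*
 * Minimal sketch only: helper and enum names are inferred from the commit
 * titles above and may not match the final API exactly.
 */
#include "qemu/osdep.h"
#include "fpu/softfloat-helpers.h"

static void example_target_init_fp_status(float_status *s)
{
    /* What inf * 0 + NaN returns (previously a compile-time ifdef) */
    set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);

    /* Which of the three muladd inputs propagates its NaN */
    set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);

    /* Bit pattern of the default NaN: here sign clear, top fraction bit set */
    set_float_default_nan_pattern(0b01000000, s);
}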
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
3
A very similar implementation of the same device exists in imx_fec. Prepare for
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
a common implementation by extracting a device model into its own files.
5
Message-id: 20231130142519.28417-2-philmd@linaro.org
5
6
Some migration state has been moved into the new device model which breaks
7
migration compatibility for the following machines:
8
* smdkc210
9
* realview-*
10
* vexpress-*
11
* kzm
12
* mps2-*
13
14
While breaking migration ABI, fix the size of the MII registers to be 16 bit,
15
as defined by IEEE 802.3u.
16
17
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
18
Tested-by: Guenter Roeck <linux@roeck-us.net>
19
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
Message-id: 20241102125724.532843-2-shentey@gmail.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
22
---
8
target/arm/helper.c | 55 --------------------------------------
23
include/hw/net/lan9118_phy.h | 37 ++++++++
9
target/arm/tcg/op_helper.c | 55 ++++++++++++++++++++++++++++++++++++++
24
hw/net/lan9118.c | 137 +++++-----------------------
10
2 files changed, 55 insertions(+), 55 deletions(-)
25
hw/net/lan9118_phy.c | 169 +++++++++++++++++++++++++++++++++++
26
hw/net/Kconfig | 4 +
27
hw/net/meson.build | 1 +
28
5 files changed, 233 insertions(+), 115 deletions(-)
29
create mode 100644 include/hw/net/lan9118_phy.h
30
create mode 100644 hw/net/lan9118_phy.c
11
31
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
diff --git a/include/hw/net/lan9118_phy.h b/include/hw/net/lan9118_phy.h
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/include/hw/net/lan9118_phy.h
37
@@ -XXX,XX +XXX,XX @@
38
+/*
39
+ * SMSC LAN9118 PHY emulation
40
+ *
41
+ * Copyright (c) 2009 CodeSourcery, LLC.
42
+ * Written by Paul Brook
43
+ *
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
45
+ * See the COPYING file in the top-level directory.
46
+ */
47
+
48
+#ifndef HW_NET_LAN9118_PHY_H
49
+#define HW_NET_LAN9118_PHY_H
50
+
51
+#include "qom/object.h"
52
+#include "hw/sysbus.h"
53
+
54
+#define TYPE_LAN9118_PHY "lan9118-phy"
55
+OBJECT_DECLARE_SIMPLE_TYPE(Lan9118PhyState, LAN9118_PHY)
56
+
57
+typedef struct Lan9118PhyState {
58
+ SysBusDevice parent_obj;
59
+
60
+ uint16_t status;
61
+ uint16_t control;
62
+ uint16_t advertise;
63
+ uint16_t ints;
64
+ uint16_t int_mask;
65
+ qemu_irq irq;
66
+ bool link_down;
67
+} Lan9118PhyState;
68
+
69
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down);
70
+void lan9118_phy_reset(Lan9118PhyState *s);
71
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg);
72
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val);
73
+
74
+#endif
75
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
13
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
77
--- a/hw/net/lan9118.c
15
+++ b/target/arm/helper.c
78
+++ b/hw/net/lan9118.c
16
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
79
@@ -XXX,XX +XXX,XX @@
80
#include "net/net.h"
81
#include "net/eth.h"
82
#include "hw/irq.h"
83
+#include "hw/net/lan9118_phy.h"
84
#include "hw/net/lan9118.h"
85
#include "hw/ptimer.h"
86
#include "hw/qdev-properties.h"
87
@@ -XXX,XX +XXX,XX @@ do { printf("lan9118: " fmt , ## __VA_ARGS__); } while (0)
88
#define MAC_CR_RXEN 0x00000004
89
#define MAC_CR_RESERVED 0x7f404213
90
91
-#define PHY_INT_ENERGYON 0x80
92
-#define PHY_INT_AUTONEG_COMPLETE 0x40
93
-#define PHY_INT_FAULT 0x20
94
-#define PHY_INT_DOWN 0x10
95
-#define PHY_INT_AUTONEG_LP 0x08
96
-#define PHY_INT_PARFAULT 0x04
97
-#define PHY_INT_AUTONEG_PAGE 0x02
98
-
99
#define GPT_TIMER_EN 0x20000000
100
101
/*
102
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
103
uint32_t mac_mii_data;
104
uint32_t mac_flow;
105
106
- uint32_t phy_status;
107
- uint32_t phy_control;
108
- uint32_t phy_advertise;
109
- uint32_t phy_int;
110
- uint32_t phy_int_mask;
111
+ Lan9118PhyState mii;
112
+ IRQState mii_irq;
113
114
int32_t eeprom_writable;
115
uint8_t eeprom[128];
116
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
117
118
static const VMStateDescription vmstate_lan9118 = {
119
.name = "lan9118",
120
- .version_id = 2,
121
- .minimum_version_id = 1,
122
+ .version_id = 3,
123
+ .minimum_version_id = 3,
124
.fields = (const VMStateField[]) {
125
VMSTATE_PTIMER(timer, lan9118_state),
126
VMSTATE_UINT32(irq_cfg, lan9118_state),
127
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118 = {
128
VMSTATE_UINT32(mac_mii_acc, lan9118_state),
129
VMSTATE_UINT32(mac_mii_data, lan9118_state),
130
VMSTATE_UINT32(mac_flow, lan9118_state),
131
- VMSTATE_UINT32(phy_status, lan9118_state),
132
- VMSTATE_UINT32(phy_control, lan9118_state),
133
- VMSTATE_UINT32(phy_advertise, lan9118_state),
134
- VMSTATE_UINT32(phy_int, lan9118_state),
135
- VMSTATE_UINT32(phy_int_mask, lan9118_state),
136
VMSTATE_INT32(eeprom_writable, lan9118_state),
137
VMSTATE_UINT8_ARRAY(eeprom, lan9118_state, 128),
138
VMSTATE_INT32(tx_fifo_size, lan9118_state),
139
@@ -XXX,XX +XXX,XX @@ static void lan9118_reload_eeprom(lan9118_state *s)
140
lan9118_mac_changed(s);
141
}
142
143
-static void phy_update_irq(lan9118_state *s)
144
+static void lan9118_update_irq(void *opaque, int n, int level)
145
{
146
- if (s->phy_int & s->phy_int_mask) {
147
+ lan9118_state *s = opaque;
148
+
149
+ if (level) {
150
s->int_sts |= PHY_INT;
151
} else {
152
s->int_sts &= ~PHY_INT;
153
@@ -XXX,XX +XXX,XX @@ static void phy_update_irq(lan9118_state *s)
154
lan9118_update(s);
155
}
156
157
-static void phy_update_link(lan9118_state *s)
158
-{
159
- /* Autonegotiation status mirrors link status. */
160
- if (qemu_get_queue(s->nic)->link_down) {
161
- s->phy_status &= ~0x0024;
162
- s->phy_int |= PHY_INT_DOWN;
163
- } else {
164
- s->phy_status |= 0x0024;
165
- s->phy_int |= PHY_INT_ENERGYON;
166
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
167
- }
168
- phy_update_irq(s);
169
-}
170
-
171
static void lan9118_set_link(NetClientState *nc)
172
{
173
- phy_update_link(qemu_get_nic_opaque(nc));
174
-}
175
-
176
-static void phy_reset(lan9118_state *s)
177
-{
178
- s->phy_status = 0x7809;
179
- s->phy_control = 0x3000;
180
- s->phy_advertise = 0x01e1;
181
- s->phy_int_mask = 0;
182
- s->phy_int = 0;
183
- phy_update_link(s);
184
+ lan9118_phy_update_link(&LAN9118(qemu_get_nic_opaque(nc))->mii,
185
+ nc->link_down);
186
}
187
188
static void lan9118_reset(DeviceState *d)
189
@@ -XXX,XX +XXX,XX @@ static void lan9118_reset(DeviceState *d)
190
s->read_word_n = 0;
191
s->write_word_n = 0;
192
193
- phy_reset(s);
194
-
195
s->eeprom_writable = 0;
196
lan9118_reload_eeprom(s);
197
}
198
@@ -XXX,XX +XXX,XX @@ static void do_tx_packet(lan9118_state *s)
199
uint32_t status;
200
201
/* FIXME: Honor TX disable, and allow queueing of packets. */
202
- if (s->phy_control & 0x4000) {
203
+ if (s->mii.control & 0x4000) {
204
/* This assumes the receive routine doesn't touch the VLANClient. */
205
qemu_receive_packet(qemu_get_queue(s->nic), s->txp->data, s->txp->len);
206
} else {
207
@@ -XXX,XX +XXX,XX @@ static void tx_fifo_push(lan9118_state *s, uint32_t val)
17
}
208
}
18
}
209
}
19
210
20
-/* Sign/zero extend */
211
-static uint32_t do_phy_read(lan9118_state *s, int reg)
21
-uint32_t HELPER(sxtb16)(uint32_t x)
22
-{
212
-{
23
- uint32_t res;
213
- uint32_t val;
24
- res = (uint16_t)(int8_t)x;
214
-
25
- res |= (uint32_t)(int8_t)(x >> 16) << 16;
215
- switch (reg) {
26
- return res;
216
- case 0: /* Basic Control */
27
-}
217
- return s->phy_control;
28
-
218
- case 1: /* Basic Status */
29
-static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra)
219
- return s->phy_status;
30
-{
220
- case 2: /* ID1 */
31
- /*
221
- return 0x0007;
32
- * Take a division-by-zero exception if necessary; otherwise return
222
- case 3: /* ID2 */
33
- * to get the usual non-trapping division behaviour (result of 0)
223
- return 0xc0d1;
34
- */
224
- case 4: /* Auto-neg advertisement */
35
- if (arm_feature(env, ARM_FEATURE_M)
225
- return s->phy_advertise;
36
- && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
226
- case 5: /* Auto-neg Link Partner Ability */
37
- raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra);
227
- return 0x0f71;
228
- case 6: /* Auto-neg Expansion */
229
- return 1;
230
- /* TODO 17, 18, 27, 29, 30, 31 */
231
- case 29: /* Interrupt source. */
232
- val = s->phy_int;
233
- s->phy_int = 0;
234
- phy_update_irq(s);
235
- return val;
236
- case 30: /* Interrupt mask */
237
- return s->phy_int_mask;
238
- default:
239
- qemu_log_mask(LOG_GUEST_ERROR,
240
- "do_phy_read: PHY read reg %d\n", reg);
241
- return 0;
38
- }
242
- }
39
-}
243
-}
40
-
244
-
41
-uint32_t HELPER(uxtb16)(uint32_t x)
245
-static void do_phy_write(lan9118_state *s, int reg, uint32_t val)
42
-{
246
-{
43
- uint32_t res;
247
- switch (reg) {
44
- res = (uint16_t)(uint8_t)x;
248
- case 0: /* Basic Control */
45
- res |= (uint32_t)(uint8_t)(x >> 16) << 16;
249
- if (val & 0x8000) {
46
- return res;
250
- phy_reset(s);
251
- break;
252
- }
253
- s->phy_control = val & 0x7980;
254
- /* Complete autonegotiation immediately. */
255
- if (val & 0x1000) {
256
- s->phy_status |= 0x0020;
257
- }
258
- break;
259
- case 4: /* Auto-neg advertisement */
260
- s->phy_advertise = (val & 0x2d7f) | 0x80;
261
- break;
262
- /* TODO 17, 18, 27, 31 */
263
- case 30: /* Interrupt mask */
264
- s->phy_int_mask = val & 0xff;
265
- phy_update_irq(s);
266
- break;
267
- default:
268
- qemu_log_mask(LOG_GUEST_ERROR,
269
- "do_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
270
- }
47
-}
271
-}
48
-
272
-
49
-int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
273
static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
50
-{
274
{
51
- if (den == 0) {
275
switch (reg) {
52
- handle_possible_div0_trap(env, GETPC());
276
@@ -XXX,XX +XXX,XX @@ static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
53
- return 0;
277
if (val & 2) {
54
- }
278
DPRINTF("PHY write %d = 0x%04x\n",
55
- if (num == INT_MIN && den == -1) {
279
(val >> 6) & 0x1f, s->mac_mii_data);
56
- return INT_MIN;
280
- do_phy_write(s, (val >> 6) & 0x1f, s->mac_mii_data);
57
- }
281
+ lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
58
- return num / den;
282
} else {
59
-}
283
- s->mac_mii_data = do_phy_read(s, (val >> 6) & 0x1f);
60
-
284
+ s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
61
-uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
285
DPRINTF("PHY read %d = 0x%04x\n",
62
-{
286
(val >> 6) & 0x1f, s->mac_mii_data);
63
- if (den == 0) {
287
}
64
- handle_possible_div0_trap(env, GETPC());
288
@@ -XXX,XX +XXX,XX @@ static void lan9118_writel(void *opaque, hwaddr offset,
65
- return 0;
289
break;
66
- }
290
case CSR_PMT_CTRL:
67
- return num / den;
291
if (val & 0x400) {
68
-}
292
- phy_reset(s);
69
-
293
+ lan9118_phy_reset(&s->mii);
70
-uint32_t HELPER(rbit)(uint32_t x)
294
}
71
-{
295
s->pmt_ctrl &= ~0x34e;
72
- return revbit32(x);
296
s->pmt_ctrl |= (val & 0x34e);
73
-}
297
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
74
-
298
const MemoryRegionOps *mem_ops =
75
#ifdef CONFIG_USER_ONLY
299
s->mode_16bit ? &lan9118_16bit_mem_ops : &lan9118_mem_ops;
76
300
77
static void switch_mode(CPUARMState *env, int mode)
301
+ qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
78
diff --git a/target/arm/tcg/op_helper.c b/target/arm/tcg/op_helper.c
302
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
79
index XXXXXXX..XXXXXXX 100644
303
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
80
--- a/target/arm/tcg/op_helper.c
304
+ return;
81
+++ b/target/arm/tcg/op_helper.c
82
@@ -XXX,XX +XXX,XX @@ void HELPER(v8m_stackcheck)(CPUARMState *env, uint32_t newvalue)
83
}
84
}
85
86
+/* Sign/zero extend */
87
+uint32_t HELPER(sxtb16)(uint32_t x)
88
+{
89
+ uint32_t res;
90
+ res = (uint16_t)(int8_t)x;
91
+ res |= (uint32_t)(int8_t)(x >> 16) << 16;
92
+ return res;
93
+}
94
+
95
+static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra)
96
+{
97
+ /*
98
+ * Take a division-by-zero exception if necessary; otherwise return
99
+ * to get the usual non-trapping division behaviour (result of 0)
100
+ */
101
+ if (arm_feature(env, ARM_FEATURE_M)
102
+ && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
103
+ raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra);
104
+ }
305
+ }
105
+}
306
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
106
+
307
+
107
+uint32_t HELPER(uxtb16)(uint32_t x)
308
memory_region_init_io(&s->mmio, OBJECT(dev), mem_ops, s,
108
+{
309
"lan9118-mmio", 0x100);
109
+ uint32_t res;
310
sysbus_init_mmio(sbd, &s->mmio);
110
+ res = (uint16_t)(uint8_t)x;
311
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
111
+ res |= (uint32_t)(uint8_t)(x >> 16) << 16;
312
new file mode 100644
112
+ return res;
313
index XXXXXXX..XXXXXXX
113
+}
314
--- /dev/null
114
+
315
+++ b/hw/net/lan9118_phy.c
115
+int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
316
@@ -XXX,XX +XXX,XX @@
116
+{
317
+/*
117
+ if (den == 0) {
318
+ * SMSC LAN9118 PHY emulation
118
+ handle_possible_div0_trap(env, GETPC());
319
+ *
320
+ * Copyright (c) 2009 CodeSourcery, LLC.
321
+ * Written by Paul Brook
322
+ *
323
+ * This code is licensed under the GNU GPL v2
324
+ *
325
+ * Contributions after 2012-01-13 are licensed under the terms of the
326
+ * GNU GPL, version 2 or (at your option) any later version.
327
+ */
328
+
329
+#include "qemu/osdep.h"
330
+#include "hw/net/lan9118_phy.h"
331
+#include "hw/irq.h"
332
+#include "hw/resettable.h"
333
+#include "migration/vmstate.h"
334
+#include "qemu/log.h"
335
+
336
+#define PHY_INT_ENERGYON (1 << 7)
337
+#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
338
+#define PHY_INT_FAULT (1 << 5)
339
+#define PHY_INT_DOWN (1 << 4)
340
+#define PHY_INT_AUTONEG_LP (1 << 3)
341
+#define PHY_INT_PARFAULT (1 << 2)
342
+#define PHY_INT_AUTONEG_PAGE (1 << 1)
343
+
344
+static void lan9118_phy_update_irq(Lan9118PhyState *s)
345
+{
346
+ qemu_set_irq(s->irq, !!(s->ints & s->int_mask));
347
+}
348
+
349
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
350
+{
351
+ uint16_t val;
352
+
353
+ switch (reg) {
354
+ case 0: /* Basic Control */
355
+ return s->control;
356
+ case 1: /* Basic Status */
357
+ return s->status;
358
+ case 2: /* ID1 */
359
+ return 0x0007;
360
+ case 3: /* ID2 */
361
+ return 0xc0d1;
362
+ case 4: /* Auto-neg advertisement */
363
+ return s->advertise;
364
+ case 5: /* Auto-neg Link Partner Ability */
365
+ return 0x0f71;
366
+ case 6: /* Auto-neg Expansion */
367
+ return 1;
368
+ /* TODO 17, 18, 27, 29, 30, 31 */
369
+ case 29: /* Interrupt source. */
370
+ val = s->ints;
371
+ s->ints = 0;
372
+ lan9118_phy_update_irq(s);
373
+ return val;
374
+ case 30: /* Interrupt mask */
375
+ return s->int_mask;
376
+ default:
377
+ qemu_log_mask(LOG_GUEST_ERROR,
378
+ "lan9118_phy_read: PHY read reg %d\n", reg);
119
+ return 0;
379
+ return 0;
120
+ }
380
+ }
121
+ if (num == INT_MIN && den == -1) {
381
+}
122
+ return INT_MIN;
382
+
383
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
384
+{
385
+ switch (reg) {
386
+ case 0: /* Basic Control */
387
+ if (val & 0x8000) {
388
+ lan9118_phy_reset(s);
389
+ break;
390
+ }
391
+ s->control = val & 0x7980;
392
+ /* Complete autonegotiation immediately. */
393
+ if (val & 0x1000) {
394
+ s->status |= 0x0020;
395
+ }
396
+ break;
397
+ case 4: /* Auto-neg advertisement */
398
+ s->advertise = (val & 0x2d7f) | 0x80;
399
+ break;
400
+ /* TODO 17, 18, 27, 31 */
401
+ case 30: /* Interrupt mask */
402
+ s->int_mask = val & 0xff;
403
+ lan9118_phy_update_irq(s);
404
+ break;
405
+ default:
406
+ qemu_log_mask(LOG_GUEST_ERROR,
407
+ "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
123
+ }
408
+ }
124
+ return num / den;
409
+}
125
+}
410
+
126
+
411
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
127
+uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
412
+{
128
+{
413
+ s->link_down = link_down;
129
+ if (den == 0) {
414
+
130
+ handle_possible_div0_trap(env, GETPC());
415
+ /* Autonegotiation status mirrors link status. */
131
+ return 0;
416
+ if (link_down) {
417
+ s->status &= ~0x0024;
418
+ s->ints |= PHY_INT_DOWN;
419
+ } else {
420
+ s->status |= 0x0024;
421
+ s->ints |= PHY_INT_ENERGYON;
422
+ s->ints |= PHY_INT_AUTONEG_COMPLETE;
132
+ }
423
+ }
133
+ return num / den;
424
+ lan9118_phy_update_irq(s);
134
+}
425
+}
135
+
426
+
136
+uint32_t HELPER(rbit)(uint32_t x)
427
+void lan9118_phy_reset(Lan9118PhyState *s)
137
+{
428
+{
138
+ return revbit32(x);
429
+ s->control = 0x3000;
139
+}
430
+ s->status = 0x7809;
140
+
431
+ s->advertise = 0x01e1;
141
uint32_t HELPER(add_setq)(CPUARMState *env, uint32_t a, uint32_t b)
432
+ s->int_mask = 0;
142
{
433
+ s->ints = 0;
143
uint32_t res = a + b;
434
+ lan9118_phy_update_link(s, s->link_down);
435
+}
436
+
437
+static void lan9118_phy_reset_hold(Object *obj, ResetType type)
438
+{
439
+ Lan9118PhyState *s = LAN9118_PHY(obj);
440
+
441
+ lan9118_phy_reset(s);
442
+}
443
+
444
+static void lan9118_phy_init(Object *obj)
445
+{
446
+ Lan9118PhyState *s = LAN9118_PHY(obj);
447
+
448
+ qdev_init_gpio_out(DEVICE(s), &s->irq, 1);
449
+}
450
+
451
+static const VMStateDescription vmstate_lan9118_phy = {
452
+ .name = "lan9118-phy",
453
+ .version_id = 1,
454
+ .minimum_version_id = 1,
455
+ .fields = (const VMStateField[]) {
456
+ VMSTATE_UINT16(control, Lan9118PhyState),
457
+ VMSTATE_UINT16(status, Lan9118PhyState),
458
+ VMSTATE_UINT16(advertise, Lan9118PhyState),
459
+ VMSTATE_UINT16(ints, Lan9118PhyState),
460
+ VMSTATE_UINT16(int_mask, Lan9118PhyState),
461
+ VMSTATE_BOOL(link_down, Lan9118PhyState),
462
+ VMSTATE_END_OF_LIST()
463
+ }
464
+};
465
+
466
+static void lan9118_phy_class_init(ObjectClass *klass, void *data)
467
+{
468
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
469
+ DeviceClass *dc = DEVICE_CLASS(klass);
470
+
471
+ rc->phases.hold = lan9118_phy_reset_hold;
472
+ dc->vmsd = &vmstate_lan9118_phy;
473
+}
474
+
475
+static const TypeInfo types[] = {
476
+ {
477
+ .name = TYPE_LAN9118_PHY,
478
+ .parent = TYPE_SYS_BUS_DEVICE,
479
+ .instance_size = sizeof(Lan9118PhyState),
480
+ .instance_init = lan9118_phy_init,
481
+ .class_init = lan9118_phy_class_init,
482
+ }
483
+};
484
+
485
+DEFINE_TYPES(types)
486
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
487
index XXXXXXX..XXXXXXX 100644
488
--- a/hw/net/Kconfig
489
+++ b/hw/net/Kconfig
490
@@ -XXX,XX +XXX,XX @@ config VMXNET3_PCI
491
config SMC91C111
492
bool
493
494
+config LAN9118_PHY
495
+ bool
496
+
497
config LAN9118
498
bool
499
+ select LAN9118_PHY
500
select PTIMER
501
502
config NE2000_ISA
503
diff --git a/hw/net/meson.build b/hw/net/meson.build
504
index XXXXXXX..XXXXXXX 100644
505
--- a/hw/net/meson.build
506
+++ b/hw/net/meson.build
507
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMXNET3_PCI', if_true: files('vmxnet3.c'))
508
509
system_ss.add(when: 'CONFIG_SMC91C111', if_true: files('smc91c111.c'))
510
system_ss.add(when: 'CONFIG_LAN9118', if_true: files('lan9118.c'))
511
+system_ss.add(when: 'CONFIG_LAN9118_PHY', if_true: files('lan9118_phy.c'))
512
system_ss.add(when: 'CONFIG_NE2000_ISA', if_true: files('ne2000-isa.c'))
513
system_ss.add(when: 'CONFIG_OPENCORES_ETH', if_true: files('opencores_eth.c'))
514
system_ss.add(when: 'CONFIG_XGMAC', if_true: files('xgmac.c'))
144
--
515
--
145
2.34.1
516
2.34.1
146
147
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
imx_fec models the same PHY as lan9118_phy. The code is almost the same, with
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
4
imx_fec having more logging and tracing. Merge these improvements into
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
lan9118_phy and reuse it in imx_fec to remove the code duplication.
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
7
Some migration state now resides in the new device model, which breaks migration
8
compatibility for the following machines:
9
* imx25-pdk
10
* sabrelite
11
* mcimx7d-sabre
12
* mcimx6ul-evk
13
14
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
15
Tested-by: Guenter Roeck <linux@roeck-us.net>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20241102125724.532843-3-shentey@gmail.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
19
---
9
target/arm/kvm_arm.h | 10 --------
20
include/hw/net/imx_fec.h | 9 ++-
10
target/arm/kvm.c | 57 ++++++++++++++++++++++++++++++++++++++++++++
21
hw/net/imx_fec.c | 146 ++++-----------------------------------
11
target/arm/kvm64.c | 49 -------------------------------------
22
hw/net/lan9118_phy.c | 82 ++++++++++++++++------
12
3 files changed, 57 insertions(+), 59 deletions(-)
23
hw/net/Kconfig | 1 +
24
hw/net/trace-events | 10 +--
25
5 files changed, 85 insertions(+), 163 deletions(-)
13
26
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
27
diff --git a/include/hw/net/imx_fec.h b/include/hw/net/imx_fec.h
15
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
29
--- a/include/hw/net/imx_fec.h
17
+++ b/target/arm/kvm_arm.h
30
+++ b/include/hw/net/imx_fec.h
18
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_hw_debug_active(CPUState *cs);
31
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(IMXFECState, IMX_FEC)
19
struct kvm_guest_debug_arch;
32
#define TYPE_IMX_ENET "imx.enet"
20
void kvm_arm_copy_hw_debug_data(struct kvm_guest_debug_arch *ptr);
33
21
34
#include "hw/sysbus.h"
22
-/**
35
+#include "hw/net/lan9118_phy.h"
23
- * kvm_arm_verify_ext_dabt_pending:
36
+#include "hw/irq.h"
24
- * @cs: CPUState
37
#include "net/net.h"
25
- *
38
26
- * Verify the fault status code wrt the Ext DABT injection
39
#define ENET_EIR 1
27
- *
40
@@ -XXX,XX +XXX,XX @@ struct IMXFECState {
28
- * Returns: true if the fault status code is as expected, false otherwise
41
uint32_t tx_descriptor[ENET_TX_RING_NUM];
29
- */
42
uint32_t tx_ring_num;
30
-bool kvm_arm_verify_ext_dabt_pending(CPUState *cs);
43
31
-
44
- uint32_t phy_status;
32
#endif
45
- uint32_t phy_control;
33
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
46
- uint32_t phy_advertise;
47
- uint32_t phy_int;
48
- uint32_t phy_int_mask;
49
+ Lan9118PhyState mii;
50
+ IRQState mii_irq;
51
uint32_t phy_num;
52
bool phy_connected;
53
struct IMXFECState *phy_consumer;
54
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
34
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/kvm.c
56
--- a/hw/net/imx_fec.c
36
+++ b/target/arm/kvm.c
57
+++ b/hw/net/imx_fec.c
37
@@ -XXX,XX +XXX,XX @@ int kvm_get_vcpu_events(ARMCPU *cpu)
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth_txdescs = {
38
return 0;
59
39
}
60
static const VMStateDescription vmstate_imx_eth = {
40
61
.name = TYPE_IMX_FEC,
41
+#define ARM64_REG_ESR_EL1 ARM64_SYS_REG(3, 0, 5, 2, 0)
62
- .version_id = 2,
42
+#define ARM64_REG_TCR_EL1 ARM64_SYS_REG(3, 0, 2, 0, 2)
63
- .minimum_version_id = 2,
43
+
64
+ .version_id = 3,
44
+/*
65
+ .minimum_version_id = 3,
45
+ * ESR_EL1
66
.fields = (const VMStateField[]) {
46
+ * ISS encoding
67
VMSTATE_UINT32_ARRAY(regs, IMXFECState, ENET_MAX),
47
+ * AARCH64: DFSC, bits [5:0]
68
VMSTATE_UINT32(rx_descriptor, IMXFECState),
48
+ * AARCH32:
69
VMSTATE_UINT32(tx_descriptor[0], IMXFECState),
49
+ * TTBCR.EAE == 0
70
- VMSTATE_UINT32(phy_status, IMXFECState),
50
+ * FS[4] - DFSR[10]
71
- VMSTATE_UINT32(phy_control, IMXFECState),
51
+ * FS[3:0] - DFSR[3:0]
72
- VMSTATE_UINT32(phy_advertise, IMXFECState),
52
+ * TTBCR.EAE == 1
73
- VMSTATE_UINT32(phy_int, IMXFECState),
53
+ * FS, bits [5:0]
74
- VMSTATE_UINT32(phy_int_mask, IMXFECState),
54
+ */
75
VMSTATE_END_OF_LIST()
55
+#define ESR_DFSC(aarch64, lpae, v) \
76
},
56
+ ((aarch64 || (lpae)) ? ((v) & 0x3F) \
77
.subsections = (const VMStateDescription * const []) {
57
+ : (((v) >> 6) | ((v) & 0x1F)))
78
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth = {
58
+
79
},
59
+#define ESR_DFSC_EXTABT(aarch64, lpae) \
80
};
60
+ ((aarch64) ? 0x10 : (lpae) ? 0x10 : 0x8)
81
61
+
82
-#define PHY_INT_ENERGYON (1 << 7)
62
+/**
83
-#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
63
+ * kvm_arm_verify_ext_dabt_pending:
84
-#define PHY_INT_FAULT (1 << 5)
64
+ * @cs: CPUState
85
-#define PHY_INT_DOWN (1 << 4)
65
+ *
86
-#define PHY_INT_AUTONEG_LP (1 << 3)
66
+ * Verify the fault status code wrt the Ext DABT injection
87
-#define PHY_INT_PARFAULT (1 << 2)
67
+ *
88
-#define PHY_INT_AUTONEG_PAGE (1 << 1)
68
+ * Returns: true if the fault status code is as expected, false otherwise
89
-
69
+ */
90
static void imx_eth_update(IMXFECState *s);
70
+static bool kvm_arm_verify_ext_dabt_pending(CPUState *cs)
91
71
+{
92
/*
72
+ uint64_t dfsr_val;
93
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
73
+
94
* For now we don't handle any GPIO/interrupt line, so the OS will
74
+ if (!kvm_get_one_reg(cs, ARM64_REG_ESR_EL1, &dfsr_val)) {
95
* have to poll for the PHY status.
75
+ ARMCPU *cpu = ARM_CPU(cs);
96
*/
76
+ CPUARMState *env = &cpu->env;
97
-static void imx_phy_update_irq(IMXFECState *s)
77
+ int aarch64_mode = arm_feature(env, ARM_FEATURE_AARCH64);
98
+static void imx_phy_update_irq(void *opaque, int n, int level)
78
+ int lpae = 0;
79
+
80
+ if (!aarch64_mode) {
81
+ uint64_t ttbcr;
82
+
83
+ if (!kvm_get_one_reg(cs, ARM64_REG_TCR_EL1, &ttbcr)) {
84
+ lpae = arm_feature(env, ARM_FEATURE_LPAE)
85
+ && (ttbcr & TTBCR_EAE);
86
+ }
87
+ }
88
+ /*
89
+ * The verification here is based on the DFSC bits
90
+ * of the ESR_EL1 reg only
91
+ */
92
+ return (ESR_DFSC(aarch64_mode, lpae, dfsr_val) ==
93
+ ESR_DFSC_EXTABT(aarch64_mode, lpae));
94
+ }
95
+ return false;
96
+}
97
+
98
void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
99
{
99
{
100
ARMCPU *cpu = ARM_CPU(cs);
100
- imx_eth_update(s);
101
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
101
-}
102
index XXXXXXX..XXXXXXX 100644
102
-
103
--- a/target/arm/kvm64.c
103
-static void imx_phy_update_link(IMXFECState *s)
104
+++ b/target/arm/kvm64.c
105
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
106
107
return false;
108
}
109
-
110
-#define ARM64_REG_ESR_EL1 ARM64_SYS_REG(3, 0, 5, 2, 0)
111
-#define ARM64_REG_TCR_EL1 ARM64_SYS_REG(3, 0, 2, 0, 2)
112
-
113
-/*
114
- * ESR_EL1
115
- * ISS encoding
116
- * AARCH64: DFSC, bits [5:0]
117
- * AARCH32:
118
- * TTBCR.EAE == 0
119
- * FS[4] - DFSR[10]
120
- * FS[3:0] - DFSR[3:0]
121
- * TTBCR.EAE == 1
122
- * FS, bits [5:0]
123
- */
124
-#define ESR_DFSC(aarch64, lpae, v) \
125
- ((aarch64 || (lpae)) ? ((v) & 0x3F) \
126
- : (((v) >> 6) | ((v) & 0x1F)))
127
-
128
-#define ESR_DFSC_EXTABT(aarch64, lpae) \
129
- ((aarch64) ? 0x10 : (lpae) ? 0x10 : 0x8)
130
-
131
-bool kvm_arm_verify_ext_dabt_pending(CPUState *cs)
132
-{
104
-{
133
- uint64_t dfsr_val;
105
- /* Autonegotiation status mirrors link status. */
134
-
106
- if (qemu_get_queue(s->nic)->link_down) {
135
- if (!kvm_get_one_reg(cs, ARM64_REG_ESR_EL1, &dfsr_val)) {
107
- trace_imx_phy_update_link("down");
136
- ARMCPU *cpu = ARM_CPU(cs);
108
- s->phy_status &= ~0x0024;
137
- CPUARMState *env = &cpu->env;
109
- s->phy_int |= PHY_INT_DOWN;
138
- int aarch64_mode = arm_feature(env, ARM_FEATURE_AARCH64);
110
- } else {
139
- int lpae = 0;
111
- trace_imx_phy_update_link("up");
140
-
112
- s->phy_status |= 0x0024;
141
- if (!aarch64_mode) {
113
- s->phy_int |= PHY_INT_ENERGYON;
142
- uint64_t ttbcr;
114
- s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
143
-
115
- }
144
- if (!kvm_get_one_reg(cs, ARM64_REG_TCR_EL1, &ttbcr)) {
116
- imx_phy_update_irq(s);
145
- lpae = arm_feature(env, ARM_FEATURE_LPAE)
117
+ imx_eth_update(opaque);
146
- && (ttbcr & TTBCR_EAE);
118
}
119
120
static void imx_eth_set_link(NetClientState *nc)
121
{
122
- imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
123
-}
124
-
125
-static void imx_phy_reset(IMXFECState *s)
126
-{
127
- trace_imx_phy_reset();
128
-
129
- s->phy_status = 0x7809;
130
- s->phy_control = 0x3000;
131
- s->phy_advertise = 0x01e1;
132
- s->phy_int_mask = 0;
133
- s->phy_int = 0;
134
- imx_phy_update_link(s);
135
+ lan9118_phy_update_link(&IMX_FEC(qemu_get_nic_opaque(nc))->mii,
136
+ nc->link_down);
137
}
138
139
static uint32_t imx_phy_read(IMXFECState *s, int reg)
140
{
141
- uint32_t val;
142
uint32_t phy = reg / 32;
143
144
if (!s->phy_connected) {
145
@@ -XXX,XX +XXX,XX @@ static uint32_t imx_phy_read(IMXFECState *s, int reg)
146
147
reg %= 32;
148
149
- switch (reg) {
150
- case 0: /* Basic Control */
151
- val = s->phy_control;
152
- break;
153
- case 1: /* Basic Status */
154
- val = s->phy_status;
155
- break;
156
- case 2: /* ID1 */
157
- val = 0x0007;
158
- break;
159
- case 3: /* ID2 */
160
- val = 0xc0d1;
161
- break;
162
- case 4: /* Auto-neg advertisement */
163
- val = s->phy_advertise;
164
- break;
165
- case 5: /* Auto-neg Link Partner Ability */
166
- val = 0x0f71;
167
- break;
168
- case 6: /* Auto-neg Expansion */
169
- val = 1;
170
- break;
171
- case 29: /* Interrupt source. */
172
- val = s->phy_int;
173
- s->phy_int = 0;
174
- imx_phy_update_irq(s);
175
- break;
176
- case 30: /* Interrupt mask */
177
- val = s->phy_int_mask;
178
- break;
179
- case 17:
180
- case 18:
181
- case 27:
182
- case 31:
183
- qemu_log_mask(LOG_UNIMP, "[%s.phy]%s: reg %d not implemented\n",
184
- TYPE_IMX_FEC, __func__, reg);
185
- val = 0;
186
- break;
187
- default:
188
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
189
- TYPE_IMX_FEC, __func__, reg);
190
- val = 0;
191
- break;
192
- }
193
-
194
- trace_imx_phy_read(val, phy, reg);
195
-
196
- return val;
197
+ return lan9118_phy_read(&s->mii, reg);
198
}
199
200
static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
201
@@ -XXX,XX +XXX,XX @@ static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
202
203
reg %= 32;
204
205
- trace_imx_phy_write(val, phy, reg);
206
-
207
- switch (reg) {
208
- case 0: /* Basic Control */
209
- if (val & 0x8000) {
210
- imx_phy_reset(s);
211
- } else {
212
- s->phy_control = val & 0x7980;
213
- /* Complete autonegotiation immediately. */
214
- if (val & 0x1000) {
215
- s->phy_status |= 0x0020;
147
- }
216
- }
148
- }
217
- }
149
- /*
218
- break;
150
- * The verification here is based on the DFSC bits
219
- case 4: /* Auto-neg advertisement */
151
- * of the ESR_EL1 reg only
220
- s->phy_advertise = (val & 0x2d7f) | 0x80;
152
- */
221
- break;
153
- return (ESR_DFSC(aarch64_mode, lpae, dfsr_val) ==
222
- case 30: /* Interrupt mask */
154
- ESR_DFSC_EXTABT(aarch64_mode, lpae));
223
- s->phy_int_mask = val & 0xff;
224
- imx_phy_update_irq(s);
225
- break;
226
- case 17:
227
- case 18:
228
- case 27:
229
- case 31:
230
- qemu_log_mask(LOG_UNIMP, "[%s.phy)%s: reg %d not implemented\n",
231
- TYPE_IMX_FEC, __func__, reg);
232
- break;
233
- default:
234
- qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
235
- TYPE_IMX_FEC, __func__, reg);
236
- break;
155
- }
237
- }
156
- return false;
238
+ lan9118_phy_write(&s->mii, reg, val);
157
-}
239
}
240
241
static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
242
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
243
244
s->rx_descriptor = 0;
245
memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
246
-
247
- /* We also reset the PHY */
248
- imx_phy_reset(s);
249
}
250
251
static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
252
@@ -XXX,XX +XXX,XX @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
253
sysbus_init_irq(sbd, &s->irq[0]);
254
sysbus_init_irq(sbd, &s->irq[1]);
255
256
+ qemu_init_irq(&s->mii_irq, imx_phy_update_irq, s, 0);
257
+ object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
258
+ if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
259
+ return;
260
+ }
261
+ qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
262
+
263
qemu_macaddr_default_if_unset(&s->conf.macaddr);
264
265
s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
266
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
267
index XXXXXXX..XXXXXXX 100644
268
--- a/hw/net/lan9118_phy.c
269
+++ b/hw/net/lan9118_phy.c
270
@@ -XXX,XX +XXX,XX @@
271
* Copyright (c) 2009 CodeSourcery, LLC.
272
* Written by Paul Brook
273
*
274
+ * Copyright (c) 2013 Jean-Christophe Dubois. <jcd@tribudubois.net>
275
+ *
276
* This code is licensed under the GNU GPL v2
277
*
278
* Contributions after 2012-01-13 are licensed under the terms of the
279
@@ -XXX,XX +XXX,XX @@
280
#include "hw/resettable.h"
281
#include "migration/vmstate.h"
282
#include "qemu/log.h"
283
+#include "trace.h"
284
285
#define PHY_INT_ENERGYON (1 << 7)
286
#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
287
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
288
289
switch (reg) {
290
case 0: /* Basic Control */
291
- return s->control;
292
+ val = s->control;
293
+ break;
294
case 1: /* Basic Status */
295
- return s->status;
296
+ val = s->status;
297
+ break;
298
case 2: /* ID1 */
299
- return 0x0007;
300
+ val = 0x0007;
301
+ break;
302
case 3: /* ID2 */
303
- return 0xc0d1;
304
+ val = 0xc0d1;
305
+ break;
306
case 4: /* Auto-neg advertisement */
307
- return s->advertise;
308
+ val = s->advertise;
309
+ break;
310
case 5: /* Auto-neg Link Partner Ability */
311
- return 0x0f71;
312
+ val = 0x0f71;
313
+ break;
314
case 6: /* Auto-neg Expansion */
315
- return 1;
316
- /* TODO 17, 18, 27, 29, 30, 31 */
317
+ val = 1;
318
+ break;
319
case 29: /* Interrupt source. */
320
val = s->ints;
321
s->ints = 0;
322
lan9118_phy_update_irq(s);
323
- return val;
324
+ break;
325
case 30: /* Interrupt mask */
326
- return s->int_mask;
327
+ val = s->int_mask;
328
+ break;
329
+ case 17:
330
+ case 18:
331
+ case 27:
332
+ case 31:
333
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
334
+ __func__, reg);
335
+ val = 0;
336
+ break;
337
default:
338
- qemu_log_mask(LOG_GUEST_ERROR,
339
- "lan9118_phy_read: PHY read reg %d\n", reg);
340
- return 0;
341
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
342
+ __func__, reg);
343
+ val = 0;
344
+ break;
345
}
346
+
347
+ trace_lan9118_phy_read(val, reg);
348
+
349
+ return val;
350
}
351
352
void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
353
{
354
+ trace_lan9118_phy_write(val, reg);
355
+
356
switch (reg) {
357
case 0: /* Basic Control */
358
if (val & 0x8000) {
359
lan9118_phy_reset(s);
360
- break;
361
- }
362
- s->control = val & 0x7980;
363
- /* Complete autonegotiation immediately. */
364
- if (val & 0x1000) {
365
- s->status |= 0x0020;
366
+ } else {
367
+ s->control = val & 0x7980;
368
+ /* Complete autonegotiation immediately. */
369
+ if (val & 0x1000) {
370
+ s->status |= 0x0020;
371
+ }
372
}
373
break;
374
case 4: /* Auto-neg advertisement */
375
s->advertise = (val & 0x2d7f) | 0x80;
376
break;
377
- /* TODO 17, 18, 27, 31 */
378
case 30: /* Interrupt mask */
379
s->int_mask = val & 0xff;
380
lan9118_phy_update_irq(s);
381
break;
382
+ case 17:
383
+ case 18:
384
+ case 27:
385
+ case 31:
386
+ qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
387
+ __func__, reg);
388
+ break;
389
default:
390
- qemu_log_mask(LOG_GUEST_ERROR,
391
- "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
392
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
393
+ __func__, reg);
394
+ break;
395
}
396
}
397
398
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
399
400
/* Autonegotiation status mirrors link status. */
401
if (link_down) {
402
+ trace_lan9118_phy_update_link("down");
403
s->status &= ~0x0024;
404
s->ints |= PHY_INT_DOWN;
405
} else {
406
+ trace_lan9118_phy_update_link("up");
407
s->status |= 0x0024;
408
s->ints |= PHY_INT_ENERGYON;
409
s->ints |= PHY_INT_AUTONEG_COMPLETE;
410
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
411
412
void lan9118_phy_reset(Lan9118PhyState *s)
413
{
414
+ trace_lan9118_phy_reset();
415
+
416
s->control = 0x3000;
417
s->status = 0x7809;
418
s->advertise = 0x01e1;
419
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_phy = {
420
.version_id = 1,
421
.minimum_version_id = 1,
422
.fields = (const VMStateField[]) {
423
- VMSTATE_UINT16(control, Lan9118PhyState),
424
VMSTATE_UINT16(status, Lan9118PhyState),
425
+ VMSTATE_UINT16(control, Lan9118PhyState),
426
VMSTATE_UINT16(advertise, Lan9118PhyState),
427
VMSTATE_UINT16(ints, Lan9118PhyState),
428
VMSTATE_UINT16(int_mask, Lan9118PhyState),
429
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
430
index XXXXXXX..XXXXXXX 100644
431
--- a/hw/net/Kconfig
432
+++ b/hw/net/Kconfig
433
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_SUN8I_EMAC
434
435
config IMX_FEC
436
bool
437
+ select LAN9118_PHY
438
439
config CADENCE
440
bool
441
diff --git a/hw/net/trace-events b/hw/net/trace-events
442
index XXXXXXX..XXXXXXX 100644
443
--- a/hw/net/trace-events
444
+++ b/hw/net/trace-events
445
@@ -XXX,XX +XXX,XX @@ allwinner_sun8i_emac_set_link(bool active) "Set link: active=%u"
446
allwinner_sun8i_emac_read(uint64_t offset, uint64_t val) "MMIO read: offset=0x%" PRIx64 " value=0x%" PRIx64
447
allwinner_sun8i_emac_write(uint64_t offset, uint64_t val) "MMIO write: offset=0x%" PRIx64 " value=0x%" PRIx64
448
449
+# lan9118_phy.c
450
+lan9118_phy_read(uint16_t val, int reg) "[0x%02x] -> 0x%04" PRIx16
451
+lan9118_phy_write(uint16_t val, int reg) "[0x%02x] <- 0x%04" PRIx16
452
+lan9118_phy_update_link(const char *s) "%s"
453
+lan9118_phy_reset(void) ""
454
+
455
# lance.c
456
lance_mem_readw(uint64_t addr, uint32_t ret) "addr=0x%"PRIx64"val=0x%04x"
457
lance_mem_writew(uint64_t addr, uint32_t val) "addr=0x%"PRIx64"val=0x%04x"
458
@@ -XXX,XX +XXX,XX @@ i82596_set_multicast(uint16_t count) "Added %d multicast entries"
459
i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"
460
461
# imx_fec.c
462
-imx_phy_read(uint32_t val, int phy, int reg) "0x%04"PRIx32" <= phy[%d].reg[%d]"
463
imx_phy_read_num(int phy, int configured) "read request from unconfigured phy %d (configured %d)"
464
-imx_phy_write(uint32_t val, int phy, int reg) "0x%04"PRIx32" => phy[%d].reg[%d]"
465
imx_phy_write_num(int phy, int configured) "write request to unconfigured phy %d (configured %d)"
466
-imx_phy_update_link(const char *s) "%s"
467
-imx_phy_reset(void) ""
468
imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
469
imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
470
imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
158
--
471
--
159
2.34.1
472
2.34.1
160
161
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
translate_insn() ends up calling probe_access_full(), itself
3
Turns 0x70 into 0xe0 (== 0x70 << 1), which adds the missing MII_ANLPAR_TX and
4
declared in "exec/exec-all.h":
4
fixes the MSB of the selector field to be zero, as specified in the datasheet.
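(As a worked example of the value change, using the MII_ANLPAR_* names that a later
patch in this series introduces; the bit positions below are the standard IEEE 802.3
ones, restated locally for illustration only.)

/* Worked example only; standard IEEE 802.3 ANLPAR layout, selector in bits [4:0]. */
#define MII_ANLPAR_CSMACD   (1 << 0)    /* selector = IEEE 802.3 */
#define MII_ANLPAR_10       (1 << 5)
#define MII_ANLPAR_10FD     (1 << 6)
#define MII_ANLPAR_TX       (1 << 7)    /* 100BASE-TX half duplex */
#define MII_ANLPAR_TXFD     (1 << 8)
#define MII_ANLPAR_T4       (1 << 9)
#define MII_ANLPAR_PAUSE    (1 << 10)
#define MII_ANLPAR_PAUSEASY (1 << 11)

/* Old value 0x0f71: bit 4, the MSB of the 5-bit selector field, is set by
 * mistake and 100BASE-TX half duplex (bit 7) is missing:
 *   0x0f71 == PAUSEASY | PAUSE | T4 | TXFD | 10FD | 10 | (1 << 4) | CSMACD
 * New value 0x0fe1: selector MSB clear, MII_ANLPAR_TX advertised:
 *   0x0fe1 == PAUSEASY | PAUSE | T4 | TXFD | TX | 10FD | 10 | CSMACD
 */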
5
5
6
TranslatorOps::translate_insn
6
Fixes: 2a424990170b "LAN9118 emulation"
7
-> aarch64_tr_translate_insn()
7
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
8
-> is_guarded_page()
8
Tested-by: Guenter Roeck <linux@roeck-us.net>
9
-> probe_access_full()
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
10
Message-id: 20241102125724.532843-4-shentey@gmail.com
11
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20231130142519.28417-4-philmd@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
12
---
16
target/arm/tcg/translate-a64.c | 1 +
13
hw/net/lan9118_phy.c | 2 +-
17
1 file changed, 1 insertion(+)
14
1 file changed, 1 insertion(+), 1 deletion(-)
18
15
19
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
16
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
20
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/tcg/translate-a64.c
18
--- a/hw/net/lan9118_phy.c
22
+++ b/target/arm/tcg/translate-a64.c
19
+++ b/hw/net/lan9118_phy.c
23
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
24
*/
21
val = s->advertise;
25
#include "qemu/osdep.h"
22
break;
26
23
case 5: /* Auto-neg Link Partner Ability */
27
+#include "exec/exec-all.h"
24
- val = 0x0f71;
28
#include "translate.h"
25
+ val = 0x0fe1;
29
#include "translate-a64.h"
26
break;
30
#include "qemu/log.h"
27
case 6: /* Auto-neg Expansion */
28
val = 1;
31
--
29
--
32
2.34.1
30
2.34.1
33
34
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
3
Prefer named constants over magic values for better readability.
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
5
calling the generic vCPU API from "sysemu/kvm.h".
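(A small sketch of the resulting calling convention; the helper name is hypothetical,
but CPU() and kvm_vcpu_ioctl() are the generic QOM/KVM interfaces the text refers to.)

/*
 * Illustrative pattern only, not code from the patch: ARM-specific KVM
 * helpers take an ARMCPU and cast with CPU() only where the generic vCPU
 * API from "sysemu/kvm.h" wants a CPUState.
 */
#include "qemu/osdep.h"
#include "cpu.h"
#include "sysemu/kvm.h"

static int kvm_arm_example_set_attr(ARMCPU *cpu, struct kvm_device_attr *attr)
{
    CPUState *cs = CPU(cpu);    /* QOM cast: ARMCPU -> CPUState */

    return kvm_vcpu_ioctl(cs, KVM_SET_DEVICE_ATTR, attr);
}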
6
4
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
7
Tested-by: Guenter Roeck <linux@roeck-us.net>
10
Message-id: 20231123183518.64569-8-philmd@linaro.org
8
Message-id: 20241102125724.532843-5-shentey@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
target/arm/kvm_arm.h | 6 +++---
11
include/hw/net/mii.h | 6 +++++
14
hw/arm/virt.c | 5 +++--
12
hw/net/lan9118_phy.c | 63 ++++++++++++++++++++++++++++----------------
15
target/arm/kvm.c | 6 +++---
13
2 files changed, 46 insertions(+), 23 deletions(-)
16
3 files changed, 9 insertions(+), 8 deletions(-)
17
14
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
15
diff --git a/include/hw/net/mii.h b/include/hw/net/mii.h
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
17
--- a/include/hw/net/mii.h
21
+++ b/target/arm/kvm_arm.h
18
+++ b/include/hw/net/mii.h
22
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_init(CPUState *cs);
19
@@ -XXX,XX +XXX,XX @@
23
20
#define MII_BMSR_JABBER (1 << 1) /* Jabber detected */
24
/**
21
#define MII_BMSR_EXTCAP (1 << 0) /* Ext-reg capability */
25
* kvm_arm_pvtime_init:
22
26
- * @cs: CPUState
23
+#define MII_ANAR_RFAULT (1 << 13) /* Say we can detect faults */
27
+ * @cpu: ARMCPU
24
#define MII_ANAR_PAUSE_ASYM (1 << 11) /* Try for asymmetric pause */
28
* @ipa: Per-vcpu guest physical base address of the pvtime structures
25
#define MII_ANAR_PAUSE (1 << 10) /* Try for pause */
29
*
26
#define MII_ANAR_TXFD (1 << 8)
30
* Initializes PVTIME for the VCPU, setting the PVTIME IPA to @ipa.
27
@@ -XXX,XX +XXX,XX @@
31
*/
28
#define MII_ANAR_10FD (1 << 6)
32
-void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa);
29
#define MII_ANAR_10 (1 << 5)
33
+void kvm_arm_pvtime_init(ARMCPU *cpu, uint64_t ipa);
30
#define MII_ANAR_CSMACD (1 << 0)
34
31
+#define MII_ANAR_SELECT (0x001f) /* Selector bits */
35
int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level);
32
36
33
#define MII_ANLPAR_ACK (1 << 14)
37
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_pmu_init(CPUState *cs)
34
#define MII_ANLPAR_PAUSEASY (1 << 11) /* can pause asymmetrically */
38
g_assert_not_reached();
35
@@ -XXX,XX +XXX,XX @@
39
}
36
#define RTL8201CP_PHYID1 0x0000
40
37
#define RTL8201CP_PHYID2 0x8201
41
-static inline void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
38
42
+static inline void kvm_arm_pvtime_init(ARMCPU *cpu, uint64_t ipa)
39
+/* SMSC LAN9118 */
43
{
40
+#define SMSCLAN9118_PHYID1 0x0007
44
g_assert_not_reached();
41
+#define SMSCLAN9118_PHYID2 0xc0d1
45
}
42
+
46
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
43
/* RealTek 8211E */
44
#define RTL8211E_PHYID1 0x001c
45
#define RTL8211E_PHYID2 0xc915
46
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
47
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/virt.c
48
--- a/hw/net/lan9118_phy.c
49
+++ b/hw/arm/virt.c
49
+++ b/hw/net/lan9118_phy.c
50
@@ -XXX,XX +XXX,XX @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
50
@@ -XXX,XX +XXX,XX @@
51
kvm_arm_pmu_init(cpu);
51
52
}
52
#include "qemu/osdep.h"
53
if (steal_time) {
53
#include "hw/net/lan9118_phy.h"
54
- kvm_arm_pvtime_init(cpu, pvtime_reg_base +
54
+#include "hw/net/mii.h"
55
- cpu->cpu_index * PVTIME_SIZE_PER_CPU);
55
#include "hw/irq.h"
56
+ kvm_arm_pvtime_init(ARM_CPU(cpu), pvtime_reg_base
56
#include "hw/resettable.h"
57
+ + cpu->cpu_index
57
#include "migration/vmstate.h"
58
+ * PVTIME_SIZE_PER_CPU);
58
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
59
uint16_t val;
60
61
switch (reg) {
62
- case 0: /* Basic Control */
63
+ case MII_BMCR:
64
val = s->control;
65
break;
66
- case 1: /* Basic Status */
67
+ case MII_BMSR:
68
val = s->status;
69
break;
70
- case 2: /* ID1 */
71
- val = 0x0007;
72
+ case MII_PHYID1:
73
+ val = SMSCLAN9118_PHYID1;
74
break;
75
- case 3: /* ID2 */
76
- val = 0xc0d1;
77
+ case MII_PHYID2:
78
+ val = SMSCLAN9118_PHYID2;
79
break;
80
- case 4: /* Auto-neg advertisement */
81
+ case MII_ANAR:
82
val = s->advertise;
83
break;
84
- case 5: /* Auto-neg Link Partner Ability */
85
- val = 0x0fe1;
86
+ case MII_ANLPAR:
87
+ val = MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
88
+ MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
89
+ MII_ANLPAR_10 | MII_ANLPAR_CSMACD;
90
break;
91
- case 6: /* Auto-neg Expansion */
92
- val = 1;
93
+ case MII_ANER:
94
+ val = MII_ANER_NWAY;
95
break;
96
case 29: /* Interrupt source. */
97
val = s->ints;
98
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
99
trace_lan9118_phy_write(val, reg);
100
101
switch (reg) {
102
- case 0: /* Basic Control */
103
- if (val & 0x8000) {
104
+ case MII_BMCR:
105
+ if (val & MII_BMCR_RESET) {
106
lan9118_phy_reset(s);
107
} else {
108
- s->control = val & 0x7980;
109
+ s->control = val & (MII_BMCR_LOOPBACK | MII_BMCR_SPEED100 |
110
+ MII_BMCR_AUTOEN | MII_BMCR_PDOWN | MII_BMCR_FD |
111
+ MII_BMCR_CTST);
112
/* Complete autonegotiation immediately. */
113
- if (val & 0x1000) {
114
- s->status |= 0x0020;
115
+ if (val & MII_BMCR_AUTOEN) {
116
+ s->status |= MII_BMSR_AN_COMP;
59
}
117
}
60
}
118
}
119
break;
120
- case 4: /* Auto-neg advertisement */
121
- s->advertise = (val & 0x2d7f) | 0x80;
122
+ case MII_ANAR:
123
+ s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
124
+ MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
125
+ MII_ANAR_SELECT))
126
+ | MII_ANAR_TX;
127
break;
128
case 30: /* Interrupt mask */
129
s->int_mask = val & 0xff;
130
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
131
/* Autonegotiation status mirrors link status. */
132
if (link_down) {
133
trace_lan9118_phy_update_link("down");
134
- s->status &= ~0x0024;
135
+ s->status &= ~(MII_BMSR_AN_COMP | MII_BMSR_LINK_ST);
136
s->ints |= PHY_INT_DOWN;
61
} else {
137
} else {
62
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
138
trace_lan9118_phy_update_link("up");
63
index XXXXXXX..XXXXXXX 100644
139
- s->status |= 0x0024;
64
--- a/target/arm/kvm.c
140
+ s->status |= MII_BMSR_AN_COMP | MII_BMSR_LINK_ST;
65
+++ b/target/arm/kvm.c
141
s->ints |= PHY_INT_ENERGYON;
66
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
142
s->ints |= PHY_INT_AUTONEG_COMPLETE;
67
}
143
}
68
}
144
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_reset(Lan9118PhyState *s)
69
70
-void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
71
+void kvm_arm_pvtime_init(ARMCPU *cpu, uint64_t ipa)
72
{
145
{
73
struct kvm_device_attr attr = {
146
trace_lan9118_phy_reset();
74
.group = KVM_ARM_VCPU_PVTIME_CTRL,
147
75
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
148
- s->control = 0x3000;
76
.addr = (uint64_t)&ipa,
149
- s->status = 0x7809;
77
};
150
- s->advertise = 0x01e1;
78
151
+ s->control = MII_BMCR_AUTOEN | MII_BMCR_SPEED100;
79
- if (ARM_CPU(cs)->kvm_steal_time == ON_OFF_AUTO_OFF) {
152
+ s->status = MII_BMSR_100TX_FD
80
+ if (cpu->kvm_steal_time == ON_OFF_AUTO_OFF) {
153
+ | MII_BMSR_100TX_HD
81
return;
154
+ | MII_BMSR_10T_FD
82
}
155
+ | MII_BMSR_10T_HD
83
- if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PVTIME IPA")) {
156
+ | MII_BMSR_AUTONEG
84
+ if (!kvm_arm_set_device_attr(cpu, &attr, "PVTIME IPA")) {
157
+ | MII_BMSR_EXTCAP;
85
error_report("failed to init PVTIME IPA");
158
+ s->advertise = MII_ANAR_TXFD
86
abort();
159
+ | MII_ANAR_TX
87
}
160
+ | MII_ANAR_10FD
161
+ | MII_ANAR_10
162
+ | MII_ANAR_CSMACD;
163
s->int_mask = 0;
164
s->ints = 0;
165
lan9118_phy_update_link(s, s->link_down);
88
--
166
--
89
2.34.1
167
2.34.1
90
91
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
3
The real device advertises this mode and the device model already advertises
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
4
100 Mbps half duplex and 10 Mbps full+half duplex. So advertise this mode to
5
calling the generic vCPU API from "sysemu/kvm.h".
5
make the model more realistic.
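(For reference only, not part of the patch: the auto-negotiation advertisement
bits involved, with values as in include/hw/net/mii.h; the helper below is a
made-up sketch listing the four speed/duplex modes the model now advertises.)

#include <stdint.h>

#define MII_ANAR_TXFD (1 << 8)   /* 100BASE-TX full duplex */
#define MII_ANAR_TX   (1 << 7)   /* 100BASE-TX half duplex */
#define MII_ANAR_10FD (1 << 6)   /* 10BASE-T full duplex   */
#define MII_ANAR_10   (1 << 5)   /* 10BASE-T half duplex   */

static uint16_t lan9118_phy_speed_duplex_modes(void)
{
    return MII_ANAR_TXFD | MII_ANAR_TX | MII_ANAR_10FD | MII_ANAR_10;
}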
6
6
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
9
Tested-by: Guenter Roeck <linux@roeck-us.net>
10
Message-id: 20231123183518.64569-16-philmd@linaro.org
10
Message-id: 20241102125724.532843-6-shentey@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
target/arm/kvm.c | 8 ++++----
13
hw/net/lan9118_phy.c | 4 ++--
14
1 file changed, 4 insertions(+), 4 deletions(-)
14
1 file changed, 2 insertions(+), 2 deletions(-)
15
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
16
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
18
--- a/hw/net/lan9118_phy.c
19
+++ b/target/arm/kvm.c
19
+++ b/hw/net/lan9118_phy.c
20
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_handle_dabt_nisv(ARMCPU *cpu, uint64_t esr_iss,
20
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
21
22
/**
23
* kvm_arm_handle_debug:
24
- * @cs: CPUState
25
+ * @cpu: ARMCPU
26
* @debug_exit: debug part of the KVM exit structure
27
*
28
* Returns: TRUE if the debug exception was handled.
29
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_handle_dabt_nisv(ARMCPU *cpu, uint64_t esr_iss,
30
* ABI just provides user-space with the full exception syndrome
31
* register value to be decoded in QEMU.
32
*/
33
-static bool kvm_arm_handle_debug(CPUState *cs,
34
+static bool kvm_arm_handle_debug(ARMCPU *cpu,
35
struct kvm_debug_exit_arch *debug_exit)
36
{
37
int hsr_ec = syn_get_ec(debug_exit->hsr);
38
- ARMCPU *cpu = ARM_CPU(cs);
39
+ CPUState *cs = CPU(cpu);
40
CPUARMState *env = &cpu->env;
41
42
/* Ensure PC is synchronised */
43
@@ -XXX,XX +XXX,XX @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
44
45
switch (run->exit_reason) {
46
case KVM_EXIT_DEBUG:
47
- if (kvm_arm_handle_debug(cs, &run->debug.arch)) {
48
+ if (kvm_arm_handle_debug(cpu, &run->debug.arch)) {
49
ret = EXCP_DEBUG;
50
} /* otherwise return to guest */
51
break;
21
break;
22
case MII_ANAR:
23
s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
24
- MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
25
- MII_ANAR_SELECT))
26
+ MII_ANAR_PAUSE | MII_ANAR_TXFD | MII_ANAR_10FD |
27
+ MII_ANAR_10 | MII_ANAR_SELECT))
28
| MII_ANAR_TX;
29
break;
30
case 30: /* Interrupt mask */
52
--
31
--
53
2.34.1
32
2.34.1
54
55
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
For IEEE fused multiply-add, the (0 * inf) + NaN case should raise
2
Invalid for the multiplication of 0 by infinity. Currently we handle
3
this in the per-architecture ifdef ladder in pickNaNMulAdd().
4
However, since this isn't really architecture specific we can hoist
5
it up to the generic code.
2
6
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
7
For the cases where the infzero test in pickNaNMulAdd was
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
8
returning 2, we can delete the check entirely and allow the
5
calling the generic vCPU API from "sysemu/kvm.h".
9
code to fall into the normal pick-a-NaN handling, because this
10
will return 2 anyway (input 'c' being the only NaN in this case).
11
For the cases where infzero was returning 3 to indicate "return
12
the default NaN", we must retain that "return 3".
6
13
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
For Arm, this looks like it might be a behaviour change because we
15
used to set float_flag_invalid | float_flag_invalid_imz only if C is
16
a quiet NaN. However, it is not, because Arm target code never looks
17
at float_flag_invalid_imz, and for the (0 * inf) + SNaN case we
18
already raised float_flag_invalid via the "abc_mask &
19
float_cmask_snan" check in pick_nan_muladd.
20
21
For any target architecture using the "default implementation" at the
22
bottom of the ifdef, this is a behaviour change but will be fixing a
23
bug (where we failed to raise the Invalid exception for (0 * inf +
24
QNaN)). The architectures using the default case are:
25
* hppa
26
* i386
27
* sh4
28
* tricore
29
30
The x86, Tricore and SH4 CPU architecture manuals are clear that this
31
should have raised Invalid; HPPA is a bit vaguer but still seems
32
clear enough.
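(Illustrative sketch only, not part of this patch: one way to observe the now
unconditional Invalid raising through the public softfloat API. The test
function is hypothetical; the calls are the standard ones from fpu/softfloat.h
and fpu/softfloat-helpers.h.)

#include "qemu/osdep.h"
#include "fpu/softfloat.h"
#include "fpu/softfloat-helpers.h"

static void check_inf_zero_nan_raises_invalid(void)
{
    float_status st = { };

    set_float_exception_flags(0, &st);
    /* (0 * inf) + QNaN */
    float64_muladd(float64_zero, float64_infinity,
                   float64_default_nan(&st), 0, &st);
    /* After this change, every target raises Invalid here. */
    assert(get_float_exception_flags(&st) & float_flag_invalid);
}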
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
36
Message-id: 20241202131347.498124-2-peter.maydell@linaro.org
10
Message-id: 20231123183518.64569-13-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
37
---
13
target/arm/kvm.c | 23 ++++++++++-------------
38
fpu/softfloat-parts.c.inc | 13 +++++++------
14
1 file changed, 10 insertions(+), 13 deletions(-)
39
fpu/softfloat-specialize.c.inc | 29 +----------------------------
40
2 files changed, 8 insertions(+), 34 deletions(-)
15
41
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
42
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
17
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
44
--- a/fpu/softfloat-parts.c.inc
19
+++ b/target/arm/kvm.c
45
+++ b/fpu/softfloat-parts.c.inc
20
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
46
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
21
47
int ab_mask, int abc_mask)
22
/**
23
* kvm_arm_get_virtual_time:
24
- * @cs: CPUState
25
+ * @cpu: ARMCPU
26
*
27
* Gets the VCPU's virtual counter and stores it in the KVM CPU state.
28
*/
29
-static void kvm_arm_get_virtual_time(CPUState *cs)
30
+static void kvm_arm_get_virtual_time(ARMCPU *cpu)
31
{
48
{
32
- ARMCPU *cpu = ARM_CPU(cs);
49
int which;
33
int ret;
50
+ bool infzero = (ab_mask == float_cmask_infzero);
34
51
35
if (cpu->kvm_vtime_dirty) {
52
if (unlikely(abc_mask & float_cmask_snan)) {
36
return;
53
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
37
}
54
}
38
55
39
- ret = kvm_get_one_reg(cs, KVM_REG_ARM_TIMER_CNT, &cpu->kvm_vtime);
56
- which = pickNaNMulAdd(a->cls, b->cls, c->cls,
40
+ ret = kvm_get_one_reg(CPU(cpu), KVM_REG_ARM_TIMER_CNT, &cpu->kvm_vtime);
57
- ab_mask == float_cmask_infzero, s);
41
if (ret) {
58
+ if (infzero) {
42
error_report("Failed to get KVM_REG_ARM_TIMER_CNT");
59
+ /* This is (0 * inf) + NaN or (inf * 0) + NaN */
43
abort();
60
+ float_raise(float_flag_invalid | float_flag_invalid_imz, s);
44
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_get_virtual_time(CPUState *cs)
61
+ }
45
62
+
46
/**
63
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
47
* kvm_arm_put_virtual_time:
64
48
- * @cs: CPUState
65
if (s->default_nan_mode || which == 3) {
49
+ * @cpu: ARMCPU
66
- /*
50
*
67
- * Note that this check is after pickNaNMulAdd so that function
51
* Sets the VCPU's virtual counter to the value stored in the KVM CPU state.
68
- * has an opportunity to set the Invalid flag for infzero.
52
*/
69
- */
53
-static void kvm_arm_put_virtual_time(CPUState *cs)
70
parts_default_nan(a, s);
54
+static void kvm_arm_put_virtual_time(ARMCPU *cpu)
71
return a;
55
{
56
- ARMCPU *cpu = ARM_CPU(cs);
57
int ret;
58
59
if (!cpu->kvm_vtime_dirty) {
60
return;
61
}
72
}
62
73
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
63
- ret = kvm_set_one_reg(cs, KVM_REG_ARM_TIMER_CNT, &cpu->kvm_vtime);
74
index XXXXXXX..XXXXXXX 100644
64
+ ret = kvm_set_one_reg(CPU(cpu), KVM_REG_ARM_TIMER_CNT, &cpu->kvm_vtime);
75
--- a/fpu/softfloat-specialize.c.inc
65
if (ret) {
76
+++ b/fpu/softfloat-specialize.c.inc
66
error_report("Failed to set KVM_REG_ARM_TIMER_CNT");
77
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
67
abort();
78
* the default NaN
68
@@ -XXX,XX +XXX,XX @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
79
*/
69
80
if (infzero && is_qnan(c_cls)) {
70
static void kvm_arm_vm_state_change(void *opaque, bool running, RunState state)
81
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
71
{
82
return 3;
72
- CPUState *cs = opaque;
83
}
73
- ARMCPU *cpu = ARM_CPU(cs);
84
74
+ ARMCPU *cpu = opaque;
85
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
75
86
* case sets InvalidOp and returns the default NaN
76
if (running) {
87
*/
77
if (cpu->kvm_adjvtime) {
88
if (infzero) {
78
- kvm_arm_put_virtual_time(cs);
89
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
79
+ kvm_arm_put_virtual_time(cpu);
90
return 3;
80
}
91
}
81
} else {
92
/* Prefer sNaN over qNaN, in the a, b, c order. */
82
if (cpu->kvm_adjvtime) {
93
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
83
- kvm_arm_get_virtual_time(cs);
94
* For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
84
+ kvm_arm_get_virtual_time(cpu);
95
* case sets InvalidOp and returns the input value 'c'
85
}
96
*/
97
- if (infzero) {
98
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
99
- return 2;
100
- }
101
/* Prefer sNaN over qNaN, in the c, a, b order. */
102
if (is_snan(c_cls)) {
103
return 2;
104
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
105
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
106
* case sets InvalidOp and returns the input value 'c'
107
*/
108
- if (infzero) {
109
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
110
- return 2;
111
- }
112
+
113
/* Prefer sNaN over qNaN, in the c, a, b order. */
114
if (is_snan(c_cls)) {
115
return 2;
116
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
117
* to return an input NaN if we have one (ie c) rather than generating
118
* a default NaN
119
*/
120
- if (infzero) {
121
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
122
- return 2;
123
- }
124
125
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
126
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
127
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
128
return 1;
86
}
129
}
87
}
130
#elif defined(TARGET_RISCV)
88
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
131
- /* For RISC-V, InvalidOp is set when multiplicands are Inf and zero */
89
return -EINVAL;
132
- if (infzero) {
133
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
134
- }
135
return 3; /* default NaN */
136
#elif defined(TARGET_S390X)
137
if (infzero) {
138
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
139
return 3;
90
}
140
}
91
141
92
- qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
142
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
93
+ qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cpu);
143
return 2;
94
144
}
95
/* Determine init features for this CPU */
145
#elif defined(TARGET_SPARC)
96
memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
146
- /* For (inf,0,nan) return c. */
147
- if (infzero) {
148
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
149
- return 2;
150
- }
151
/* Prefer SNaN over QNaN, order C, B, A. */
152
if (is_snan(c_cls)) {
153
return 2;
154
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
155
* For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
156
* an input NaN if we have one (ie c).
157
*/
158
- if (infzero) {
159
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
160
- return 2;
161
- }
162
if (status->use_first_nan) {
163
if (is_nan(a_cls)) {
164
return 0;
97
--
165
--
98
2.34.1
166
2.34.1
99
100
New patch
1
If the target sets default_nan_mode then we're always going to return
2
the default NaN, and pickNaNMulAdd() no longer has any side effects.
3
For consistency with pickNaN(), check for default_nan_mode before
4
calling pickNaNMulAdd().
1
5
6
When we convert pickNaNMulAdd() to allow runtime selection of the NaN
7
propagation rule, this means we won't have to make the targets which
8
use default_nan_mode also set a propagation rule.
9
10
Since RISC-V always uses default_nan_mode, this allows us to remove
11
its ifdef case from pickNaNMulAdd().
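(Aside, a minimal sketch rather than anything in this patch: a target that
always wants the default NaN just enables default-NaN mode on its
float_status, and with this change such a target never reaches
pickNaNMulAdd() at all. The init function below is hypothetical;
set_default_nan_mode() is the existing helper from
include/fpu/softfloat-helpers.h.)

#include "fpu/softfloat-helpers.h"

static void example_default_nan_fp_init(float_status *fpst)
{
    /* Always produce the default NaN, as e.g. RISC-V does. */
    set_default_nan_mode(true, fpst);
}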
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20241202131347.498124-3-peter.maydell@linaro.org
16
---
17
fpu/softfloat-parts.c.inc | 8 ++++++--
18
fpu/softfloat-specialize.c.inc | 9 +++++++--
19
2 files changed, 13 insertions(+), 4 deletions(-)
20
21
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
22
index XXXXXXX..XXXXXXX 100644
23
--- a/fpu/softfloat-parts.c.inc
24
+++ b/fpu/softfloat-parts.c.inc
25
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
26
float_raise(float_flag_invalid | float_flag_invalid_imz, s);
27
}
28
29
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
30
+ if (s->default_nan_mode) {
31
+ which = 3;
32
+ } else {
33
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
34
+ }
35
36
- if (s->default_nan_mode || which == 3) {
37
+ if (which == 3) {
38
parts_default_nan(a, s);
39
return a;
40
}
41
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
42
index XXXXXXX..XXXXXXX 100644
43
--- a/fpu/softfloat-specialize.c.inc
44
+++ b/fpu/softfloat-specialize.c.inc
45
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
46
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
47
bool infzero, float_status *status)
48
{
49
+ /*
50
+ * We guarantee not to require the target to tell us how to
51
+ * pick a NaN if we're always returning the default NaN.
52
+ * But if we're not in default-NaN mode then the target must
53
+ * specify.
54
+ */
55
+ assert(!status->default_nan_mode);
56
#if defined(TARGET_ARM)
57
/* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
58
* the default NaN
59
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
60
} else {
61
return 1;
62
}
63
-#elif defined(TARGET_RISCV)
64
- return 3; /* default NaN */
65
#elif defined(TARGET_S390X)
66
if (infzero) {
67
return 3;
68
--
69
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
IEEE 758 does not define a fixed rule for what NaN to return in
2
2
the case of a fused multiply-add of inf * 0 + NaN. Different
3
Since kvm32.c was removed, there is no need to keep them separate.
3
architectures thus do different things:
4
This will allow more symbols to be unexported.
4
* some return the default NaN
5
5
* some return the input NaN
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
* Arm returns the default NaN if the input NaN is quiet,
7
Reviewed-by: Gavin Shan <gshan@redhat.com>
7
and the input NaN if it is signalling
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
We want to make this logic be runtime selected rather than
10
[PMM: retain copyright lines from kvm64.c in kvm.c]
10
hardcoded into the binary, because:
11
* this will let us have multiple targets in one QEMU binary
12
* the Arm FEAT_AFP architectural feature includes letting
13
the guest select a NaN propagation rule at runtime
14
15
In this commit we add an enum for the propagation rule, the field in
16
float_status, and the corresponding getters and setters. We change
17
pickNaNMulAdd to honour this, but because all targets still leave
18
this field at its default 0 value, the fallback logic will pick the
19
rule type with the old ifdef ladder.
20
21
Note that four architectures both use the muladd softfloat functions
22
and did not have a branch of the ifdef ladder to specify their
23
behaviour (and so were ending up with the "default" case, probably
24
wrongly): i386, HPPA, SH4 and Tricore. SH4 and Tricore both set
25
default_nan_mode, and so will never get into pickNaNMulAdd(). For
26
HPPA and i386 we retain the same behaviour as the old default-case,
27
which is to not ever return the default NaN. This might not be
28
correct but it is not a behaviour change.
29
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
32
Message-id: 20241202131347.498124-4-peter.maydell@linaro.org
12
---
33
---
13
target/arm/kvm.c | 791 +++++++++++++++++++++++++++++++++++++++
34
include/fpu/softfloat-helpers.h | 11 ++++
14
target/arm/kvm64.c | 820 -----------------------------------------
35
include/fpu/softfloat-types.h | 23 +++++++++
15
target/arm/meson.build | 2 +-
36
fpu/softfloat-specialize.c.inc | 91 ++++++++++++++++++++++-----------
16
3 files changed, 792 insertions(+), 821 deletions(-)
37
3 files changed, 95 insertions(+), 30 deletions(-)
17
delete mode 100644 target/arm/kvm64.c
38
18
39
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
19
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
20
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm.c
41
--- a/include/fpu/softfloat-helpers.h
22
+++ b/target/arm/kvm.c
42
+++ b/include/fpu/softfloat-helpers.h
23
@@ -XXX,XX +XXX,XX @@
43
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
24
* ARM implementation of KVM hooks
44
status->float_2nan_prop_rule = rule;
25
*
26
* Copyright Christoffer Dall 2009-2010
27
+ * Copyright Mian-M. Hamayun 2013, Virtual Open Systems
28
+ * Copyright Alex Bennée 2014, Linaro
29
*
30
* This work is licensed under the terms of the GNU GPL, version 2 or later.
31
* See the COPYING file in the top-level directory.
32
@@ -XXX,XX +XXX,XX @@
33
#include "qom/object.h"
34
#include "qapi/error.h"
35
#include "sysemu/sysemu.h"
36
+#include "sysemu/runstate.h"
37
#include "sysemu/kvm.h"
38
#include "sysemu/kvm_int.h"
39
#include "kvm_arm.h"
40
@@ -XXX,XX +XXX,XX @@
41
#include "hw/pci/pci.h"
42
#include "exec/memattrs.h"
43
#include "exec/address-spaces.h"
44
+#include "exec/gdbstub.h"
45
#include "hw/boards.h"
46
#include "hw/irq.h"
47
#include "qapi/visitor.h"
48
#include "qemu/log.h"
49
+#include "hw/acpi/acpi.h"
50
+#include "hw/acpi/ghes.h"
51
52
const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
53
KVM_CAP_LAST_INFO
54
@@ -XXX,XX +XXX,XX @@ void kvm_arch_accel_class_init(ObjectClass *oc)
55
object_class_property_set_description(oc, "eager-split-size",
56
"Eager Page Split chunk size for hugepages. (default: 0, disabled)");
57
}
45
}
58
+
46
59
+int kvm_arch_insert_hw_breakpoint(vaddr addr, vaddr len, int type)
47
+static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
+ float_status *status)
60
+{
49
+{
61
+ switch (type) {
50
+ status->float_infzeronan_rule = rule;
62
+ case GDB_BREAKPOINT_HW:
63
+ return insert_hw_breakpoint(addr);
64
+ break;
65
+ case GDB_WATCHPOINT_READ:
66
+ case GDB_WATCHPOINT_WRITE:
67
+ case GDB_WATCHPOINT_ACCESS:
68
+ return insert_hw_watchpoint(addr, len, type);
69
+ default:
70
+ return -ENOSYS;
71
+ }
72
+}
51
+}
73
+
52
+
74
+int kvm_arch_remove_hw_breakpoint(vaddr addr, vaddr len, int type)
53
static inline void set_flush_to_zero(bool val, float_status *status)
54
{
55
status->flush_to_zero = val;
56
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
57
return status->float_2nan_prop_rule;
58
}
59
60
+static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
75
+{
61
+{
76
+ switch (type) {
62
+ return status->float_infzeronan_rule;
77
+ case GDB_BREAKPOINT_HW:
78
+ return delete_hw_breakpoint(addr);
79
+ case GDB_WATCHPOINT_READ:
80
+ case GDB_WATCHPOINT_WRITE:
81
+ case GDB_WATCHPOINT_ACCESS:
82
+ return delete_hw_watchpoint(addr, len, type);
83
+ default:
84
+ return -ENOSYS;
85
+ }
86
+}
63
+}
87
+
64
+
88
+void kvm_arch_remove_all_hw_breakpoints(void)
65
static inline bool get_flush_to_zero(float_status *status)
89
+{
66
{
90
+ if (cur_hw_wps > 0) {
67
return status->flush_to_zero;
91
+ g_array_remove_range(hw_watchpoints, 0, cur_hw_wps);
68
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
92
+ }
69
index XXXXXXX..XXXXXXX 100644
93
+ if (cur_hw_bps > 0) {
70
--- a/include/fpu/softfloat-types.h
94
+ g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
71
+++ b/include/fpu/softfloat-types.h
95
+ }
72
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
96
+}
73
float_2nan_prop_x87,
97
+
74
} Float2NaNPropRule;
98
+static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
75
99
+ const char *name)
76
+/*
100
+{
77
+ * Rule for result of fused multiply-add 0 * Inf + NaN.
101
+ int err;
78
+ * This must be a NaN, but implementations differ on whether this
102
+
79
+ * is the input NaN or the default NaN.
103
+ err = kvm_vcpu_ioctl(cs, KVM_HAS_DEVICE_ATTR, attr);
80
+ *
104
+ if (err != 0) {
81
+ * You don't need to set this if default_nan_mode is enabled.
105
+ error_report("%s: KVM_HAS_DEVICE_ATTR: %s", name, strerror(-err));
82
+ * When not in default-NaN mode, it is an error for the target
106
+ return false;
83
+ * not to set the rule in float_status if it uses muladd, and we
107
+ }
84
+ * will assert if we need to handle an input NaN and no rule was
108
+
85
+ * selected.
109
+ err = kvm_vcpu_ioctl(cs, KVM_SET_DEVICE_ATTR, attr);
86
+ */
110
+ if (err != 0) {
87
+typedef enum __attribute__((__packed__)) {
111
+ error_report("%s: KVM_SET_DEVICE_ATTR: %s", name, strerror(-err));
88
+ /* No propagation rule specified */
112
+ return false;
89
+ float_infzeronan_none = 0,
113
+ }
90
+ /* Result is never the default NaN (so always the input NaN) */
114
+
91
+ float_infzeronan_dnan_never,
115
+ return true;
92
+ /* Result is always the default NaN */
116
+}
93
+ float_infzeronan_dnan_always,
117
+
94
+ /* Result is the default NaN if the input NaN is quiet */
118
+void kvm_arm_pmu_init(CPUState *cs)
95
+ float_infzeronan_dnan_if_qnan,
119
+{
96
+} FloatInfZeroNaNRule;
120
+ struct kvm_device_attr attr = {
97
+
121
+ .group = KVM_ARM_VCPU_PMU_V3_CTRL,
98
/*
122
+ .attr = KVM_ARM_VCPU_PMU_V3_INIT,
99
* Floating Point Status. Individual architectures may maintain
123
+ };
100
* several versions of float_status for different functions. The
124
+
101
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
125
+ if (!ARM_CPU(cs)->has_pmu) {
102
FloatRoundMode float_rounding_mode;
126
+ return;
103
FloatX80RoundPrec floatx80_rounding_precision;
127
+ }
104
Float2NaNPropRule float_2nan_prop_rule;
128
+ if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
105
+ FloatInfZeroNaNRule float_infzeronan_rule;
129
+ error_report("failed to init PMU");
106
bool tininess_before_rounding;
130
+ abort();
107
/* should denormalised results go to zero and set the inexact flag? */
131
+ }
108
bool flush_to_zero;
132
+}
109
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
133
+
110
index XXXXXXX..XXXXXXX 100644
134
+void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
111
--- a/fpu/softfloat-specialize.c.inc
135
+{
112
+++ b/fpu/softfloat-specialize.c.inc
136
+ struct kvm_device_attr attr = {
113
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
137
+ .group = KVM_ARM_VCPU_PMU_V3_CTRL,
114
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
138
+ .addr = (intptr_t)&irq,
115
bool infzero, float_status *status)
139
+ .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
116
{
140
+ };
117
+ FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
141
+
118
+
142
+ if (!ARM_CPU(cs)->has_pmu) {
119
/*
143
+ return;
120
* We guarantee not to require the target to tell us how to
144
+ }
121
* pick a NaN if we're always returning the default NaN.
145
+ if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
122
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
146
+ error_report("failed to set irq for PMU");
123
* specify.
147
+ abort();
124
*/
148
+ }
125
assert(!status->default_nan_mode);
149
+}
126
+
150
+
127
+ if (rule == float_infzeronan_none) {
151
+void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
128
+ /*
152
+{
129
+ * Temporarily fall back to ifdef ladder
153
+ struct kvm_device_attr attr = {
130
+ */
154
+ .group = KVM_ARM_VCPU_PVTIME_CTRL,
131
#if defined(TARGET_ARM)
155
+ .attr = KVM_ARM_VCPU_PVTIME_IPA,
132
- /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
156
+ .addr = (uint64_t)&ipa,
133
- * the default NaN
157
+ };
134
- */
158
+
135
- if (infzero && is_qnan(c_cls)) {
159
+ if (ARM_CPU(cs)->kvm_steal_time == ON_OFF_AUTO_OFF) {
136
- return 3;
160
+ return;
137
+ /*
161
+ }
138
+ * For ARM, the (inf,zero,qnan) case returns the default NaN,
162
+ if (!kvm_arm_set_device_attr(cs, &attr, "PVTIME IPA")) {
139
+ * but (inf,zero,snan) returns the input NaN.
163
+ error_report("failed to init PVTIME IPA");
140
+ */
164
+ abort();
141
+ rule = float_infzeronan_dnan_if_qnan;
165
+ }
142
+#elif defined(TARGET_MIPS)
166
+}
143
+ if (snan_bit_is_one(status)) {
167
+
144
+ /*
168
+void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
145
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
169
+{
146
+ * case sets InvalidOp and returns the default NaN
170
+ bool has_steal_time = kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
147
+ */
171
+
148
+ rule = float_infzeronan_dnan_always;
172
+ if (cpu->kvm_steal_time == ON_OFF_AUTO_AUTO) {
173
+ if (!has_steal_time || !arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
174
+ cpu->kvm_steal_time = ON_OFF_AUTO_OFF;
175
+ } else {
149
+ } else {
176
+ cpu->kvm_steal_time = ON_OFF_AUTO_ON;
150
+ /*
151
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
152
+ * case sets InvalidOp and returns the input value 'c'
153
+ */
154
+ rule = float_infzeronan_dnan_never;
177
+ }
155
+ }
178
+ } else if (cpu->kvm_steal_time == ON_OFF_AUTO_ON) {
156
+#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
179
+ if (!has_steal_time) {
157
+ defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
180
+ error_setg(errp, "'kvm-steal-time' cannot be enabled "
158
+ defined(TARGET_I386) || defined(TARGET_LOONGARCH)
181
+ "on this host");
159
+ /*
182
+ return;
160
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
183
+ } else if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
161
+ * case sets InvalidOp and returns the input value 'c'
184
+ /*
162
+ */
185
+ * DEN0057A chapter 2 says "This specification only covers
163
+ /*
186
+ * systems in which the Execution state of the hypervisor
164
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
187
+ * as well as EL1 of virtual machines is AArch64.". And,
165
+ * to return an input NaN if we have one (ie c) rather than generating
188
+ * to ensure that, the smc/hvc calls are only specified as
166
+ * a default NaN
189
+ * smc64/hvc64.
167
+ */
190
+ */
168
+ rule = float_infzeronan_dnan_never;
191
+ error_setg(errp, "'kvm-steal-time' cannot be enabled "
169
+#elif defined(TARGET_S390X)
192
+ "for AArch32 guests");
170
+ rule = float_infzeronan_dnan_always;
193
+ return;
171
+#endif
172
}
173
174
+ if (infzero) {
175
+ /*
176
+ * Inf * 0 + NaN -- some implementations return the default NaN here,
177
+ * and some return the input NaN.
178
+ */
179
+ switch (rule) {
180
+ case float_infzeronan_dnan_never:
181
+ return 2;
182
+ case float_infzeronan_dnan_always:
183
+ return 3;
184
+ case float_infzeronan_dnan_if_qnan:
185
+ return is_qnan(c_cls) ? 3 : 2;
186
+ default:
187
+ g_assert_not_reached();
194
+ }
188
+ }
195
+ }
189
+ }
196
+}
190
+
197
+
191
+#if defined(TARGET_ARM)
198
+bool kvm_arm_aarch32_supported(void)
192
+
199
+{
193
/* This looks different from the ARM ARM pseudocode, because the ARM ARM
200
+ return kvm_check_extension(kvm_state, KVM_CAP_ARM_EL1_32BIT);
194
* puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
201
+}
195
*/
202
+
196
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
203
+bool kvm_arm_sve_supported(void)
197
}
204
+{
198
#elif defined(TARGET_MIPS)
205
+ return kvm_check_extension(kvm_state, KVM_CAP_ARM_SVE);
199
if (snan_bit_is_one(status)) {
206
+}
200
- /*
207
+
201
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
208
+QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
202
- * case sets InvalidOp and returns the default NaN
209
+
203
- */
210
+uint32_t kvm_arm_sve_get_vls(CPUState *cs)
204
- if (infzero) {
211
+{
205
- return 3;
212
+ /* Only call this function if kvm_arm_sve_supported() returns true. */
206
- }
213
+ static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
207
/* Prefer sNaN over qNaN, in the a, b, c order. */
214
+ static bool probed;
208
if (is_snan(a_cls)) {
215
+ uint32_t vq = 0;
209
return 0;
216
+ int i;
210
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
217
+
211
return 2;
218
+ /*
212
}
219
+ * KVM ensures all host CPUs support the same set of vector lengths.
213
} else {
220
+ * So we only need to create the scratch VCPUs once and then cache
214
- /*
221
+ * the results.
215
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
222
+ */
216
- * case sets InvalidOp and returns the input value 'c'
223
+ if (!probed) {
217
- */
224
+ struct kvm_vcpu_init init = {
218
/* Prefer sNaN over qNaN, in the c, a, b order. */
225
+ .target = -1,
219
if (is_snan(c_cls)) {
226
+ .features[0] = (1 << KVM_ARM_VCPU_SVE),
220
return 2;
227
+ };
221
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
228
+ struct kvm_one_reg reg = {
222
}
229
+ .id = KVM_REG_ARM64_SVE_VLS,
223
}
230
+ .addr = (uint64_t)&vls[0],
224
#elif defined(TARGET_LOONGARCH64)
231
+ };
225
- /*
232
+ int fdarray[3], ret;
226
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
233
+
227
- * case sets InvalidOp and returns the input value 'c'
234
+ probed = true;
228
- */
235
+
236
+ if (!kvm_arm_create_scratch_host_vcpu(NULL, fdarray, &init)) {
237
+ error_report("failed to create scratch VCPU with SVE enabled");
238
+ abort();
239
+ }
240
+ ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &reg);
241
+ kvm_arm_destroy_scratch_host_vcpu(fdarray);
242
+ if (ret) {
243
+ error_report("failed to get KVM_REG_ARM64_SVE_VLS: %s",
244
+ strerror(errno));
245
+ abort();
246
+ }
247
+
248
+ for (i = KVM_ARM64_SVE_VLS_WORDS - 1; i >= 0; --i) {
249
+ if (vls[i]) {
250
+ vq = 64 - clz64(vls[i]) + i * 64;
251
+ break;
252
+ }
253
+ }
254
+ if (vq > ARM_MAX_VQ) {
255
+ warn_report("KVM supports vector lengths larger than "
256
+ "QEMU can enable");
257
+ vls[0] &= MAKE_64BIT_MASK(0, ARM_MAX_VQ);
258
+ }
259
+ }
260
+
261
+ return vls[0];
262
+}
263
+
264
+static int kvm_arm_sve_set_vls(CPUState *cs)
265
+{
266
+ ARMCPU *cpu = ARM_CPU(cs);
267
+ uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq.map };
268
+
269
+ assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
270
+
271
+ return kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_VLS, &vls[0]);
272
+}
273
+
274
+#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
275
+
276
+int kvm_arch_init_vcpu(CPUState *cs)
277
+{
278
+ int ret;
279
+ uint64_t mpidr;
280
+ ARMCPU *cpu = ARM_CPU(cs);
281
+ CPUARMState *env = &cpu->env;
282
+ uint64_t psciver;
283
+
284
+ if (cpu->kvm_target == QEMU_KVM_ARM_TARGET_NONE ||
285
+ !object_dynamic_cast(OBJECT(cpu), TYPE_AARCH64_CPU)) {
286
+ error_report("KVM is not supported for this guest CPU type");
287
+ return -EINVAL;
288
+ }
289
+
290
+ qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
291
+
292
+ /* Determine init features for this CPU */
293
+ memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
294
+ if (cs->start_powered_off) {
295
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_POWER_OFF;
296
+ }
297
+ if (kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PSCI_0_2)) {
298
+ cpu->psci_version = QEMU_PSCI_VERSION_0_2;
299
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PSCI_0_2;
300
+ }
301
+ if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
302
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;
303
+ }
304
+ if (!kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PMU_V3)) {
305
+ cpu->has_pmu = false;
306
+ }
307
+ if (cpu->has_pmu) {
308
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
309
+ } else {
310
+ env->features &= ~(1ULL << ARM_FEATURE_PMU);
311
+ }
312
+ if (cpu_isar_feature(aa64_sve, cpu)) {
313
+ assert(kvm_arm_sve_supported());
314
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_SVE;
315
+ }
316
+ if (cpu_isar_feature(aa64_pauth, cpu)) {
317
+ cpu->kvm_init_features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
318
+ 1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
319
+ }
320
+
321
+ /* Do KVM_ARM_VCPU_INIT ioctl */
322
+ ret = kvm_arm_vcpu_init(cs);
323
+ if (ret) {
324
+ return ret;
325
+ }
326
+
327
+ if (cpu_isar_feature(aa64_sve, cpu)) {
328
+ ret = kvm_arm_sve_set_vls(cs);
329
+ if (ret) {
330
+ return ret;
331
+ }
332
+ ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
333
+ if (ret) {
334
+ return ret;
335
+ }
336
+ }
337
+
338
+ /*
339
+ * KVM reports the exact PSCI version it is implementing via a
340
+ * special sysreg. If it is present, use its contents to determine
341
+ * what to report to the guest in the dtb (it is the PSCI version,
342
+ * in the same 15-bits major 16-bits minor format that PSCI_VERSION
343
+ * returns).
344
+ */
345
+ if (!kvm_get_one_reg(cs, KVM_REG_ARM_PSCI_VERSION, &psciver)) {
346
+ cpu->psci_version = psciver;
347
+ }
348
+
349
+ /*
350
+ * When KVM is in use, PSCI is emulated in-kernel and not by qemu.
351
+ * Currently KVM has its own idea about MPIDR assignment, so we
352
+ * override our defaults with what we get from KVM.
353
+ */
354
+ ret = kvm_get_one_reg(cs, ARM64_SYS_REG(ARM_CPU_ID_MPIDR), &mpidr);
355
+ if (ret) {
356
+ return ret;
357
+ }
358
+ cpu->mp_affinity = mpidr & ARM64_AFFINITY_MASK;
359
+
360
+ /* Check whether user space can specify guest syndrome value */
361
+ kvm_arm_init_serror_injection(cs);
362
+
363
+ return kvm_arm_init_cpreg_list(cpu);
364
+}
365
+
366
+int kvm_arch_destroy_vcpu(CPUState *cs)
367
+{
368
+ return 0;
369
+}
370
+
371
+/* Callers must hold the iothread mutex lock */
372
+static void kvm_inject_arm_sea(CPUState *c)
373
+{
374
+ ARMCPU *cpu = ARM_CPU(c);
375
+ CPUARMState *env = &cpu->env;
376
+ uint32_t esr;
377
+ bool same_el;
378
+
379
+ c->exception_index = EXCP_DATA_ABORT;
380
+ env->exception.target_el = 1;
381
+
382
+ /*
383
+ * Set the DFSC to synchronous external abort and set FnV to not valid,
384
+ * this will tell guest the FAR_ELx is UNKNOWN for this abort.
385
+ */
386
+ same_el = arm_current_el(env) == env->exception.target_el;
387
+ esr = syn_data_abort_no_iss(same_el, 1, 0, 0, 0, 0, 0x10);
388
+
389
+ env->exception.syndrome = esr;
390
+
391
+ arm_cpu_do_interrupt(c);
392
+}
393
+
394
+#define AARCH64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
395
+ KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
396
+
397
+#define AARCH64_SIMD_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U128 | \
398
+ KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
399
+
400
+#define AARCH64_SIMD_CTRL_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U32 | \
401
+ KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
402
+
403
+static int kvm_arch_put_fpsimd(CPUState *cs)
404
+{
405
+ CPUARMState *env = &ARM_CPU(cs)->env;
406
+ int i, ret;
407
+
408
+ for (i = 0; i < 32; i++) {
409
+ uint64_t *q = aa64_vfp_qreg(env, i);
410
+#if HOST_BIG_ENDIAN
411
+ uint64_t fp_val[2] = { q[1], q[0] };
412
+ ret = kvm_set_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]),
413
+ fp_val);
414
+#else
415
+ ret = kvm_set_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]), q);
416
+#endif
417
+ if (ret) {
418
+ return ret;
419
+ }
420
+ }
421
+
422
+ return 0;
423
+}
424
+
425
+/*
426
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
427
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
428
+ * code the slice index to zero for now as it's unlikely we'll need more than
429
+ * one slice for quite some time.
430
+ */
431
+static int kvm_arch_put_sve(CPUState *cs)
432
+{
433
+ ARMCPU *cpu = ARM_CPU(cs);
434
+ CPUARMState *env = &cpu->env;
435
+ uint64_t tmp[ARM_MAX_VQ * 2];
436
+ uint64_t *r;
437
+ int n, ret;
438
+
439
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
440
+ r = sve_bswap64(tmp, &env->vfp.zregs[n].d[0], cpu->sve_max_vq * 2);
441
+ ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_ZREG(n, 0), r);
442
+ if (ret) {
443
+ return ret;
444
+ }
445
+ }
446
+
447
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
448
+ r = sve_bswap64(tmp, r = &env->vfp.pregs[n].p[0],
449
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
450
+ ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_PREG(n, 0), r);
451
+ if (ret) {
452
+ return ret;
453
+ }
454
+ }
455
+
456
+ r = sve_bswap64(tmp, &env->vfp.pregs[FFR_PRED_NUM].p[0],
457
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
458
+ ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_FFR(0), r);
459
+ if (ret) {
460
+ return ret;
461
+ }
462
+
463
+ return 0;
464
+}
465
+
466
+int kvm_arch_put_registers(CPUState *cs, int level)
467
+{
468
+ uint64_t val;
469
+ uint32_t fpr;
470
+ int i, ret;
471
+ unsigned int el;
472
+
473
+ ARMCPU *cpu = ARM_CPU(cs);
474
+ CPUARMState *env = &cpu->env;
475
+
476
+ /* If we are in AArch32 mode then we need to copy the AArch32 regs to the
477
+ * AArch64 registers before pushing them out to 64-bit KVM.
478
+ */
479
+ if (!is_a64(env)) {
480
+ aarch64_sync_32_to_64(env);
481
+ }
482
+
483
+ for (i = 0; i < 31; i++) {
484
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.regs[i]),
485
+ &env->xregs[i]);
486
+ if (ret) {
487
+ return ret;
488
+ }
489
+ }
490
+
491
+ /* KVM puts SP_EL0 in regs.sp and SP_EL1 in regs.sp_el1. On the
492
+ * QEMU side we keep the current SP in xregs[31] as well.
493
+ */
494
+ aarch64_save_sp(env, 1);
495
+
496
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.sp), &env->sp_el[0]);
497
+ if (ret) {
498
+ return ret;
499
+ }
500
+
501
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(sp_el1), &env->sp_el[1]);
502
+ if (ret) {
503
+ return ret;
504
+ }
505
+
506
+ /* Note that KVM thinks pstate is 64 bit but we use a uint32_t */
507
+ if (is_a64(env)) {
508
+ val = pstate_read(env);
509
+ } else {
510
+ val = cpsr_read(env);
511
+ }
512
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.pstate), &val);
513
+ if (ret) {
514
+ return ret;
515
+ }
516
+
517
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.pc), &env->pc);
518
+ if (ret) {
519
+ return ret;
520
+ }
521
+
522
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(elr_el1), &env->elr_el[1]);
523
+ if (ret) {
524
+ return ret;
525
+ }
526
+
527
+ /* Saved Program State Registers
528
+ *
529
+ * Before we restore from the banked_spsr[] array we need to
530
+ * ensure that any modifications to env->spsr are correctly
531
+ * reflected in the banks.
532
+ */
533
+ el = arm_current_el(env);
534
+ if (el > 0 && !is_a64(env)) {
535
+ i = bank_number(env->uncached_cpsr & CPSR_M);
536
+ env->banked_spsr[i] = env->spsr;
537
+ }
538
+
539
+ /* KVM 0-4 map to QEMU banks 1-5 */
540
+ for (i = 0; i < KVM_NR_SPSR; i++) {
541
+ ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(spsr[i]),
542
+ &env->banked_spsr[i + 1]);
543
+ if (ret) {
544
+ return ret;
545
+ }
546
+ }
547
+
548
+ if (cpu_isar_feature(aa64_sve, cpu)) {
549
+ ret = kvm_arch_put_sve(cs);
550
+ } else {
551
+ ret = kvm_arch_put_fpsimd(cs);
552
+ }
553
+ if (ret) {
554
+ return ret;
555
+ }
556
+
557
+ fpr = vfp_get_fpsr(env);
558
+ ret = kvm_set_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpsr), &fpr);
559
+ if (ret) {
560
+ return ret;
561
+ }
562
+
563
+ fpr = vfp_get_fpcr(env);
564
+ ret = kvm_set_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpcr), &fpr);
565
+ if (ret) {
566
+ return ret;
567
+ }
568
+
569
+ write_cpustate_to_list(cpu, true);
570
+
571
+ if (!write_list_to_kvmstate(cpu, level)) {
572
+ return -EINVAL;
573
+ }
574
+
575
+ /*
576
+ * Setting VCPU events should be triggered after syncing the registers
577
+ * to avoid overwriting potential changes made by KVM upon calling
578
+ * KVM_SET_VCPU_EVENTS ioctl
579
+ */
580
+ ret = kvm_put_vcpu_events(cpu);
581
+ if (ret) {
582
+ return ret;
583
+ }
584
+
585
+ kvm_arm_sync_mpstate_to_kvm(cpu);
586
+
587
+ return ret;
588
+}
589
+
590
+static int kvm_arch_get_fpsimd(CPUState *cs)
591
+{
592
+ CPUARMState *env = &ARM_CPU(cs)->env;
593
+ int i, ret;
594
+
595
+ for (i = 0; i < 32; i++) {
596
+ uint64_t *q = aa64_vfp_qreg(env, i);
597
+ ret = kvm_get_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]), q);
598
+ if (ret) {
599
+ return ret;
600
+ } else {
601
+#if HOST_BIG_ENDIAN
602
+ uint64_t t;
603
+ t = q[0], q[0] = q[1], q[1] = t;
604
+#endif
605
+ }
606
+ }
607
+
608
+ return 0;
609
+}
610
+
611
+/*
612
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
613
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
614
+ * code the slice index to zero for now as it's unlikely we'll need more than
615
+ * one slice for quite some time.
616
+ */
617
+static int kvm_arch_get_sve(CPUState *cs)
618
+{
619
+ ARMCPU *cpu = ARM_CPU(cs);
620
+ CPUARMState *env = &cpu->env;
621
+ uint64_t *r;
622
+ int n, ret;
623
+
624
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
625
+ r = &env->vfp.zregs[n].d[0];
626
+ ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_ZREG(n, 0), r);
627
+ if (ret) {
628
+ return ret;
629
+ }
630
+ sve_bswap64(r, r, cpu->sve_max_vq * 2);
631
+ }
632
+
633
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
634
+ r = &env->vfp.pregs[n].p[0];
635
+ ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_PREG(n, 0), r);
636
+ if (ret) {
637
+ return ret;
638
+ }
639
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
640
+ }
641
+
642
+ r = &env->vfp.pregs[FFR_PRED_NUM].p[0];
643
+ ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_FFR(0), r);
644
+ if (ret) {
645
+ return ret;
646
+ }
647
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
648
+
649
+ return 0;
650
+}
651
+
652
+int kvm_arch_get_registers(CPUState *cs)
653
+{
654
+ uint64_t val;
655
+ unsigned int el;
656
+ uint32_t fpr;
657
+ int i, ret;
658
+
659
+ ARMCPU *cpu = ARM_CPU(cs);
660
+ CPUARMState *env = &cpu->env;
661
+
662
+ for (i = 0; i < 31; i++) {
663
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.regs[i]),
664
+ &env->xregs[i]);
665
+ if (ret) {
666
+ return ret;
667
+ }
668
+ }
669
+
670
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.sp), &env->sp_el[0]);
671
+ if (ret) {
672
+ return ret;
673
+ }
674
+
675
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(sp_el1), &env->sp_el[1]);
676
+ if (ret) {
677
+ return ret;
678
+ }
679
+
680
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.pstate), &val);
681
+ if (ret) {
682
+ return ret;
683
+ }
684
+
685
+ env->aarch64 = ((val & PSTATE_nRW) == 0);
686
+ if (is_a64(env)) {
687
+ pstate_write(env, val);
688
+ } else {
689
+ cpsr_write(env, val, 0xffffffff, CPSRWriteRaw);
690
+ }
691
+
692
+ /* KVM puts SP_EL0 in regs.sp and SP_EL1 in regs.sp_el1. On the
693
+ * QEMU side we keep the current SP in xregs[31] as well.
694
+ */
695
+ aarch64_restore_sp(env, 1);
696
+
697
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.pc), &env->pc);
698
+ if (ret) {
699
+ return ret;
700
+ }
701
+
702
+ /* If we are in AArch32 mode then we need to sync the AArch32 regs with the
703
+ * incoming AArch64 regs received from 64-bit KVM.
704
+ * We must perform this after all of the registers have been acquired from
705
+ * the kernel.
706
+ */
707
+ if (!is_a64(env)) {
708
+ aarch64_sync_64_to_32(env);
709
+ }
710
+
711
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(elr_el1), &env->elr_el[1]);
712
+ if (ret) {
713
+ return ret;
714
+ }
715
+
716
+ /* Fetch the SPSR registers
717
+ *
718
+ * KVM SPSRs 0-4 map to QEMU banks 1-5
719
+ */
720
+ for (i = 0; i < KVM_NR_SPSR; i++) {
721
+ ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(spsr[i]),
722
+ &env->banked_spsr[i + 1]);
723
+ if (ret) {
724
+ return ret;
725
+ }
726
+ }
727
+
728
+ el = arm_current_el(env);
729
+ if (el > 0 && !is_a64(env)) {
730
+ i = bank_number(env->uncached_cpsr & CPSR_M);
731
+ env->spsr = env->banked_spsr[i];
732
+ }
733
+
734
+ if (cpu_isar_feature(aa64_sve, cpu)) {
735
+ ret = kvm_arch_get_sve(cs);
736
+ } else {
737
+ ret = kvm_arch_get_fpsimd(cs);
738
+ }
739
+ if (ret) {
740
+ return ret;
741
+ }
742
+
743
+ ret = kvm_get_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpsr), &fpr);
744
+ if (ret) {
745
+ return ret;
746
+ }
747
+ vfp_set_fpsr(env, fpr);
748
+
749
+ ret = kvm_get_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpcr), &fpr);
750
+ if (ret) {
751
+ return ret;
752
+ }
753
+ vfp_set_fpcr(env, fpr);
754
+
755
+ ret = kvm_get_vcpu_events(cpu);
756
+ if (ret) {
757
+ return ret;
758
+ }
759
+
760
+ if (!write_kvmstate_to_list(cpu)) {
761
+ return -EINVAL;
762
+ }
763
+ /* Note that it's OK to have registers which aren't in CPUState,
764
+ * so we can ignore a failure return here.
765
+ */
766
+ write_list_to_cpustate(cpu);
767
+
768
+ kvm_arm_sync_mpstate_to_qemu(cpu);
769
+
770
+ /* TODO: other registers */
771
+ return ret;
772
+}
773
+
774
+void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
775
+{
776
+ ram_addr_t ram_addr;
777
+ hwaddr paddr;
778
+
779
+ assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);
780
+
781
+ if (acpi_ghes_present() && addr) {
782
+ ram_addr = qemu_ram_addr_from_host(addr);
783
+ if (ram_addr != RAM_ADDR_INVALID &&
784
+ kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
785
+ kvm_hwpoison_page_add(ram_addr);
786
+ /*
787
+ * If this is a BUS_MCEERR_AR, we know we have been called
788
+ * synchronously from the vCPU thread, so we can easily
789
+ * synchronize the state and inject an error.
790
+ *
791
+ * TODO: we currently don't tell the guest at all about
792
+ * BUS_MCEERR_AO. In that case we might either be being
793
+ * called synchronously from the vCPU thread, or a bit
794
+ * later from the main thread, so doing the injection of
795
+ * the error would be more complicated.
796
+ */
797
+ if (code == BUS_MCEERR_AR) {
798
+ kvm_cpu_synchronize_state(c);
799
+ if (!acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr)) {
800
+ kvm_inject_arm_sea(c);
801
+ } else {
802
+ error_report("failed to record the error");
803
+ abort();
804
+ }
805
+ }
806
+ return;
807
+ }
808
+ if (code == BUS_MCEERR_AO) {
809
+ error_report("Hardware memory error at addr %p for memory used by "
810
+ "QEMU itself instead of guest system!", addr);
811
+ }
812
+ }
813
+
814
+ if (code == BUS_MCEERR_AR) {
815
+ error_report("Hardware memory error!");
816
+ exit(1);
817
+ }
818
+}
819
+
820
+/* C6.6.29 BRK instruction */
821
+static const uint32_t brk_insn = 0xd4200000;
822
+
823
+int kvm_arch_insert_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
824
+{
825
+ if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
826
+ cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
827
+ return -EINVAL;
828
+ }
829
+ return 0;
830
+}
831
+
832
+int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
833
+{
834
+ static uint32_t brk;
835
+
836
+ if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk, 4, 0) ||
837
+ brk != brk_insn ||
838
+ cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
839
+ return -EINVAL;
840
+ }
841
+ return 0;
842
+}
843
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
844
deleted file mode 100644
845
index XXXXXXX..XXXXXXX
846
--- a/target/arm/kvm64.c
847
+++ /dev/null
848
@@ -XXX,XX +XXX,XX @@
849
-/*
850
- * ARM implementation of KVM hooks, 64 bit specific code
851
- *
852
- * Copyright Mian-M. Hamayun 2013, Virtual Open Systems
853
- * Copyright Alex Bennée 2014, Linaro
854
- *
855
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
856
- * See the COPYING file in the top-level directory.
857
- *
858
- */
859
-
229
-
860
-#include "qemu/osdep.h"
230
/* Prefer sNaN over qNaN, in the c, a, b order. */
861
-#include <sys/ioctl.h>
231
if (is_snan(c_cls)) {
862
-#include <sys/ptrace.h>
232
return 2;
233
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
234
return 1;
235
}
236
#elif defined(TARGET_PPC)
237
- /* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
238
- * to return an input NaN if we have one (ie c) rather than generating
239
- * a default NaN
240
- */
863
-
241
-
864
-#include <linux/elf.h>
242
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
865
-#include <linux/kvm.h>
243
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
866
-
244
*/
867
-#include "qapi/error.h"
245
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
868
-#include "cpu.h"
246
return 1;
869
-#include "qemu/timer.h"
247
}
870
-#include "qemu/error-report.h"
248
#elif defined(TARGET_S390X)
871
-#include "qemu/host-utils.h"
249
- if (infzero) {
872
-#include "qemu/main-loop.h"
250
- return 3;
873
-#include "exec/gdbstub.h"
874
-#include "sysemu/runstate.h"
875
-#include "sysemu/kvm.h"
876
-#include "sysemu/kvm_int.h"
877
-#include "kvm_arm.h"
878
-#include "internals.h"
879
-#include "cpu-features.h"
880
-#include "hw/acpi/acpi.h"
881
-#include "hw/acpi/ghes.h"
882
-
883
-
884
-int kvm_arch_insert_hw_breakpoint(vaddr addr, vaddr len, int type)
885
-{
886
- switch (type) {
887
- case GDB_BREAKPOINT_HW:
888
- return insert_hw_breakpoint(addr);
889
- break;
890
- case GDB_WATCHPOINT_READ:
891
- case GDB_WATCHPOINT_WRITE:
892
- case GDB_WATCHPOINT_ACCESS:
893
- return insert_hw_watchpoint(addr, len, type);
894
- default:
895
- return -ENOSYS;
896
- }
897
-}
898
-
899
-int kvm_arch_remove_hw_breakpoint(vaddr addr, vaddr len, int type)
900
-{
901
- switch (type) {
902
- case GDB_BREAKPOINT_HW:
903
- return delete_hw_breakpoint(addr);
904
- case GDB_WATCHPOINT_READ:
905
- case GDB_WATCHPOINT_WRITE:
906
- case GDB_WATCHPOINT_ACCESS:
907
- return delete_hw_watchpoint(addr, len, type);
908
- default:
909
- return -ENOSYS;
910
- }
911
-}
912
-
913
-
914
-void kvm_arch_remove_all_hw_breakpoints(void)
915
-{
916
- if (cur_hw_wps > 0) {
917
- g_array_remove_range(hw_watchpoints, 0, cur_hw_wps);
918
- }
919
- if (cur_hw_bps > 0) {
920
- g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
921
- }
922
-}
923
-
924
-static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
925
- const char *name)
926
-{
927
- int err;
928
-
929
- err = kvm_vcpu_ioctl(cs, KVM_HAS_DEVICE_ATTR, attr);
930
- if (err != 0) {
931
- error_report("%s: KVM_HAS_DEVICE_ATTR: %s", name, strerror(-err));
932
- return false;
933
- }
251
- }
934
-
252
-
935
- err = kvm_vcpu_ioctl(cs, KVM_SET_DEVICE_ATTR, attr);
253
if (is_snan(a_cls)) {
936
- if (err != 0) {
254
return 0;
937
- error_report("%s: KVM_SET_DEVICE_ATTR: %s", name, strerror(-err));
255
} else if (is_snan(b_cls)) {
938
- return false;
939
- }
940
-
941
- return true;
942
-}
943
-
944
-void kvm_arm_pmu_init(CPUState *cs)
945
-{
946
- struct kvm_device_attr attr = {
947
- .group = KVM_ARM_VCPU_PMU_V3_CTRL,
948
- .attr = KVM_ARM_VCPU_PMU_V3_INIT,
949
- };
950
-
951
- if (!ARM_CPU(cs)->has_pmu) {
952
- return;
953
- }
954
- if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
955
- error_report("failed to init PMU");
956
- abort();
957
- }
958
-}
959
-
960
-void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
961
-{
962
- struct kvm_device_attr attr = {
963
- .group = KVM_ARM_VCPU_PMU_V3_CTRL,
964
- .addr = (intptr_t)&irq,
965
- .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
966
- };
967
-
968
- if (!ARM_CPU(cs)->has_pmu) {
969
- return;
970
- }
971
- if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
972
- error_report("failed to set irq for PMU");
973
- abort();
974
- }
975
-}
976
-
977
-void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
978
-{
979
- struct kvm_device_attr attr = {
980
- .group = KVM_ARM_VCPU_PVTIME_CTRL,
981
- .attr = KVM_ARM_VCPU_PVTIME_IPA,
982
- .addr = (uint64_t)&ipa,
983
- };
984
-
985
- if (ARM_CPU(cs)->kvm_steal_time == ON_OFF_AUTO_OFF) {
986
- return;
987
- }
988
- if (!kvm_arm_set_device_attr(cs, &attr, "PVTIME IPA")) {
989
- error_report("failed to init PVTIME IPA");
990
- abort();
991
- }
992
-}
993
-
994
-void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
995
-{
996
- bool has_steal_time = kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
997
-
998
- if (cpu->kvm_steal_time == ON_OFF_AUTO_AUTO) {
999
- if (!has_steal_time || !arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
1000
- cpu->kvm_steal_time = ON_OFF_AUTO_OFF;
1001
- } else {
1002
- cpu->kvm_steal_time = ON_OFF_AUTO_ON;
1003
- }
1004
- } else if (cpu->kvm_steal_time == ON_OFF_AUTO_ON) {
1005
- if (!has_steal_time) {
1006
- error_setg(errp, "'kvm-steal-time' cannot be enabled "
1007
- "on this host");
1008
- return;
1009
- } else if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
1010
- /*
1011
- * DEN0057A chapter 2 says "This specification only covers
1012
- * systems in which the Execution state of the hypervisor
1013
- * as well as EL1 of virtual machines is AArch64.". And,
1014
- * to ensure that, the smc/hvc calls are only specified as
1015
- * smc64/hvc64.
1016
- */
1017
- error_setg(errp, "'kvm-steal-time' cannot be enabled "
1018
- "for AArch32 guests");
1019
- return;
1020
- }
1021
- }
1022
-}
1023
-
1024
-bool kvm_arm_aarch32_supported(void)
1025
-{
1026
- return kvm_check_extension(kvm_state, KVM_CAP_ARM_EL1_32BIT);
1027
-}
1028
-
1029
-bool kvm_arm_sve_supported(void)
1030
-{
1031
- return kvm_check_extension(kvm_state, KVM_CAP_ARM_SVE);
1032
-}
1033
-
1034
-QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
1035
-
1036
-uint32_t kvm_arm_sve_get_vls(CPUState *cs)
1037
-{
1038
- /* Only call this function if kvm_arm_sve_supported() returns true. */
1039
- static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
1040
- static bool probed;
1041
- uint32_t vq = 0;
1042
- int i;
1043
-
1044
- /*
1045
- * KVM ensures all host CPUs support the same set of vector lengths.
1046
- * So we only need to create the scratch VCPUs once and then cache
1047
- * the results.
1048
- */
1049
- if (!probed) {
1050
- struct kvm_vcpu_init init = {
1051
- .target = -1,
1052
- .features[0] = (1 << KVM_ARM_VCPU_SVE),
1053
- };
1054
- struct kvm_one_reg reg = {
1055
- .id = KVM_REG_ARM64_SVE_VLS,
1056
- .addr = (uint64_t)&vls[0],
1057
- };
1058
- int fdarray[3], ret;
1059
-
1060
- probed = true;
1061
-
1062
- if (!kvm_arm_create_scratch_host_vcpu(NULL, fdarray, &init)) {
1063
- error_report("failed to create scratch VCPU with SVE enabled");
1064
- abort();
1065
- }
1066
- ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &reg);
1067
- kvm_arm_destroy_scratch_host_vcpu(fdarray);
1068
- if (ret) {
1069
- error_report("failed to get KVM_REG_ARM64_SVE_VLS: %s",
1070
- strerror(errno));
1071
- abort();
1072
- }
1073
-
1074
- for (i = KVM_ARM64_SVE_VLS_WORDS - 1; i >= 0; --i) {
1075
- if (vls[i]) {
1076
- vq = 64 - clz64(vls[i]) + i * 64;
1077
- break;
1078
- }
1079
- }
1080
- if (vq > ARM_MAX_VQ) {
1081
- warn_report("KVM supports vector lengths larger than "
1082
- "QEMU can enable");
1083
- vls[0] &= MAKE_64BIT_MASK(0, ARM_MAX_VQ);
1084
- }
1085
- }
1086
-
1087
- return vls[0];
1088
-}
1089
-
1090
-static int kvm_arm_sve_set_vls(CPUState *cs)
1091
-{
1092
- ARMCPU *cpu = ARM_CPU(cs);
1093
- uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq.map };
1094
-
1095
- assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
1096
-
1097
- return kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_VLS, &vls[0]);
1098
-}
1099
-
1100
-#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
1101
-
1102
-int kvm_arch_init_vcpu(CPUState *cs)
1103
-{
1104
- int ret;
1105
- uint64_t mpidr;
1106
- ARMCPU *cpu = ARM_CPU(cs);
1107
- CPUARMState *env = &cpu->env;
1108
- uint64_t psciver;
1109
-
1110
- if (cpu->kvm_target == QEMU_KVM_ARM_TARGET_NONE ||
1111
- !object_dynamic_cast(OBJECT(cpu), TYPE_AARCH64_CPU)) {
1112
- error_report("KVM is not supported for this guest CPU type");
1113
- return -EINVAL;
1114
- }
1115
-
1116
- qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
1117
-
1118
- /* Determine init features for this CPU */
1119
- memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
1120
- if (cs->start_powered_off) {
1121
- cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_POWER_OFF;
1122
- }
1123
- if (kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PSCI_0_2)) {
1124
- cpu->psci_version = QEMU_PSCI_VERSION_0_2;
1125
- cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PSCI_0_2;
1126
- }
1127
- if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
1128
- cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;
1129
- }
1130
- if (!kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PMU_V3)) {
1131
- cpu->has_pmu = false;
1132
- }
1133
- if (cpu->has_pmu) {
1134
- cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
1135
- } else {
1136
- env->features &= ~(1ULL << ARM_FEATURE_PMU);
1137
- }
1138
- if (cpu_isar_feature(aa64_sve, cpu)) {
1139
- assert(kvm_arm_sve_supported());
1140
- cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_SVE;
1141
- }
1142
- if (cpu_isar_feature(aa64_pauth, cpu)) {
1143
- cpu->kvm_init_features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
1144
- 1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
1145
- }
1146
-
1147
- /* Do KVM_ARM_VCPU_INIT ioctl */
1148
- ret = kvm_arm_vcpu_init(cs);
1149
- if (ret) {
1150
- return ret;
1151
- }
1152
-
1153
- if (cpu_isar_feature(aa64_sve, cpu)) {
1154
- ret = kvm_arm_sve_set_vls(cs);
1155
- if (ret) {
1156
- return ret;
1157
- }
1158
- ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
1159
- if (ret) {
1160
- return ret;
1161
- }
1162
- }
1163
-
1164
- /*
1165
- * KVM reports the exact PSCI version it is implementing via a
1166
- * special sysreg. If it is present, use its contents to determine
1167
- * what to report to the guest in the dtb (it is the PSCI version,
1168
- * in the same 15-bits major 16-bits minor format that PSCI_VERSION
1169
- * returns).
1170
- */
1171
- if (!kvm_get_one_reg(cs, KVM_REG_ARM_PSCI_VERSION, &psciver)) {
1172
- cpu->psci_version = psciver;
1173
- }
1174
-
1175
- /*
1176
- * When KVM is in use, PSCI is emulated in-kernel and not by qemu.
1177
- * Currently KVM has its own idea about MPIDR assignment, so we
1178
- * override our defaults with what we get from KVM.
1179
- */
1180
- ret = kvm_get_one_reg(cs, ARM64_SYS_REG(ARM_CPU_ID_MPIDR), &mpidr);
1181
- if (ret) {
1182
- return ret;
1183
- }
1184
- cpu->mp_affinity = mpidr & ARM64_AFFINITY_MASK;
1185
-
1186
- /* Check whether user space can specify guest syndrome value */
1187
- kvm_arm_init_serror_injection(cs);
1188
-
1189
- return kvm_arm_init_cpreg_list(cpu);
1190
-}
1191
-
1192
-int kvm_arch_destroy_vcpu(CPUState *cs)
1193
-{
1194
- return 0;
1195
-}
1196
-
1197
-/* Callers must hold the iothread mutex lock */
1198
-static void kvm_inject_arm_sea(CPUState *c)
1199
-{
1200
- ARMCPU *cpu = ARM_CPU(c);
1201
- CPUARMState *env = &cpu->env;
1202
- uint32_t esr;
1203
- bool same_el;
1204
-
1205
- c->exception_index = EXCP_DATA_ABORT;
1206
- env->exception.target_el = 1;
1207
-
1208
- /*
1209
- * Set the DFSC to synchronous external abort and set FnV to not valid,
1210
- * this will tell guest the FAR_ELx is UNKNOWN for this abort.
1211
- */
1212
- same_el = arm_current_el(env) == env->exception.target_el;
1213
- esr = syn_data_abort_no_iss(same_el, 1, 0, 0, 0, 0, 0x10);
1214
-
1215
- env->exception.syndrome = esr;
1216
-
1217
- arm_cpu_do_interrupt(c);
1218
-}
1219
-
1220
-#define AARCH64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
1221
- KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
1222
-
1223
-#define AARCH64_SIMD_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U128 | \
1224
- KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
1225
-
1226
-#define AARCH64_SIMD_CTRL_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U32 | \
1227
- KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
1228
-
1229
-static int kvm_arch_put_fpsimd(CPUState *cs)
1230
-{
1231
- CPUARMState *env = &ARM_CPU(cs)->env;
1232
- int i, ret;
1233
-
1234
- for (i = 0; i < 32; i++) {
1235
- uint64_t *q = aa64_vfp_qreg(env, i);
1236
-#if HOST_BIG_ENDIAN
1237
- uint64_t fp_val[2] = { q[1], q[0] };
1238
- ret = kvm_set_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]),
1239
- fp_val);
1240
-#else
1241
- ret = kvm_set_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]), q);
1242
-#endif
1243
- if (ret) {
1244
- return ret;
1245
- }
1246
- }
1247
-
1248
- return 0;
1249
-}
1250
-
1251
-/*
1252
- * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
1253
- * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
1254
- * code the slice index to zero for now as it's unlikely we'll need more than
1255
- * one slice for quite some time.
1256
- */
1257
-static int kvm_arch_put_sve(CPUState *cs)
1258
-{
1259
- ARMCPU *cpu = ARM_CPU(cs);
1260
- CPUARMState *env = &cpu->env;
1261
- uint64_t tmp[ARM_MAX_VQ * 2];
1262
- uint64_t *r;
1263
- int n, ret;
1264
-
1265
- for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
1266
- r = sve_bswap64(tmp, &env->vfp.zregs[n].d[0], cpu->sve_max_vq * 2);
1267
- ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_ZREG(n, 0), r);
1268
- if (ret) {
1269
- return ret;
1270
- }
1271
- }
1272
-
1273
- for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
1274
- r = sve_bswap64(tmp, r = &env->vfp.pregs[n].p[0],
1275
- DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
1276
- ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_PREG(n, 0), r);
1277
- if (ret) {
1278
- return ret;
1279
- }
1280
- }
1281
-
1282
- r = sve_bswap64(tmp, &env->vfp.pregs[FFR_PRED_NUM].p[0],
1283
- DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
1284
- ret = kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_FFR(0), r);
1285
- if (ret) {
1286
- return ret;
1287
- }
1288
-
1289
- return 0;
1290
-}
1291
-
1292
-int kvm_arch_put_registers(CPUState *cs, int level)
1293
-{
1294
- uint64_t val;
1295
- uint32_t fpr;
1296
- int i, ret;
1297
- unsigned int el;
1298
-
1299
- ARMCPU *cpu = ARM_CPU(cs);
1300
- CPUARMState *env = &cpu->env;
1301
-
1302
- /* If we are in AArch32 mode then we need to copy the AArch32 regs to the
1303
- * AArch64 registers before pushing them out to 64-bit KVM.
1304
- */
1305
- if (!is_a64(env)) {
1306
- aarch64_sync_32_to_64(env);
1307
- }
1308
-
1309
- for (i = 0; i < 31; i++) {
1310
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.regs[i]),
1311
- &env->xregs[i]);
1312
- if (ret) {
1313
- return ret;
1314
- }
1315
- }
1316
-
1317
- /* KVM puts SP_EL0 in regs.sp and SP_EL1 in regs.sp_el1. On the
1318
- * QEMU side we keep the current SP in xregs[31] as well.
1319
- */
1320
- aarch64_save_sp(env, 1);
1321
-
1322
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.sp), &env->sp_el[0]);
1323
- if (ret) {
1324
- return ret;
1325
- }
1326
-
1327
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(sp_el1), &env->sp_el[1]);
1328
- if (ret) {
1329
- return ret;
1330
- }
1331
-
1332
- /* Note that KVM thinks pstate is 64 bit but we use a uint32_t */
1333
- if (is_a64(env)) {
1334
- val = pstate_read(env);
1335
- } else {
1336
- val = cpsr_read(env);
1337
- }
1338
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.pstate), &val);
1339
- if (ret) {
1340
- return ret;
1341
- }
1342
-
1343
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(regs.pc), &env->pc);
1344
- if (ret) {
1345
- return ret;
1346
- }
1347
-
1348
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(elr_el1), &env->elr_el[1]);
1349
- if (ret) {
1350
- return ret;
1351
- }
1352
-
1353
- /* Saved Program State Registers
1354
- *
1355
- * Before we restore from the banked_spsr[] array we need to
1356
- * ensure that any modifications to env->spsr are correctly
1357
- * reflected in the banks.
1358
- */
1359
- el = arm_current_el(env);
1360
- if (el > 0 && !is_a64(env)) {
1361
- i = bank_number(env->uncached_cpsr & CPSR_M);
1362
- env->banked_spsr[i] = env->spsr;
1363
- }
1364
-
1365
- /* KVM 0-4 map to QEMU banks 1-5 */
1366
- for (i = 0; i < KVM_NR_SPSR; i++) {
1367
- ret = kvm_set_one_reg(cs, AARCH64_CORE_REG(spsr[i]),
1368
- &env->banked_spsr[i + 1]);
1369
- if (ret) {
1370
- return ret;
1371
- }
1372
- }
1373
-
1374
- if (cpu_isar_feature(aa64_sve, cpu)) {
1375
- ret = kvm_arch_put_sve(cs);
1376
- } else {
1377
- ret = kvm_arch_put_fpsimd(cs);
1378
- }
1379
- if (ret) {
1380
- return ret;
1381
- }
1382
-
1383
- fpr = vfp_get_fpsr(env);
1384
- ret = kvm_set_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpsr), &fpr);
1385
- if (ret) {
1386
- return ret;
1387
- }
1388
-
1389
- fpr = vfp_get_fpcr(env);
1390
- ret = kvm_set_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpcr), &fpr);
1391
- if (ret) {
1392
- return ret;
1393
- }
1394
-
1395
- write_cpustate_to_list(cpu, true);
1396
-
1397
- if (!write_list_to_kvmstate(cpu, level)) {
1398
- return -EINVAL;
1399
- }
1400
-
1401
- /*
1402
- * Setting VCPU events should be triggered after syncing the registers
1403
- * to avoid overwriting potential changes made by KVM upon calling
1404
- * KVM_SET_VCPU_EVENTS ioctl
1405
- */
1406
- ret = kvm_put_vcpu_events(cpu);
1407
- if (ret) {
1408
- return ret;
1409
- }
1410
-
1411
- kvm_arm_sync_mpstate_to_kvm(cpu);
1412
-
1413
- return ret;
1414
-}
1415
-
1416
-static int kvm_arch_get_fpsimd(CPUState *cs)
1417
-{
1418
- CPUARMState *env = &ARM_CPU(cs)->env;
1419
- int i, ret;
1420
-
1421
- for (i = 0; i < 32; i++) {
1422
- uint64_t *q = aa64_vfp_qreg(env, i);
1423
- ret = kvm_get_one_reg(cs, AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]), q);
1424
- if (ret) {
1425
- return ret;
1426
- } else {
1427
-#if HOST_BIG_ENDIAN
1428
- uint64_t t;
1429
- t = q[0], q[0] = q[1], q[1] = t;
1430
-#endif
1431
- }
1432
- }
1433
-
1434
- return 0;
1435
-}
1436
-
1437
-/*
1438
- * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
1439
- * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
1440
- * code the slice index to zero for now as it's unlikely we'll need more than
1441
- * one slice for quite some time.
1442
- */
1443
-static int kvm_arch_get_sve(CPUState *cs)
1444
-{
1445
- ARMCPU *cpu = ARM_CPU(cs);
1446
- CPUARMState *env = &cpu->env;
1447
- uint64_t *r;
1448
- int n, ret;
1449
-
1450
- for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
1451
- r = &env->vfp.zregs[n].d[0];
1452
- ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_ZREG(n, 0), r);
1453
- if (ret) {
1454
- return ret;
1455
- }
1456
- sve_bswap64(r, r, cpu->sve_max_vq * 2);
1457
- }
1458
-
1459
- for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
1460
- r = &env->vfp.pregs[n].p[0];
1461
- ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_PREG(n, 0), r);
1462
- if (ret) {
1463
- return ret;
1464
- }
1465
- sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
1466
- }
1467
-
1468
- r = &env->vfp.pregs[FFR_PRED_NUM].p[0];
1469
- ret = kvm_get_one_reg(cs, KVM_REG_ARM64_SVE_FFR(0), r);
1470
- if (ret) {
1471
- return ret;
1472
- }
1473
- sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
1474
-
1475
- return 0;
1476
-}
1477
-
1478
-int kvm_arch_get_registers(CPUState *cs)
1479
-{
1480
- uint64_t val;
1481
- unsigned int el;
1482
- uint32_t fpr;
1483
- int i, ret;
1484
-
1485
- ARMCPU *cpu = ARM_CPU(cs);
1486
- CPUARMState *env = &cpu->env;
1487
-
1488
- for (i = 0; i < 31; i++) {
1489
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.regs[i]),
1490
- &env->xregs[i]);
1491
- if (ret) {
1492
- return ret;
1493
- }
1494
- }
1495
-
1496
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.sp), &env->sp_el[0]);
1497
- if (ret) {
1498
- return ret;
1499
- }
1500
-
1501
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(sp_el1), &env->sp_el[1]);
1502
- if (ret) {
1503
- return ret;
1504
- }
1505
-
1506
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.pstate), &val);
1507
- if (ret) {
1508
- return ret;
1509
- }
1510
-
1511
- env->aarch64 = ((val & PSTATE_nRW) == 0);
1512
- if (is_a64(env)) {
1513
- pstate_write(env, val);
1514
- } else {
1515
- cpsr_write(env, val, 0xffffffff, CPSRWriteRaw);
1516
- }
1517
-
1518
- /* KVM puts SP_EL0 in regs.sp and SP_EL1 in regs.sp_el1. On the
1519
- * QEMU side we keep the current SP in xregs[31] as well.
1520
- */
1521
- aarch64_restore_sp(env, 1);
1522
-
1523
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(regs.pc), &env->pc);
1524
- if (ret) {
1525
- return ret;
1526
- }
1527
-
1528
- /* If we are in AArch32 mode then we need to sync the AArch32 regs with the
1529
- * incoming AArch64 regs received from 64-bit KVM.
1530
- * We must perform this after all of the registers have been acquired from
1531
- * the kernel.
1532
- */
1533
- if (!is_a64(env)) {
1534
- aarch64_sync_64_to_32(env);
1535
- }
1536
-
1537
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(elr_el1), &env->elr_el[1]);
1538
- if (ret) {
1539
- return ret;
1540
- }
1541
-
1542
- /* Fetch the SPSR registers
1543
- *
1544
- * KVM SPSRs 0-4 map to QEMU banks 1-5
1545
- */
1546
- for (i = 0; i < KVM_NR_SPSR; i++) {
1547
- ret = kvm_get_one_reg(cs, AARCH64_CORE_REG(spsr[i]),
1548
- &env->banked_spsr[i + 1]);
1549
- if (ret) {
1550
- return ret;
1551
- }
1552
- }
1553
-
1554
- el = arm_current_el(env);
1555
- if (el > 0 && !is_a64(env)) {
1556
- i = bank_number(env->uncached_cpsr & CPSR_M);
1557
- env->spsr = env->banked_spsr[i];
1558
- }
1559
-
1560
- if (cpu_isar_feature(aa64_sve, cpu)) {
1561
- ret = kvm_arch_get_sve(cs);
1562
- } else {
1563
- ret = kvm_arch_get_fpsimd(cs);
1564
- }
1565
- if (ret) {
1566
- return ret;
1567
- }
1568
-
1569
- ret = kvm_get_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpsr), &fpr);
1570
- if (ret) {
1571
- return ret;
1572
- }
1573
- vfp_set_fpsr(env, fpr);
1574
-
1575
- ret = kvm_get_one_reg(cs, AARCH64_SIMD_CTRL_REG(fp_regs.fpcr), &fpr);
1576
- if (ret) {
1577
- return ret;
1578
- }
1579
- vfp_set_fpcr(env, fpr);
1580
-
1581
- ret = kvm_get_vcpu_events(cpu);
1582
- if (ret) {
1583
- return ret;
1584
- }
1585
-
1586
- if (!write_kvmstate_to_list(cpu)) {
1587
- return -EINVAL;
1588
- }
1589
- /* Note that it's OK to have registers which aren't in CPUState,
1590
- * so we can ignore a failure return here.
1591
- */
1592
- write_list_to_cpustate(cpu);
1593
-
1594
- kvm_arm_sync_mpstate_to_qemu(cpu);
1595
-
1596
- /* TODO: other registers */
1597
- return ret;
1598
-}
1599
-
1600
-void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
1601
-{
1602
- ram_addr_t ram_addr;
1603
- hwaddr paddr;
1604
-
1605
- assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);
1606
-
1607
- if (acpi_ghes_present() && addr) {
1608
- ram_addr = qemu_ram_addr_from_host(addr);
1609
- if (ram_addr != RAM_ADDR_INVALID &&
1610
- kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
1611
- kvm_hwpoison_page_add(ram_addr);
1612
- /*
1613
- * If this is a BUS_MCEERR_AR, we know we have been called
1614
- * synchronously from the vCPU thread, so we can easily
1615
- * synchronize the state and inject an error.
1616
- *
1617
- * TODO: we currently don't tell the guest at all about
1618
- * BUS_MCEERR_AO. In that case we might either be being
1619
- * called synchronously from the vCPU thread, or a bit
1620
- * later from the main thread, so doing the injection of
1621
- * the error would be more complicated.
1622
- */
1623
- if (code == BUS_MCEERR_AR) {
1624
- kvm_cpu_synchronize_state(c);
1625
- if (!acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr)) {
1626
- kvm_inject_arm_sea(c);
1627
- } else {
1628
- error_report("failed to record the error");
1629
- abort();
1630
- }
1631
- }
1632
- return;
1633
- }
1634
- if (code == BUS_MCEERR_AO) {
1635
- error_report("Hardware memory error at addr %p for memory used by "
1636
- "QEMU itself instead of guest system!", addr);
1637
- }
1638
- }
1639
-
1640
- if (code == BUS_MCEERR_AR) {
1641
- error_report("Hardware memory error!");
1642
- exit(1);
1643
- }
1644
-}
1645
-
1646
-/* C6.6.29 BRK instruction */
1647
-static const uint32_t brk_insn = 0xd4200000;
1648
-
1649
-int kvm_arch_insert_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
1650
-{
1651
- if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
1652
- cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
1653
- return -EINVAL;
1654
- }
1655
- return 0;
1656
-}
1657
-
1658
-int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
1659
-{
1660
- static uint32_t brk;
1661
-
1662
- if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk, 4, 0) ||
1663
- brk != brk_insn ||
1664
- cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
1665
- return -EINVAL;
1666
- }
1667
- return 0;
1668
-}
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
))
arm_ss.add(zlib)

-arm_ss.add(when: 'CONFIG_KVM', if_true: files('hyp_gdbstub.c', 'kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c'))
+arm_ss.add(when: 'CONFIG_KVM', if_true: files('hyp_gdbstub.c', 'kvm.c'), if_false: files('kvm-stub.c'))
arm_ss.add(when: 'CONFIG_HVF', if_true: files('hyp_gdbstub.c'))

arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
--
2.34.1
New patch

Explicitly set a rule in the softfloat tests for the inf-zero-nan
muladd special case. In meson.build we put -DTARGET_ARM in fpcflags,
and so we should select here the Arm rule of
float_infzeronan_dnan_if_qnan.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241202131347.498124-5-peter.maydell@linaro.org
---
tests/fp/fp-bench.c | 5 +++++
tests/fp/fp-test.c | 5 +++++
2 files changed, 10 insertions(+)
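For reference, a minimal sketch (not part of the patch) of how a softfloat client would configure the same rules and then exercise the inf * 0 + NaN case; the helper name check_arm_infzeronan and the literal NaN encodings are illustrative assumptions, not code from this series:

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static void check_arm_infzeronan(void)
    {
        float_status st = { };

        /* Same implementation-defined choices as fp-bench/fp-test above */
        set_float_2nan_prop_rule(float_2nan_prop_s_ab, &st);
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);

        float64 inf = make_float64(0x7ff0000000000000ULL);  /* +Inf */
        float64 zero = make_float64(0);                     /* +0.0 */
        float64 qnan = make_float64(0x7ff8000000000001ULL); /* a quiet NaN */

        /* With dnan_if_qnan, Inf * 0 + qNaN must produce the default NaN */
        float64 r = float64_muladd(inf, zero, qnan, 0, &st);
        g_assert(float64_is_quiet_nan(r, &st));
    }
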
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
{
bench_func_t f;

+ /*
+ * These implementation-defined choices for various things IEEE
+ * doesn't specify match those used by the Arm architecture.
+ */
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);

f = bench_funcs[operation][precision];
g_assert(f);
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
{
unsigned int i;

+ /*
+ * These implementation-defined choices for various things IEEE
+ * doesn't specify match those used by the Arm architecture.
+ */
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);

genCases_setLevel(test_level);
verCases_maxErrorCount = n_max_errors;
--
2.34.1
New patch

Set the FloatInfZeroNaNRule explicitly for the Arm target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
---
target/arm/cpu.c | 3 +++
fpu/softfloat-specialize.c.inc | 8 +-------
2 files changed, 4 insertions(+), 7 deletions(-)
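To spell out the behaviour the new comment describes, a hedged sketch of the two cases (the helper name and NaN payloads below are illustrative, not taken from the patch):

    /* Assumes a float_status already set up by arm_set_default_fp_behaviours() */
    static void demo_arm_infzeronan(float_status *st)
    {
        float64 inf  = make_float64(0x7ff0000000000000ULL);
        float64 zero = make_float64(0);
        float64 qnan = make_float64(0x7ff8000000000123ULL); /* quiet NaN */
        float64 snan = make_float64(0x7ff0000000000123ULL); /* signalling NaN */

        /* (Inf, 0, qNaN): the default NaN comes back */
        float64 r1 = float64_muladd(inf, zero, qnan, 0, st);
        g_assert(float64_val(r1) == float64_val(float64_default_nan(st)));

        /* (Inf, 0, sNaN): the (now quieted) input NaN comes back instead */
        float64 r2 = float64_muladd(inf, zero, snan, 0, st);
        g_assert(float64_is_quiet_nan(r2, st));
        g_assert(float64_val(r2) != float64_val(float64_default_nan(st)));
    }
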
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
* * tininess-before-rounding
* * 2-input NaN propagation prefers SNaN over QNaN, and then
* operand A over operand B (see FPProcessNaNs() pseudocode)
+ * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
+ * and the input NaN if it is signalling
*/
static void arm_set_default_fp_behaviours(float_status *s)
{
set_float_detect_tininess(float_tininess_before_rounding, s);
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
}

static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_ARM)
- /*
- * For ARM, the (inf,zero,qnan) case returns the default NaN,
- * but (inf,zero,snan) returns the input NaN.
- */
- rule = float_infzeronan_dnan_if_qnan;
-#elif defined(TARGET_MIPS)
+#if defined(TARGET_MIPS)
if (snan_bit_is_one(status)) {
/*
* For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
--
2.34.1
New patch

Set the FloatInfZeroNaNRule explicitly for s390, so we
can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-7-peter.maydell@linaro.org
---
target/s390x/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 --
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
set_float_detect_tininess(float_tininess_before_rounding,
&env->fpu_status);
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
+ set_float_infzeronan_rule(float_infzeronan_dnan_always,
+ &env->fpu_status);
/* fall through */
case RESET_TYPE_S390_CPU_NORMAL:
env->psw.mask &= ~PSW_MASK_RI;
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
* a default NaN
*/
rule = float_infzeronan_dnan_never;
-#elif defined(TARGET_S390X)
- rule = float_infzeronan_dnan_always;
#endif
}

--
2.34.1
New patch

Set the FloatInfZeroNaNRule explicitly for the PPC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-8-peter.maydell@linaro.org
---
target/ppc/cpu_init.c | 7 +++++++
fpu/softfloat-specialize.c.inc | 7 +------
2 files changed, 8 insertions(+), 6 deletions(-)
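As a concrete illustration of what float_infzeronan_dnan_never means here (a hedged sketch, not code from the patch; the helper name and NaN payload are invented for the example):

    /* st is assumed to be set up like env->fp_status in ppc_cpu_reset_hold() */
    static void demo_ppc_infzeronan(float_status *st)
    {
        float64 inf  = make_float64(0x7ff0000000000000ULL);
        float64 zero = make_float64(0);
        float64 qnan = make_float64(0x7ff8000000000042ULL); /* the addend 'c' */

        st->float_exception_flags = 0;
        float64 r = float64_muladd(inf, zero, qnan, 0, st);

        /* InvalidOp is still raised for Inf * 0 ... */
        g_assert(st->float_exception_flags & float_flag_invalid);
        /* ... but the result is the input NaN 'c', not the default NaN */
        g_assert(float64_val(r) == float64_val(qnan));
    }
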
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
*/
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
+ /*
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
+ * to return an input NaN if we have one (ie c) rather than generating
+ * a default NaN
+ */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);

for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
ppc_spr_t *spr = &env->spr_cb[i];
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
*/
rule = float_infzeronan_dnan_never;
}
-#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
+#elif defined(TARGET_SPARC) || \
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
* case sets InvalidOp and returns the input value 'c'
*/
- /*
- * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
- * to return an input NaN if we have one (ie c) rather than generating
- * a default NaN
- */
rule = float_infzeronan_dnan_never;
#endif
}
--
2.34.1
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the FloatInfZeroNaNRule explicitly for the MIPS target,
2
so we can remove the ifdef from pickNaNMulAdd().
2
3
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take a ARMCPU* argument. Use the CPU() QOM cast macro When
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
6
Message-id: 20241202131347.498124-9-peter.maydell@linaro.org
7
---
8
target/mips/fpu_helper.h | 9 +++++++++
9
target/mips/msa.c | 4 ++++
10
fpu/softfloat-specialize.c.inc | 16 +---------------
11
3 files changed, 14 insertions(+), 15 deletions(-)
6
12
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-12-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 8 ++++----
14
1 file changed, 4 insertions(+), 4 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
15
--- a/target/mips/fpu_helper.h
19
+++ b/target/arm/kvm.c
16
+++ b/target/mips/fpu_helper.h
20
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_vcpu_init(ARMCPU *cpu)
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_flush_mode(CPUMIPSState *env)
21
18
static inline void restore_snan_bit_mode(CPUMIPSState *env)
22
/**
23
* kvm_arm_vcpu_finalize:
24
- * @cs: CPUState
25
+ * @cpu: ARMCPU
26
* @feature: feature to finalize
27
*
28
* Finalizes the configuration of the specified VCPU feature by
29
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_vcpu_init(ARMCPU *cpu)
30
*
31
* Returns: 0 if success else < 0 error code
32
*/
33
-static int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
34
+static int kvm_arm_vcpu_finalize(ARMCPU *cpu, int feature)
35
{
19
{
36
- return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_FINALIZE, &feature);
20
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
37
+ return kvm_vcpu_ioctl(CPU(cpu), KVM_ARM_VCPU_FINALIZE, &feature);
21
+ FloatInfZeroNaNRule izn_rule;
22
23
/*
24
* With nan2008, SNaNs are silenced in the usual way.
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
26
*/
27
set_snan_bit_is_one(!nan2008, &env->active_fpu.fp_status);
28
set_default_nan_mode(!nan2008, &env->active_fpu.fp_status);
29
+ /*
30
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
31
+ * case sets InvalidOp and returns the default NaN.
32
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
33
+ * case sets InvalidOp and returns the input value 'c'.
34
+ */
35
+ izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
36
+ set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
38
}
37
}
39
38
40
bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
39
static inline void restore_fp_status(CPUMIPSState *env)
41
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
40
diff --git a/target/mips/msa.c b/target/mips/msa.c
42
if (ret) {
41
index XXXXXXX..XXXXXXX 100644
43
return ret;
42
--- a/target/mips/msa.c
44
}
43
+++ b/target/mips/msa.c
45
- ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
44
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
46
+ ret = kvm_arm_vcpu_finalize(cpu, KVM_ARM_VCPU_SVE);
45
47
if (ret) {
46
/* set proper signanling bit meaning ("1" means "quiet") */
48
return ret;
47
set_snan_bit_is_one(0, &env->active_tc.msa_fp_status);
49
}
48
+
49
+ /* Inf * 0 + NaN returns the input NaN */
50
+ set_float_infzeronan_rule(float_infzeronan_dnan_never,
51
+ &env->active_tc.msa_fp_status);
52
}
53
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
54
index XXXXXXX..XXXXXXX 100644
55
--- a/fpu/softfloat-specialize.c.inc
56
+++ b/fpu/softfloat-specialize.c.inc
57
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
58
/*
59
* Temporarily fall back to ifdef ladder
60
*/
61
-#if defined(TARGET_MIPS)
62
- if (snan_bit_is_one(status)) {
63
- /*
64
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
65
- * case sets InvalidOp and returns the default NaN
66
- */
67
- rule = float_infzeronan_dnan_always;
68
- } else {
69
- /*
70
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
71
- * case sets InvalidOp and returns the input value 'c'
72
- */
73
- rule = float_infzeronan_dnan_never;
74
- }
75
-#elif defined(TARGET_SPARC) || \
76
+#if defined(TARGET_SPARC) || \
77
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
78
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
79
/*
50
--
80
--
51
2.34.1
81
2.34.1
52
53
New patch

Set the FloatInfZeroNaNRule explicitly for the SPARC target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-10-peter.maydell@linaro.org
---
target/sparc/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 +--
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
* the CPU state struct so it won't get zeroed on reset.
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);

cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_SPARC) || \
- defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
New patch

Set the FloatInfZeroNaNRule explicitly for the xtensa target,
so we can remove the ifdef from pickNaNMulAdd().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-11-peter.maydell@linaro.org
---
target/xtensa/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
reset_mmu(env);
cs->halted = env->runstall;
#endif
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
set_no_signaling_nans(!dfpu, &env->fp_status);
xtensa_use_first_nan(env, !dfpu);
}
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
/*
* Temporarily fall back to ifdef ladder
*/
-#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
+#if defined(TARGET_HPPA) || \
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
--
2.34.1
New patch

Set the FloatInfZeroNaNRule explicitly for the x86 target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-12-peter.maydell@linaro.org
---
target/i386/tcg/fpu_helper.c | 7 +++++++
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
*/
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->mmx_status);
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->sse_status);
+ /*
+ * Only SSE has multiply-add instructions. In the SDM Section 14.5.2
+ * "Fused-Multiply-ADD (FMA) Numeric Behavior" the NaN handling is
+ * specified -- for 0 * inf + NaN the input NaN is selected, and if
+ * there are multiple input NaNs they are selected in the order a, b, c.
+ */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
}

static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
* Temporarily fall back to ifdef ladder
*/
#if defined(TARGET_HPPA) || \
- defined(TARGET_I386) || defined(TARGET_LOONGARCH)
+ defined(TARGET_LOONGARCH)
/*
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
* case sets InvalidOp and returns the input value 'c'
--
2.34.1
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the FloatInfZeroNaNRule explicitly for the loongarch target.
2
2
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take a ARMCPU* argument. Use the CPU() QOM cast macro When
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
5
Message-id: 20241202131347.498124-13-peter.maydell@linaro.org
6
---
7
target/loongarch/tcg/fpu_helper.c | 5 +++++
8
fpu/softfloat-specialize.c.inc | 7 +------
9
2 files changed, 6 insertions(+), 6 deletions(-)
6
10
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-10-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm_arm.h | 4 ++--
14
hw/arm/virt.c | 2 +-
15
target/arm/kvm.c | 6 +++---
16
3 files changed, 6 insertions(+), 6 deletions(-)
17
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
13
--- a/target/loongarch/tcg/fpu_helper.c
21
+++ b/target/arm/kvm_arm.h
14
+++ b/target/loongarch/tcg/fpu_helper.c
22
@@ -XXX,XX +XXX,XX @@ int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa);
15
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
23
int kvm_arm_vgic_probe(void);
16
&env->fp_status);
24
17
set_flush_to_zero(0, &env->fp_status);
25
void kvm_arm_pmu_init(ARMCPU *cpu);
18
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
26
-void kvm_arm_pmu_set_irq(CPUState *cs, int irq);
19
+ /*
27
+void kvm_arm_pmu_set_irq(ARMCPU *cpu, int irq);
20
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
28
21
+ * case sets InvalidOp and returns the input value 'c'
29
/**
22
+ */
30
* kvm_arm_pvtime_init:
23
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
31
@@ -XXX,XX +XXX,XX @@ static inline int kvm_arm_vgic_probe(void)
32
g_assert_not_reached();
33
}
24
}
34
25
35
-static inline void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
26
int ieee_ex_to_loongarch(int xcpt)
36
+static inline void kvm_arm_pmu_set_irq(ARMCPU *cpu, int irq)
27
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
37
{
38
g_assert_not_reached();
39
}
40
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
41
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/arm/virt.c
29
--- a/fpu/softfloat-specialize.c.inc
43
+++ b/hw/arm/virt.c
30
+++ b/fpu/softfloat-specialize.c.inc
44
@@ -XXX,XX +XXX,XX @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
31
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
45
if (pmu) {
32
/*
46
assert(arm_feature(&ARM_CPU(cpu)->env, ARM_FEATURE_PMU));
33
* Temporarily fall back to ifdef ladder
47
if (kvm_irqchip_in_kernel()) {
34
*/
48
- kvm_arm_pmu_set_irq(cpu, VIRTUAL_PMU_IRQ);
35
-#if defined(TARGET_HPPA) || \
49
+ kvm_arm_pmu_set_irq(ARM_CPU(cpu), VIRTUAL_PMU_IRQ);
36
- defined(TARGET_LOONGARCH)
50
}
37
- /*
51
kvm_arm_pmu_init(ARM_CPU(cpu));
38
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
52
}
39
- * case sets InvalidOp and returns the input value 'c'
53
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
40
- */
54
index XXXXXXX..XXXXXXX 100644
41
+#if defined(TARGET_HPPA)
55
--- a/target/arm/kvm.c
42
rule = float_infzeronan_dnan_never;
56
+++ b/target/arm/kvm.c
43
#endif
57
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_init(ARMCPU *cpu)
58
}
59
}
60
61
-void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
62
+void kvm_arm_pmu_set_irq(ARMCPU *cpu, int irq)
63
{
64
struct kvm_device_attr attr = {
65
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
66
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
67
.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
68
};
69
70
- if (!ARM_CPU(cs)->has_pmu) {
71
+ if (!cpu->has_pmu) {
72
return;
73
}
74
- if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PMU")) {
75
+ if (!kvm_arm_set_device_attr(cpu, &attr, "PMU")) {
76
error_report("failed to set irq for PMU");
77
abort();
78
}
44
}
79
--
45
--
80
2.34.1
46
2.34.1
81
82
New patch

Set the FloatInfZeroNaNRule explicitly for the HPPA target,
so we can remove the ifdef from pickNaNMulAdd().

As this is the last target to be converted to explicitly setting
the rule, we can remove the fallback code in pickNaNMulAdd()
entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-14-peter.maydell@linaro.org
---
target/hppa/fpu_helper.c | 2 ++
fpu/softfloat-specialize.c.inc | 13 +------------
2 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
* HPPA does note implement a CPU reset method at all...
*/
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
+ /* For inf * 0 + NaN, return the input NaN */
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
}

void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
bool infzero, float_status *status)
{
- FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
-
/*
* We guarantee not to require the target to tell us how to
* pick a NaN if we're always returning the default NaN.
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
*/
assert(!status->default_nan_mode);

- if (rule == float_infzeronan_none) {
- /*
- * Temporarily fall back to ifdef ladder
- */
-#if defined(TARGET_HPPA)
- rule = float_infzeronan_dnan_never;
-#endif
- }
-
if (infzero) {
/*
* Inf * 0 + NaN -- some implementations return the default NaN here,
* and some return the input NaN.
*/
- switch (rule) {
+ switch (status->float_infzeronan_rule) {
case float_infzeronan_dnan_never:
return 2;
case float_infzeronan_dnan_always:
--
2.34.1
New patch

The new implementation of pickNaNMulAdd() will find it convenient
to know whether at least one of the three arguments to the muladd
was a signaling NaN. We already calculate that in the caller,
so pass it in as a new bool have_snan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-15-peter.maydell@linaro.org
---
fpu/softfloat-parts.c.inc | 5 +++--
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)
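For orientation, a rough sketch of how a data-driven pickNaNMulAdd() can combine have_snan with an encoded preference order. This anticipates the Float3NaNPropRule patch later in the series; the helper below is hypothetical, not the series' actual implementation, and it assumes the is_nan()/is_snan() helpers already present in fpu/softfloat-specialize.c.inc:

    static int pick_nan_by_rule(FloatClass cls[3], Float3NaNPropRule rule,
                                bool have_snan)
    {
        /* Prefer an SNaN only if the rule asks for it and one is present */
        bool want_snan = (rule & R_3NAN_SNAN_MASK) && have_snan;
        int order = rule & ~R_3NAN_SNAN_MASK;

        for (int i = 0; i < 3; i++) {
            int which = order & R_3NAN_1ST_MASK;   /* operand index 0..2 */
            if (is_nan(cls[which]) && (!want_snan || is_snan(cls[which]))) {
                return which;
            }
            order >>= R_3NAN_2ND_SHIFT;            /* next preference */
        }
        g_assert_not_reached();  /* caller guarantees at least one NaN */
    }
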
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
{
int which;
bool infzero = (ab_mask == float_cmask_infzero);
+ bool have_snan = (abc_mask & float_cmask_snan);

- if (unlikely(abc_mask & float_cmask_snan)) {
+ if (unlikely(have_snan)) {
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
}

@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
if (s->default_nan_mode) {
which = 3;
} else {
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
}

if (which == 3) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
*----------------------------------------------------------------------------*/
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
- bool infzero, float_status *status)
+ bool infzero, bool have_snan, float_status *status)
{
/*
* We guarantee not to require the target to tell us how to
--
2.34.1
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
IEEE 758 does not define a fixed rule for which NaN to pick as the
2
2
result if both operands of a 3-operand fused multiply-add operation
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
3
are NaNs. As a result different architectures have ended up with
4
take a ARMCPU* argument. Use the CPU() QOM cast macro When
4
different rules for propagating NaNs.
5
calling the generic vCPU API from "sysemu/kvm.h".
5
6
6
QEMU currently hardcodes the NaN propagation logic into the binary
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
because pickNaNMulAdd() has an ifdef ladder for different targets.
8
We want to make the propagation rule instead be selectable at
9
runtime, because:
10
* this will let us have multiple targets in one QEMU binary
11
* the Arm FEAT_AFP architectural feature includes letting
12
the guest select a NaN propagation rule at runtime
13
14
In this commit we add an enum for the propagation rule, the field in
15
float_status, and the corresponding getters and setters. We change
16
pickNaNMulAdd to honour this, but because all targets still leave
17
this field at its default 0 value, the fallback logic will pick the
18
rule type with the old ifdef ladder.
19
20
It's valid not to set a propagation rule if default_nan_mode is
21
enabled, because in that case there's no need to pick a NaN; all the
22
callers of pickNaNMulAdd() catch this case and skip calling it.
23
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
26
Message-id: 20241202131347.498124-16-peter.maydell@linaro.org
10
Message-id: 20231123183518.64569-5-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
27
---
13
target/arm/kvm.c | 7 +++----
28
include/fpu/softfloat-helpers.h | 11 +++
14
1 file changed, 3 insertions(+), 4 deletions(-)
29
include/fpu/softfloat-types.h | 55 +++++++++++
15
30
fpu/softfloat-specialize.c.inc | 167 ++++++++------------------------
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
31
3 files changed, 107 insertions(+), 126 deletions(-)
32
33
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
17
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
35
--- a/include/fpu/softfloat-helpers.h
19
+++ b/target/arm/kvm.c
36
+++ b/include/fpu/softfloat-helpers.h
20
@@ -XXX,XX +XXX,XX @@ uint32_t kvm_arm_sve_get_vls(CPUState *cs)
37
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
21
return vls[0];
38
status->float_2nan_prop_rule = rule;
22
}
39
}
23
40
24
-static int kvm_arm_sve_set_vls(CPUState *cs)
41
+static inline void set_float_3nan_prop_rule(Float3NaNPropRule rule,
25
+static int kvm_arm_sve_set_vls(ARMCPU *cpu)
42
+ float_status *status)
43
+{
44
+ status->float_3nan_prop_rule = rule;
45
+}
46
+
47
static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
48
float_status *status)
26
{
49
{
27
- ARMCPU *cpu = ARM_CPU(cs);
50
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
28
uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq.map };
51
return status->float_2nan_prop_rule;
29
30
assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
31
32
- return kvm_set_one_reg(cs, KVM_REG_ARM64_SVE_VLS, &vls[0]);
33
+ return kvm_set_one_reg(CPU(cpu), KVM_REG_ARM64_SVE_VLS, &vls[0]);
34
}
52
}
35
53
36
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
54
+static inline Float3NaNPropRule get_float_3nan_prop_rule(float_status *status)
37
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
55
+{
56
+ return status->float_3nan_prop_rule;
57
+}
58
+
59
static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
60
{
61
return status->float_infzeronan_rule;
62
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
63
index XXXXXXX..XXXXXXX 100644
64
--- a/include/fpu/softfloat-types.h
65
+++ b/include/fpu/softfloat-types.h
66
@@ -XXX,XX +XXX,XX @@ this code that are retained.
67
#ifndef SOFTFLOAT_TYPES_H
68
#define SOFTFLOAT_TYPES_H
69
70
+#include "hw/registerfields.h"
71
+
72
/*
73
* Software IEC/IEEE floating-point types.
74
*/
75
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
76
float_2nan_prop_x87,
77
} Float2NaNPropRule;
78
79
+/*
80
+ * 3-input NaN propagation rule, for fused multiply-add. Individual
81
+ * architectures have different rules for which input NaN is
82
+ * propagated to the output when there is more than one NaN on the
83
+ * input.
84
+ *
85
+ * If default_nan_mode is enabled then it is valid not to set a NaN
86
+ * propagation rule, because the softfloat code guarantees not to try
87
+ * to pick a NaN to propagate in default NaN mode. When not in
88
+ * default-NaN mode, it is an error for the target not to set the rule
89
+ * in float_status if it uses a muladd, and we will assert if we need
90
+ * to handle an input NaN and no rule was selected.
91
+ *
92
+ * The naming scheme for Float3NaNPropRule values is:
93
+ * float_3nan_prop_s_abc:
94
+ * = "Prefer SNaN over QNaN, then operand A over B over C"
95
+ * float_3nan_prop_abc:
96
+ * = "Prefer A over B over C regardless of SNaN vs QNAN"
97
+ *
98
+ * For QEMU, the multiply-add operation is A * B + C.
99
+ */
100
+
101
+/*
102
+ * We set the Float3NaNPropRule enum values up so we can select the
103
+ * right value in pickNaNMulAdd in a data driven way.
104
+ */
105
+FIELD(3NAN, 1ST, 0, 2) /* which operand is most preferred ? */
106
+FIELD(3NAN, 2ND, 2, 2) /* which operand is next most preferred ? */
107
+FIELD(3NAN, 3RD, 4, 2) /* which operand is least preferred ? */
108
+FIELD(3NAN, SNAN, 6, 1) /* do we prefer SNaN over QNaN ? */
109
+
110
+#define PROPRULE(X, Y, Z) \
111
+ ((X << R_3NAN_1ST_SHIFT) | (Y << R_3NAN_2ND_SHIFT) | (Z << R_3NAN_3RD_SHIFT))
112
+
113
+typedef enum __attribute__((__packed__)) {
114
+ float_3nan_prop_none = 0, /* No propagation rule specified */
115
+ float_3nan_prop_abc = PROPRULE(0, 1, 2),
116
+ float_3nan_prop_acb = PROPRULE(0, 2, 1),
117
+ float_3nan_prop_bac = PROPRULE(1, 0, 2),
118
+ float_3nan_prop_bca = PROPRULE(1, 2, 0),
119
+ float_3nan_prop_cab = PROPRULE(2, 0, 1),
120
+ float_3nan_prop_cba = PROPRULE(2, 1, 0),
121
+ float_3nan_prop_s_abc = float_3nan_prop_abc | R_3NAN_SNAN_MASK,
122
+ float_3nan_prop_s_acb = float_3nan_prop_acb | R_3NAN_SNAN_MASK,
123
+ float_3nan_prop_s_bac = float_3nan_prop_bac | R_3NAN_SNAN_MASK,
124
+ float_3nan_prop_s_bca = float_3nan_prop_bca | R_3NAN_SNAN_MASK,
125
+ float_3nan_prop_s_cab = float_3nan_prop_cab | R_3NAN_SNAN_MASK,
126
+ float_3nan_prop_s_cba = float_3nan_prop_cba | R_3NAN_SNAN_MASK,
127
+} Float3NaNPropRule;
128
+
129
+#undef PROPRULE
130
+
131
/*
132
* Rule for result of fused multiply-add 0 * Inf + NaN.
133
* This must be a NaN, but implementations differ on whether this
134
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
135
FloatRoundMode float_rounding_mode;
136
FloatX80RoundPrec floatx80_rounding_precision;
137
Float2NaNPropRule float_2nan_prop_rule;
138
+ Float3NaNPropRule float_3nan_prop_rule;
139
FloatInfZeroNaNRule float_infzeronan_rule;
140
bool tininess_before_rounding;
141
/* should denormalised results go to zero and set the inexact flag? */
142
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
143
index XXXXXXX..XXXXXXX 100644
144
--- a/fpu/softfloat-specialize.c.inc
145
+++ b/fpu/softfloat-specialize.c.inc
146
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
147
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
148
bool infzero, bool have_snan, float_status *status)
149
{
150
+ FloatClass cls[3] = { a_cls, b_cls, c_cls };
151
+ Float3NaNPropRule rule = status->float_3nan_prop_rule;
152
+ int which;
153
+
154
/*
155
* We guarantee not to require the target to tell us how to
156
* pick a NaN if we're always returning the default NaN.
157
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
158
}
38
}
159
}
39
160
40
if (cpu_isar_feature(aa64_sve, cpu)) {
161
+ if (rule == float_3nan_prop_none) {
41
- ret = kvm_arm_sve_set_vls(cs);
162
#if defined(TARGET_ARM)
42
+ ret = kvm_arm_sve_set_vls(cpu);
163
-
43
if (ret) {
164
- /* This looks different from the ARM ARM pseudocode, because the ARM ARM
44
return ret;
165
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
166
- */
167
- if (is_snan(c_cls)) {
168
- return 2;
169
- } else if (is_snan(a_cls)) {
170
- return 0;
171
- } else if (is_snan(b_cls)) {
172
- return 1;
173
- } else if (is_qnan(c_cls)) {
174
- return 2;
175
- } else if (is_qnan(a_cls)) {
176
- return 0;
177
- } else {
178
- return 1;
179
- }
180
+ /*
181
+ * This looks different from the ARM ARM pseudocode, because the ARM ARM
182
+ * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
183
+ */
184
+ rule = float_3nan_prop_s_cab;
185
#elif defined(TARGET_MIPS)
186
- if (snan_bit_is_one(status)) {
187
- /* Prefer sNaN over qNaN, in the a, b, c order. */
188
- if (is_snan(a_cls)) {
189
- return 0;
190
- } else if (is_snan(b_cls)) {
191
- return 1;
192
- } else if (is_snan(c_cls)) {
193
- return 2;
194
- } else if (is_qnan(a_cls)) {
195
- return 0;
196
- } else if (is_qnan(b_cls)) {
197
- return 1;
198
+ if (snan_bit_is_one(status)) {
199
+ rule = float_3nan_prop_s_abc;
200
} else {
201
- return 2;
202
+ rule = float_3nan_prop_s_cab;
45
}
203
}
204
- } else {
205
- /* Prefer sNaN over qNaN, in the c, a, b order. */
206
- if (is_snan(c_cls)) {
207
- return 2;
208
- } else if (is_snan(a_cls)) {
209
- return 0;
210
- } else if (is_snan(b_cls)) {
211
- return 1;
212
- } else if (is_qnan(c_cls)) {
213
- return 2;
214
- } else if (is_qnan(a_cls)) {
215
- return 0;
216
- } else {
217
- return 1;
218
- }
219
- }
220
#elif defined(TARGET_LOONGARCH64)
221
- /* Prefer sNaN over qNaN, in the c, a, b order. */
222
- if (is_snan(c_cls)) {
223
- return 2;
224
- } else if (is_snan(a_cls)) {
225
- return 0;
226
- } else if (is_snan(b_cls)) {
227
- return 1;
228
- } else if (is_qnan(c_cls)) {
229
- return 2;
230
- } else if (is_qnan(a_cls)) {
231
- return 0;
232
- } else {
233
- return 1;
234
- }
235
+ rule = float_3nan_prop_s_cab;
236
#elif defined(TARGET_PPC)
237
- /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
238
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
239
- */
240
- if (is_nan(a_cls)) {
241
- return 0;
242
- } else if (is_nan(c_cls)) {
243
- return 2;
244
- } else {
245
- return 1;
246
- }
247
+ /*
248
+ * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
249
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
250
+ */
251
+ rule = float_3nan_prop_acb;
252
#elif defined(TARGET_S390X)
253
- if (is_snan(a_cls)) {
254
- return 0;
255
- } else if (is_snan(b_cls)) {
256
- return 1;
257
- } else if (is_snan(c_cls)) {
258
- return 2;
259
- } else if (is_qnan(a_cls)) {
260
- return 0;
261
- } else if (is_qnan(b_cls)) {
262
- return 1;
263
- } else {
264
- return 2;
265
- }
266
+ rule = float_3nan_prop_s_abc;
267
#elif defined(TARGET_SPARC)
268
- /* Prefer SNaN over QNaN, order C, B, A. */
269
- if (is_snan(c_cls)) {
270
- return 2;
271
- } else if (is_snan(b_cls)) {
272
- return 1;
273
- } else if (is_snan(a_cls)) {
274
- return 0;
275
- } else if (is_qnan(c_cls)) {
276
- return 2;
277
- } else if (is_qnan(b_cls)) {
278
- return 1;
279
- } else {
280
- return 0;
281
- }
282
+ rule = float_3nan_prop_s_cba;
283
#elif defined(TARGET_XTENSA)
284
- /*
285
- * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
286
- * an input NaN if we have one (ie c).
287
- */
288
- if (status->use_first_nan) {
289
- if (is_nan(a_cls)) {
290
- return 0;
291
- } else if (is_nan(b_cls)) {
292
- return 1;
293
+ if (status->use_first_nan) {
294
+ rule = float_3nan_prop_abc;
295
} else {
296
- return 2;
297
+ rule = float_3nan_prop_cba;
298
}
299
- } else {
300
- if (is_nan(c_cls)) {
301
- return 2;
302
- } else if (is_nan(b_cls)) {
303
- return 1;
304
- } else {
305
- return 0;
306
- }
307
- }
308
#else
309
- /* A default implementation: prefer a to b to c.
310
- * This is unlikely to actually match any real implementation.
311
- */
312
- if (is_nan(a_cls)) {
313
- return 0;
314
- } else if (is_nan(b_cls)) {
315
- return 1;
316
- } else {
317
- return 2;
318
- }
319
+ rule = float_3nan_prop_abc;
320
#endif
321
+ }
322
+
323
+ assert(rule != float_3nan_prop_none);
324
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
325
+ /* We have at least one SNaN input and should prefer it */
326
+ do {
327
+ which = rule & R_3NAN_1ST_MASK;
328
+ rule >>= R_3NAN_1ST_LENGTH;
329
+ } while (!is_snan(cls[which]));
330
+ } else {
331
+ do {
332
+ which = rule & R_3NAN_1ST_MASK;
333
+ rule >>= R_3NAN_1ST_LENGTH;
334
+ } while (!is_nan(cls[which]));
335
+ }
336
+ return which;
337
}
338
339
/*----------------------------------------------------------------------------
46
--
340
--
47
2.34.1
341
2.34.1
48
49
diff view generated by jsdifflib
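
As a standalone illustration of the encoding introduced above (plain C, not QEMU code: the constants mirror the FIELD(3NAN, ...) layout in the patch, but the pick() helper and the test vectors are invented for the example), this sketch shows how a packed Float3NaNPropRule value is walked to choose an operand:

#include <assert.h>
#include <stdbool.h>

#define R_3NAN_1ST_LENGTH 2
#define R_3NAN_1ST_MASK   0x3
#define R_3NAN_SNAN_MASK  (1 << 6)

/* Pack a preference order: X is most preferred, then Y, then Z */
#define PROPRULE(X, Y, Z) (((X) << 0) | ((Y) << 2) | ((Z) << 4))

/* Arm-style rule: prefer an SNaN, then operand C over A over B */
enum { float_3nan_prop_s_cab = PROPRULE(2, 0, 1) | R_3NAN_SNAN_MASK };

/*
 * Walk the packed rule and return the index (0 = a, 1 = b, 2 = c) of
 * the operand to propagate; at least one operand must be a NaN.
 */
static int pick(unsigned rule, const bool is_nan[3],
                const bool is_snan[3], bool have_snan)
{
    int which;

    if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
        do {
            which = rule & R_3NAN_1ST_MASK;
            rule >>= R_3NAN_1ST_LENGTH;
        } while (!is_snan[which]);
    } else {
        do {
            which = rule & R_3NAN_1ST_MASK;
            rule >>= R_3NAN_1ST_LENGTH;
        } while (!is_nan[which]);
    }
    return which;
}

int main(void)
{
    /* QNaNs in b and c, no SNaN anywhere: s_cab picks c (index 2) */
    const bool nan_bc[3] = { false, true, true };
    const bool no_snan[3] = { false, false, false };
    /* Same inputs, but b's NaN is signalling: the SNaN wins, so b (1) */
    const bool snan_b[3] = { false, true, false };

    assert(pick(float_3nan_prop_s_cab, nan_bc, no_snan, false) == 2);
    assert(pick(float_3nan_prop_s_cab, nan_bc, snan_b, true) == 1);
    return 0;
}

The decode loop just peels off two bits at a time, which is why the SNaN preference can live in a single extra bit on top of the same packed value.
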
New patch
1
Explicitly set a rule in the softfloat tests for propagating NaNs in
2
the muladd case. In meson.build we put -DTARGET_ARM in fpcflags, and
3
so here we should select the Arm rule of float_3nan_prop_s_cab.
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-17-peter.maydell@linaro.org
8
---
9
tests/fp/fp-bench.c | 1 +
10
tests/fp/fp-test.c | 1 +
11
2 files changed, 2 insertions(+)
12
13
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tests/fp/fp-bench.c
16
+++ b/tests/fp/fp-bench.c
17
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
18
* doesn't specify match those used by the Arm architecture.
19
*/
20
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
21
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
22
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
23
24
f = bench_funcs[operation][precision];
25
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/tests/fp/fp-test.c
28
+++ b/tests/fp/fp-test.c
29
@@ -XXX,XX +XXX,XX @@ void run_test(void)
30
* doesn't specify match those used by the Arm architecture.
31
*/
32
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
33
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
34
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
35
36
genCases_setLevel(test_level);
37
--
38
2.34.1
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the Float3NaNPropRule explicitly for Arm, and remove the
2
ifdef from pickNaNMulAdd().
2
3
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
6
Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
7
---
8
target/arm/cpu.c | 5 +++++
9
fpu/softfloat-specialize.c.inc | 8 +-------
10
2 files changed, 6 insertions(+), 7 deletions(-)
6
11
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-6-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm_arm.h | 6 +++---
14
target/arm/cpu64.c | 2 +-
15
target/arm/kvm.c | 2 +-
16
3 files changed, 5 insertions(+), 5 deletions(-)
17
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
14
--- a/target/arm/cpu.c
21
+++ b/target/arm/kvm_arm.h
15
+++ b/target/arm/cpu.c
22
@@ -XXX,XX +XXX,XX @@ void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
16
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
23
17
* * tininess-before-rounding
24
/**
18
* * 2-input NaN propagation prefers SNaN over QNaN, and then
25
* kvm_arm_sve_get_vls:
19
* operand A over operand B (see FPProcessNaNs() pseudocode)
26
- * @cs: CPUState
20
+ * * 3-input NaN propagation prefers SNaN over QNaN, and then
27
+ * @cpu: ARMCPU
21
+ * operand C over A over B (see FPProcessNaNs3() pseudocode,
28
*
22
+ * but note that for QEMU muladd is a * b + c, whereas for
29
* Get all the SVE vector lengths supported by the KVM host, setting
23
+ * the pseudocode function the arguments are in the order c, a, b.
30
* the bits corresponding to their length in quadwords minus one
24
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
31
* (vq - 1) up to ARM_MAX_VQ. Return the resulting map.
25
* and the input NaN if it is signalling
32
*/
26
*/
33
-uint32_t kvm_arm_sve_get_vls(CPUState *cs);
27
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
34
+uint32_t kvm_arm_sve_get_vls(ARMCPU *cpu);
28
{
35
29
set_float_detect_tininess(float_tininess_before_rounding, s);
36
/**
30
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
37
* kvm_arm_set_cpu_features_from_host:
31
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
38
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
32
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
39
g_assert_not_reached();
40
}
33
}
41
34
42
-static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
35
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
43
+static inline uint32_t kvm_arm_sve_get_vls(ARMCPU *cpu)
44
{
45
g_assert_not_reached();
46
}
47
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
48
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/cpu64.c
37
--- a/fpu/softfloat-specialize.c.inc
50
+++ b/target/arm/cpu64.c
38
+++ b/fpu/softfloat-specialize.c.inc
51
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
39
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
52
*/
40
}
53
if (kvm_enabled()) {
41
54
if (kvm_arm_sve_supported()) {
42
if (rule == float_3nan_prop_none) {
55
- cpu->sve_vq.supported = kvm_arm_sve_get_vls(CPU(cpu));
43
-#if defined(TARGET_ARM)
56
+ cpu->sve_vq.supported = kvm_arm_sve_get_vls(cpu);
44
- /*
57
vq_supported = cpu->sve_vq.supported;
45
- * This looks different from the ARM ARM pseudocode, because the ARM ARM
46
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
47
- */
48
- rule = float_3nan_prop_s_cab;
49
-#elif defined(TARGET_MIPS)
50
+#if defined(TARGET_MIPS)
51
if (snan_bit_is_one(status)) {
52
rule = float_3nan_prop_s_abc;
58
} else {
53
} else {
59
assert(!cpu_isar_feature(aa64_sve, cpu));
60
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/kvm.c
63
+++ b/target/arm/kvm.c
64
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void)
65
66
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
67
68
-uint32_t kvm_arm_sve_get_vls(CPUState *cs)
69
+uint32_t kvm_arm_sve_get_vls(ARMCPU *cpu)
70
{
71
/* Only call this function if kvm_arm_sve_supported() returns true. */
72
static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
73
--
54
--
74
2.34.1
55
2.34.1
75
76
diff view generated by jsdifflib
New patch
1
Set the Float3NaNPropRule explicitly for loongarch, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-19-peter.maydell@linaro.org
7
---
8
target/loongarch/tcg/fpu_helper.c | 1 +
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 1 insertion(+), 2 deletions(-)
11
12
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/loongarch/tcg/fpu_helper.c
15
+++ b/target/loongarch/tcg/fpu_helper.c
16
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
17
* case sets InvalidOp and returns the input value 'c'
18
*/
19
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
21
}
22
23
int ieee_ex_to_loongarch(int xcpt)
24
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
index XXXXXXX..XXXXXXX 100644
26
--- a/fpu/softfloat-specialize.c.inc
27
+++ b/fpu/softfloat-specialize.c.inc
28
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
29
} else {
30
rule = float_3nan_prop_s_cab;
31
}
32
-#elif defined(TARGET_LOONGARCH64)
33
- rule = float_3nan_prop_s_cab;
34
#elif defined(TARGET_PPC)
35
/*
36
* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
37
--
38
2.34.1
diff view generated by jsdifflib
New patch
1
Set the Float3NaNPropRule explicitly for PPC, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-20-peter.maydell@linaro.org
7
---
8
target/ppc/cpu_init.c | 8 ++++++++
9
fpu/softfloat-specialize.c.inc | 6 ------
10
2 files changed, 8 insertions(+), 6 deletions(-)
11
12
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/ppc/cpu_init.c
15
+++ b/target/ppc/cpu_init.c
16
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
17
*/
18
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
19
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
20
+ /*
21
+ * NaN propagation for fused multiply-add:
22
+ * if fRA is a NaN return it; otherwise if fRB is a NaN return it;
23
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
24
+ * whereas QEMU labels the operands as (a * b) + c.
25
+ */
26
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->fp_status);
27
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->vec_status);
28
/*
29
* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
30
* to return an input NaN if we have one (ie c) rather than generating
31
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
32
index XXXXXXX..XXXXXXX 100644
33
--- a/fpu/softfloat-specialize.c.inc
34
+++ b/fpu/softfloat-specialize.c.inc
35
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
36
} else {
37
rule = float_3nan_prop_s_cab;
38
}
39
-#elif defined(TARGET_PPC)
40
- /*
41
- * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
42
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
43
- */
44
- rule = float_3nan_prop_acb;
45
#elif defined(TARGET_S390X)
46
rule = float_3nan_prop_s_abc;
47
#elif defined(TARGET_SPARC)
48
--
49
2.34.1
diff view generated by jsdifflib
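
The operand renaming in the new cpu_init.c comment above is easy to get backwards, so here is a tiny standalone check (illustrative only; the register names and expected indices simply restate the mapping quoted in that comment) that PPC's fRA, fRB, fRC preference order really does come out as QEMU's a, c, b, i.e. float_3nan_prop_acb:

#include <assert.h>
#include <string.h>

int main(void)
{
    /* QEMU muladd is (a * b) + c; PPC fmadd is (fRA * fRC) + fRB */
    const char *qemu_operand[3] = { "fRA", "fRC", "fRB" };  /* a, b, c */
    /* PPC prefers fRA, then fRB, then fRC... */
    const char *ppc_pref[3] = { "fRA", "fRB", "fRC" };
    /* ...which in QEMU operand indices is a (0), then c (2), then b (1) */
    const int expect[3] = { 0, 2, 1 };

    for (int i = 0; i < 3; i++) {
        int which = -1;
        for (int j = 0; j < 3; j++) {
            if (!strcmp(ppc_pref[i], qemu_operand[j])) {
                which = j;
            }
        }
        assert(which == expect[i]);
    }
    return 0;
}
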
New patch
1
Set the Float3NaNPropRule explicitly for s390x, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-21-peter.maydell@linaro.org
7
---
8
target/s390x/cpu.c | 1 +
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 1 insertion(+), 2 deletions(-)
11
12
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/s390x/cpu.c
15
+++ b/target/s390x/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
17
set_float_detect_tininess(float_tininess_before_rounding,
18
&env->fpu_status);
19
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
20
+ set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
21
set_float_infzeronan_rule(float_infzeronan_dnan_always,
22
&env->fpu_status);
23
/* fall through */
24
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
index XXXXXXX..XXXXXXX 100644
26
--- a/fpu/softfloat-specialize.c.inc
27
+++ b/fpu/softfloat-specialize.c.inc
28
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
29
} else {
30
rule = float_3nan_prop_s_cab;
31
}
32
-#elif defined(TARGET_S390X)
33
- rule = float_3nan_prop_s_abc;
34
#elif defined(TARGET_SPARC)
35
rule = float_3nan_prop_s_cba;
36
#elif defined(TARGET_XTENSA)
37
--
38
2.34.1
diff view generated by jsdifflib
New patch
1
Set the Float3NaNPropRule explicitly for SPARC, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-22-peter.maydell@linaro.org
7
---
8
target/sparc/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 2 insertions(+), 2 deletions(-)
11
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sparc/cpu.c
15
+++ b/target/sparc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
17
* the CPU state struct so it won't get zeroed on reset.
18
*/
19
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
20
+ /* For fused-multiply add, prefer SNaN over QNaN, then C->B->A */
21
+ set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
22
/* For inf * 0 + NaN, return the input NaN */
23
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
24
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
} else {
31
rule = float_3nan_prop_s_cab;
32
}
33
-#elif defined(TARGET_SPARC)
34
- rule = float_3nan_prop_s_cba;
35
#elif defined(TARGET_XTENSA)
36
if (status->use_first_nan) {
37
rule = float_3nan_prop_abc;
38
--
39
2.34.1
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the Float3NaNPropRule explicitly for MIPS, and remove the
2
ifdef from pickNaNMulAdd().
2
3
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
6
Message-id: 20241202131347.498124-23-peter.maydell@linaro.org
7
---
8
target/mips/fpu_helper.h | 4 ++++
9
target/mips/msa.c | 3 +++
10
fpu/softfloat-specialize.c.inc | 8 +-------
11
3 files changed, 8 insertions(+), 7 deletions(-)
6
12
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-11-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 11 +++++------
14
1 file changed, 5 insertions(+), 6 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
15
--- a/target/mips/fpu_helper.h
19
+++ b/target/arm/kvm.c
16
+++ b/target/mips/fpu_helper.h
20
@@ -XXX,XX +XXX,XX @@ static ARMHostCPUFeatures arm_host_cpu_features;
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
21
22
/**
23
* kvm_arm_vcpu_init:
24
- * @cs: CPUState
25
+ * @cpu: ARMCPU
26
*
27
* Initialize (or reinitialize) the VCPU by invoking the
28
* KVM_ARM_VCPU_INIT ioctl with the CPU type and feature
29
@@ -XXX,XX +XXX,XX @@ static ARMHostCPUFeatures arm_host_cpu_features;
30
*
31
* Returns: 0 if success else < 0 error code
32
*/
33
-static int kvm_arm_vcpu_init(CPUState *cs)
34
+static int kvm_arm_vcpu_init(ARMCPU *cpu)
35
{
18
{
36
- ARMCPU *cpu = ARM_CPU(cs);
19
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
37
struct kvm_vcpu_init init;
20
FloatInfZeroNaNRule izn_rule;
38
21
+ Float3NaNPropRule nan3_rule;
39
init.target = cpu->kvm_target;
22
40
memcpy(init.features, cpu->kvm_init_features, sizeof(init.features));
23
/*
41
24
* With nan2008, SNaNs are silenced in the usual way.
42
- return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
43
+ return kvm_vcpu_ioctl(CPU(cpu), KVM_ARM_VCPU_INIT, &init);
26
*/
27
izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
28
set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
29
+ nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
30
+ set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
31
+
44
}
32
}
45
33
46
/**
34
static inline void restore_fp_status(CPUMIPSState *env)
47
@@ -XXX,XX +XXX,XX @@ void kvm_arm_reset_vcpu(ARMCPU *cpu)
35
diff --git a/target/mips/msa.c b/target/mips/msa.c
48
/* Re-init VCPU so that all registers are set to
36
index XXXXXXX..XXXXXXX 100644
49
* their respective reset values.
37
--- a/target/mips/msa.c
50
*/
38
+++ b/target/mips/msa.c
51
- ret = kvm_arm_vcpu_init(CPU(cpu));
39
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
52
+ ret = kvm_arm_vcpu_init(cpu);
40
set_float_2nan_prop_rule(float_2nan_prop_s_ab,
53
if (ret < 0) {
41
&env->active_tc.msa_fp_status);
54
fprintf(stderr, "kvm_arm_vcpu_init failed: %s\n", strerror(-ret));
42
55
abort();
43
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab,
56
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
44
+ &env->active_tc.msa_fp_status);
45
+
46
/* clear float_status exception flags */
47
set_float_exception_flags(0, &env->active_tc.msa_fp_status);
48
49
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
50
index XXXXXXX..XXXXXXX 100644
51
--- a/fpu/softfloat-specialize.c.inc
52
+++ b/fpu/softfloat-specialize.c.inc
53
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
57
}
54
}
58
55
59
/* Do KVM_ARM_VCPU_INIT ioctl */
56
if (rule == float_3nan_prop_none) {
60
- ret = kvm_arm_vcpu_init(cs);
57
-#if defined(TARGET_MIPS)
61
+ ret = kvm_arm_vcpu_init(cpu);
58
- if (snan_bit_is_one(status)) {
62
if (ret) {
59
- rule = float_3nan_prop_s_abc;
63
return ret;
60
- } else {
64
}
61
- rule = float_3nan_prop_s_cab;
62
- }
63
-#elif defined(TARGET_XTENSA)
64
+#if defined(TARGET_XTENSA)
65
if (status->use_first_nan) {
66
rule = float_3nan_prop_abc;
67
} else {
65
--
68
--
66
2.34.1
69
2.34.1
67
68
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the Float3NaNPropRule explicitly for xtensa, and remove the
2
ifdef from pickNaNMulAdd().
2
3
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
6
Message-id: 20241202131347.498124-24-peter.maydell@linaro.org
7
---
8
target/xtensa/fpu_helper.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 8 --------
10
2 files changed, 2 insertions(+), 8 deletions(-)
6
11
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-7-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 12 ++++++------
14
1 file changed, 6 insertions(+), 6 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
14
--- a/target/xtensa/fpu_helper.c
19
+++ b/target/arm/kvm.c
15
+++ b/target/xtensa/fpu_helper.c
20
@@ -XXX,XX +XXX,XX @@ void kvm_arch_remove_all_hw_breakpoints(void)
16
@@ -XXX,XX +XXX,XX @@ void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
17
set_use_first_nan(use_first, &env->fp_status);
18
set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
19
&env->fp_status);
20
+ set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
21
+ &env->fp_status);
22
}
23
24
void HELPER(wur_fpu2k_fcr)(CPUXtensaState *env, uint32_t v)
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
21
}
30
}
22
}
31
23
32
if (rule == float_3nan_prop_none) {
24
-static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
33
-#if defined(TARGET_XTENSA)
25
+static bool kvm_arm_set_device_attr(ARMCPU *cpu, struct kvm_device_attr *attr,
34
- if (status->use_first_nan) {
26
const char *name)
35
- rule = float_3nan_prop_abc;
27
{
36
- } else {
28
int err;
37
- rule = float_3nan_prop_cba;
29
38
- }
30
- err = kvm_vcpu_ioctl(cs, KVM_HAS_DEVICE_ATTR, attr);
39
-#else
31
+ err = kvm_vcpu_ioctl(CPU(cpu), KVM_HAS_DEVICE_ATTR, attr);
40
rule = float_3nan_prop_abc;
32
if (err != 0) {
41
-#endif
33
error_report("%s: KVM_HAS_DEVICE_ATTR: %s", name, strerror(-err));
34
return false;
35
}
42
}
36
43
37
- err = kvm_vcpu_ioctl(cs, KVM_SET_DEVICE_ATTR, attr);
44
assert(rule != float_3nan_prop_none);
38
+ err = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_DEVICE_ATTR, attr);
39
if (err != 0) {
40
error_report("%s: KVM_SET_DEVICE_ATTR: %s", name, strerror(-err));
41
return false;
42
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_init(CPUState *cs)
43
if (!ARM_CPU(cs)->has_pmu) {
44
return;
45
}
46
- if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
47
+ if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PMU")) {
48
error_report("failed to init PMU");
49
abort();
50
}
51
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
52
if (!ARM_CPU(cs)->has_pmu) {
53
return;
54
}
55
- if (!kvm_arm_set_device_attr(cs, &attr, "PMU")) {
56
+ if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PMU")) {
57
error_report("failed to set irq for PMU");
58
abort();
59
}
60
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
61
if (ARM_CPU(cs)->kvm_steal_time == ON_OFF_AUTO_OFF) {
62
return;
63
}
64
- if (!kvm_arm_set_device_attr(cs, &attr, "PVTIME IPA")) {
65
+ if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PVTIME IPA")) {
66
error_report("failed to init PVTIME IPA");
67
abort();
68
}
69
--
45
--
70
2.34.1
46
2.34.1
71
72
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the Float3NaNPropRule explicitly for i386. We had no
2
i386-specific behaviour in the old ifdef ladder, so we were using the
3
default "prefer a then b then c" fallback; this is actually the
4
correct per-the-spec handling for i386.
2
5
3
kvm_arm_its_reset_hold() calls warn_report(), itself declared
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
in "qemu/error-report.h".
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Gavin Shan <gshan@redhat.com>
8
Message-id: 20241202131347.498124-25-peter.maydell@linaro.org
9
Message-id: 20231123183518.64569-2-philmd@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
hw/intc/arm_gicv3_its_kvm.c | 1 +
10
target/i386/tcg/fpu_helper.c | 1 +
13
1 file changed, 1 insertion(+)
11
1 file changed, 1 insertion(+)
14
12
15
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
13
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gicv3_its_kvm.c
15
--- a/target/i386/tcg/fpu_helper.c
18
+++ b/hw/intc/arm_gicv3_its_kvm.c
16
+++ b/target/i386/tcg/fpu_helper.c
19
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
20
#include "qemu/osdep.h"
18
* there are multiple input NaNs they are selected in the order a, b, c.
21
#include "qapi/error.h"
19
*/
22
#include "qemu/module.h"
20
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
23
+#include "qemu/error-report.h"
21
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
24
#include "hw/intc/arm_gicv3_its_common.h"
22
}
25
#include "hw/qdev-properties.h"
23
26
#include "sysemu/runstate.h"
24
static inline uint8_t save_exception_flags(CPUX86State *env)
27
--
25
--
28
2.34.1
26
2.34.1
29
30
diff view generated by jsdifflib
New patch
1
Set the Float3NaNPropRule explicitly for HPPA, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
HPPA is the only target that was using the default branch of the
5
ifdef ladder (other targets either do not use muladd or set
6
default_nan_mode), so we can remove the ifdef fallback entirely now
7
(allowing the "rule not set" case to fall into the default of the
8
switch statement and assert).
9
10
We add a TODO note that the HPPA rule is probably wrong; this is
11
not a behavioural change for this refactoring.
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20241202131347.498124-26-peter.maydell@linaro.org
16
---
17
target/hppa/fpu_helper.c | 8 ++++++++
18
fpu/softfloat-specialize.c.inc | 4 ----
19
2 files changed, 8 insertions(+), 4 deletions(-)
20
21
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/hppa/fpu_helper.c
24
+++ b/target/hppa/fpu_helper.c
25
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
26
* HPPA does not implement a CPU reset method at all...
27
*/
28
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
29
+ /*
30
+ * TODO: The HPPA architecture reference only documents its NaN
31
+ * propagation rule for 2-operand operations. Testing on real hardware
32
+ * might be necessary to confirm whether this order for muladd is correct.
33
+ * Not preferring the SNaN is almost certainly incorrect as it diverges
34
+ * from the documented rules for 2-operand operations.
35
+ */
36
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
37
/* For inf * 0 + NaN, return the input NaN */
38
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
39
}
40
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
41
index XXXXXXX..XXXXXXX 100644
42
--- a/fpu/softfloat-specialize.c.inc
43
+++ b/fpu/softfloat-specialize.c.inc
44
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
45
}
46
}
47
48
- if (rule == float_3nan_prop_none) {
49
- rule = float_3nan_prop_abc;
50
- }
51
-
52
assert(rule != float_3nan_prop_none);
53
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
54
/* We have at least one SNaN input and should prefer it */
55
--
56
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The use_first_nan field in float_status was an xtensa-specific way to
2
select at runtime from two different NaN propagation rules. Now that
3
xtensa is using the target-agnostic NaN propagation rule selection
4
that we've just added, we can remove use_first_nan, because there is
5
no longer any code that reads it.
2
6
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
[PMM: merged two duplicate comments, as suggested by Gavin]
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20241202131347.498124-27-peter.maydell@linaro.org
9
---
10
---
10
target/arm/kvm_arm.h | 10 ----------
11
include/fpu/softfloat-helpers.h | 5 -----
11
target/arm/kvm.c | 19 +++++++++++++++++++
12
include/fpu/softfloat-types.h | 1 -
12
target/arm/kvm64.c | 15 ---------------
13
target/xtensa/fpu_helper.c | 1 -
13
3 files changed, 19 insertions(+), 25 deletions(-)
14
3 files changed, 7 deletions(-)
14
15
15
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
16
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/kvm_arm.h
18
--- a/include/fpu/softfloat-helpers.h
18
+++ b/target/arm/kvm_arm.h
19
+++ b/include/fpu/softfloat-helpers.h
19
@@ -XXX,XX +XXX,XX @@ void kvm_arm_register_device(MemoryRegion *mr, uint64_t devid, uint64_t group,
20
@@ -XXX,XX +XXX,XX @@ static inline void set_snan_bit_is_one(bool val, float_status *status)
20
*/
21
status->snan_bit_is_one = val;
21
int kvm_arm_init_cpreg_list(ARMCPU *cpu);
22
23
-/**
24
- * kvm_arm_reg_syncs_via_cpreg_list:
25
- * @regidx: KVM register index
26
- *
27
- * Return true if this KVM register should be synchronized via the
28
- * cpreg list of arbitrary system registers, false if it is synchronized
29
- * by hand using code in kvm_arch_get/put_registers().
30
- */
31
-bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx);
32
-
33
/**
34
* write_list_to_kvmstate:
35
* @cpu: ARMCPU
36
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/kvm.c
39
+++ b/target/arm/kvm.c
40
@@ -XXX,XX +XXX,XX @@ static uint64_t *kvm_arm_get_cpreg_ptr(ARMCPU *cpu, uint64_t regidx)
41
return &cpu->cpreg_values[res - cpu->cpreg_indexes];
42
}
22
}
43
23
44
+/**
24
-static inline void set_use_first_nan(bool val, float_status *status)
45
+ * kvm_arm_reg_syncs_via_cpreg_list:
46
+ * @regidx: KVM register index
47
+ *
48
+ * Return true if this KVM register should be synchronized via the
49
+ * cpreg list of arbitrary system registers, false if it is synchronized
50
+ * by hand using code in kvm_arch_get/put_registers().
51
+ */
52
+static bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
53
+{
54
+ switch (regidx & KVM_REG_ARM_COPROC_MASK) {
55
+ case KVM_REG_ARM_CORE:
56
+ case KVM_REG_ARM64_SVE:
57
+ return false;
58
+ default:
59
+ return true;
60
+ }
61
+}
62
+
63
/* Initialize the ARMCPU cpreg list according to the kernel's
64
* definition of what CPU registers it knows about (and throw away
65
* the previous TCG-created cpreg list).
66
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/kvm64.c
69
+++ b/target/arm/kvm64.c
70
@@ -XXX,XX +XXX,XX @@ int kvm_arch_destroy_vcpu(CPUState *cs)
71
return 0;
72
}
73
74
-bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
75
-{
25
-{
76
- /* Return true if the regidx is a register we should synchronize
26
- status->use_first_nan = val;
77
- * via the cpreg_tuples array (ie is not a core or sve reg that
78
- * we sync by hand in kvm_arch_get/put_registers())
79
- */
80
- switch (regidx & KVM_REG_ARM_COPROC_MASK) {
81
- case KVM_REG_ARM_CORE:
82
- case KVM_REG_ARM64_SVE:
83
- return false;
84
- default:
85
- return true;
86
- }
87
-}
27
-}
88
-
28
-
89
/* Callers must hold the iothread mutex lock */
29
static inline void set_no_signaling_nans(bool val, float_status *status)
90
static void kvm_inject_arm_sea(CPUState *c)
91
{
30
{
31
status->no_signaling_nans = val;
32
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/include/fpu/softfloat-types.h
35
+++ b/include/fpu/softfloat-types.h
36
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
37
* softfloat-specialize.inc.c)
38
*/
39
bool snan_bit_is_one;
40
- bool use_first_nan;
41
bool no_signaling_nans;
42
/* should overflowed results subtract re_bias to its exponent? */
43
bool rebias_overflow;
44
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/xtensa/fpu_helper.c
47
+++ b/target/xtensa/fpu_helper.c
48
@@ -XXX,XX +XXX,XX @@ static const struct {
49
50
void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
51
{
52
- set_use_first_nan(use_first, &env->fp_status);
53
set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
54
&env->fp_status);
55
set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
92
--
56
--
93
2.34.1
57
2.34.1
94
95
diff view generated by jsdifflib
New patch
1
Currently m68k_cpu_reset_hold() calls floatx80_default_nan(NULL)
2
to get the NaN bit pattern to reset the FPU registers. This
3
works because it happens that our implementation of
4
floatx80_default_nan() doesn't actually look at the float_status
5
pointer except for TARGET_MIPS. However, this isn't guaranteed,
6
and to be able to remove the ifdef in floatx80_default_nan()
7
we're going to need a real float_status here.
1
8
9
Rearrange m68k_cpu_reset_hold() so that we initialize env->fp_status
10
earlier, and thus can pass it to floatx80_default_nan().
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20241202131347.498124-28-peter.maydell@linaro.org
15
---
16
target/m68k/cpu.c | 12 +++++++-----
17
1 file changed, 7 insertions(+), 5 deletions(-)
18
19
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/m68k/cpu.c
22
+++ b/target/m68k/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
24
CPUState *cs = CPU(obj);
25
M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
26
CPUM68KState *env = cpu_env(cs);
27
- floatx80 nan = floatx80_default_nan(NULL);
28
+ floatx80 nan;
29
int i;
30
31
if (mcc->parent_phases.hold) {
32
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
33
#else
34
cpu_m68k_set_sr(env, SR_S | SR_I);
35
#endif
36
- for (i = 0; i < 8; i++) {
37
- env->fregs[i].d = nan;
38
- }
39
- cpu_m68k_set_fpcr(env, 0);
40
/*
41
* M68000 FAMILY PROGRAMMER'S REFERENCE MANUAL
42
* 3.4 FLOATING-POINT INSTRUCTION DETAILS
43
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
44
* preceding paragraph for nonsignaling NaNs.
45
*/
46
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
47
+
48
+ nan = floatx80_default_nan(&env->fp_status);
49
+ for (i = 0; i < 8; i++) {
50
+ env->fregs[i].d = nan;
51
+ }
52
+ cpu_m68k_set_fpcr(env, 0);
53
env->fpsr = 0;
54
55
/* TODO: We should set PC from the interrupt vector. */
56
--
57
2.34.1
diff view generated by jsdifflib
New patch
1
We create our 128-bit default NaN by calling parts64_default_nan()
2
and then adjusting the result. We can do the same trick for creating
3
the floatx80 default NaN, which lets us drop a target ifdef.
1
4
5
floatx80 is used only by:
6
i386
7
m68k
8
arm nwfpe old floating-point emulation support
9
(which is essentially dead, especially the parts involving floatx80)
10
PPC (only in the xsrqpxp instruction, which just rounds an input
11
value by converting to floatx80 and back, so will never generate
12
the default NaN)
13
14
The floatx80 default NaN as currently implemented is:
15
m68k: sign = 0, exp = 1...1, int = 1, frac = 1....1
16
i386: sign = 1, exp = 1...1, int = 1, frac = 10...0
17
18
These are the same as the parts64_default_nan for these architectures.
19
20
This is technically a possible behaviour change for arm linux-user
21
nwfpe emulation, because the default NaN will now have the
22
sign bit clear. But we were already generating a different floatx80
23
default NaN from the real kernel emulation we are supposedly
24
following, which appears to use an all-bits-1 value:
25
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L267
26
27
This won't affect the only "real" use of the nwfpe emulation, which
28
is ancient binaries that used it as part of the old floating point
29
calling convention; that only uses loads and stores of 32 and 64 bit
30
floats, not any of the floatx80 behaviour the original hardware had.
31
We also get the nwfpe float64 default NaN value wrong:
32
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L166
33
so if we ever cared about this obscure corner the right fix would be
34
to correct that so nwfpe used its own default-NaN setting rather
35
than the Arm VFP one.
36
37
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
38
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
39
Message-id: 20241202131347.498124-29-peter.maydell@linaro.org
40
---
41
fpu/softfloat-specialize.c.inc | 20 ++++++++++----------
42
1 file changed, 10 insertions(+), 10 deletions(-)
43
44
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
45
index XXXXXXX..XXXXXXX 100644
46
--- a/fpu/softfloat-specialize.c.inc
47
+++ b/fpu/softfloat-specialize.c.inc
48
@@ -XXX,XX +XXX,XX @@ static void parts128_silence_nan(FloatParts128 *p, float_status *status)
49
floatx80 floatx80_default_nan(float_status *status)
50
{
51
floatx80 r;
52
+ /*
53
+ * Extrapolate from the choices made by parts64_default_nan to fill
54
+ * in the floatx80 format. We assume that floatx80's explicit
55
+ * integer bit is always set (this is true for i386 and m68k,
56
+ * which are the only real users of this format).
57
+ */
58
+ FloatParts64 p64;
59
+ parts64_default_nan(&p64, status);
60
61
- /* None of the targets that have snan_bit_is_one use floatx80. */
62
- assert(!snan_bit_is_one(status));
63
-#if defined(TARGET_M68K)
64
- r.low = UINT64_C(0xFFFFFFFFFFFFFFFF);
65
- r.high = 0x7FFF;
66
-#else
67
- /* X86 */
68
- r.low = UINT64_C(0xC000000000000000);
69
- r.high = 0xFFFF;
70
-#endif
71
+ r.high = 0x7FFF | (p64.sign << 15);
72
+ r.low = (1ULL << DECOMPOSED_BINARY_POINT) | p64.frac;
73
return r;
74
}
75
76
--
77
2.34.1
diff view generated by jsdifflib
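
A quick standalone sanity check of the construction above (the decomposed sign/fraction values below are taken from the bit patterns quoted in the commit message, and DECOMPOSED_BINARY_POINT is assumed to be 63, so treat this as an illustration rather than a copy of the QEMU code):

#include <assert.h>
#include <stdint.h>

#define DECOMPOSED_BINARY_POINT 63

int main(void)
{
    struct {
        uint64_t sign, frac;   /* parts64 default NaN, decomposed form */
        uint16_t want_high;    /* expected floatx80 sign + exponent */
        uint64_t want_low;     /* expected floatx80 mantissa */
    } cases[2] = {
        /* m68k: sign 0, all-ones fraction -> 7FFF:FFFFFFFFFFFFFFFF */
        { 0, 0x7FFFFFFFFFFFFFFFull, 0x7FFF, 0xFFFFFFFFFFFFFFFFull },
        /* i386: sign 1, only the quiet bit -> FFFF:C000000000000000 */
        { 1, 0x4000000000000000ull, 0xFFFF, 0xC000000000000000ull },
    };

    for (int i = 0; i < 2; i++) {
        /* Same arithmetic as the new floatx80_default_nan() body */
        uint16_t high = 0x7FFF | (cases[i].sign << 15);
        uint64_t low = (1ULL << DECOMPOSED_BINARY_POINT) | cases[i].frac;

        assert(high == cases[i].want_high);
        assert(low == cases[i].want_low);
    }
    return 0;
}
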
New patch
1
In target/loongarch's helper_fclass_s() and helper_fclass_d() we pass
2
a zero-initialized float_status struct to float32_is_quiet_nan() and
3
float64_is_quiet_nan(), with the cryptic comment "for
4
snan_bit_is_one".
1
5
6
This pattern appears to have been copied from target/riscv, where it
7
is used because the functions there do not have ready access to the
8
CPU state struct. The comment presumably refers to the fact that the
9
main reason the is_quiet_nan() functions want the float_status is
10
because they want to know about the snan_bit_is_one config.
11
12
In the loongarch helpers, though, we have the CPU state struct
13
to hand. Use the usual env->fp_status here. This avoids having
14
to update the initializer of the local
15
float_status structs when the core softfloat code adds new
16
options for targets to configure their behaviour.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20241202131347.498124-30-peter.maydell@linaro.org
21
---
22
target/loongarch/tcg/fpu_helper.c | 6 ++----
23
1 file changed, 2 insertions(+), 4 deletions(-)
24
25
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/loongarch/tcg/fpu_helper.c
28
+++ b/target/loongarch/tcg/fpu_helper.c
29
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_s(CPULoongArchState *env, uint64_t fj)
30
} else if (float32_is_zero_or_denormal(f)) {
31
return sign ? 1 << 4 : 1 << 8;
32
} else if (float32_is_any_nan(f)) {
33
- float_status s = { }; /* for snan_bit_is_one */
34
- return float32_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
35
+ return float32_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
36
} else {
37
return sign ? 1 << 3 : 1 << 7;
38
}
39
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_d(CPULoongArchState *env, uint64_t fj)
40
} else if (float64_is_zero_or_denormal(f)) {
41
return sign ? 1 << 4 : 1 << 8;
42
} else if (float64_is_any_nan(f)) {
43
- float_status s = { }; /* for snan_bit_is_one */
44
- return float64_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
45
+ return float64_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
46
} else {
47
return sign ? 1 << 3 : 1 << 7;
48
}
49
--
50
2.34.1
diff view generated by jsdifflib
New patch
1
In the frem helper, we have a local float_status because we want to
2
execute the floatx80_div() with a custom rounding mode. Instead of
3
zero-initializing the local float_status and then having to set it up
4
with the m68k standard behaviour (including the NaN propagation rule
5
and copying the rounding precision from env->fp_status), initialize
6
it as a complete copy of env->fp_status. This will avoid our having
7
to add new code in this function for every new config knob we add
8
to fp_status.
1
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20241202131347.498124-31-peter.maydell@linaro.org
13
---
14
target/m68k/fpu_helper.c | 6 ++----
15
1 file changed, 2 insertions(+), 4 deletions(-)
16
17
diff --git a/target/m68k/fpu_helper.c b/target/m68k/fpu_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/m68k/fpu_helper.c
20
+++ b/target/m68k/fpu_helper.c
21
@@ -XXX,XX +XXX,XX @@ void HELPER(frem)(CPUM68KState *env, FPReg *res, FPReg *val0, FPReg *val1)
22
23
fp_rem = floatx80_rem(val1->d, val0->d, &env->fp_status);
24
if (!floatx80_is_any_nan(fp_rem)) {
25
- float_status fp_status = { };
26
+ /* Use local temporary fp_status to set different rounding mode */
27
+ float_status fp_status = env->fp_status;
28
uint32_t quotient;
29
int sign;
30
31
/* Calculate quotient directly using round to nearest mode */
32
- set_float_2nan_prop_rule(float_2nan_prop_ab, &fp_status);
33
set_float_rounding_mode(float_round_nearest_even, &fp_status);
34
- set_floatx80_rounding_precision(
35
- get_floatx80_rounding_precision(&env->fp_status), &fp_status);
36
fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);
37
38
sign = extractFloatx80Sign(fp_quot.d);
39
--
40
2.34.1
diff view generated by jsdifflib
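
The next few patches repeat this "copy env->fp_status, then override the one field you care about" pattern, so here is a toy standalone illustration of why the copy is preferable to zero-initialization (the struct and field names below are invented for the example and are not QEMU's float_status):

#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for float_status, for illustration only */
typedef struct {
    int rounding_mode;            /* the knob the caller wants to change */
    int nan_2_prop_rule;          /* knobs that must not be lost... */
    int nan_3_prop_rule;
    bool tininess_before_rounding;
} toy_status;

int main(void)
{
    toy_status env_status = {
        .rounding_mode = 3,
        .nan_2_prop_rule = 1,
        .nan_3_prop_rule = 5,
        .tininess_before_rounding = true,
    };

    /* A struct copy picks up every configured knob automatically... */
    toy_status scratch = env_status;
    /* ...and then only the rounding mode is overridden */
    scratch.rounding_mode = 0;

    assert(scratch.rounding_mode != env_status.rounding_mode);
    assert(scratch.nan_2_prop_rule == env_status.nan_2_prop_rule);
    assert(scratch.nan_3_prop_rule == env_status.nan_3_prop_rule);
    assert(scratch.tininess_before_rounding
           == env_status.tininess_before_rounding);
    return 0;
}

Any knob added to the struct later is carried along by the copy with no change at the call site, which is exactly the maintenance burden these patches remove.
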
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we do the conversion
2
from float64 to floatx80 using a scratch float_status, because we
3
don't want the conversion to affect the CPU's floating point exception
4
status. Currently we use a zero-initialized float_status. This will
5
get steadily more awkward as we add config knobs to float_status
6
that the target must initialize. Avoid having to add any of that
7
configuration here by instead initializing our local float_status
8
from the env->fp_status.
2
9
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
12
Message-id: 20241202131347.498124-32-peter.maydell@linaro.org
13
---
14
target/m68k/helper.c | 6 ++++--
15
1 file changed, 4 insertions(+), 2 deletions(-)
6
16
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
17
diff --git a/target/m68k/helper.c b/target/m68k/helper.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-17-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 6 +++---
14
1 file changed, 3 insertions(+), 3 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
19
--- a/target/m68k/helper.c
19
+++ b/target/arm/kvm.c
20
+++ b/target/m68k/helper.c
20
@@ -XXX,XX +XXX,XX @@ int kvm_arch_process_async_events(CPUState *cs)
21
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_get_reg(CPUState *cs, GByteArray *mem_buf, int n)
21
22
CPUM68KState *env = &cpu->env;
22
/**
23
23
* kvm_arm_hw_debug_active:
24
if (n < 8) {
24
- * @cs: CPU State
25
- float_status s = {};
25
+ * @cpu: ARMCPU
26
+ /* Use scratch float_status so any exceptions don't change CPU state */
26
*
27
+ float_status s = env->fp_status;
27
* Return: TRUE if any hardware breakpoints in use.
28
return gdb_get_reg64(mem_buf, floatx80_to_float64(env->fregs[n].d, &s));
28
*/
29
-static bool kvm_arm_hw_debug_active(CPUState *cs)
30
+static bool kvm_arm_hw_debug_active(ARMCPU *cpu)
31
{
32
return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
33
}
34
@@ -XXX,XX +XXX,XX @@ void kvm_arch_update_guest_debug(CPUState *cs, struct kvm_guest_debug *dbg)
35
if (kvm_sw_breakpoints_active(cs)) {
36
dbg->control |= KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;
37
}
29
}
38
- if (kvm_arm_hw_debug_active(cs)) {
30
switch (n) {
39
+ if (kvm_arm_hw_debug_active(ARM_CPU(cs))) {
31
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_set_reg(CPUState *cs, uint8_t *mem_buf, int n)
40
dbg->control |= KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW;
32
CPUM68KState *env = &cpu->env;
41
kvm_arm_copy_hw_debug_data(&dbg->arch);
33
34
if (n < 8) {
35
- float_status s = {};
36
+ /* Use scratch float_status so any exceptions don't change CPU state */
37
+ float_status s = env->fp_status;
38
env->fregs[n].d = float64_to_floatx80(ldq_be_p(mem_buf), &s);
39
return 8;
42
}
40
}
43
--
41
--
44
2.34.1
42
2.34.1
45
46
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
In the helper functions flcmps and flcmpd we use a scratch float_status
2
so that we don't change the CPU state if the comparison raises any
3
floating point exception flags. Instead of zero-initializing this
4
scratch float_status, initialize it as a copy of env->fp_status. This
5
avoids the need to explicitly initialize settings like the NaN
6
propagation rule or others we might add to softfloat in future.
2
7
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
8
To do this we need to pass the CPU env pointer in to the helper.
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
5
calling the generic vCPU API from "sysemu/kvm.h".
6
9
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
12
Message-id: 20241202131347.498124-33-peter.maydell@linaro.org
10
Message-id: 20231123183518.64569-9-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
target/arm/kvm_arm.h | 4 ++--
14
target/sparc/helper.h | 4 ++--
14
hw/arm/virt.c | 2 +-
15
target/sparc/fop_helper.c | 8 ++++----
15
target/arm/kvm.c | 6 +++---
16
target/sparc/translate.c | 4 ++--
16
3 files changed, 6 insertions(+), 6 deletions(-)
17
3 files changed, 8 insertions(+), 8 deletions(-)
17
18
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
diff --git a/target/sparc/helper.h b/target/sparc/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
21
--- a/target/sparc/helper.h
21
+++ b/target/arm/kvm_arm.h
22
+++ b/target/sparc/helper.h
22
@@ -XXX,XX +XXX,XX @@ int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa);
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(fcmpd, TCG_CALL_NO_WG, i32, env, f64, f64)
23
24
DEF_HELPER_FLAGS_3(fcmped, TCG_CALL_NO_WG, i32, env, f64, f64)
24
int kvm_arm_vgic_probe(void);
25
DEF_HELPER_FLAGS_3(fcmpq, TCG_CALL_NO_WG, i32, env, i128, i128)
25
26
DEF_HELPER_FLAGS_3(fcmpeq, TCG_CALL_NO_WG, i32, env, i128, i128)
26
+void kvm_arm_pmu_init(ARMCPU *cpu);
27
-DEF_HELPER_FLAGS_2(flcmps, TCG_CALL_NO_RWG_SE, i32, f32, f32)
27
void kvm_arm_pmu_set_irq(CPUState *cs, int irq);
28
-DEF_HELPER_FLAGS_2(flcmpd, TCG_CALL_NO_RWG_SE, i32, f64, f64)
28
-void kvm_arm_pmu_init(CPUState *cs);
29
+DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)
29
30
+DEF_HELPER_FLAGS_3(flcmpd, TCG_CALL_NO_RWG_SE, i32, env, f64, f64)
30
/**
31
DEF_HELPER_2(raise_exception, noreturn, env, int)
31
* kvm_arm_pvtime_init:
32
32
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
33
DEF_HELPER_FLAGS_3(faddd, TCG_CALL_NO_WG, f64, env, f64, f64)
34
diff --git a/target/sparc/fop_helper.c b/target/sparc/fop_helper.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/sparc/fop_helper.c
37
+++ b/target/sparc/fop_helper.c
38
@@ -XXX,XX +XXX,XX @@ uint32_t helper_fcmpeq(CPUSPARCState *env, Int128 src1, Int128 src2)
39
return finish_fcmp(env, r, GETPC());
40
}
41
42
-uint32_t helper_flcmps(float32 src1, float32 src2)
43
+uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2)
44
{
45
/*
46
* FLCMP never raises an exception nor modifies any FSR fields.
47
* Perform the comparison with a dummy fp environment.
48
*/
49
- float_status discard = { };
50
+ float_status discard = env->fp_status;
51
FloatRelation r;
52
53
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
54
@@ -XXX,XX +XXX,XX @@ uint32_t helper_flcmps(float32 src1, float32 src2)
33
g_assert_not_reached();
55
g_assert_not_reached();
34
}
56
}
35
57
36
-static inline void kvm_arm_pmu_init(CPUState *cs)
58
-uint32_t helper_flcmpd(float64 src1, float64 src2)
37
+static inline void kvm_arm_pmu_init(ARMCPU *cpu)
59
+uint32_t helper_flcmpd(CPUSPARCState *env, float64 src1, float64 src2)
38
{
60
{
39
g_assert_not_reached();
61
- float_status discard = { };
62
+ float_status discard = env->fp_status;
63
FloatRelation r;
64
65
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
66
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/sparc/translate.c
69
+++ b/target/sparc/translate.c
70
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPs(DisasContext *dc, arg_FLCMPs *a)
71
72
src1 = gen_load_fpr_F(dc, a->rs1);
73
src2 = gen_load_fpr_F(dc, a->rs2);
74
- gen_helper_flcmps(cpu_fcc[a->cc], src1, src2);
75
+ gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);
76
return advance_pc(dc);
40
}
77
}
41
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
78
42
index XXXXXXX..XXXXXXX 100644
79
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPd(DisasContext *dc, arg_FLCMPd *a)
43
--- a/hw/arm/virt.c
80
44
+++ b/hw/arm/virt.c
81
src1 = gen_load_fpr_D(dc, a->rs1);
45
@@ -XXX,XX +XXX,XX @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
82
src2 = gen_load_fpr_D(dc, a->rs2);
46
if (kvm_irqchip_in_kernel()) {
83
- gen_helper_flcmpd(cpu_fcc[a->cc], src1, src2);
47
kvm_arm_pmu_set_irq(cpu, VIRTUAL_PMU_IRQ);
84
+ gen_helper_flcmpd(cpu_fcc[a->cc], tcg_env, src1, src2);
48
}
85
return advance_pc(dc);
49
- kvm_arm_pmu_init(cpu);
50
+ kvm_arm_pmu_init(ARM_CPU(cpu));
51
}
52
if (steal_time) {
53
kvm_arm_pvtime_init(ARM_CPU(cpu), pvtime_reg_base
54
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/kvm.c
57
+++ b/target/arm/kvm.c
58
@@ -XXX,XX +XXX,XX @@ static bool kvm_arm_set_device_attr(ARMCPU *cpu, struct kvm_device_attr *attr,
59
return true;
60
}
86
}
61
87
62
-void kvm_arm_pmu_init(CPUState *cs)
63
+void kvm_arm_pmu_init(ARMCPU *cpu)
64
{
65
struct kvm_device_attr attr = {
66
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
67
.attr = KVM_ARM_VCPU_PMU_V3_INIT,
68
};
69
70
- if (!ARM_CPU(cs)->has_pmu) {
71
+ if (!cpu->has_pmu) {
72
return;
73
}
74
- if (!kvm_arm_set_device_attr(ARM_CPU(cs), &attr, "PMU")) {
75
+ if (!kvm_arm_set_device_attr(cpu, &attr, "PMU")) {
76
error_report("failed to init PMU");
77
abort();
78
}
79
--
88
--
80
2.34.1
89
2.34.1
81
82
diff view generated by jsdifflib
New patch
In the helper_compute_fprf functions, we pass a dummy float_status
in to the is_signaling_nan() function. This is unnecessary, because
we have convenient access to the CPU env pointer here and that
is already set up with the correct values for the snan_bit_is_one
and no_signaling_nans config settings. is_signaling_nan() doesn't
ever update the fp_status with any exception flags, so there is
no reason not to use env->fp_status here.

Use env->fp_status instead of the dummy fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-34-peter.maydell@linaro.org
---
 target/ppc/fpu_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void helper_compute_fprf_##tp(CPUPPCState *env, tp arg) \
     } else if (tp##_is_infinity(arg)) { \
         fprf = neg ? 0x09 << FPSCR_FPRF : 0x05 << FPSCR_FPRF; \
     } else { \
-        float_status dummy = { };  /* snan_bit_is_one = 0 */ \
-        if (tp##_is_signaling_nan(arg, &dummy)) { \
+        if (tp##_is_signaling_nan(arg, &env->fp_status)) { \
             fprf = 0x00 << FPSCR_FPRF; \
         } else { \
             fprf = 0x11 << FPSCR_FPRF; \
--
2.34.1
diff view generated by jsdifflib
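The patch above leans on the fact that is_signaling_nan() is a pure predicate: with PPC's configuration (snan_bit_is_one clear, as the deleted "/* snan_bit_is_one = 0 */" comment notes), it only inspects the operand's bits and never writes to the status it is given, so passing env->fp_status is safe. A standalone sketch of that classification for float64 (illustrative helpers, not QEMU's implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Classify a float64 bit pattern the way PPC (and most IEEE targets) do:
 * NaN  = exponent all 1s and a non-zero fraction;
 * sNaN = a NaN whose fraction MSB (the quiet bit) is 0.
 * This is a read-only test, which is why the patch above can pass
 * env->fp_status instead of a scratch status.
 */
static bool f64_is_nan(uint64_t bits)
{
    return ((bits >> 52) & 0x7ff) == 0x7ff && (bits & 0x000fffffffffffffULL);
}

static bool f64_is_signaling_nan(uint64_t bits)
{
    return f64_is_nan(bits) && !(bits & 0x0008000000000000ULL);
}

int main(void)
{
    printf("%d\n", f64_is_signaling_nan(0x7ff4000000000000ULL)); /* 1: sNaN */
    printf("%d\n", f64_is_signaling_nan(0x7ff8000000000000ULL)); /* 0: qNaN */
    return 0;
}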
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Use a switch instead of a linear search through data.
3
Now that float_status has a bunch of fp parameters,
4
it is easier to copy an existing structure than create
5
one from scratch. Begin by copying the structure that
6
corresponds to the FPSR and make only the adjustments
7
required for BFloat16 semantics.
4
8
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20241203203949.483774-2-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
14
---
10
target/arm/kvm64.c | 32 +++++++++-----------------------
15
target/arm/tcg/vec_helper.c | 20 +++++++-------------
11
1 file changed, 9 insertions(+), 23 deletions(-)
16
1 file changed, 7 insertions(+), 13 deletions(-)
12
17
13
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
18
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm64.c
20
--- a/target/arm/tcg/vec_helper.c
16
+++ b/target/arm/kvm64.c
21
+++ b/target/arm/tcg/vec_helper.c
17
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
22
@@ -XXX,XX +XXX,XX @@ bool is_ebf(CPUARMState *env, float_status *statusp, float_status *oddstatusp)
18
}
23
* no effect on AArch32 instructions.
19
}
24
*/
20
25
bool ebf = is_a64(env) && env->vfp.fpcr & FPCR_EBF;
21
-typedef struct CPRegStateLevel {
26
- *statusp = (float_status){
22
- uint64_t regidx;
27
- .tininess_before_rounding = float_tininess_before_rounding,
23
- int level;
28
- .float_rounding_mode = float_round_to_odd_inf,
24
-} CPRegStateLevel;
29
- .flush_to_zero = true,
30
- .flush_inputs_to_zero = true,
31
- .default_nan_mode = true,
32
- };
33
+
34
+ *statusp = env->vfp.fp_status;
35
+ set_default_nan_mode(true, statusp);
36
37
if (ebf) {
38
- float_status *fpst = &env->vfp.fp_status;
39
- set_flush_to_zero(get_flush_to_zero(fpst), statusp);
40
- set_flush_inputs_to_zero(get_flush_inputs_to_zero(fpst), statusp);
41
- set_float_rounding_mode(get_float_rounding_mode(fpst), statusp);
25
-
42
-
26
-/* All system registers not listed in the following table are assumed to be
43
/* EBF=1 needs to do a step with round-to-odd semantics */
27
- * of the level KVM_PUT_RUNTIME_STATE. If a register should be written less
44
*oddstatusp = *statusp;
28
- * often, you must add it to this table with a state of either
45
set_float_rounding_mode(float_round_to_odd, oddstatusp);
29
- * KVM_PUT_RESET_STATE or KVM_PUT_FULL_STATE.
46
+ } else {
30
- */
47
+ set_flush_to_zero(true, statusp);
31
-static const CPRegStateLevel non_runtime_cpregs[] = {
48
+ set_flush_inputs_to_zero(true, statusp);
32
- { KVM_REG_ARM_TIMER_CNT, KVM_PUT_FULL_STATE },
49
+ set_float_rounding_mode(float_round_to_odd_inf, statusp);
33
- { KVM_REG_ARM_PTIMER_CNT, KVM_PUT_FULL_STATE },
34
-};
35
-
36
int kvm_arm_cpreg_level(uint64_t regidx)
37
{
38
- int i;
39
-
40
- for (i = 0; i < ARRAY_SIZE(non_runtime_cpregs); i++) {
41
- const CPRegStateLevel *l = &non_runtime_cpregs[i];
42
- if (l->regidx == regidx) {
43
- return l->level;
44
- }
45
+ /*
46
+ * All system registers are assumed to be level KVM_PUT_RUNTIME_STATE.
47
+ * If a register should be written less often, you must add it here
48
+ * with a state of either KVM_PUT_RESET_STATE or KVM_PUT_FULL_STATE.
49
+ */
50
+ switch (regidx) {
51
+ case KVM_REG_ARM_TIMER_CNT:
52
+ case KVM_REG_ARM_PTIMER_CNT:
53
+ return KVM_PUT_FULL_STATE;
54
}
50
}
55
-
51
-
56
return KVM_PUT_RUNTIME_STATE;
52
return ebf;
57
}
53
}
58
54
59
--
55
--
60
2.34.1
56
2.34.1
61
57
62
58
diff view generated by jsdifflib
Currently we hardcode the default NaN value in parts64_default_nan()
using a compile-time ifdef ladder. This is awkward for two cases:
 * for single-QEMU-binary we can't hard-code target-specifics like this
 * for Arm FEAT_AFP the default NaN value depends on FPCR.AH
   (specifically the sign bit is different)

Add a field to float_status to specify the default NaN value; fall
back to the old ifdef behaviour if these are not set.

The default NaN value is specified by setting a uint8_t to a
pattern corresponding to the sign and upper fraction parts of
the NaN; the lower bits of the fraction are set from bit 0 of
the pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-35-peter.maydell@linaro.org
---
 include/fpu/softfloat-helpers.h | 11 +++++++
 include/fpu/softfloat-types.h   | 10 ++++++
 fpu/softfloat-specialize.c.inc  | 55 ++++++++++++++++++++-------------
 3 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
     status->float_infzeronan_rule = rule;
 }
 
+static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
+                                                 float_status *status)
+{
+    status->default_nan_pattern = dnan_pattern;
+}
+
 static inline void set_flush_to_zero(bool val, float_status *status)
 {
     status->flush_to_zero = val;
@@ -XXX,XX +XXX,XX @@ static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status
     return status->float_infzeronan_rule;
 }
 
+static inline uint8_t get_float_default_nan_pattern(float_status *status)
+{
+    return status->default_nan_pattern;
+}
+
 static inline bool get_flush_to_zero(float_status *status)
 {
     return status->flush_to_zero;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
     /* should denormalised inputs go to zero and set the input_denormal flag? */
     bool flush_inputs_to_zero;
     bool default_nan_mode;
+    /*
+     * The pattern to use for the default NaN. Here the high bit specifies
+     * the default NaN's sign bit, and bits 6..0 specify the high bits of the
+     * fractional part. The low bits of the fractional part are copies of bit 0.
+     * The exponent of the default NaN is (as for any NaN) always all 1s.
+     * Note that a value of 0 here is not a valid NaN. The target must set
+     * this to the correct non-zero value, or we will assert when trying to
+     * create a default NaN.
+     */
+    uint8_t default_nan_pattern;
     /*
      * The flags below are not used on all specializations and may
      * constant fold away (see snan_bit_is_one()/no_signalling_nans() in
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 {
     bool sign = 0;
     uint64_t frac;
+    uint8_t dnan_pattern = status->default_nan_pattern;
 
+    if (dnan_pattern == 0) {
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
-    /* !snan_bit_is_one, set all bits */
-    frac = (1ULL << DECOMPOSED_BINARY_POINT) - 1;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
+        /* Sign bit clear, all frac bits set */
+        dnan_pattern = 0b01111111;
+#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
     || defined(TARGET_MICROBLAZE)
-    /* !snan_bit_is_one, set sign and msb */
-    frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
-    sign = 1;
+        /* Sign bit set, most significant frac bit set */
+        dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
-    /* snan_bit_is_one, set msb-1. */
-    frac = 1ULL << (DECOMPOSED_BINARY_POINT - 2);
+        /* Sign bit clear, msb-1 frac bit set */
+        dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
-    sign = 1;
-    frac = ~0ULL;
+        /* Sign bit set, all frac bits set. */
+        dnan_pattern = 0b11111111;
 #else
-    /*
-     * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
-     * S390, SH4, TriCore, and Xtensa.  Our other supported targets
-     * do not have floating-point.
-     */
-    if (snan_bit_is_one(status)) {
-        /* set all bits other than msb */
-        frac = (1ULL << (DECOMPOSED_BINARY_POINT - 1)) - 1;
-    } else {
-        /* set msb */
-        frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
-    }
+        /*
+         * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
+         * S390, SH4, TriCore, and Xtensa.  Our other supported targets
+         * do not have floating-point.
+         */
+        if (snan_bit_is_one(status)) {
+            /* sign bit clear, set all frac bits other than msb */
+            dnan_pattern = 0b00111111;
+        } else {
+            /* sign bit clear, set frac msb */
+            dnan_pattern = 0b01000000;
+        }
 #endif
+    }
+    assert(dnan_pattern != 0);
+
+    sign = dnan_pattern >> 7;
+    /*
+     * Place default_nan_pattern [6:0] into bits [62:56],
+     * and replecate bit [0] down into [55:0]
+     */
+    frac = deposit64(0, DECOMPOSED_BINARY_POINT - 7, 7, dnan_pattern);
+    frac = deposit64(frac, 0, DECOMPOSED_BINARY_POINT - 7, -(dnan_pattern & 1));
 
     *p = (FloatParts64) {
         .cls = float_class_qnan,
--
2.34.1

[For comparison, the corresponding patch from the previous target-arm pull request:]

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm_arm.h |  8 --------
 target/arm/kvm.c     | 11 +++++++++++
 target/arm/kvm64.c   |  5 -----
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
  */
 bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit);
 
-/**
- * kvm_arm_hw_debug_active:
- * @cs: CPU State
- *
- * Return: TRUE if any hardware breakpoints in use.
- */
-bool kvm_arm_hw_debug_active(CPUState *cs);
-
 #endif
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_process_async_events(CPUState *cs)
     return 0;
 }
 
+/**
+ * kvm_arm_hw_debug_active:
+ * @cs: CPU State
+ *
+ * Return: TRUE if any hardware breakpoints in use.
+ */
+static bool kvm_arm_hw_debug_active(CPUState *cs)
+{
+    return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
+}
+
 /**
  * kvm_arm_copy_hw_debug_data:
  * @ptr: kvm_guest_debug_arch structure
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ void kvm_arch_remove_all_hw_breakpoints(void)
     }
 }
 
-bool kvm_arm_hw_debug_active(CPUState *cs)
-{
-    return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
-}
-
 static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
                                     const char *name)
 {
--
2.34.1
diff view generated by jsdifflib
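To make the encoding described in the patch above concrete: bit 7 of the pattern is the sign, bits 6..0 become the top fraction bits, the rest of the fraction is copies of bit 0, and the exponent is all 1s. A standalone sketch (not QEMU code; QEMU does the equivalent on FloatParts64 with deposit64()) that expands a pattern into IEEE single- and double-precision bit patterns:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Expand an 8-bit default-NaN pattern: bit 7 is the sign, bits 6..0 are
 * the top fraction bits, and the remaining fraction bits are copies of
 * bit 0.  (Standalone illustration only.)
 */
static uint32_t dnan_to_f32(uint8_t p)
{
    uint32_t low = (p & 1) ? 0xffffu : 0;                /* frac bits 15..0 */
    return ((uint32_t)(p >> 7) << 31) | (0xffu << 23)    /* sign, exponent */
           | ((uint32_t)(p & 0x7f) << 16) | low;         /* frac bits 22..16 */
}

static uint64_t dnan_to_f64(uint8_t p)
{
    uint64_t low = (p & 1) ? ((1ULL << 45) - 1) : 0;     /* frac bits 44..0 */
    return ((uint64_t)(p >> 7) << 63) | (0x7ffULL << 52) /* sign, exponent */
           | ((uint64_t)(p & 0x7f) << 45) | low;         /* frac bits 51..45 */
}

int main(void)
{
    /* 0b01000000: sign clear, frac msb set (Arm, PPC, RISC-V, ...) */
    printf("%08" PRIx32 " %016" PRIx64 "\n", dnan_to_f32(0x40), dnan_to_f64(0x40));
    /* 0b11000000: sign set, frac msb set (x86, microblaze) */
    printf("%08" PRIx32 " %016" PRIx64 "\n", dnan_to_f32(0xc0), dnan_to_f64(0xc0));
    return 0;
}

With these assumptions, 0b01000000 expands to 0x7fc00000 / 0x7ff8000000000000 and 0b11000000 to 0xffc00000 / 0xfff8000000000000.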
New patch
Set the default NaN pattern explicitly for the tests/fp code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-36-peter.maydell@linaro.org
---
 tests/fp/fp-bench.c     | 1 +
 tests/fp/fp-test-log2.c | 1 +
 tests/fp/fp-test.c      | 1 +
 3 files changed, 3 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
+    set_float_default_nan_pattern(0b01000000, &soft_status);
 
     f = bench_funcs[operation][precision];
     g_assert(f);
diff --git a/tests/fp/fp-test-log2.c b/tests/fp/fp-test-log2.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test-log2.c
+++ b/tests/fp/fp-test-log2.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
     int i;
 
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+    set_float_default_nan_pattern(0b01000000, &qsf);
     set_float_rounding_mode(float_round_nearest_even, &qsf);
 
     test.d = 0.0;
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
      */
     set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
     set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
+    set_float_default_nan_pattern(0b01000000, &qsf);
     set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
 
     genCases_setLevel(test_level);
--
2.34.1
diff view generated by jsdifflib
New patch
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-37-peter.maydell@linaro.org
---
 target/microblaze/cpu.c        | 2 ++
 fpu/softfloat-specialize.c.inc | 3 +--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj, ResetType type)
      * this architecture.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN: sign bit set, most significant frac bit set */
+    set_float_default_nan_pattern(0b11000000, &env->fp_status);
 
 #if defined(CONFIG_USER_ONLY)
     /* start in user mode with interrupts enabled. */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
-    || defined(TARGET_MICROBLAZE)
+#elif defined(TARGET_I386) || defined(TARGET_X86_64)
         /* Sign bit set, most significant frac bit set */
         dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
--
2.34.1
diff view generated by jsdifflib
New patch
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-38-peter.maydell@linaro.org
---
 target/i386/tcg/fpu_helper.c   | 4 ++++
 fpu/softfloat-specialize.c.inc | 3 ---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
      */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
     set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
+    /* Default NaN: sign bit set, most significant frac bit set */
+    set_float_default_nan_pattern(0b11000000, &env->fp_status);
+    set_float_default_nan_pattern(0b11000000, &env->mmx_status);
+    set_float_default_nan_pattern(0b11000000, &env->sse_status);
 }
 
 static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64)
-        /* Sign bit set, most significant frac bit set */
-        dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
         /* Sign bit clear, msb-1 frac bit set */
         dnan_pattern = 0b00100000;
--
2.34.1
diff view generated by jsdifflib
Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-39-peter.maydell@linaro.org
---
 target/hppa/fpu_helper.c       | 2 ++
 fpu/softfloat-specialize.c.inc | 3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
     set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
     /* For inf * 0 + NaN, return the input NaN */
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+    /* Default NaN: sign bit clear, msb-1 frac bit set */
+    set_float_default_nan_pattern(0b00100000, &env->fp_status);
 }
 
 void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
-#elif defined(TARGET_HPPA)
-        /* Sign bit clear, msb-1 frac bit set */
-        dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
         /* Sign bit set, all frac bits set. */
         dnan_pattern = 0b11111111;
--
2.34.1

[For comparison, the corresponding patch from the previous target-arm pull request:]

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Hardware accelerators handle that in *hardware*.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
 static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
                            uint64_t value)
 {
+#ifdef CONFIG_TCG
     ARMCPU *cpu = env_archcpu(env);
     /* CTR_EL0 System register -> DminLine, bits [19:16] */
     uint64_t dline_size = 4 << ((cpu->ctr >> 16) & 0xF);
@@ -XXX,XX +XXX,XX @@ static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
     }
 #endif /*CONFIG_USER_ONLY*/
 }
+#else
+    /* Handled by hardware accelerator. */
+    g_assert_not_reached();
+#endif /* CONFIG_TCG */
 }
 
 static const ARMCPRegInfo dcpop_reg[] = {
--
2.34.1
diff view generated by jsdifflib
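A side note on the context lines in the dccvap_writefn hunk above: CTR_EL0.DminLine (bits [19:16]) encodes log2 of the smallest data cache line in 4-byte words, so the line size in bytes is 4 << DminLine. A tiny illustrative helper (not QEMU code):

#include <stdint.h>
#include <stdio.h>

/*
 * Decode the data cache line size from CTR_EL0.DminLine, as seen in the
 * hunk above.  (Illustration only.)
 */
static uint64_t dcache_line_bytes(uint64_t ctr_el0)
{
    return 4u << ((ctr_el0 >> 16) & 0xf);
}

int main(void)
{
    /* DminLine == 4 means 16 words, i.e. a 64-byte line */
    printf("%llu\n", (unsigned long long)dcache_line_bytes(4ull << 16));
    return 0;
}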
New patch
Set the default NaN pattern explicitly for the alpha target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-40-peter.maydell@linaro.org
---
 target/alpha/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_initfn(Object *obj)
      * operand in Fa. That is float_2nan_prop_ba.
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+    /* Default NaN: sign bit clear, msb frac bit set */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
 #if defined(CONFIG_USER_ONLY)
     env->flags = ENV_FLAG_PS_USER | ENV_FLAG_FEN;
     cpu_alpha_store_fpcr(env, (uint64_t)(FPCR_INVD | FPCR_DZED | FPCR_OVFD
--
2.34.1
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the default NaN pattern explicitly for the arm target.
2
This includes setting it for the old linux-user nwfpe emulation.
3
For nwfpe, our default doesn't match the real kernel, but we
4
avoid making a behaviour change in this commit.
2
5
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take a ARMCPU* argument. Use the CPU() QOM cast macro When
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
8
Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
9
---
10
linux-user/arm/nwfpe/fpa11.c | 5 +++++
11
target/arm/cpu.c | 2 ++
12
2 files changed, 7 insertions(+)
6
13
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
diff --git a/linux-user/arm/nwfpe/fpa11.c b/linux-user/arm/nwfpe/fpa11.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-4-philmd@linaro.org
11
[PMM: fix parameter name in doc comment too]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/kvm_arm.h | 6 +++---
15
target/arm/cpu.c | 2 +-
16
target/arm/kvm.c | 4 ++--
17
3 files changed, 6 insertions(+), 6 deletions(-)
18
19
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm_arm.h
16
--- a/linux-user/arm/nwfpe/fpa11.c
22
+++ b/target/arm/kvm_arm.h
17
+++ b/linux-user/arm/nwfpe/fpa11.c
23
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
18
@@ -XXX,XX +XXX,XX @@ void resetFPA11(void)
24
19
* this late date.
25
/**
20
*/
26
* kvm_arm_add_vcpu_properties:
21
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &fpa11->fp_status);
27
- * @obj: The CPU object to add the properties to
22
+ /*
28
+ * @cpu: The CPU object to add the properties to
23
+ * Use the same default NaN value as Arm VFP. This doesn't match
29
*
24
+ * the Linux kernel's nwfpe emulation, which uses an all-1s value.
30
* Add all KVM specific CPU properties to the CPU object. These
25
+ */
31
* are the CPU properties with "kvm-" prefixed names.
26
+ set_float_default_nan_pattern(0b01000000, &fpa11->fp_status);
32
*/
33
-void kvm_arm_add_vcpu_properties(Object *obj);
34
+void kvm_arm_add_vcpu_properties(ARMCPU *cpu);
35
36
/**
37
* kvm_arm_steal_time_finalize:
38
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
39
g_assert_not_reached();
40
}
27
}
41
28
42
-static inline void kvm_arm_add_vcpu_properties(Object *obj)
29
void SetRoundingMode(const unsigned int opcode)
43
+static inline void kvm_arm_add_vcpu_properties(ARMCPU *cpu)
44
{
45
g_assert_not_reached();
46
}
47
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
48
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/cpu.c
32
--- a/target/arm/cpu.c
50
+++ b/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
51
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
34
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
52
}
35
* the pseudocode function the arguments are in the order c, a, b.
53
36
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
54
if (kvm_enabled()) {
37
* and the input NaN if it is signalling
55
- kvm_arm_add_vcpu_properties(obj);
38
+ * * Default NaN has sign bit clear, msb frac bit set
56
+ kvm_arm_add_vcpu_properties(cpu);
39
*/
57
}
40
static void arm_set_default_fp_behaviours(float_status *s)
58
41
{
59
#ifndef CONFIG_USER_ONLY
42
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
60
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
43
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
61
index XXXXXXX..XXXXXXX 100644
44
set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
62
--- a/target/arm/kvm.c
45
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
63
+++ b/target/arm/kvm.c
46
+ set_float_default_nan_pattern(0b01000000, s);
64
@@ -XXX,XX +XXX,XX @@ static void kvm_steal_time_set(Object *obj, bool value, Error **errp)
65
}
47
}
66
48
67
/* KVM VCPU properties should be prefixed with "kvm-". */
49
static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
68
-void kvm_arm_add_vcpu_properties(Object *obj)
69
+void kvm_arm_add_vcpu_properties(ARMCPU *cpu)
70
{
71
- ARMCPU *cpu = ARM_CPU(obj);
72
CPUARMState *env = &cpu->env;
73
+ Object *obj = OBJECT(cpu);
74
75
if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
76
cpu->kvm_adjvtime = true;
77
--
50
--
78
2.34.1
51
2.34.1
79
80
diff view generated by jsdifflib
1
From: Nikita Ostrenkov <n.ostrenkov@gmail.com>
1
Set the default NaN pattern explicitly for loongarch.
2
2
3
Signed-off-by: Nikita Ostrenkov <n.ostrenkov@gmail.com>
4
Message-id: 20231216133408.2884-1-n.ostrenkov@gmail.com
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-42-peter.maydell@linaro.org
7
---
6
---
8
include/hw/misc/imx7_snvs.h | 7 ++-
7
target/loongarch/tcg/fpu_helper.c | 2 ++
9
hw/misc/imx7_snvs.c | 93 ++++++++++++++++++++++++++++++++++---
8
1 file changed, 2 insertions(+)
10
hw/misc/trace-events | 4 +-
11
3 files changed, 94 insertions(+), 10 deletions(-)
12
9
13
diff --git a/include/hw/misc/imx7_snvs.h b/include/hw/misc/imx7_snvs.h
10
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/misc/imx7_snvs.h
12
--- a/target/loongarch/tcg/fpu_helper.c
16
+++ b/include/hw/misc/imx7_snvs.h
13
+++ b/target/loongarch/tcg/fpu_helper.c
17
@@ -XXX,XX +XXX,XX @@
14
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
18
enum IMX7SNVSRegisters {
15
*/
19
SNVS_LPCR = 0x38,
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
SNVS_LPCR_TOP = BIT(6),
17
set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
21
- SNVS_LPCR_DP_EN = BIT(5)
18
+ /* Default NaN: sign bit clear, msb frac bit set */
22
+ SNVS_LPCR_DP_EN = BIT(5),
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
23
+ SNVS_LPSRTCMR = 0x050, /* Secure Real Time Counter MSB Register */
24
+ SNVS_LPSRTCLR = 0x054, /* Secure Real Time Counter LSB Register */
25
};
26
27
#define TYPE_IMX7_SNVS "imx7.snvs"
28
@@ -XXX,XX +XXX,XX @@ struct IMX7SNVSState {
29
SysBusDevice parent_obj;
30
31
MemoryRegion mmio;
32
+
33
+ uint64_t tick_offset;
34
+ uint64_t lpcr;
35
};
36
37
#endif /* IMX7_SNVS_H */
38
diff --git a/hw/misc/imx7_snvs.c b/hw/misc/imx7_snvs.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/misc/imx7_snvs.c
41
+++ b/hw/misc/imx7_snvs.c
42
@@ -XXX,XX +XXX,XX @@
43
*/
44
45
#include "qemu/osdep.h"
46
+#include "qemu/bitops.h"
47
+#include "qemu/timer.h"
48
+#include "migration/vmstate.h"
49
#include "hw/misc/imx7_snvs.h"
50
+#include "qemu/cutils.h"
51
#include "qemu/module.h"
52
+#include "sysemu/sysemu.h"
53
+#include "sysemu/rtc.h"
54
#include "sysemu/runstate.h"
55
#include "trace.h"
56
57
+#define RTC_FREQ 32768ULL
58
+
59
+static const VMStateDescription vmstate_imx7_snvs = {
60
+ .name = TYPE_IMX7_SNVS,
61
+ .version_id = 1,
62
+ .minimum_version_id = 1,
63
+ .fields = (VMStateField[]) {
64
+ VMSTATE_UINT64(tick_offset, IMX7SNVSState),
65
+ VMSTATE_UINT64(lpcr, IMX7SNVSState),
66
+ VMSTATE_END_OF_LIST()
67
+ }
68
+};
69
+
70
+static uint64_t imx7_snvs_get_count(IMX7SNVSState *s)
71
+{
72
+ uint64_t ticks = muldiv64(qemu_clock_get_ns(rtc_clock), RTC_FREQ,
73
+ NANOSECONDS_PER_SECOND);
74
+ return s->tick_offset + ticks;
75
+}
76
+
77
static uint64_t imx7_snvs_read(void *opaque, hwaddr offset, unsigned size)
78
{
79
- trace_imx7_snvs_read(offset, 0);
80
+ IMX7SNVSState *s = IMX7_SNVS(opaque);
81
+ uint64_t ret = 0;
82
83
- return 0;
84
+ switch (offset) {
85
+ case SNVS_LPSRTCMR:
86
+ ret = extract64(imx7_snvs_get_count(s), 32, 15);
87
+ break;
88
+ case SNVS_LPSRTCLR:
89
+ ret = extract64(imx7_snvs_get_count(s), 0, 32);
90
+ break;
91
+ case SNVS_LPCR:
92
+ ret = s->lpcr;
93
+ break;
94
+ }
95
+
96
+ trace_imx7_snvs_read(offset, ret, size);
97
+
98
+ return ret;
99
+}
100
+
101
+static void imx7_snvs_reset(DeviceState *dev)
102
+{
103
+ IMX7SNVSState *s = IMX7_SNVS(dev);
104
+
105
+ s->lpcr = 0;
106
}
20
}
107
21
108
static void imx7_snvs_write(void *opaque, hwaddr offset,
22
int ieee_ex_to_loongarch(int xcpt)
109
uint64_t v, unsigned size)
110
{
111
- const uint32_t value = v;
112
- const uint32_t mask = SNVS_LPCR_TOP | SNVS_LPCR_DP_EN;
113
+ trace_imx7_snvs_write(offset, v, size);
114
115
- trace_imx7_snvs_write(offset, value);
116
+ IMX7SNVSState *s = IMX7_SNVS(opaque);
117
118
- if (offset == SNVS_LPCR && ((value & mask) == mask)) {
119
- qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
120
+ uint64_t new_value = 0, snvs_count = 0;
121
+
122
+ if (offset == SNVS_LPSRTCMR || offset == SNVS_LPSRTCLR) {
123
+ snvs_count = imx7_snvs_get_count(s);
124
+ }
125
+
126
+ switch (offset) {
127
+ case SNVS_LPSRTCMR:
128
+ new_value = deposit64(snvs_count, 32, 32, v);
129
+ break;
130
+ case SNVS_LPSRTCLR:
131
+ new_value = deposit64(snvs_count, 0, 32, v);
132
+ break;
133
+ case SNVS_LPCR: {
134
+ s->lpcr = v;
135
+
136
+ const uint32_t mask = SNVS_LPCR_TOP | SNVS_LPCR_DP_EN;
137
+
138
+ if ((v & mask) == mask) {
139
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
140
+ }
141
+ break;
142
+ }
143
+ }
144
+
145
+ if (offset == SNVS_LPSRTCMR || offset == SNVS_LPSRTCLR) {
146
+ s->tick_offset += new_value - snvs_count;
147
}
148
}
149
150
@@ -XXX,XX +XXX,XX @@ static void imx7_snvs_init(Object *obj)
151
{
152
SysBusDevice *sd = SYS_BUS_DEVICE(obj);
153
IMX7SNVSState *s = IMX7_SNVS(obj);
154
+ struct tm tm;
155
156
memory_region_init_io(&s->mmio, obj, &imx7_snvs_ops, s,
157
TYPE_IMX7_SNVS, 0x1000);
158
159
sysbus_init_mmio(sd, &s->mmio);
160
+
161
+ qemu_get_timedate(&tm, 0);
162
+ s->tick_offset = mktimegm(&tm) -
163
+ qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
164
}
165
166
static void imx7_snvs_class_init(ObjectClass *klass, void *data)
167
{
168
DeviceClass *dc = DEVICE_CLASS(klass);
169
170
+ dc->reset = imx7_snvs_reset;
171
+ dc->vmsd = &vmstate_imx7_snvs;
172
dc->desc = "i.MX7 Secure Non-Volatile Storage Module";
173
}
174
175
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
176
index XXXXXXX..XXXXXXX 100644
177
--- a/hw/misc/trace-events
178
+++ b/hw/misc/trace-events
179
@@ -XXX,XX +XXX,XX @@ imx7_gpr_read(uint64_t offset) "addr 0x%08" PRIx64
180
imx7_gpr_write(uint64_t offset, uint64_t value) "addr 0x%08" PRIx64 "value 0x%08" PRIx64
181
182
# imx7_snvs.c
183
-imx7_snvs_read(uint64_t offset, uint32_t value) "addr 0x%08" PRIx64 "value 0x%08" PRIx32
184
-imx7_snvs_write(uint64_t offset, uint32_t value) "addr 0x%08" PRIx64 "value 0x%08" PRIx32
185
+imx7_snvs_read(uint64_t offset, uint64_t value, unsigned size) "i.MX SNVS read: offset 0x%08" PRIx64 " value 0x%08" PRIx64 " size %u"
186
+imx7_snvs_write(uint64_t offset, uint64_t value, unsigned size) "i.MX SNVS write: offset 0x%08" PRIx64 " value 0x%08" PRIx64 " size %u"
187
188
# mos6522.c
189
mos6522_set_counter(int index, unsigned int val) "T%d.counter=%d"
190
--
23
--
191
2.34.1
24
2.34.1
diff view generated by jsdifflib
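The i.MX7 SNVS patch above keeps the RTC as an offset rather than a counter: reads derive the count from the virtual clock, and guest writes only adjust the offset. A simplified standalone sketch of that technique (invented names; the real device also uses muldiv64() to avoid overflow and migrates tick_offset via its vmstate):

#include <stdint.h>
#include <stdio.h>

#define RTC_FREQ   32768ULL
#define NS_PER_SEC 1000000000ULL

typedef struct {
    uint64_t tick_offset;           /* counter value when the clock read 0 */
} SimpleRtc;

static uint64_t rtc_count(const SimpleRtc *s, uint64_t now_ns)
{
    return s->tick_offset + (now_ns * RTC_FREQ) / NS_PER_SEC;
}

static void rtc_set_count(SimpleRtc *s, uint64_t now_ns, uint64_t new_count)
{
    /* make future reads report the value just written */
    s->tick_offset += new_count - rtc_count(s, now_ns);
}

int main(void)
{
    SimpleRtc rtc = { .tick_offset = 1000 };
    rtc_set_count(&rtc, 5 * NS_PER_SEC, 42);            /* guest writes 42 */
    printf("%llu\n", (unsigned long long)rtc_count(&rtc, 5 * NS_PER_SEC)); /* 42 */
    return 0;
}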
New patch
Set the default NaN pattern explicitly for m68k.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-43-peter.maydell@linaro.org
---
 target/m68k/cpu.c              | 2 ++
 fpu/softfloat-specialize.c.inc | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
      * preceding paragraph for nonsignaling NaNs.
      */
     set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+    /* Default NaN: sign bit clear, all frac bits set */
+    set_float_default_nan_pattern(0b01111111, &env->fp_status);
 
     nan = floatx80_default_nan(&env->fp_status);
     for (i = 0; i < 8; i++) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
     uint8_t dnan_pattern = status->default_nan_pattern;
 
     if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC) || defined(TARGET_M68K)
+#if defined(TARGET_SPARC)
         /* Sign bit clear, all frac bits set */
         dnan_pattern = 0b01111111;
 #elif defined(TARGET_HEXAGON)
--
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for MIPS. Note that this
2
is our only target which currently changes the default NaN
3
at runtime (which it was previously doing indirectly when it
4
changed the snan_bit_is_one setting).
2
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-44-peter.maydell@linaro.org
8
---
9
---
9
target/arm/kvm_arm.h | 14 --------------
10
target/mips/fpu_helper.h | 7 +++++++
10
target/arm/kvm.c | 14 +++++++++++++-
11
target/mips/msa.c | 3 +++
11
2 files changed, 13 insertions(+), 15 deletions(-)
12
2 files changed, 10 insertions(+)
12
13
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
16
--- a/target/mips/fpu_helper.h
16
+++ b/target/arm/kvm_arm.h
17
+++ b/target/mips/fpu_helper.h
17
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
18
#define KVM_ARM_VGIC_V2 (1 << 0)
19
set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
19
#define KVM_ARM_VGIC_V3 (1 << 1)
20
nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
20
21
set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
21
-/**
22
+ /*
22
- * kvm_arm_vcpu_finalize:
23
+ * With nan2008, the default NaN value has the sign bit clear and the
23
- * @cs: CPUState
24
+ * frac msb set; with the older mode, the sign bit is clear, and all
24
- * @feature: feature to finalize
25
+ * frac bits except the msb are set.
25
- *
26
+ */
26
- * Finalizes the configuration of the specified VCPU feature by
27
+ set_float_default_nan_pattern(nan2008 ? 0b01000000 : 0b00111111,
27
- * invoking the KVM_ARM_VCPU_FINALIZE ioctl. Features requiring
28
+ &env->active_fpu.fp_status);
28
- * this are documented in the "KVM_ARM_VCPU_FINALIZE" section of
29
29
- * KVM's API documentation.
30
}
30
- *
31
31
- * Returns: 0 if success else < 0 error code
32
diff --git a/target/mips/msa.c b/target/mips/msa.c
32
- */
33
-int kvm_arm_vcpu_finalize(CPUState *cs, int feature);
34
-
35
/**
36
* kvm_arm_register_device:
37
* @mr: memory region for this device
38
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
39
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/kvm.c
34
--- a/target/mips/msa.c
41
+++ b/target/arm/kvm.c
35
+++ b/target/mips/msa.c
42
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_vcpu_init(CPUState *cs)
36
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
43
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
37
/* Inf * 0 + NaN returns the input NaN */
44
}
38
set_float_infzeronan_rule(float_infzeronan_dnan_never,
45
39
&env->active_tc.msa_fp_status);
46
-int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
40
+ /* Default NaN: sign bit clear, frac msb set */
47
+/**
41
+ set_float_default_nan_pattern(0b01000000,
48
+ * kvm_arm_vcpu_finalize:
42
+ &env->active_tc.msa_fp_status);
49
+ * @cs: CPUState
50
+ * @feature: feature to finalize
51
+ *
52
+ * Finalizes the configuration of the specified VCPU feature by
53
+ * invoking the KVM_ARM_VCPU_FINALIZE ioctl. Features requiring
54
+ * this are documented in the "KVM_ARM_VCPU_FINALIZE" section of
55
+ * KVM's API documentation.
56
+ *
57
+ * Returns: 0 if success else < 0 error code
58
+ */
59
+static int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
60
{
61
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_FINALIZE, &feature);
62
}
43
}
63
--
44
--
64
2.34.1
45
2.34.1
65
66
diff view generated by jsdifflib
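For the MIPS patch above, the two runtime-selected patterns correspond to these single-precision default NaNs (values worked out from the pattern encoding; illustration only, not QEMU code):

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * nan2008 mode -> pattern 0b01000000 -> float32 default NaN 0x7fc00000
 * legacy mode  -> pattern 0b00111111 -> float32 default NaN 0x7fbfffff
 */
static uint32_t mips_default_nan_f32(bool nan2008)
{
    return nan2008 ? 0x7fc00000u : 0x7fbfffffu;
}

int main(void)
{
    printf("%08" PRIx32 " %08" PRIx32 "\n",
           mips_default_nan_f32(true), mips_default_nan_f32(false));
    return 0;
}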
New patch
Set the default NaN pattern explicitly for openrisc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-45-peter.maydell@linaro.org
---
 target/openrisc/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
      */
     set_float_2nan_prop_rule(float_2nan_prop_x87, &cpu->env.fp_status);
 
+    /* Default NaN: sign bit clear, frac msb set */
+    set_float_default_nan_pattern(0b01000000, &cpu->env.fp_status);
 
 #ifndef CONFIG_USER_ONLY
     cpu->env.picmr = 0x00000000;
--
2.34.1
diff view generated by jsdifflib
New patch
Set the default NaN pattern explicitly for ppc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-46-peter.maydell@linaro.org
---
 target/ppc/cpu_init.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
     set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);
 
+    /* Default NaN: sign bit clear, set frac msb */
+    set_float_default_nan_pattern(0b01000000, &env->fp_status);
+    set_float_default_nan_pattern(0b01000000, &env->vec_status);
+
     for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
         ppc_spr_t *spr = &env->spr_cb[i];
 
--
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for sh4. Note that sh4
2
is one of the only three targets (the others being HPPA and
3
sometimes MIPS) that has snan_bit_is_one set.
2
4
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-47-peter.maydell@linaro.org
8
---
8
---
9
target/arm/kvm_arm.h | 2 --
9
target/sh4/cpu.c | 2 ++
10
target/arm/kvm.c | 2 +-
10
1 file changed, 2 insertions(+)
11
2 files changed, 1 insertion(+), 3 deletions(-)
12
11
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
12
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
14
--- a/target/sh4/cpu.c
16
+++ b/target/arm/kvm_arm.h
15
+++ b/target/sh4/cpu.c
17
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void);
16
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_reset_hold(Object *obj, ResetType type)
18
*/
17
set_flush_to_zero(1, &env->fp_status);
19
int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa);
18
#endif
20
19
set_default_nan_mode(1, &env->fp_status);
21
-void kvm_arm_vm_state_change(void *opaque, bool running, RunState state);
20
+ /* sign bit clear, set all frac bits other than msb */
22
-
21
+ set_float_default_nan_pattern(0b00111111, &env->fp_status);
23
int kvm_arm_vgic_probe(void);
24
25
void kvm_arm_pmu_set_irq(CPUState *cs, int irq);
26
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/kvm.c
29
+++ b/target/arm/kvm.c
30
@@ -XXX,XX +XXX,XX @@ MemTxAttrs kvm_arch_post_run(CPUState *cs, struct kvm_run *run)
31
return MEMTXATTRS_UNSPECIFIED;
32
}
22
}
33
23
34
-void kvm_arm_vm_state_change(void *opaque, bool running, RunState state)
24
static void superh_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
35
+static void kvm_arm_vm_state_change(void *opaque, bool running, RunState state)
36
{
37
CPUState *cs = opaque;
38
ARMCPU *cpu = ARM_CPU(cs);
39
--
25
--
40
2.34.1
26
2.34.1
41
42
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for rx.
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-48-peter.maydell@linaro.org
8
---
6
---
9
target/arm/kvm_arm.h | 20 --------------------
7
target/rx/cpu.c | 2 ++
10
target/arm/kvm.c | 20 ++++++++++++++++++--
8
1 file changed, 2 insertions(+)
11
2 files changed, 18 insertions(+), 22 deletions(-)
12
9
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
10
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
12
--- a/target/rx/cpu.c
16
+++ b/target/arm/kvm_arm.h
13
+++ b/target/rx/cpu.c
17
@@ -XXX,XX +XXX,XX @@ void kvm_arm_cpu_post_load(ARMCPU *cpu);
14
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj, ResetType type)
18
*/
15
* then prefer dest over source", which is float_2nan_prop_s_ab.
19
void kvm_arm_reset_vcpu(ARMCPU *cpu);
16
*/
20
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
21
-/**
18
+ /* Default NaN value: sign bit clear, set frac msb */
22
- * kvm_get_vcpu_events:
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
23
- * @cpu: ARMCPU
24
- *
25
- * Get VCPU related state from kvm.
26
- *
27
- * Returns: 0 if success else < 0 error code
28
- */
29
-int kvm_get_vcpu_events(ARMCPU *cpu);
30
-
31
-/**
32
- * kvm_put_vcpu_events:
33
- * @cpu: ARMCPU
34
- *
35
- * Put VCPU related state to kvm.
36
- *
37
- * Returns: 0 if success else < 0 error code
38
- */
39
-int kvm_put_vcpu_events(ARMCPU *cpu);
40
-
41
#ifdef CONFIG_KVM
42
/**
43
* kvm_arm_create_scratch_host_vcpu:
44
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/kvm.c
47
+++ b/target/arm/kvm.c
48
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_put_virtual_time(CPUState *cs)
49
cpu->kvm_vtime_dirty = false;
50
}
20
}
51
21
52
-int kvm_put_vcpu_events(ARMCPU *cpu)
22
static ObjectClass *rx_cpu_class_by_name(const char *cpu_model)
53
+/**
54
+ * kvm_put_vcpu_events:
55
+ * @cpu: ARMCPU
56
+ *
57
+ * Put VCPU related state to kvm.
58
+ *
59
+ * Returns: 0 if success else < 0 error code
60
+ */
61
+static int kvm_put_vcpu_events(ARMCPU *cpu)
62
{
63
CPUARMState *env = &cpu->env;
64
struct kvm_vcpu_events events;
65
@@ -XXX,XX +XXX,XX @@ int kvm_put_vcpu_events(ARMCPU *cpu)
66
return ret;
67
}
68
69
-int kvm_get_vcpu_events(ARMCPU *cpu)
70
+/**
71
+ * kvm_get_vcpu_events:
72
+ * @cpu: ARMCPU
73
+ *
74
+ * Get VCPU related state from kvm.
75
+ *
76
+ * Returns: 0 if success else < 0 error code
77
+ */
78
+static int kvm_get_vcpu_events(ARMCPU *cpu)
79
{
80
CPUARMState *env = &cpu->env;
81
struct kvm_vcpu_events events;
82
--
23
--
83
2.34.1
24
2.34.1
84
85
diff view generated by jsdifflib
New patch
Set the default NaN pattern explicitly for s390x.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-49-peter.maydell@linaro.org
---
 target/s390x/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
         set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
         set_float_infzeronan_rule(float_infzeronan_dnan_always,
                                   &env->fpu_status);
+        /* Default NaN value: sign bit clear, frac msb set */
+        set_float_default_nan_pattern(0b01000000, &env->fpu_status);
         /* fall through */
     case RESET_TYPE_S390_CPU_NORMAL:
         env->psw.mask &= ~PSW_MASK_RI;
--
2.34.1
diff view generated by jsdifflib
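The context lines above also show s390x choosing float_infzeronan_dnan_always. For reference, a sketch of what the three inf*0+NaN rules used across this series mean for (inf * 0) + NaN; this is an illustration of the behaviours named in the commit messages, not QEMU's pickNaNMulAdd():

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum {
    DNAN_NEVER,    /* return the input NaN (PPC, MIPS, SPARC, HPPA, ...) */
    DNAN_ALWAYS,   /* always return the default NaN (s390x) */
    DNAN_IF_QNAN,  /* default NaN if the input NaN is quiet, else input (Arm) */
} InfZeroNaNRule;

static uint32_t infzero_nan_result(InfZeroNaNRule rule, uint32_t c,
                                   bool c_is_quiet, uint32_t default_nan)
{
    switch (rule) {
    case DNAN_NEVER:
        return c;                  /* (a signalling input is quietened in practice) */
    case DNAN_ALWAYS:
        return default_nan;
    case DNAN_IF_QNAN:
        return c_is_quiet ? default_nan : c;
    }
    return default_nan;
}

int main(void)
{
    uint32_t qnan_in = 0x7fc00001, dnan = 0x7fc00000;
    printf("%08" PRIx32 " %08" PRIx32 " %08" PRIx32 "\n",
           infzero_nan_result(DNAN_NEVER, qnan_in, true, dnan),
           infzero_nan_result(DNAN_ALWAYS, qnan_in, true, dnan),
           infzero_nan_result(DNAN_IF_QNAN, qnan_in, true, dnan));
    return 0;
}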
1
The system registers DBGVCR32_EL2, FPEXC32_EL2, DACR32_EL2 and
1
Set the default NaN pattern explicitly for SPARC, and remove
2
IFSR32_EL2 are present only to allow an AArch64 EL2 or EL3 to read
2
the ifdef from parts64_default_nan.
3
and write the contents of an AArch32-only system register. The
4
architecture requires that they are present only when EL1 can be
5
AArch32, but we implement them unconditionally. This was OK when all
6
our CPUs supported AArch32 EL1, but we have quite a lot of CPU models
7
now which only support AArch64 at EL1:
8
a64fx
9
cortex-a76
10
cortex-a710
11
neoverse-n1
12
neoverse-n2
13
neoverse-v1
14
15
Only define these registers for CPUs which allow AArch32 EL1.
16
3
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20231121144605.3980419-1-peter.maydell@linaro.org
6
Message-id: 20241202131347.498124-50-peter.maydell@linaro.org
20
---
7
---
21
target/arm/debug_helper.c | 23 +++++++++++++++--------
8
target/sparc/cpu.c | 2 ++
22
target/arm/helper.c | 35 +++++++++++++++++++++--------------
9
fpu/softfloat-specialize.c.inc | 5 +----
23
2 files changed, 36 insertions(+), 22 deletions(-)
10
2 files changed, 3 insertions(+), 4 deletions(-)
24
11
25
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
26
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/debug_helper.c
14
--- a/target/sparc/cpu.c
28
+++ b/target/arm/debug_helper.c
15
+++ b/target/sparc/cpu.c
29
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
30
.cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0,
17
set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
31
.access = PL1_RW, .accessfn = access_tda,
18
/* For inf * 0 + NaN, return the input NaN */
32
.type = ARM_CP_NOP },
19
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
33
- /*
20
+ /* Default NaN value: sign bit clear, all frac bits set */
34
- * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
21
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);
35
- * to save and restore a 32-bit guest's DBGVCR)
22
36
- */
23
cpu_exec_realizefn(cs, &local_err);
37
- { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
24
if (local_err != NULL) {
38
- .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
39
- .access = PL2_RW, .accessfn = access_tda,
40
- .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
41
/*
42
* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
43
* Channel but Linux may try to access this register. The 32-bit
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
45
.fieldoffset = offsetof(CPUARMState, cp15.dbgclaim) },
46
};
47
48
+/* These are present only when EL1 supports AArch32 */
49
+static const ARMCPRegInfo debug_aa32_el1_reginfo[] = {
50
+ /*
51
+ * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
52
+ * to save and restore a 32-bit guest's DBGVCR)
53
+ */
54
+ { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
55
+ .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
56
+ .access = PL2_RW, .accessfn = access_tda,
57
+ .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
58
+};
59
+
60
static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
61
/* 64 bit access versions of the (dummy) debug registers */
62
{ .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0,
63
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
64
assert(ctx_cmps <= brps);
65
66
define_arm_cp_regs(cpu, debug_cp_reginfo);
67
+ if (cpu_isar_feature(aa64_aa32_el1, cpu)) {
68
+ define_arm_cp_regs(cpu, debug_aa32_el1_reginfo);
69
+ }
70
71
if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
72
define_arm_cp_regs(cpu, debug_lpae_cp_reginfo);
73
diff --git a/target/arm/helper.c b/target/arm/helper.c
74
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
75
--- a/target/arm/helper.c
27
--- a/fpu/softfloat-specialize.c.inc
76
+++ b/target/arm/helper.c
28
+++ b/fpu/softfloat-specialize.c.inc
77
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
29
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
78
.opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 0,
30
uint8_t dnan_pattern = status->default_nan_pattern;
79
.type = ARM_CP_NO_RAW,
31
80
.access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write },
32
if (dnan_pattern == 0) {
81
- { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
33
-#if defined(TARGET_SPARC)
82
- .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
34
- /* Sign bit clear, all frac bits set */
83
- .access = PL2_RW,
35
- dnan_pattern = 0b01111111;
84
- .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
36
-#elif defined(TARGET_HEXAGON)
85
- .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
37
+#if defined(TARGET_HEXAGON)
86
- { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
38
/* Sign bit set, all frac bits set. */
87
- .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
39
dnan_pattern = 0b11111111;
88
- .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
40
#else
89
- .writefn = dacr_write, .raw_writefn = raw_write,
90
- .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
91
- { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
92
- .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
93
- .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
94
- .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
95
{ .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64,
96
.type = ARM_CP_ALIAS,
97
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 0,
98
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
99
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
100
};
101
102
+/* These are present only when EL1 supports AArch32 */
103
+static const ARMCPRegInfo v8_aa32_el1_reginfo[] = {
104
+ { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
105
+ .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
106
+ .access = PL2_RW,
107
+ .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
108
+ .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
109
+ { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
110
+ .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
111
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
112
+ .writefn = dacr_write, .raw_writefn = raw_write,
113
+ .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
114
+ { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
115
+ .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
116
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
117
+ .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
118
+};
119
+
120
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
121
{
122
ARMCPU *cpu = env_archcpu(env);
123
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
124
}
125
define_arm_cp_regs(cpu, v8_idregs);
126
define_arm_cp_regs(cpu, v8_cp_reginfo);
127
+ if (cpu_isar_feature(aa64_aa32_el1, cpu)) {
128
+ define_arm_cp_regs(cpu, v8_aa32_el1_reginfo);
129
+ }
130
131
for (i = 4; i < 16; i++) {
132
/*
133
--
41
--
134
2.34.1
42
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for xtensa.
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-51-peter.maydell@linaro.org
8
---
6
---
9
target/arm/kvm_arm.h | 12 ------------
7
target/xtensa/cpu.c | 2 ++
10
target/arm/kvm.c | 10 ++++++++--
8
1 file changed, 2 insertions(+)
11
2 files changed, 8 insertions(+), 14 deletions(-)
12
9
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
10
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
12
--- a/target/xtensa/cpu.c
16
+++ b/target/arm/kvm_arm.h
13
+++ b/target/xtensa/cpu.c
17
@@ -XXX,XX +XXX,XX @@
14
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
18
void kvm_arm_register_device(MemoryRegion *mr, uint64_t devid, uint64_t group,
15
/* For inf * 0 + NaN, return the input NaN */
19
uint64_t attr, int dev_fd, uint64_t addr_ormask);
16
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
17
set_no_signaling_nans(!dfpu, &env->fp_status);
21
-/**
18
+ /* Default NaN value: sign bit clear, set frac msb */
22
- * kvm_arm_init_cpreg_list:
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
23
- * @cpu: ARMCPU
20
xtensa_use_first_nan(env, !dfpu);
24
- *
25
- * Initialize the ARMCPU cpreg list according to the kernel's
26
- * definition of what CPU registers it knows about (and throw away
27
- * the previous TCG-created cpreg list).
28
- *
29
- * Returns: 0 if success, else < 0 error code
30
- */
31
-int kvm_arm_init_cpreg_list(ARMCPU *cpu);
32
-
33
/**
34
* write_list_to_kvmstate:
35
* @cpu: ARMCPU
36
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/kvm.c
39
+++ b/target/arm/kvm.c
40
@@ -XXX,XX +XXX,XX @@ static bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
41
}
42
}
21
}
43
22
44
-/* Initialize the ARMCPU cpreg list according to the kernel's
45
+/**
46
+ * kvm_arm_init_cpreg_list:
47
+ * @cpu: ARMCPU
48
+ *
49
+ * Initialize the ARMCPU cpreg list according to the kernel's
50
* definition of what CPU registers it knows about (and throw away
51
* the previous TCG-created cpreg list).
52
+ *
53
+ * Returns: 0 if success, else < 0 error code
54
*/
55
-int kvm_arm_init_cpreg_list(ARMCPU *cpu)
56
+static int kvm_arm_init_cpreg_list(ARMCPU *cpu)
57
{
58
struct kvm_reg_list rl;
59
struct kvm_reg_list *rlp;
60
--
2.34.1
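
For reference, a minimal standalone sketch (not QEMU code) of the float64
value that the 0b01000000 pattern used above corresponds to, assuming
pattern bit 7 is the sign and pattern bit 6 maps to the most significant
fraction bit:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t sign = 0;                       /* pattern bit 7 clear */
    uint64_t frac_msb = UINT64_C(1) << 51;   /* pattern bit 6 -> fraction MSB */
    uint64_t dnan = (sign << 63) | (UINT64_C(0x7ff) << 52) | frac_msb;

    /* Prints 0x7ff8000000000000, the usual quiet default NaN */
    printf("default NaN: 0x%016llx\n", (unsigned long long)dnan);
    return 0;
}
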
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
1
Set the default NaN pattern explicitly for hexagon.
2
Remove the ifdef from parts64_default_nan(); the only
3
remaining unconverted targets all use the default case.
2
4
3
MDCR_EL2.HPMN allows a hypervisor to limit the number of PMU counters
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
available to EL1 and EL0 (to keep the others to itself). QEMU already
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
implements this split correctly, except for PMCR_EL0.N reads: the number
7
Message-id: 20241202131347.498124-52-peter.maydell@linaro.org
6
of counters read by EL1 or EL0 should be the one configured in
8
---
7
MDCR_EL2.HPMN.
9
target/hexagon/cpu.c | 2 ++
10
fpu/softfloat-specialize.c.inc | 5 -----
11
2 files changed, 2 insertions(+), 5 deletions(-)
8
12
9
Cc: qemu-stable@nongnu.org
13
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
10
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
11
Message-id: 20231215144652.4193815-2-jean-philippe@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
target/arm/helper.c | 22 ++++++++++++++++++++--
16
1 file changed, 20 insertions(+), 2 deletions(-)
17
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
15
--- a/target/hexagon/cpu.c
21
+++ b/target/arm/helper.c
16
+++ b/target/hexagon/cpu.c
22
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
17
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_reset_hold(Object *obj, ResetType type)
23
pmu_op_finish(env);
18
19
set_default_nan_mode(1, &env->fp_status);
20
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
21
+ /* Default NaN value: sign bit set, all frac bits set */
22
+ set_float_default_nan_pattern(0b11111111, &env->fp_status);
24
}
23
}
25
24
26
+static uint64_t pmcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
25
static void hexagon_cpu_disas_set_info(CPUState *s, disassemble_info *info)
27
+{
26
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
28
+ uint64_t pmcr = env->cp15.c9_pmcr;
27
index XXXXXXX..XXXXXXX 100644
29
+
28
--- a/fpu/softfloat-specialize.c.inc
30
+ /*
29
+++ b/fpu/softfloat-specialize.c.inc
31
+ * If EL2 is implemented and enabled for the current security state, reads
30
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
32
+ * of PMCR.N from EL1 or EL0 return the value of MDCR_EL2.HPMN or HDCR.HPMN.
31
uint8_t dnan_pattern = status->default_nan_pattern;
33
+ */
32
34
+ if (arm_current_el(env) <= 1 && arm_is_el2_enabled(env)) {
33
if (dnan_pattern == 0) {
35
+ pmcr &= ~PMCRN_MASK;
34
-#if defined(TARGET_HEXAGON)
36
+ pmcr |= (env->cp15.mdcr_el2 & MDCR_HPMN) << PMCRN_SHIFT;
35
- /* Sign bit set, all frac bits set. */
37
+ }
36
- dnan_pattern = 0b11111111;
38
+
37
-#else
39
+ return pmcr;
38
/*
40
+}
39
* This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
41
+
40
* S390, SH4, TriCore, and Xtensa. Our other supported targets
42
static void pmswinc_write(CPUARMState *env, const ARMCPRegInfo *ri,
41
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
43
uint64_t value)
42
/* sign bit clear, set frac msb */
44
{
43
dnan_pattern = 0b01000000;
45
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
44
}
46
.fgt = FGT_PMCR_EL0,
45
-#endif
47
.type = ARM_CP_IO | ARM_CP_ALIAS,
46
}
48
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr),
47
assert(dnan_pattern != 0);
49
- .accessfn = pmreg_access, .writefn = pmcr_write,
50
- .raw_writefn = raw_write,
51
+ .accessfn = pmreg_access,
52
+ .readfn = pmcr_read, .raw_readfn = raw_read,
53
+ .writefn = pmcr_write, .raw_writefn = raw_write,
54
};
55
ARMCPRegInfo pmcr64 = {
56
.name = "PMCR_EL0", .state = ARM_CP_STATE_AA64,
57
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
58
.type = ARM_CP_IO,
59
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
60
.resetvalue = cpu->isar.reset_pmcr_el0,
61
+ .readfn = pmcr_read, .raw_readfn = raw_read,
62
.writefn = pmcr_write, .raw_writefn = raw_write,
63
};
64
48
65
--
2.34.1
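
As a standalone illustration of the PMCR_EL0.N read behaviour described
in the MDCR_EL2.HPMN patch above (a sketch, not the QEMU helpers: the
el2_enabled and current_el parameters stand in for state the real code
reads from CPUARMState; PMCR.N sits in bits [15:11] and HPMN in bits [4:0]):

#include <stdint.h>
#include <stdio.h>

#define PMCRN_SHIFT    11
#define PMCRN_MASK     (0x1fu << PMCRN_SHIFT)
#define MDCR_HPMN_MASK 0x1fu

static uint64_t visible_pmcr(uint64_t pmcr, uint64_t mdcr_el2,
                             int current_el, int el2_enabled)
{
    if (current_el <= 1 && el2_enabled) {
        /* Below EL2, the reported counter count comes from HPMN */
        pmcr &= ~(uint64_t)PMCRN_MASK;
        pmcr |= (mdcr_el2 & MDCR_HPMN_MASK) << PMCRN_SHIFT;
    }
    return pmcr;
}

int main(void)
{
    uint64_t pmcr = 6u << PMCRN_SHIFT;   /* 6 counters implemented */
    uint64_t mdcr = 2;                   /* HPMN = 2: EL1/EL0 see two counters */

    /* An EL1 read with EL2 enabled reports N = 2, not 6 */
    printf("N = %u\n",
           (unsigned)((visible_pmcr(pmcr, mdcr, 1, 1) >> PMCRN_SHIFT) & 0x1f));
    return 0;
}
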
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Set the default NaN pattern explicitly for riscv.
2
2
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
5
Message-id: 20241202131347.498124-53-peter.maydell@linaro.org
6
---
7
target/riscv/cpu.c | 2 ++
8
1 file changed, 2 insertions(+)
6
9
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-15-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 10 +++++-----
14
1 file changed, 5 insertions(+), 5 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
12
--- a/target/riscv/cpu.c
19
+++ b/target/arm/kvm.c
13
+++ b/target/riscv/cpu.c
20
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_vm_state_change(void *opaque, bool running, RunState state)
14
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj, ResetType type)
21
15
cs->exception_index = RISCV_EXCP_NONE;
22
/**
16
env->load_res = -1;
23
* kvm_arm_handle_dabt_nisv:
17
set_default_nan_mode(1, &env->fp_status);
24
- * @cs: CPUState
18
+ /* Default NaN value: sign bit clear, frac msb set */
25
+ * @cpu: ARMCPU
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
26
* @esr_iss: ISS encoding (limited) for the exception from Data Abort
20
env->vill = true;
27
* ISV bit set to '0b0' -> no valid instruction syndrome
21
28
* @fault_ipa: faulting address for the synchronous data abort
22
#ifndef CONFIG_USER_ONLY
29
*
30
* Returns: 0 if the exception has been handled, < 0 otherwise
31
*/
32
-static int kvm_arm_handle_dabt_nisv(CPUState *cs, uint64_t esr_iss,
33
+static int kvm_arm_handle_dabt_nisv(ARMCPU *cpu, uint64_t esr_iss,
34
uint64_t fault_ipa)
35
{
36
- ARMCPU *cpu = ARM_CPU(cs);
37
CPUARMState *env = &cpu->env;
38
/*
39
* Request KVM to inject the external data abort into the guest
40
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_handle_dabt_nisv(CPUState *cs, uint64_t esr_iss,
41
*/
42
events.exception.ext_dabt_pending = 1;
43
/* KVM_CAP_ARM_INJECT_EXT_DABT implies KVM_CAP_VCPU_EVENTS */
44
- if (!kvm_vcpu_ioctl(cs, KVM_SET_VCPU_EVENTS, &events)) {
45
+ if (!kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events)) {
46
env->ext_dabt_raised = 1;
47
return 0;
48
}
49
@@ -XXX,XX +XXX,XX @@ static bool kvm_arm_handle_debug(CPUState *cs,
50
51
int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
52
{
53
+ ARMCPU *cpu = ARM_CPU(cs);
54
int ret = 0;
55
56
switch (run->exit_reason) {
57
@@ -XXX,XX +XXX,XX @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
58
break;
59
case KVM_EXIT_ARM_NISV:
60
/* External DABT with no valid iss to decode */
61
- ret = kvm_arm_handle_dabt_nisv(cs, run->arm_nisv.esr_iss,
62
+ ret = kvm_arm_handle_dabt_nisv(cpu, run->arm_nisv.esr_iss,
63
run->arm_nisv.fault_ipa);
64
break;
65
default:
66
--
23
--
67
2.34.1
24
2.34.1
68
69
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Set the default NaN pattern explicitly for tricore.
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-54-peter.maydell@linaro.org
8
---
6
---
9
target/arm/kvm_arm.h | 16 ----------------
7
target/tricore/helper.c | 2 ++
10
target/arm/kvm.c | 16 ++++++++++++++--
8
1 file changed, 2 insertions(+)
11
2 files changed, 14 insertions(+), 18 deletions(-)
12
9
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
10
diff --git a/target/tricore/helper.c b/target/tricore/helper.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
12
--- a/target/tricore/helper.c
16
+++ b/target/arm/kvm_arm.h
13
+++ b/target/tricore/helper.c
17
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu);
14
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env)
18
*/
15
set_flush_to_zero(1, &env->fp_status);
19
int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu);
16
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
20
17
set_default_nan_mode(1, &env->fp_status);
21
-/**
18
+ /* Default NaN pattern: sign bit clear, frac msb set */
22
- * kvm_arm_get_virtual_time:
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
23
- * @cs: CPUState
24
- *
25
- * Gets the VCPU's virtual counter and stores it in the KVM CPU state.
26
- */
27
-void kvm_arm_get_virtual_time(CPUState *cs);
28
-
29
-/**
30
- * kvm_arm_put_virtual_time:
31
- * @cs: CPUState
32
- *
33
- * Sets the VCPU's virtual counter to the value stored in the KVM CPU state.
34
- */
35
-void kvm_arm_put_virtual_time(CPUState *cs);
36
-
37
void kvm_arm_vm_state_change(void *opaque, bool running, RunState state);
38
39
int kvm_arm_vgic_probe(void);
40
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/kvm.c
43
+++ b/target/arm/kvm.c
44
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
45
return 0;
46
}
20
}
47
21
48
-void kvm_arm_get_virtual_time(CPUState *cs)
22
uint32_t psw_read(CPUTriCoreState *env)
49
+/**
50
+ * kvm_arm_get_virtual_time:
51
+ * @cs: CPUState
52
+ *
53
+ * Gets the VCPU's virtual counter and stores it in the KVM CPU state.
54
+ */
55
+static void kvm_arm_get_virtual_time(CPUState *cs)
56
{
57
ARMCPU *cpu = ARM_CPU(cs);
58
int ret;
59
@@ -XXX,XX +XXX,XX @@ void kvm_arm_get_virtual_time(CPUState *cs)
60
cpu->kvm_vtime_dirty = true;
61
}
62
63
-void kvm_arm_put_virtual_time(CPUState *cs)
64
+/**
65
+ * kvm_arm_put_virtual_time:
66
+ * @cs: CPUState
67
+ *
68
+ * Sets the VCPU's virtual counter to the value stored in the KVM CPU state.
69
+ */
70
+static void kvm_arm_put_virtual_time(CPUState *cs)
71
{
72
ARMCPU *cpu = ARM_CPU(cs);
73
int ret;
74
--
23
--
75
2.34.1
24
2.34.1
76
77
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
Now that all our targets have been converted to explicitly specify
2
their pattern for the default NaN value, we can remove the remaining
3
fallback code in parts64_default_nan().
2
4
3
Unify the "kvm_arm.h" API: All functions related to ARM vCPUs
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
take an ARMCPU* argument. Use the CPU() QOM cast macro when
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
calling the generic vCPU API from "sysemu/kvm.h".
7
Message-id: 20241202131347.498124-55-peter.maydell@linaro.org
8
---
9
fpu/softfloat-specialize.c.inc | 14 --------------
10
1 file changed, 14 deletions(-)
6
11
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Message-id: 20231123183518.64569-14-philmd@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm.c | 8 ++++----
14
1 file changed, 4 insertions(+), 4 deletions(-)
15
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
14
--- a/fpu/softfloat-specialize.c.inc
19
+++ b/target/arm/kvm.c
15
+++ b/fpu/softfloat-specialize.c.inc
20
@@ -XXX,XX +XXX,XX @@ static int kvm_get_vcpu_events(ARMCPU *cpu)
16
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
21
17
uint64_t frac;
22
/**
18
uint8_t dnan_pattern = status->default_nan_pattern;
23
* kvm_arm_verify_ext_dabt_pending:
19
24
- * @cs: CPUState
20
- if (dnan_pattern == 0) {
25
+ * @cpu: ARMCPU
21
- /*
26
*
22
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
27
* Verify the fault status code wrt the Ext DABT injection
23
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
28
*
24
- * do not have floating-point.
29
* Returns: true if the fault status code is as expected, false otherwise
25
- */
30
*/
26
- if (snan_bit_is_one(status)) {
31
-static bool kvm_arm_verify_ext_dabt_pending(CPUState *cs)
27
- /* sign bit clear, set all frac bits other than msb */
32
+static bool kvm_arm_verify_ext_dabt_pending(ARMCPU *cpu)
28
- dnan_pattern = 0b00111111;
33
{
29
- } else {
34
+ CPUState *cs = CPU(cpu);
30
- /* sign bit clear, set frac msb */
35
uint64_t dfsr_val;
31
- dnan_pattern = 0b01000000;
36
32
- }
37
if (!kvm_get_one_reg(cs, ARM64_REG_ESR_EL1, &dfsr_val)) {
33
- }
38
- ARMCPU *cpu = ARM_CPU(cs);
34
assert(dnan_pattern != 0);
39
CPUARMState *env = &cpu->env;
35
40
int aarch64_mode = arm_feature(env, ARM_FEATURE_AARCH64);
36
sign = dnan_pattern >> 7;
41
int lpae = 0;
42
@@ -XXX,XX +XXX,XX @@ void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
43
* an IMPLEMENTATION DEFINED exception (for 32-bit EL1)
44
*/
45
if (!arm_feature(env, ARM_FEATURE_AARCH64) &&
46
- unlikely(!kvm_arm_verify_ext_dabt_pending(cs))) {
47
+ unlikely(!kvm_arm_verify_ext_dabt_pending(cpu))) {
48
49
error_report("Data abort exception with no valid ISS generated by "
50
"guest memory access. KVM unable to emulate faulting "
51
--
37
--
52
2.34.1
38
2.34.1
53
54
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Inline pickNaNMulAdd into its only caller. This makes
4
one assert redundant with the immediately preceding IF.
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-3-richard.henderson@linaro.org
9
[PMM: keep comment from old code in new location]
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
11
---
9
target/arm/kvm_arm.h | 22 ----
12
fpu/softfloat-parts.c.inc | 41 +++++++++++++++++++++++++-
10
target/arm/kvm.c | 265 +++++++++++++++++++++++++++++++++++++++++++
13
fpu/softfloat-specialize.c.inc | 54 ----------------------------------
11
target/arm/kvm64.c | 254 -----------------------------------------
14
2 files changed, 40 insertions(+), 55 deletions(-)
12
3 files changed, 265 insertions(+), 276 deletions(-)
13
15
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
18
--- a/fpu/softfloat-parts.c.inc
17
+++ b/target/arm/kvm_arm.h
19
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
19
*/
21
}
20
void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
22
21
23
if (s->default_nan_mode) {
22
-/**
24
+ /*
23
- * ARMHostCPUFeatures: information about the host CPU (identified
25
+ * We guarantee not to require the target to tell us how to
24
- * by asking the host kernel)
26
+ * pick a NaN if we're always returning the default NaN.
25
- */
27
+ * But if we're not in default-NaN mode then the target must
26
-typedef struct ARMHostCPUFeatures {
28
+ * specify.
27
- ARMISARegisters isar;
29
+ */
28
- uint64_t features;
30
which = 3;
29
- uint32_t target;
31
+ } else if (infzero) {
30
- const char *dtb_compatible;
32
+ /*
31
-} ARMHostCPUFeatures;
33
+ * Inf * 0 + NaN -- some implementations return the
32
-
34
+ * default NaN here, and some return the input NaN.
33
-/**
35
+ */
34
- * kvm_arm_get_host_cpu_features:
36
+ switch (s->float_infzeronan_rule) {
35
- * @ahcf: ARMHostCPUClass to fill in
37
+ case float_infzeronan_dnan_never:
36
- *
38
+ which = 2;
37
- * Probe the capabilities of the host kernel's preferred CPU and fill
39
+ break;
38
- * in the ARMHostCPUClass struct accordingly.
40
+ case float_infzeronan_dnan_always:
39
- *
41
+ which = 3;
40
- * Returns true on success and false otherwise.
42
+ break;
41
- */
43
+ case float_infzeronan_dnan_if_qnan:
42
-bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
44
+ which = is_qnan(c->cls) ? 3 : 2;
43
-
45
+ break;
44
/**
46
+ default:
45
* kvm_arm_sve_get_vls:
47
+ g_assert_not_reached();
46
* @cs: CPUState
48
+ }
47
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
49
} else {
50
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
51
+ FloatClass cls[3] = { a->cls, b->cls, c->cls };
52
+ Float3NaNPropRule rule = s->float_3nan_prop_rule;
53
+
54
+ assert(rule != float_3nan_prop_none);
55
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
56
+ /* We have at least one SNaN input and should prefer it */
57
+ do {
58
+ which = rule & R_3NAN_1ST_MASK;
59
+ rule >>= R_3NAN_1ST_LENGTH;
60
+ } while (!is_snan(cls[which]));
61
+ } else {
62
+ do {
63
+ which = rule & R_3NAN_1ST_MASK;
64
+ rule >>= R_3NAN_1ST_LENGTH;
65
+ } while (!is_nan(cls[which]));
66
+ }
67
}
68
69
if (which == 3) {
70
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
48
index XXXXXXX..XXXXXXX 100644
71
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/kvm.c
72
--- a/fpu/softfloat-specialize.c.inc
50
+++ b/target/arm/kvm.c
73
+++ b/fpu/softfloat-specialize.c.inc
51
@@ -XXX,XX +XXX,XX @@ static bool cap_has_mp_state;
74
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
52
static bool cap_has_inject_serror_esr;
53
static bool cap_has_inject_ext_dabt;
54
55
+/**
56
+ * ARMHostCPUFeatures: information about the host CPU (identified
57
+ * by asking the host kernel)
58
+ */
59
+typedef struct ARMHostCPUFeatures {
60
+ ARMISARegisters isar;
61
+ uint64_t features;
62
+ uint32_t target;
63
+ const char *dtb_compatible;
64
+} ARMHostCPUFeatures;
65
+
66
static ARMHostCPUFeatures arm_host_cpu_features;
67
68
int kvm_arm_vcpu_init(CPUState *cs)
69
@@ -XXX,XX +XXX,XX @@ void kvm_arm_destroy_scratch_host_vcpu(int *fdarray)
70
}
75
}
71
}
76
}
72
77
73
+static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
78
-/*----------------------------------------------------------------------------
74
+{
79
-| Select which NaN to propagate for a three-input operation.
75
+ uint64_t ret;
80
-| For the moment we assume that no CPU needs the 'larger significand'
76
+ struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)&ret };
81
-| information.
77
+ int err;
82
-| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
78
+
83
-*----------------------------------------------------------------------------*/
79
+ assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
84
-static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
80
+ err = ioctl(fd, KVM_GET_ONE_REG, &idreg);
85
- bool infzero, bool have_snan, float_status *status)
81
+ if (err < 0) {
82
+ return -1;
83
+ }
84
+ *pret = ret;
85
+ return 0;
86
+}
87
+
88
+static int read_sys_reg64(int fd, uint64_t *pret, uint64_t id)
89
+{
90
+ struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)pret };
91
+
92
+ assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
93
+ return ioctl(fd, KVM_GET_ONE_REG, &idreg);
94
+}
95
+
96
+static bool kvm_arm_pauth_supported(void)
97
+{
98
+ return (kvm_check_extension(kvm_state, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
99
+ kvm_check_extension(kvm_state, KVM_CAP_ARM_PTRAUTH_GENERIC));
100
+}
101
+
102
+static bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
103
+{
104
+ /* Identify the feature bits corresponding to the host CPU, and
105
+ * fill out the ARMHostCPUClass fields accordingly. To do this
106
+ * we have to create a scratch VM, create a single CPU inside it,
107
+ * and then query that CPU for the relevant ID registers.
108
+ */
109
+ int fdarray[3];
110
+ bool sve_supported;
111
+ bool pmu_supported = false;
112
+ uint64_t features = 0;
113
+ int err;
114
+
115
+ /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
116
+ * we know these will only support creating one kind of guest CPU,
117
+ * which is its preferred CPU type. Fortunately these old kernels
118
+ * support only a very limited number of CPUs.
119
+ */
120
+ static const uint32_t cpus_to_try[] = {
121
+ KVM_ARM_TARGET_AEM_V8,
122
+ KVM_ARM_TARGET_FOUNDATION_V8,
123
+ KVM_ARM_TARGET_CORTEX_A57,
124
+ QEMU_KVM_ARM_TARGET_NONE
125
+ };
126
+ /*
127
+ * target = -1 informs kvm_arm_create_scratch_host_vcpu()
128
+ * to use the preferred target
129
+ */
130
+ struct kvm_vcpu_init init = { .target = -1, };
131
+
132
+ /*
133
+ * Ask for SVE if supported, so that we can query ID_AA64ZFR0,
134
+ * which is otherwise RAZ.
135
+ */
136
+ sve_supported = kvm_arm_sve_supported();
137
+ if (sve_supported) {
138
+ init.features[0] |= 1 << KVM_ARM_VCPU_SVE;
139
+ }
140
+
141
+ /*
142
+ * Ask for Pointer Authentication if supported, so that we get
143
+ * the unsanitized field values for AA64ISAR1_EL1.
144
+ */
145
+ if (kvm_arm_pauth_supported()) {
146
+ init.features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
147
+ 1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
148
+ }
149
+
150
+ if (kvm_arm_pmu_supported()) {
151
+ init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
152
+ pmu_supported = true;
153
+ }
154
+
155
+ if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
156
+ return false;
157
+ }
158
+
159
+ ahcf->target = init.target;
160
+ ahcf->dtb_compatible = "arm,arm-v8";
161
+
162
+ err = read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr0,
163
+ ARM64_SYS_REG(3, 0, 0, 4, 0));
164
+ if (unlikely(err < 0)) {
165
+ /*
166
+ * Before v4.15, the kernel only exposed a limited number of system
167
+ * registers, not including any of the interesting AArch64 ID regs.
168
+ * For the most part we could leave these fields as zero with minimal
169
+ * effect, since this does not affect the values seen by the guest.
170
+ *
171
+ * However, it could cause problems down the line for QEMU,
172
+ * so provide a minimal v8.0 default.
173
+ *
174
+ * ??? Could read MIDR and use knowledge from cpu64.c.
175
+ * ??? Could map a page of memory into our temp guest and
176
+ * run the tiniest of hand-crafted kernels to extract
177
+ * the values seen by the guest.
178
+ * ??? Either of these sounds like too much effort just
179
+ * to work around running a modern host kernel.
180
+ */
181
+ ahcf->isar.id_aa64pfr0 = 0x00000011; /* EL1&0, AArch64 only */
182
+ err = 0;
183
+ } else {
184
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr1,
185
+ ARM64_SYS_REG(3, 0, 0, 4, 1));
186
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64smfr0,
187
+ ARM64_SYS_REG(3, 0, 0, 4, 5));
188
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr0,
189
+ ARM64_SYS_REG(3, 0, 0, 5, 0));
190
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr1,
191
+ ARM64_SYS_REG(3, 0, 0, 5, 1));
192
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar0,
193
+ ARM64_SYS_REG(3, 0, 0, 6, 0));
194
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar1,
195
+ ARM64_SYS_REG(3, 0, 0, 6, 1));
196
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar2,
197
+ ARM64_SYS_REG(3, 0, 0, 6, 2));
198
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr0,
199
+ ARM64_SYS_REG(3, 0, 0, 7, 0));
200
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr1,
201
+ ARM64_SYS_REG(3, 0, 0, 7, 1));
202
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr2,
203
+ ARM64_SYS_REG(3, 0, 0, 7, 2));
204
+
205
+ /*
206
+ * Note that if AArch32 support is not present in the host,
207
+ * the AArch32 sysregs are present to be read, but will
208
+ * return UNKNOWN values. This is neither better nor worse
209
+ * than skipping the reads and leaving 0, as we must avoid
210
+ * considering the values in every case.
211
+ */
212
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr0,
213
+ ARM64_SYS_REG(3, 0, 0, 1, 0));
214
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr1,
215
+ ARM64_SYS_REG(3, 0, 0, 1, 1));
216
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr0,
217
+ ARM64_SYS_REG(3, 0, 0, 1, 2));
218
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr0,
219
+ ARM64_SYS_REG(3, 0, 0, 1, 4));
220
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr1,
221
+ ARM64_SYS_REG(3, 0, 0, 1, 5));
222
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr2,
223
+ ARM64_SYS_REG(3, 0, 0, 1, 6));
224
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr3,
225
+ ARM64_SYS_REG(3, 0, 0, 1, 7));
226
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
227
+ ARM64_SYS_REG(3, 0, 0, 2, 0));
228
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
229
+ ARM64_SYS_REG(3, 0, 0, 2, 1));
230
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar2,
231
+ ARM64_SYS_REG(3, 0, 0, 2, 2));
232
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar3,
233
+ ARM64_SYS_REG(3, 0, 0, 2, 3));
234
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar4,
235
+ ARM64_SYS_REG(3, 0, 0, 2, 4));
236
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar5,
237
+ ARM64_SYS_REG(3, 0, 0, 2, 5));
238
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr4,
239
+ ARM64_SYS_REG(3, 0, 0, 2, 6));
240
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar6,
241
+ ARM64_SYS_REG(3, 0, 0, 2, 7));
242
+
243
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr0,
244
+ ARM64_SYS_REG(3, 0, 0, 3, 0));
245
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr1,
246
+ ARM64_SYS_REG(3, 0, 0, 3, 1));
247
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr2,
248
+ ARM64_SYS_REG(3, 0, 0, 3, 2));
249
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr2,
250
+ ARM64_SYS_REG(3, 0, 0, 3, 4));
251
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr1,
252
+ ARM64_SYS_REG(3, 0, 0, 3, 5));
253
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr5,
254
+ ARM64_SYS_REG(3, 0, 0, 3, 6));
255
+
256
+ /*
257
+ * DBGDIDR is a bit complicated because the kernel doesn't
258
+ * provide an accessor for it in 64-bit mode, which is what this
259
+ * scratch VM is in, and there's no architected "64-bit sysreg
260
+ * which reads the same as the 32-bit register" the way there is
261
+ * for other ID registers. Instead we synthesize a value from the
262
+ * AArch64 ID_AA64DFR0, the same way the kernel code in
263
+ * arch/arm64/kvm/sys_regs.c:trap_dbgidr() does.
264
+ * We only do this if the CPU supports AArch32 at EL1.
265
+ */
266
+ if (FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL1) >= 2) {
267
+ int wrps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, WRPS);
268
+ int brps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, BRPS);
269
+ int ctx_cmps =
270
+ FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS);
271
+ int version = 6; /* ARMv8 debug architecture */
272
+ bool has_el3 =
273
+ !!FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL3);
274
+ uint32_t dbgdidr = 0;
275
+
276
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, WRPS, wrps);
277
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, BRPS, brps);
278
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, CTX_CMPS, ctx_cmps);
279
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, VERSION, version);
280
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, NSUHD_IMP, has_el3);
281
+ dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, SE_IMP, has_el3);
282
+ dbgdidr |= (1 << 15); /* RES1 bit */
283
+ ahcf->isar.dbgdidr = dbgdidr;
284
+ }
285
+
286
+ if (pmu_supported) {
287
+ /* PMCR_EL0 is only accessible if the vCPU has feature PMU_V3 */
288
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.reset_pmcr_el0,
289
+ ARM64_SYS_REG(3, 3, 9, 12, 0));
290
+ }
291
+
292
+ if (sve_supported) {
293
+ /*
294
+ * There is a range of kernels between kernel commit 73433762fcae
295
+ * and f81cb2c3ad41 which have a bug where the kernel doesn't
296
+ * expose SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has
297
+ * enabled SVE support, which resulted in an error rather than RAZ.
298
+ * So only read the register if we set KVM_ARM_VCPU_SVE above.
299
+ */
300
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
301
+ ARM64_SYS_REG(3, 0, 0, 4, 4));
302
+ }
303
+ }
304
+
305
+ kvm_arm_destroy_scratch_host_vcpu(fdarray);
306
+
307
+ if (err < 0) {
308
+ return false;
309
+ }
310
+
311
+ /*
312
+ * We can assume any KVM supporting CPU is at least a v8
313
+ * with VFPv4+Neon; this in turn implies most of the other
314
+ * feature bits.
315
+ */
316
+ features |= 1ULL << ARM_FEATURE_V8;
317
+ features |= 1ULL << ARM_FEATURE_NEON;
318
+ features |= 1ULL << ARM_FEATURE_AARCH64;
319
+ features |= 1ULL << ARM_FEATURE_PMU;
320
+ features |= 1ULL << ARM_FEATURE_GENERIC_TIMER;
321
+
322
+ ahcf->features = features;
323
+
324
+ return true;
325
+}
326
+
327
void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
328
{
329
CPUARMState *env = &cpu->env;
330
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
331
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/kvm64.c
333
+++ b/target/arm/kvm64.c
334
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa)
335
}
336
}
337
338
-static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
339
-{
86
-{
340
- uint64_t ret;
87
- FloatClass cls[3] = { a_cls, b_cls, c_cls };
341
- struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)&ret };
88
- Float3NaNPropRule rule = status->float_3nan_prop_rule;
342
- int err;
89
- int which;
343
-
344
- assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
345
- err = ioctl(fd, KVM_GET_ONE_REG, &idreg);
346
- if (err < 0) {
347
- return -1;
348
- }
349
- *pret = ret;
350
- return 0;
351
-}
352
-
353
-static int read_sys_reg64(int fd, uint64_t *pret, uint64_t id)
354
-{
355
- struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)pret };
356
-
357
- assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
358
- return ioctl(fd, KVM_GET_ONE_REG, &idreg);
359
-}
360
-
361
-static bool kvm_arm_pauth_supported(void)
362
-{
363
- return (kvm_check_extension(kvm_state, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
364
- kvm_check_extension(kvm_state, KVM_CAP_ARM_PTRAUTH_GENERIC));
365
-}
366
-
367
-bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
368
-{
369
- /* Identify the feature bits corresponding to the host CPU, and
370
- * fill out the ARMHostCPUClass fields accordingly. To do this
371
- * we have to create a scratch VM, create a single CPU inside it,
372
- * and then query that CPU for the relevant ID registers.
373
- */
374
- int fdarray[3];
375
- bool sve_supported;
376
- bool pmu_supported = false;
377
- uint64_t features = 0;
378
- int err;
379
-
380
- /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
381
- * we know these will only support creating one kind of guest CPU,
382
- * which is its preferred CPU type. Fortunately these old kernels
383
- * support only a very limited number of CPUs.
384
- */
385
- static const uint32_t cpus_to_try[] = {
386
- KVM_ARM_TARGET_AEM_V8,
387
- KVM_ARM_TARGET_FOUNDATION_V8,
388
- KVM_ARM_TARGET_CORTEX_A57,
389
- QEMU_KVM_ARM_TARGET_NONE
390
- };
391
- /*
392
- * target = -1 informs kvm_arm_create_scratch_host_vcpu()
393
- * to use the preferred target
394
- */
395
- struct kvm_vcpu_init init = { .target = -1, };
396
-
90
-
397
- /*
91
- /*
398
- * Ask for SVE if supported, so that we can query ID_AA64ZFR0,
92
- * We guarantee not to require the target to tell us how to
399
- * which is otherwise RAZ.
93
- * pick a NaN if we're always returning the default NaN.
94
- * But if we're not in default-NaN mode then the target must
95
- * specify.
400
- */
96
- */
401
- sve_supported = kvm_arm_sve_supported();
97
- assert(!status->default_nan_mode);
402
- if (sve_supported) {
403
- init.features[0] |= 1 << KVM_ARM_VCPU_SVE;
404
- }
405
-
98
-
406
- /*
99
- if (infzero) {
407
- * Ask for Pointer Authentication if supported, so that we get
408
- * the unsanitized field values for AA64ISAR1_EL1.
409
- */
410
- if (kvm_arm_pauth_supported()) {
411
- init.features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
412
- 1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
413
- }
414
-
415
- if (kvm_arm_pmu_supported()) {
416
- init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
417
- pmu_supported = true;
418
- }
419
-
420
- if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
421
- return false;
422
- }
423
-
424
- ahcf->target = init.target;
425
- ahcf->dtb_compatible = "arm,arm-v8";
426
-
427
- err = read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr0,
428
- ARM64_SYS_REG(3, 0, 0, 4, 0));
429
- if (unlikely(err < 0)) {
430
- /*
100
- /*
431
- * Before v4.15, the kernel only exposed a limited number of system
101
- * Inf * 0 + NaN -- some implementations return the default NaN here,
432
- * registers, not including any of the interesting AArch64 ID regs.
102
- * and some return the input NaN.
433
- * For the most part we could leave these fields as zero with minimal
434
- * effect, since this does not affect the values seen by the guest.
435
- *
436
- * However, it could cause problems down the line for QEMU,
437
- * so provide a minimal v8.0 default.
438
- *
439
- * ??? Could read MIDR and use knowledge from cpu64.c.
440
- * ??? Could map a page of memory into our temp guest and
441
- * run the tiniest of hand-crafted kernels to extract
442
- * the values seen by the guest.
443
- * ??? Either of these sounds like too much effort just
444
- * to work around running a modern host kernel.
445
- */
103
- */
446
- ahcf->isar.id_aa64pfr0 = 0x00000011; /* EL1&0, AArch64 only */
104
- switch (status->float_infzeronan_rule) {
447
- err = 0;
105
- case float_infzeronan_dnan_never:
448
- } else {
106
- return 2;
449
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr1,
107
- case float_infzeronan_dnan_always:
450
- ARM64_SYS_REG(3, 0, 0, 4, 1));
108
- return 3;
451
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64smfr0,
109
- case float_infzeronan_dnan_if_qnan:
452
- ARM64_SYS_REG(3, 0, 0, 4, 5));
110
- return is_qnan(c_cls) ? 3 : 2;
453
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr0,
111
- default:
454
- ARM64_SYS_REG(3, 0, 0, 5, 0));
112
- g_assert_not_reached();
455
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr1,
456
- ARM64_SYS_REG(3, 0, 0, 5, 1));
457
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar0,
458
- ARM64_SYS_REG(3, 0, 0, 6, 0));
459
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar1,
460
- ARM64_SYS_REG(3, 0, 0, 6, 1));
461
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar2,
462
- ARM64_SYS_REG(3, 0, 0, 6, 2));
463
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr0,
464
- ARM64_SYS_REG(3, 0, 0, 7, 0));
465
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr1,
466
- ARM64_SYS_REG(3, 0, 0, 7, 1));
467
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64mmfr2,
468
- ARM64_SYS_REG(3, 0, 0, 7, 2));
469
-
470
- /*
471
- * Note that if AArch32 support is not present in the host,
472
- * the AArch32 sysregs are present to be read, but will
473
- * return UNKNOWN values. This is neither better nor worse
474
- * than skipping the reads and leaving 0, as we must avoid
475
- * considering the values in every case.
476
- */
477
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr0,
478
- ARM64_SYS_REG(3, 0, 0, 1, 0));
479
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr1,
480
- ARM64_SYS_REG(3, 0, 0, 1, 1));
481
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr0,
482
- ARM64_SYS_REG(3, 0, 0, 1, 2));
483
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr0,
484
- ARM64_SYS_REG(3, 0, 0, 1, 4));
485
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr1,
486
- ARM64_SYS_REG(3, 0, 0, 1, 5));
487
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr2,
488
- ARM64_SYS_REG(3, 0, 0, 1, 6));
489
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr3,
490
- ARM64_SYS_REG(3, 0, 0, 1, 7));
491
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
492
- ARM64_SYS_REG(3, 0, 0, 2, 0));
493
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
494
- ARM64_SYS_REG(3, 0, 0, 2, 1));
495
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar2,
496
- ARM64_SYS_REG(3, 0, 0, 2, 2));
497
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar3,
498
- ARM64_SYS_REG(3, 0, 0, 2, 3));
499
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar4,
500
- ARM64_SYS_REG(3, 0, 0, 2, 4));
501
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar5,
502
- ARM64_SYS_REG(3, 0, 0, 2, 5));
503
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr4,
504
- ARM64_SYS_REG(3, 0, 0, 2, 6));
505
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar6,
506
- ARM64_SYS_REG(3, 0, 0, 2, 7));
507
-
508
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr0,
509
- ARM64_SYS_REG(3, 0, 0, 3, 0));
510
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr1,
511
- ARM64_SYS_REG(3, 0, 0, 3, 1));
512
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr2,
513
- ARM64_SYS_REG(3, 0, 0, 3, 2));
514
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_pfr2,
515
- ARM64_SYS_REG(3, 0, 0, 3, 4));
516
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr1,
517
- ARM64_SYS_REG(3, 0, 0, 3, 5));
518
- err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr5,
519
- ARM64_SYS_REG(3, 0, 0, 3, 6));
520
-
521
- /*
522
- * DBGDIDR is a bit complicated because the kernel doesn't
523
- * provide an accessor for it in 64-bit mode, which is what this
524
- * scratch VM is in, and there's no architected "64-bit sysreg
525
- * which reads the same as the 32-bit register" the way there is
526
- * for other ID registers. Instead we synthesize a value from the
527
- * AArch64 ID_AA64DFR0, the same way the kernel code in
528
- * arch/arm64/kvm/sys_regs.c:trap_dbgidr() does.
529
- * We only do this if the CPU supports AArch32 at EL1.
530
- */
531
- if (FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL1) >= 2) {
532
- int wrps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, WRPS);
533
- int brps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, BRPS);
534
- int ctx_cmps =
535
- FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS);
536
- int version = 6; /* ARMv8 debug architecture */
537
- bool has_el3 =
538
- !!FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL3);
539
- uint32_t dbgdidr = 0;
540
-
541
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, WRPS, wrps);
542
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, BRPS, brps);
543
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, CTX_CMPS, ctx_cmps);
544
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, VERSION, version);
545
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, NSUHD_IMP, has_el3);
546
- dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, SE_IMP, has_el3);
547
- dbgdidr |= (1 << 15); /* RES1 bit */
548
- ahcf->isar.dbgdidr = dbgdidr;
549
- }
550
-
551
- if (pmu_supported) {
552
- /* PMCR_EL0 is only accessible if the vCPU has feature PMU_V3 */
553
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.reset_pmcr_el0,
554
- ARM64_SYS_REG(3, 3, 9, 12, 0));
555
- }
556
-
557
- if (sve_supported) {
558
- /*
559
- * There is a range of kernels between kernel commit 73433762fcae
560
- * and f81cb2c3ad41 which have a bug where the kernel doesn't
561
- * expose SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has
562
- * enabled SVE support, which resulted in an error rather than RAZ.
563
- * So only read the register if we set KVM_ARM_VCPU_SVE above.
564
- */
565
- err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
566
- ARM64_SYS_REG(3, 0, 0, 4, 4));
567
- }
113
- }
568
- }
114
- }
569
-
115
-
570
- kvm_arm_destroy_scratch_host_vcpu(fdarray);
116
- assert(rule != float_3nan_prop_none);
571
-
117
- if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
572
- if (err < 0) {
118
- /* We have at least one SNaN input and should prefer it */
573
- return false;
119
- do {
120
- which = rule & R_3NAN_1ST_MASK;
121
- rule >>= R_3NAN_1ST_LENGTH;
122
- } while (!is_snan(cls[which]));
123
- } else {
124
- do {
125
- which = rule & R_3NAN_1ST_MASK;
126
- rule >>= R_3NAN_1ST_LENGTH;
127
- } while (!is_nan(cls[which]));
574
- }
128
- }
575
-
129
- return which;
576
- /*
577
- * We can assume any KVM supporting CPU is at least a v8
578
- * with VFPv4+Neon; this in turn implies most of the other
579
- * feature bits.
580
- */
581
- features |= 1ULL << ARM_FEATURE_V8;
582
- features |= 1ULL << ARM_FEATURE_NEON;
583
- features |= 1ULL << ARM_FEATURE_AARCH64;
584
- features |= 1ULL << ARM_FEATURE_PMU;
585
- features |= 1ULL << ARM_FEATURE_GENERIC_TIMER;
586
-
587
- ahcf->features = features;
588
-
589
- return true;
590
-}
130
-}
591
-
131
-
592
void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
132
/*----------------------------------------------------------------------------
593
{
133
| Returns 1 if the double-precision floating-point value `a' is a quiet
594
bool has_steal_time = kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
134
| NaN; otherwise returns 0.
595
--
2.34.1
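
A compact way to read the inf * 0 + NaN rule switch in the hunk above
(an illustrative enum and helper only, not the softfloat API): for fused
multiply-add inputs Inf * 0 + c with c a NaN, the rule decides between
returning the default NaN and propagating c.

typedef enum {
    dnan_never,    /* propagate the input NaN c, as xtensa chooses earlier in the series */
    dnan_always,   /* always return the default NaN */
    dnan_if_qnan,  /* default NaN only when c is a quiet NaN */
} infzeronan_rule;

/* Returns nonzero when the operation should produce the default NaN */
int use_default_nan(infzeronan_rule rule, int c_is_quiet)
{
    switch (rule) {
    case dnan_never:
        return 0;
    case dnan_always:
        return 1;
    case dnan_if_qnan:
        return c_is_quiet;
    }
    return 1;
}

So, for example, use_default_nan(dnan_if_qnan, 1) is 1, matching the
float_infzeronan_dnan_if_qnan case in the switch.
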
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
There is no need to do this in kvm_arch_init_vcpu per vcpu.
3
Remove "3" as a special case for which and simply
4
Inline kvm_arm_init_serror_injection rather than keep separate.
4
branch to return the desired value.
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Gavin Shan <gshan@redhat.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-4-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/kvm_arm.h | 8 --------
11
fpu/softfloat-parts.c.inc | 20 ++++++++++----------
13
target/arm/kvm.c | 13 ++++---------
12
1 file changed, 10 insertions(+), 10 deletions(-)
14
2 files changed, 4 insertions(+), 17 deletions(-)
15
13
16
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm_arm.h
16
--- a/fpu/softfloat-parts.c.inc
19
+++ b/target/arm/kvm_arm.h
17
+++ b/fpu/softfloat-parts.c.inc
20
@@ -XXX,XX +XXX,XX @@ void kvm_arm_cpu_post_load(ARMCPU *cpu);
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
21
*/
19
* But if we're not in default-NaN mode then the target must
22
void kvm_arm_reset_vcpu(ARMCPU *cpu);
20
* specify.
23
21
*/
24
-/**
22
- which = 3;
25
- * kvm_arm_init_serror_injection:
23
+ goto default_nan;
26
- * @cs: CPUState
24
} else if (infzero) {
27
- *
25
/*
28
- * Check whether KVM can set guest SError syndrome.
26
* Inf * 0 + NaN -- some implementations return the
29
- */
27
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
30
-void kvm_arm_init_serror_injection(CPUState *cs);
28
*/
29
switch (s->float_infzeronan_rule) {
30
case float_infzeronan_dnan_never:
31
- which = 2;
32
break;
33
case float_infzeronan_dnan_always:
34
- which = 3;
35
- break;
36
+ goto default_nan;
37
case float_infzeronan_dnan_if_qnan:
38
- which = is_qnan(c->cls) ? 3 : 2;
39
+ if (is_qnan(c->cls)) {
40
+ goto default_nan;
41
+ }
42
break;
43
default:
44
g_assert_not_reached();
45
}
46
+ which = 2;
47
} else {
48
FloatClass cls[3] = { a->cls, b->cls, c->cls };
49
Float3NaNPropRule rule = s->float_3nan_prop_rule;
50
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
51
}
52
}
53
54
- if (which == 3) {
55
- parts_default_nan(a, s);
56
- return a;
57
- }
31
-
58
-
32
/**
59
switch (which) {
33
* kvm_get_vcpu_events:
60
case 0:
34
* @cpu: ARMCPU
61
break;
35
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
62
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
36
index XXXXXXX..XXXXXXX 100644
63
parts_silence_nan(a, s);
37
--- a/target/arm/kvm.c
64
}
38
+++ b/target/arm/kvm.c
65
return a;
39
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
66
+
40
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_FINALIZE, &feature);
67
+ default_nan:
68
+ parts_default_nan(a, s);
69
+ return a;
41
}
70
}
42
71
43
-void kvm_arm_init_serror_injection(CPUState *cs)
72
/*
44
-{
45
- cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
46
- KVM_CAP_ARM_INJECT_SERROR_ESR);
47
-}
48
-
49
bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
50
int *fdarray,
51
struct kvm_vcpu_init *init)
52
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
53
54
cap_has_mp_state = kvm_check_extension(s, KVM_CAP_MP_STATE);
55
56
+ /* Check whether user space can specify guest syndrome value */
57
+ cap_has_inject_serror_esr =
58
+ kvm_check_extension(s, KVM_CAP_ARM_INJECT_SERROR_ESR);
59
+
60
if (ms->smp.cpus > 256 &&
61
!kvm_check_extension(s, KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)) {
62
error_report("Using more than 256 vcpus requires a host kernel "
63
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
64
}
65
cpu->mp_affinity = mpidr & ARM64_AFFINITY_MASK;
66
67
- /* Check whether user space can specify guest syndrome value */
68
- kvm_arm_init_serror_injection(cs);
69
-
70
return kvm_arm_init_cpreg_list(cpu);
71
}
72
73
--
73
--
74
2.34.1
74
2.34.1
75
75
76
76
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Assign the pointer return value to 'a' directly,
4
rather than going through an intermediary index.
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-5-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/kvm_arm.h | 8 --------
11
fpu/softfloat-parts.c.inc | 32 ++++++++++----------------------
10
target/arm/kvm.c | 8 +++++++-
12
1 file changed, 10 insertions(+), 22 deletions(-)
11
target/arm/kvm64.c | 12 ------------
12
3 files changed, 7 insertions(+), 21 deletions(-)
13
13
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
16
--- a/fpu/softfloat-parts.c.inc
17
+++ b/target/arm/kvm_arm.h
17
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
19
#define KVM_ARM_VGIC_V2 (1 << 0)
19
FloatPartsN *c, float_status *s,
20
#define KVM_ARM_VGIC_V3 (1 << 1)
20
int ab_mask, int abc_mask)
21
21
{
22
-/**
22
- int which;
23
- * kvm_arm_init_debug() - initialize guest debug capabilities
23
bool infzero = (ab_mask == float_cmask_infzero);
24
- * @s: KVMState
24
bool have_snan = (abc_mask & float_cmask_snan);
25
- *
25
+ FloatPartsN *ret;
26
- * Should be called only once before using guest debug capabilities.
26
27
- */
27
if (unlikely(have_snan)) {
28
-void kvm_arm_init_debug(KVMState *s);
28
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
29
-
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
30
/**
30
default:
31
* kvm_arm_vcpu_init:
31
g_assert_not_reached();
32
* @cs: CPUState
32
}
33
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
33
- which = 2;
34
index XXXXXXX..XXXXXXX 100644
34
+ ret = c;
35
--- a/target/arm/kvm.c
35
} else {
36
+++ b/target/arm/kvm.c
36
- FloatClass cls[3] = { a->cls, b->cls, c->cls };
37
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
37
+ FloatPartsN *val[3] = { a, b, c };
38
Float3NaNPropRule rule = s->float_3nan_prop_rule;
39
40
assert(rule != float_3nan_prop_none);
41
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
42
/* We have at least one SNaN input and should prefer it */
43
do {
44
- which = rule & R_3NAN_1ST_MASK;
45
+ ret = val[rule & R_3NAN_1ST_MASK];
46
rule >>= R_3NAN_1ST_LENGTH;
47
- } while (!is_snan(cls[which]));
48
+ } while (!is_snan(ret->cls));
49
} else {
50
do {
51
- which = rule & R_3NAN_1ST_MASK;
52
+ ret = val[rule & R_3NAN_1ST_MASK];
53
rule >>= R_3NAN_1ST_LENGTH;
54
- } while (!is_nan(cls[which]));
55
+ } while (!is_nan(ret->cls));
38
}
56
}
39
}
57
}
40
58
41
- kvm_arm_init_debug(s);
59
- switch (which) {
42
+ max_hw_wps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_WPS);
60
- case 0:
43
+ hw_watchpoints = g_array_sized_new(true, true,
61
- break;
44
+ sizeof(HWWatchpoint), max_hw_wps);
62
- case 1:
45
+
63
- a = b;
46
+ max_hw_bps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_BPS);
64
- break;
47
+ hw_breakpoints = g_array_sized_new(true, true,
65
- case 2:
48
+ sizeof(HWBreakpoint), max_hw_bps);
66
- a = c;
49
67
- break;
50
return ret;
68
- default:
51
}
69
- g_assert_not_reached();
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
70
+ if (is_snan(ret->cls)) {
53
index XXXXXXX..XXXXXXX 100644
71
+ parts_silence_nan(ret, s);
54
--- a/target/arm/kvm64.c
72
}
55
+++ b/target/arm/kvm64.c
73
- if (is_snan(a->cls)) {
56
@@ -XXX,XX +XXX,XX @@
74
- parts_silence_nan(a, s);
57
#include "hw/acpi/ghes.h"
75
- }
58
76
- return a;
59
77
+ return ret;
60
-void kvm_arm_init_debug(KVMState *s)
78
61
-{
79
default_nan:
62
- max_hw_wps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_WPS);
80
parts_default_nan(a, s);
63
- hw_watchpoints = g_array_sized_new(true, true,
64
- sizeof(HWWatchpoint), max_hw_wps);
65
-
66
- max_hw_bps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_BPS);
67
- hw_breakpoints = g_array_sized_new(true, true,
68
- sizeof(HWBreakpoint), max_hw_bps);
69
- return;
70
-}
71
-
72
int kvm_arch_insert_hw_breakpoint(vaddr addr, vaddr len, int type)
73
{
74
switch (type) {
75
--
81
--
76
2.34.1
82
2.34.1
77
83
78
84
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This variable is not used or declared outside kvm-all.c.
3
While all indices into val[] should be in [0-2], the mask
4
applied is two bits. To help static analysis see there is
5
no possibility of read beyond the end of the array, pad the
6
array to 4 entries, with the final being (implicitly) NULL.
4
7
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Gavin Shan <gshan@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20241203203949.483774-6-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
accel/kvm/kvm-all.c | 2 +-
13
fpu/softfloat-parts.c.inc | 2 +-
12
1 file changed, 1 insertion(+), 1 deletion(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
15
14
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/accel/kvm/kvm-all.c
18
--- a/fpu/softfloat-parts.c.inc
17
+++ b/accel/kvm/kvm-all.c
19
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ bool kvm_allowed;
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
19
bool kvm_readonly_mem_allowed;
21
}
20
bool kvm_vm_attributes_allowed;
22
ret = c;
21
bool kvm_msi_use_devid;
23
} else {
22
-bool kvm_has_guest_debug;
24
- FloatPartsN *val[3] = { a, b, c };
23
+static bool kvm_has_guest_debug;
25
+ FloatPartsN *val[R_3NAN_1ST_MASK + 1] = { a, b, c };
24
static int kvm_sstep_flags;
26
Float3NaNPropRule rule = s->float_3nan_prop_rule;
25
static bool kvm_immediate_exit;
27
26
static hwaddr kvm_max_slot_size = ~0;
28
assert(rule != float_3nan_prop_none);
27
--
29
--
28
2.34.1
30
2.34.1
29
31
30
32
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This function is part of the public interface and
4
is not "specialized" to any target in any way.
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-7-richard.henderson@linaro.org
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/kvm_arm.h | 10 ----------
11
fpu/softfloat.c | 52 ++++++++++++++++++++++++++++++++++
10
target/arm/kvm.c | 24 ++++++++++++++++++++++++
12
fpu/softfloat-specialize.c.inc | 52 ----------------------------------
11
target/arm/kvm64.c | 17 -----------------
13
2 files changed, 52 insertions(+), 52 deletions(-)
12
3 files changed, 24 insertions(+), 27 deletions(-)
13
14
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
17
--- a/fpu/softfloat.c
17
+++ b/target/arm/kvm_arm.h
18
+++ b/fpu/softfloat.c
18
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit);
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
19
*/
20
*zExpPtr = 1 - shiftCount;
20
bool kvm_arm_hw_debug_active(CPUState *cs);
21
22
-/**
23
- * kvm_arm_copy_hw_debug_data:
24
- * @ptr: kvm_guest_debug_arch structure
25
- *
26
- * Copy the architecture specific debug registers into the
27
- * kvm_guest_debug ioctl structure.
28
- */
29
-struct kvm_guest_debug_arch;
30
-void kvm_arm_copy_hw_debug_data(struct kvm_guest_debug_arch *ptr);
31
-
32
#endif
33
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/kvm.c
36
+++ b/target/arm/kvm.c
37
@@ -XXX,XX +XXX,XX @@ int kvm_arch_process_async_events(CPUState *cs)
38
return 0;
39
}
21
}
40
22
41
+/**
23
+/*----------------------------------------------------------------------------
42
+ * kvm_arm_copy_hw_debug_data:
24
+| Takes two extended double-precision floating-point values `a' and `b', one
43
+ * @ptr: kvm_guest_debug_arch structure
25
+| of which is a NaN, and returns the appropriate NaN result. If either `a' or
44
+ *
26
+| `b' is a signaling NaN, the invalid exception is raised.
45
+ * Copy the architecture specific debug registers into the
27
+*----------------------------------------------------------------------------*/
46
+ * kvm_guest_debug ioctl structure.
28
+
47
+ */
29
+floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
48
+static void kvm_arm_copy_hw_debug_data(struct kvm_guest_debug_arch *ptr)
49
+{
30
+{
50
+ int i;
31
+ bool aIsLargerSignificand;
51
+ memset(ptr, 0, sizeof(struct kvm_guest_debug_arch));
32
+ FloatClass a_cls, b_cls;
52
+
33
+
53
+ for (i = 0; i < max_hw_wps; i++) {
34
+ /* This is not complete, but is good enough for pickNaN. */
54
+ HWWatchpoint *wp = get_hw_wp(i);
35
+ a_cls = (!floatx80_is_any_nan(a)
55
+ ptr->dbg_wcr[i] = wp->wcr;
36
+ ? float_class_normal
56
+ ptr->dbg_wvr[i] = wp->wvr;
37
+ : floatx80_is_signaling_nan(a, status)
38
+ ? float_class_snan
39
+ : float_class_qnan);
40
+ b_cls = (!floatx80_is_any_nan(b)
41
+ ? float_class_normal
42
+ : floatx80_is_signaling_nan(b, status)
43
+ ? float_class_snan
44
+ : float_class_qnan);
45
+
46
+ if (is_snan(a_cls) || is_snan(b_cls)) {
47
+ float_raise(float_flag_invalid, status);
57
+ }
48
+ }
58
+ for (i = 0; i < max_hw_bps; i++) {
49
+
59
+ HWBreakpoint *bp = get_hw_bp(i);
50
+ if (status->default_nan_mode) {
60
+ ptr->dbg_bcr[i] = bp->bcr;
51
+ return floatx80_default_nan(status);
61
+ ptr->dbg_bvr[i] = bp->bvr;
52
+ }
53
+
54
+ if (a.low < b.low) {
55
+ aIsLargerSignificand = 0;
56
+ } else if (b.low < a.low) {
57
+ aIsLargerSignificand = 1;
58
+ } else {
59
+ aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
60
+ }
61
+
62
+ if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
63
+ if (is_snan(b_cls)) {
64
+ return floatx80_silence_nan(b, status);
65
+ }
66
+ return b;
67
+ } else {
68
+ if (is_snan(a_cls)) {
69
+ return floatx80_silence_nan(a, status);
70
+ }
71
+ return a;
62
+ }
72
+ }
63
+}
73
+}
64
+
74
+
65
void kvm_arch_update_guest_debug(CPUState *cs, struct kvm_guest_debug *dbg)
75
/*----------------------------------------------------------------------------
66
{
76
| Takes an abstract floating-point value having sign `zSign', exponent `zExp',
67
if (kvm_sw_breakpoints_active(cs)) {
77
| and extended significand formed by the concatenation of `zSig0' and `zSig1',
68
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
78
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
69
index XXXXXXX..XXXXXXX 100644
79
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/kvm64.c
80
--- a/fpu/softfloat-specialize.c.inc
71
+++ b/target/arm/kvm64.c
81
+++ b/fpu/softfloat-specialize.c.inc
72
@@ -XXX,XX +XXX,XX @@ void kvm_arch_remove_all_hw_breakpoints(void)
82
@@ -XXX,XX +XXX,XX @@ floatx80 floatx80_silence_nan(floatx80 a, float_status *status)
73
}
83
return a;
74
}
84
}
75
85
76
-void kvm_arm_copy_hw_debug_data(struct kvm_guest_debug_arch *ptr)
86
-/*----------------------------------------------------------------------------
87
-| Takes two extended double-precision floating-point values `a' and `b', one
88
-| of which is a NaN, and returns the appropriate NaN result. If either `a' or
89
-| `b' is a signaling NaN, the invalid exception is raised.
90
-*----------------------------------------------------------------------------*/
91
-
92
-floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
77
-{
93
-{
78
- int i;
94
- bool aIsLargerSignificand;
79
- memset(ptr, 0, sizeof(struct kvm_guest_debug_arch));
95
- FloatClass a_cls, b_cls;
80
-
96
-
81
- for (i = 0; i < max_hw_wps; i++) {
97
- /* This is not complete, but is good enough for pickNaN. */
82
- HWWatchpoint *wp = get_hw_wp(i);
98
- a_cls = (!floatx80_is_any_nan(a)
83
- ptr->dbg_wcr[i] = wp->wcr;
99
- ? float_class_normal
84
- ptr->dbg_wvr[i] = wp->wvr;
100
- : floatx80_is_signaling_nan(a, status)
101
- ? float_class_snan
102
- : float_class_qnan);
103
- b_cls = (!floatx80_is_any_nan(b)
104
- ? float_class_normal
105
- : floatx80_is_signaling_nan(b, status)
106
- ? float_class_snan
107
- : float_class_qnan);
108
-
109
- if (is_snan(a_cls) || is_snan(b_cls)) {
110
- float_raise(float_flag_invalid, status);
85
- }
111
- }
86
- for (i = 0; i < max_hw_bps; i++) {
112
-
87
- HWBreakpoint *bp = get_hw_bp(i);
113
- if (status->default_nan_mode) {
88
- ptr->dbg_bcr[i] = bp->bcr;
114
- return floatx80_default_nan(status);
89
- ptr->dbg_bvr[i] = bp->bvr;
115
- }
116
-
117
- if (a.low < b.low) {
118
- aIsLargerSignificand = 0;
119
- } else if (b.low < a.low) {
120
- aIsLargerSignificand = 1;
121
- } else {
122
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
123
- }
124
-
125
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
126
- if (is_snan(b_cls)) {
127
- return floatx80_silence_nan(b, status);
128
- }
129
- return b;
130
- } else {
131
- if (is_snan(a_cls)) {
132
- return floatx80_silence_nan(a, status);
133
- }
134
- return a;
90
- }
135
- }
91
-}
136
-}
92
-
137
-
93
bool kvm_arm_hw_debug_active(CPUState *cs)
138
/*----------------------------------------------------------------------------
94
{
139
| Returns 1 if the quadruple-precision floating-point value `a' is a quiet
95
return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
140
| NaN; otherwise returns 0.
96
--
141
--
97
2.34.1
142
2.34.1
98
99
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Drop fprintfs and actually use the return values in the callers.
3
Unpacking and repacking the parts may be slightly more work
4
This is OK to do since commit 7191f24c7fcf, which added the
4
than we did before, but we get to reuse more code. For a
5
error-check to the generic accel/kvm functions that eventually
5
code path handling exceptional values, this is an improvement.
6
call into these ones.
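
A minimal sketch of the error-handling shape this moves to -- return the
raw result and let the generic caller report it -- using a toy stand-in
rather than the real kvm_vcpu_ioctl() call:

/* err_sketch.c: hand the error code back instead of swallowing it */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static int fake_ioctl(int fail) { return fail ? -EINVAL : 0; }

/* old shape: print, lose the errno, return -1 */
static int sync_old(int fail)
{
    int ret = fake_ioctl(fail);
    if (ret) {
        fprintf(stderr, "%s: failed %d/%s\n", __func__, ret, strerror(-ret));
        return -1;
    }
    return 0;
}

/* new shape: the generic caller gets the real return value */
static int sync_new(int fail)
{
    return fake_ioctl(fail);
}

int main(void)
{
    printf("old: %d, new: %d\n", sync_old(1), sync_new(1));
    return 0;
}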
7
6
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Gavin Shan <gshan@redhat.com>
8
Message-id: 20241203203949.483774-8-richard.henderson@linaro.org
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
[PMM: tweak commit message]
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
target/arm/kvm_arm.h | 20 --------------------
12
fpu/softfloat.c | 43 +++++--------------------------------------
16
target/arm/kvm.c | 23 ++++++-----------------
13
1 file changed, 5 insertions(+), 38 deletions(-)
17
2 files changed, 6 insertions(+), 37 deletions(-)
18
14
19
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm_arm.h
17
--- a/fpu/softfloat.c
22
+++ b/target/arm/kvm_arm.h
18
+++ b/fpu/softfloat.c
23
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void);
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
24
*/
20
25
int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa);
21
floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
26
22
{
27
-/**
23
- bool aIsLargerSignificand;
28
- * kvm_arm_sync_mpstate_to_kvm:
24
- FloatClass a_cls, b_cls;
29
- * @cpu: ARMCPU
25
+ FloatParts128 pa, pb, *pr;
30
- *
26
31
- * If supported set the KVM MP_STATE based on QEMU's model.
27
- /* This is not complete, but is good enough for pickNaN. */
32
- *
28
- a_cls = (!floatx80_is_any_nan(a)
33
- * Returns 0 on success and -1 on failure.
29
- ? float_class_normal
34
- */
30
- : floatx80_is_signaling_nan(a, status)
35
-int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu);
31
- ? float_class_snan
32
- : float_class_qnan);
33
- b_cls = (!floatx80_is_any_nan(b)
34
- ? float_class_normal
35
- : floatx80_is_signaling_nan(b, status)
36
- ? float_class_snan
37
- : float_class_qnan);
36
-
38
-
37
-/**
39
- if (is_snan(a_cls) || is_snan(b_cls)) {
38
- * kvm_arm_sync_mpstate_to_qemu:
40
- float_raise(float_flag_invalid, status);
39
- * @cpu: ARMCPU
41
- }
40
- *
41
- * If supported get the MP_STATE from KVM and store in QEMU's model.
42
- *
43
- * Returns 0 on success and aborts on failure.
44
- */
45
-int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu);
46
-
42
-
47
void kvm_arm_vm_state_change(void *opaque, bool running, RunState state);
43
- if (status->default_nan_mode) {
48
44
+ if (!floatx80_unpack_canonical(&pa, a, status) ||
49
int kvm_arm_vgic_probe(void);
45
+ !floatx80_unpack_canonical(&pb, b, status)) {
50
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
46
return floatx80_default_nan(status);
51
index XXXXXXX..XXXXXXX 100644
47
}
52
--- a/target/arm/kvm.c
48
53
+++ b/target/arm/kvm.c
49
- if (a.low < b.low) {
54
@@ -XXX,XX +XXX,XX @@ void kvm_arm_reset_vcpu(ARMCPU *cpu)
50
- aIsLargerSignificand = 0;
55
/*
51
- } else if (b.low < a.low) {
56
* Update KVM's MP_STATE based on what QEMU thinks it is
52
- aIsLargerSignificand = 1;
57
*/
53
- } else {
58
-int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu)
54
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
59
+static int kvm_arm_sync_mpstate_to_kvm(ARMCPU *cpu)
55
- }
60
{
56
-
61
if (cap_has_mp_state) {
57
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
62
struct kvm_mp_state mp_state = {
58
- if (is_snan(b_cls)) {
63
.mp_state = (cpu->power_state == PSCI_OFF) ?
59
- return floatx80_silence_nan(b, status);
64
KVM_MP_STATE_STOPPED : KVM_MP_STATE_RUNNABLE
65
};
66
- int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MP_STATE, &mp_state);
67
- if (ret) {
68
- fprintf(stderr, "%s: failed to set MP_STATE %d/%s\n",
69
- __func__, ret, strerror(-ret));
70
- return -1;
71
- }
60
- }
72
+ return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MP_STATE, &mp_state);
61
- return b;
73
}
62
- } else {
74
-
63
- if (is_snan(a_cls)) {
75
return 0;
64
- return floatx80_silence_nan(a, status);
65
- }
66
- return a;
67
- }
68
+ pr = parts_pick_nan(&pa, &pb, status);
69
+ return floatx80_round_pack_canonical(pr, status);
76
}
70
}
77
71
78
/*
72
/*----------------------------------------------------------------------------
79
* Sync the KVM MP_STATE into QEMU
80
*/
81
-int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
82
+static int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
83
{
84
if (cap_has_mp_state) {
85
struct kvm_mp_state mp_state;
86
int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MP_STATE, &mp_state);
87
if (ret) {
88
- fprintf(stderr, "%s: failed to get MP_STATE %d/%s\n",
89
- __func__, ret, strerror(-ret));
90
- abort();
91
+ return ret;
92
}
93
cpu->power_state = (mp_state.mp_state == KVM_MP_STATE_STOPPED) ?
94
PSCI_OFF : PSCI_ON;
95
}
96
-
97
return 0;
98
}
99
100
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
101
return ret;
102
}
103
104
- kvm_arm_sync_mpstate_to_kvm(cpu);
105
-
106
- return ret;
107
+ return kvm_arm_sync_mpstate_to_kvm(cpu);
108
}
109
110
static int kvm_arch_get_fpsimd(CPUState *cs)
111
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
112
*/
113
write_list_to_cpustate(cpu);
114
115
- kvm_arm_sync_mpstate_to_qemu(cpu);
116
+ ret = kvm_arm_sync_mpstate_to_qemu(cpu);
117
118
/* TODO: other registers */
119
return ret;
120
--
73
--
121
2.34.1
74
2.34.1
122
123
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Inline pickNaN into its only caller. This makes one assert
4
redundant with the immediately preceding if statement.
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-9-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/kvm_arm.h | 9 ---------
11
fpu/softfloat-parts.c.inc | 82 +++++++++++++++++++++++++----
10
target/arm/kvm.c | 22 ++++++++++++++++++++++
12
fpu/softfloat-specialize.c.inc | 96 ----------------------------------
11
target/arm/kvm64.c | 15 ---------------
13
2 files changed, 73 insertions(+), 105 deletions(-)
12
3 files changed, 22 insertions(+), 24 deletions(-)
14
13
15
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
17
--- a/fpu/softfloat-parts.c.inc
17
+++ b/target/arm/kvm_arm.h
18
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ int kvm_arm_init_cpreg_list(ARMCPU *cpu);
19
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
19
*/
20
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
20
bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx);
21
float_status *s)
21
22
{
22
-/**
23
+ int cmp, which;
23
- * kvm_arm_cpreg_level:
24
+
24
- * @regidx: KVM register index
25
if (is_snan(a->cls) || is_snan(b->cls)) {
25
- *
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
26
- * Return the level of this coprocessor/system register. Return value is
27
}
27
- * either KVM_PUT_RUNTIME_STATE, KVM_PUT_RESET_STATE, or KVM_PUT_FULL_STATE.
28
28
- */
29
if (s->default_nan_mode) {
29
-int kvm_arm_cpreg_level(uint64_t regidx);
30
parts_default_nan(a, s);
30
-
31
- } else {
31
/**
32
- int cmp = frac_cmp(a, b);
32
* write_list_to_kvmstate:
33
- if (cmp == 0) {
33
* @cpu: ARMCPU
34
- cmp = a->sign < b->sign;
34
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
35
- }
36
+ return a;
37
+ }
38
39
- if (pickNaN(a->cls, b->cls, cmp > 0, s)) {
40
- a = b;
41
- }
42
+ cmp = frac_cmp(a, b);
43
+ if (cmp == 0) {
44
+ cmp = a->sign < b->sign;
45
+ }
46
+
47
+ switch (s->float_2nan_prop_rule) {
48
+ case float_2nan_prop_s_ab:
49
if (is_snan(a->cls)) {
50
- parts_silence_nan(a, s);
51
+ which = 0;
52
+ } else if (is_snan(b->cls)) {
53
+ which = 1;
54
+ } else if (is_qnan(a->cls)) {
55
+ which = 0;
56
+ } else {
57
+ which = 1;
58
}
59
+ break;
60
+ case float_2nan_prop_s_ba:
61
+ if (is_snan(b->cls)) {
62
+ which = 1;
63
+ } else if (is_snan(a->cls)) {
64
+ which = 0;
65
+ } else if (is_qnan(b->cls)) {
66
+ which = 1;
67
+ } else {
68
+ which = 0;
69
+ }
70
+ break;
71
+ case float_2nan_prop_ab:
72
+ which = is_nan(a->cls) ? 0 : 1;
73
+ break;
74
+ case float_2nan_prop_ba:
75
+ which = is_nan(b->cls) ? 1 : 0;
76
+ break;
77
+ case float_2nan_prop_x87:
78
+ /*
79
+ * This implements x87 NaN propagation rules:
80
+ * SNaN + QNaN => return the QNaN
81
+ * two SNaNs => return the one with the larger significand, silenced
82
+ * two QNaNs => return the one with the larger significand
83
+ * SNaN and a non-NaN => return the SNaN, silenced
84
+ * QNaN and a non-NaN => return the QNaN
85
+ *
86
+ * If we get down to comparing significands and they are the same,
87
+ * return the NaN with the positive sign bit (if any).
88
+ */
89
+ if (is_snan(a->cls)) {
90
+ if (is_snan(b->cls)) {
91
+ which = cmp > 0 ? 0 : 1;
92
+ } else {
93
+ which = is_qnan(b->cls) ? 1 : 0;
94
+ }
95
+ } else if (is_qnan(a->cls)) {
96
+ if (is_snan(b->cls) || !is_qnan(b->cls)) {
97
+ which = 0;
98
+ } else {
99
+ which = cmp > 0 ? 0 : 1;
100
+ }
101
+ } else {
102
+ which = 1;
103
+ }
104
+ break;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
108
+
109
+ if (which) {
110
+ a = b;
111
+ }
112
+ if (is_snan(a->cls)) {
113
+ parts_silence_nan(a, s);
114
}
115
return a;
116
}
117
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
35
index XXXXXXX..XXXXXXX 100644
118
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/kvm.c
119
--- a/fpu/softfloat-specialize.c.inc
37
+++ b/target/arm/kvm.c
120
+++ b/fpu/softfloat-specialize.c.inc
38
@@ -XXX,XX +XXX,XX @@ out:
121
@@ -XXX,XX +XXX,XX @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
39
return ret;
40
}
41
42
+/**
43
+ * kvm_arm_cpreg_level:
44
+ * @regidx: KVM register index
45
+ *
46
+ * Return the level of this coprocessor/system register. Return value is
47
+ * either KVM_PUT_RUNTIME_STATE, KVM_PUT_RESET_STATE, or KVM_PUT_FULL_STATE.
48
+ */
49
+static int kvm_arm_cpreg_level(uint64_t regidx)
50
+{
51
+ /*
52
+ * All system registers are assumed to be level KVM_PUT_RUNTIME_STATE.
53
+ * If a register should be written less often, you must add it here
54
+ * with a state of either KVM_PUT_RESET_STATE or KVM_PUT_FULL_STATE.
55
+ */
56
+ switch (regidx) {
57
+ case KVM_REG_ARM_TIMER_CNT:
58
+ case KVM_REG_ARM_PTIMER_CNT:
59
+ return KVM_PUT_FULL_STATE;
60
+ }
61
+ return KVM_PUT_RUNTIME_STATE;
62
+}
63
+
64
bool write_kvmstate_to_list(ARMCPU *cpu)
65
{
66
CPUState *cs = CPU(cpu);
67
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/arm/kvm64.c
70
+++ b/target/arm/kvm64.c
71
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
72
}
122
}
73
}
123
}
74
124
75
-int kvm_arm_cpreg_level(uint64_t regidx)
125
-/*----------------------------------------------------------------------------
126
-| Select which NaN to propagate for a two-input operation.
127
-| IEEE754 doesn't specify all the details of this, so the
128
-| algorithm is target-specific.
129
-| The routine is passed various bits of information about the
130
-| two NaNs and should return 0 to select NaN a and 1 for NaN b.
131
-| Note that signalling NaNs are always squashed to quiet NaNs
132
-| by the caller, by calling floatXX_silence_nan() before
133
-| returning them.
134
-|
135
-| aIsLargerSignificand is only valid if both a and b are NaNs
136
-| of some kind, and is true if a has the larger significand,
137
-| or if both a and b have the same significand but a is
138
-| positive but b is negative. It is only needed for the x87
139
-| tie-break rule.
140
-*----------------------------------------------------------------------------*/
141
-
142
-static int pickNaN(FloatClass a_cls, FloatClass b_cls,
143
- bool aIsLargerSignificand, float_status *status)
76
-{
144
-{
77
- /*
145
- /*
78
- * All system registers are assumed to be level KVM_PUT_RUNTIME_STATE.
146
- * We guarantee not to require the target to tell us how to
79
- * If a register should be written less often, you must add it here
147
- * pick a NaN if we're always returning the default NaN.
80
- * with a state of either KVM_PUT_RESET_STATE or KVM_PUT_FULL_STATE.
148
- * But if we're not in default-NaN mode then the target must
149
- * specify via set_float_2nan_prop_rule().
81
- */
150
- */
82
- switch (regidx) {
151
- assert(!status->default_nan_mode);
83
- case KVM_REG_ARM_TIMER_CNT:
152
-
84
- case KVM_REG_ARM_PTIMER_CNT:
153
- switch (status->float_2nan_prop_rule) {
85
- return KVM_PUT_FULL_STATE;
154
- case float_2nan_prop_s_ab:
155
- if (is_snan(a_cls)) {
156
- return 0;
157
- } else if (is_snan(b_cls)) {
158
- return 1;
159
- } else if (is_qnan(a_cls)) {
160
- return 0;
161
- } else {
162
- return 1;
163
- }
164
- break;
165
- case float_2nan_prop_s_ba:
166
- if (is_snan(b_cls)) {
167
- return 1;
168
- } else if (is_snan(a_cls)) {
169
- return 0;
170
- } else if (is_qnan(b_cls)) {
171
- return 1;
172
- } else {
173
- return 0;
174
- }
175
- break;
176
- case float_2nan_prop_ab:
177
- if (is_nan(a_cls)) {
178
- return 0;
179
- } else {
180
- return 1;
181
- }
182
- break;
183
- case float_2nan_prop_ba:
184
- if (is_nan(b_cls)) {
185
- return 1;
186
- } else {
187
- return 0;
188
- }
189
- break;
190
- case float_2nan_prop_x87:
191
- /*
192
- * This implements x87 NaN propagation rules:
193
- * SNaN + QNaN => return the QNaN
194
- * two SNaNs => return the one with the larger significand, silenced
195
- * two QNaNs => return the one with the larger significand
196
- * SNaN and a non-NaN => return the SNaN, silenced
197
- * QNaN and a non-NaN => return the QNaN
198
- *
199
- * If we get down to comparing significands and they are the same,
200
- * return the NaN with the positive sign bit (if any).
201
- */
202
- if (is_snan(a_cls)) {
203
- if (is_snan(b_cls)) {
204
- return aIsLargerSignificand ? 0 : 1;
205
- }
206
- return is_qnan(b_cls) ? 1 : 0;
207
- } else if (is_qnan(a_cls)) {
208
- if (is_snan(b_cls) || !is_qnan(b_cls)) {
209
- return 0;
210
- } else {
211
- return aIsLargerSignificand ? 0 : 1;
212
- }
213
- } else {
214
- return 1;
215
- }
216
- default:
217
- g_assert_not_reached();
86
- }
218
- }
87
- return KVM_PUT_RUNTIME_STATE;
88
-}
219
-}
89
-
220
-
90
/* Callers must hold the iothread mutex lock */
221
/*----------------------------------------------------------------------------
91
static void kvm_inject_arm_sea(CPUState *c)
222
| Returns 1 if the double-precision floating-point value `a' is a quiet
92
{
223
| NaN; otherwise returns 0.
93
--
224
--
94
2.34.1
225
2.34.1
95
226
96
227
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Remember if there was an SNaN, and use that to simplify
4
float_2nan_prop_s_{ab,ba} to only the snan component.
5
Then, fall through to the corresponding
6
float_2nan_prop_{ab,ba} case to handle any remaining
7
nans, which must be quiet.
8
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Message-id: 20241203203949.483774-10-richard.henderson@linaro.org
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
13
---
9
target/arm/kvm_arm.h | 12 ------------
14
fpu/softfloat-parts.c.inc | 32 ++++++++++++--------------------
10
target/arm/kvm.c | 12 +++++++++++-
15
1 file changed, 12 insertions(+), 20 deletions(-)
11
2 files changed, 11 insertions(+), 13 deletions(-)
12
16
13
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
17
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/kvm_arm.h
19
--- a/fpu/softfloat-parts.c.inc
16
+++ b/target/arm/kvm_arm.h
20
+++ b/fpu/softfloat-parts.c.inc
17
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
18
#define KVM_ARM_VGIC_V2 (1 << 0)
22
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
19
#define KVM_ARM_VGIC_V3 (1 << 1)
23
float_status *s)
20
21
-/**
22
- * kvm_arm_vcpu_init:
23
- * @cs: CPUState
24
- *
25
- * Initialize (or reinitialize) the VCPU by invoking the
26
- * KVM_ARM_VCPU_INIT ioctl with the CPU type and feature
27
- * bitmask specified in the CPUState.
28
- *
29
- * Returns: 0 if success else < 0 error code
30
- */
31
-int kvm_arm_vcpu_init(CPUState *cs);
32
-
33
/**
34
* kvm_arm_vcpu_finalize:
35
* @cs: CPUState
36
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/kvm.c
39
+++ b/target/arm/kvm.c
40
@@ -XXX,XX +XXX,XX @@ typedef struct ARMHostCPUFeatures {
41
42
static ARMHostCPUFeatures arm_host_cpu_features;
43
44
-int kvm_arm_vcpu_init(CPUState *cs)
45
+/**
46
+ * kvm_arm_vcpu_init:
47
+ * @cs: CPUState
48
+ *
49
+ * Initialize (or reinitialize) the VCPU by invoking the
50
+ * KVM_ARM_VCPU_INIT ioctl with the CPU type and feature
51
+ * bitmask specified in the CPUState.
52
+ *
53
+ * Returns: 0 if success else < 0 error code
54
+ */
55
+static int kvm_arm_vcpu_init(CPUState *cs)
56
{
24
{
57
ARMCPU *cpu = ARM_CPU(cs);
25
+ bool have_snan = false;
58
struct kvm_vcpu_init init;
26
int cmp, which;
27
28
if (is_snan(a->cls) || is_snan(b->cls)) {
29
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
30
+ have_snan = true;
31
}
32
33
if (s->default_nan_mode) {
34
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
35
36
switch (s->float_2nan_prop_rule) {
37
case float_2nan_prop_s_ab:
38
- if (is_snan(a->cls)) {
39
- which = 0;
40
- } else if (is_snan(b->cls)) {
41
- which = 1;
42
- } else if (is_qnan(a->cls)) {
43
- which = 0;
44
- } else {
45
- which = 1;
46
+ if (have_snan) {
47
+ which = is_snan(a->cls) ? 0 : 1;
48
+ break;
49
}
50
- break;
51
- case float_2nan_prop_s_ba:
52
- if (is_snan(b->cls)) {
53
- which = 1;
54
- } else if (is_snan(a->cls)) {
55
- which = 0;
56
- } else if (is_qnan(b->cls)) {
57
- which = 1;
58
- } else {
59
- which = 0;
60
- }
61
- break;
62
+ /* fall through */
63
case float_2nan_prop_ab:
64
which = is_nan(a->cls) ? 0 : 1;
65
break;
66
+ case float_2nan_prop_s_ba:
67
+ if (have_snan) {
68
+ which = is_snan(b->cls) ? 1 : 0;
69
+ break;
70
+ }
71
+ /* fall through */
72
case float_2nan_prop_ba:
73
which = is_nan(b->cls) ? 1 : 0;
74
break;
59
--
75
--
60
2.34.1
76
2.34.1
61
62
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move the fractional comparison to the end of the
4
float_2nan_prop_x87 case. This is not required for
5
any other 2-NaN propagation rule. Reorganize the
6
x87 case itself to break out of the switch when the
7
fractional comparison is not required.
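
A standalone sketch of the resulting x87 structure, with a toy type in
place of FloatPartsN; the significand comparison only runs when both
operands are the same kind of NaN, and silencing of the chosen SNaN is
omitted here:

/* x87_sketch.c: compute the tie-break only on the paths that need it */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct nan { bool snan, qnan; uint64_t frac; bool sign; };

static const struct nan *pick_x87(const struct nan *a, const struct nan *b)
{
    int cmp;

    if (a->snan) {
        if (!b->snan) {
            return b->qnan ? b : a;     /* SNaN + QNaN => the QNaN */
        }
    } else if (a->qnan) {
        if (b->snan || !b->qnan) {
            return a;                   /* QNaN beats SNaN or non-NaN */
        }
    } else {
        return b;                       /* only b is a NaN */
    }
    /* Both are the same kind of NaN: larger significand wins,
     * positive sign breaks a tie. */
    cmp = (a->frac > b->frac) - (a->frac < b->frac);
    if (cmp == 0) {
        cmp = a->sign < b->sign;
    }
    return cmp > 0 ? a : b;
}

int main(void)
{
    struct nan s = { .snan = true, .frac = 1 };
    struct nan q = { .qnan = true, .frac = 2 };
    printf("%s\n", pick_x87(&s, &q) == &q ? "picked the QNaN"
                                          : "picked the SNaN");
    return 0;
}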
8
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Gavin Shan <gshan@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Message-id: 20241203203949.483774-11-richard.henderson@linaro.org
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
13
---
9
target/arm/kvm_arm.h | 9 ------
14
fpu/softfloat-parts.c.inc | 19 +++++++++----------
10
target/arm/kvm.c | 77 ++++++++++++++++++++++++++++++++++++++++++++
15
1 file changed, 9 insertions(+), 10 deletions(-)
11
target/arm/kvm64.c | 70 ----------------------------------------
12
3 files changed, 77 insertions(+), 79 deletions(-)
13
16
14
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
17
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm_arm.h
19
--- a/fpu/softfloat-parts.c.inc
17
+++ b/target/arm/kvm_arm.h
20
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
21
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
19
22
return a;
20
#endif
21
22
-/**
23
- * kvm_arm_handle_debug:
24
- * @cs: CPUState
25
- * @debug_exit: debug part of the KVM exit structure
26
- *
27
- * Returns: TRUE if the debug exception was handled.
28
- */
29
-bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit);
30
-
31
#endif
32
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/kvm.c
35
+++ b/target/arm/kvm.c
36
@@ -XXX,XX +XXX,XX @@ static int kvm_arm_handle_dabt_nisv(CPUState *cs, uint64_t esr_iss,
37
return -1;
38
}
39
40
+/**
41
+ * kvm_arm_handle_debug:
42
+ * @cs: CPUState
43
+ * @debug_exit: debug part of the KVM exit structure
44
+ *
45
+ * Returns: TRUE if the debug exception was handled.
46
+ *
47
+ * See v8 ARM ARM D7.2.27 ESR_ELx, Exception Syndrome Register
48
+ *
49
+ * To minimise translating between kernel and user-space the kernel
50
+ * ABI just provides user-space with the full exception syndrome
51
+ * register value to be decoded in QEMU.
52
+ */
53
+static bool kvm_arm_handle_debug(CPUState *cs,
54
+ struct kvm_debug_exit_arch *debug_exit)
55
+{
56
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
57
+ ARMCPU *cpu = ARM_CPU(cs);
58
+ CPUARMState *env = &cpu->env;
59
+
60
+ /* Ensure PC is synchronised */
61
+ kvm_cpu_synchronize_state(cs);
62
+
63
+ switch (hsr_ec) {
64
+ case EC_SOFTWARESTEP:
65
+ if (cs->singlestep_enabled) {
66
+ return true;
67
+ } else {
68
+ /*
69
+ * The kernel should have suppressed the guest's ability to
70
+ * single step at this point so something has gone wrong.
71
+ */
72
+ error_report("%s: guest single-step while debugging unsupported"
73
+ " (%"PRIx64", %"PRIx32")",
74
+ __func__, env->pc, debug_exit->hsr);
75
+ return false;
76
+ }
77
+ break;
78
+ case EC_AA64_BKPT:
79
+ if (kvm_find_sw_breakpoint(cs, env->pc)) {
80
+ return true;
81
+ }
82
+ break;
83
+ case EC_BREAKPOINT:
84
+ if (find_hw_breakpoint(cs, env->pc)) {
85
+ return true;
86
+ }
87
+ break;
88
+ case EC_WATCHPOINT:
89
+ {
90
+ CPUWatchpoint *wp = find_hw_watchpoint(cs, debug_exit->far);
91
+ if (wp) {
92
+ cs->watchpoint_hit = wp;
93
+ return true;
94
+ }
95
+ break;
96
+ }
97
+ default:
98
+ error_report("%s: unhandled debug exit (%"PRIx32", %"PRIx64")",
99
+ __func__, debug_exit->hsr, env->pc);
100
+ }
101
+
102
+ /* If we are not handling the debug exception it must belong to
103
+ * the guest. Let's re-use the existing TCG interrupt code to set
104
+ * everything up properly.
105
+ */
106
+ cs->exception_index = EXCP_BKPT;
107
+ env->exception.syndrome = debug_exit->hsr;
108
+ env->exception.vaddress = debug_exit->far;
109
+ env->exception.target_el = 1;
110
+ qemu_mutex_lock_iothread();
111
+ arm_cpu_do_interrupt(cs);
112
+ qemu_mutex_unlock_iothread();
113
+
114
+ return false;
115
+}
116
+
117
int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
118
{
119
int ret = 0;
120
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/target/arm/kvm64.c
123
+++ b/target/arm/kvm64.c
124
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
125
}
23
}
126
return 0;
24
127
}
25
- cmp = frac_cmp(a, b);
128
-
26
- if (cmp == 0) {
129
-/* See v8 ARM ARM D7.2.27 ESR_ELx, Exception Syndrome Register
27
- cmp = a->sign < b->sign;
130
- *
131
- * To minimise translating between kernel and user-space the kernel
132
- * ABI just provides user-space with the full exception syndrome
133
- * register value to be decoded in QEMU.
134
- */
135
-
136
-bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
137
-{
138
- int hsr_ec = syn_get_ec(debug_exit->hsr);
139
- ARMCPU *cpu = ARM_CPU(cs);
140
- CPUARMState *env = &cpu->env;
141
-
142
- /* Ensure PC is synchronised */
143
- kvm_cpu_synchronize_state(cs);
144
-
145
- switch (hsr_ec) {
146
- case EC_SOFTWARESTEP:
147
- if (cs->singlestep_enabled) {
148
- return true;
149
- } else {
150
- /*
151
- * The kernel should have suppressed the guest's ability to
152
- * single step at this point so something has gone wrong.
153
- */
154
- error_report("%s: guest single-step while debugging unsupported"
155
- " (%"PRIx64", %"PRIx32")",
156
- __func__, env->pc, debug_exit->hsr);
157
- return false;
158
- }
159
- break;
160
- case EC_AA64_BKPT:
161
- if (kvm_find_sw_breakpoint(cs, env->pc)) {
162
- return true;
163
- }
164
- break;
165
- case EC_BREAKPOINT:
166
- if (find_hw_breakpoint(cs, env->pc)) {
167
- return true;
168
- }
169
- break;
170
- case EC_WATCHPOINT:
171
- {
172
- CPUWatchpoint *wp = find_hw_watchpoint(cs, debug_exit->far);
173
- if (wp) {
174
- cs->watchpoint_hit = wp;
175
- return true;
176
- }
177
- break;
178
- }
179
- default:
180
- error_report("%s: unhandled debug exit (%"PRIx32", %"PRIx64")",
181
- __func__, debug_exit->hsr, env->pc);
182
- }
28
- }
183
-
29
-
184
- /* If we are not handling the debug exception it must belong to
30
switch (s->float_2nan_prop_rule) {
185
- * the guest. Let's re-use the existing TCG interrupt code to set
31
case float_2nan_prop_s_ab:
186
- * everything up properly.
32
if (have_snan) {
187
- */
33
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
188
- cs->exception_index = EXCP_BKPT;
34
* return the NaN with the positive sign bit (if any).
189
- env->exception.syndrome = debug_exit->hsr;
35
*/
190
- env->exception.vaddress = debug_exit->far;
36
if (is_snan(a->cls)) {
191
- env->exception.target_el = 1;
37
- if (is_snan(b->cls)) {
192
- qemu_mutex_lock_iothread();
38
- which = cmp > 0 ? 0 : 1;
193
- arm_cpu_do_interrupt(cs);
39
- } else {
194
- qemu_mutex_unlock_iothread();
40
+ if (!is_snan(b->cls)) {
195
-
41
which = is_qnan(b->cls) ? 1 : 0;
196
- return false;
42
+ break;
197
-}
43
}
44
} else if (is_qnan(a->cls)) {
45
if (is_snan(b->cls) || !is_qnan(b->cls)) {
46
which = 0;
47
- } else {
48
- which = cmp > 0 ? 0 : 1;
49
+ break;
50
}
51
} else {
52
which = 1;
53
+ break;
54
}
55
+ cmp = frac_cmp(a, b);
56
+ if (cmp == 0) {
57
+ cmp = a->sign < b->sign;
58
+ }
59
+ which = cmp > 0 ? 0 : 1;
60
break;
61
default:
62
g_assert_not_reached();
198
--
63
--
199
2.34.1
64
2.34.1
200
201
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This function is only used once, and is quite simple.
3
Replace the "index" selecting between A and B with a result variable
4
of the proper type. This improves clarity within the function.
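
A tiny sketch of the difference in shape, with a toy struct in place of
FloatPartsN: return the chosen operand itself instead of a 0/1 index the
caller must map back to a or b.

/* result_sketch.c: a typed result reads better than a 0/1 index */
#include <stdio.h>

struct parts { int is_nan; };

/* index style: caller must remember that 1 means "use b" */
static int pick_index(const struct parts *a, const struct parts *b)
{
    (void)b;
    return a->is_nan ? 0 : 1;
}

/* result style: the chosen operand is the return value itself */
static const struct parts *pick_parts(const struct parts *a,
                                      const struct parts *b)
{
    return a->is_nan ? a : b;
}

int main(void)
{
    struct parts a = { 1 }, b = { 0 };
    printf("index=%d result=%s\n",
           pick_index(&a, &b),
           pick_parts(&a, &b) == &a ? "a" : "b");
    return 0;
}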
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Gavin Shan <gshan@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20241203203949.483774-12-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/kvm_arm.h | 13 -------------
11
fpu/softfloat-parts.c.inc | 28 +++++++++++++---------------
12
target/arm/kvm64.c | 7 +------
12
1 file changed, 13 insertions(+), 15 deletions(-)
13
2 files changed, 1 insertion(+), 19 deletions(-)
14
13
15
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/kvm_arm.h
16
--- a/fpu/softfloat-parts.c.inc
18
+++ b/target/arm/kvm_arm.h
17
+++ b/fpu/softfloat-parts.c.inc
19
@@ -XXX,XX +XXX,XX @@ void kvm_arm_add_vcpu_properties(Object *obj);
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
20
*/
19
float_status *s)
21
void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp);
20
{
22
21
bool have_snan = false;
23
-/**
22
- int cmp, which;
24
- * kvm_arm_steal_time_supported:
23
+ FloatPartsN *ret;
25
- *
24
+ int cmp;
26
- * Returns: true if KVM can enable steal time reporting
25
27
- * and false otherwise.
26
if (is_snan(a->cls) || is_snan(b->cls)) {
28
- */
27
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
29
-bool kvm_arm_steal_time_supported(void);
28
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
30
-
29
switch (s->float_2nan_prop_rule) {
31
/**
30
case float_2nan_prop_s_ab:
32
* kvm_arm_aarch32_supported:
31
if (have_snan) {
33
*
32
- which = is_snan(a->cls) ? 0 : 1;
34
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_sve_supported(void)
33
+ ret = is_snan(a->cls) ? a : b;
35
return false;
34
break;
35
}
36
/* fall through */
37
case float_2nan_prop_ab:
38
- which = is_nan(a->cls) ? 0 : 1;
39
+ ret = is_nan(a->cls) ? a : b;
40
break;
41
case float_2nan_prop_s_ba:
42
if (have_snan) {
43
- which = is_snan(b->cls) ? 1 : 0;
44
+ ret = is_snan(b->cls) ? b : a;
45
break;
46
}
47
/* fall through */
48
case float_2nan_prop_ba:
49
- which = is_nan(b->cls) ? 1 : 0;
50
+ ret = is_nan(b->cls) ? b : a;
51
break;
52
case float_2nan_prop_x87:
53
/*
54
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
55
*/
56
if (is_snan(a->cls)) {
57
if (!is_snan(b->cls)) {
58
- which = is_qnan(b->cls) ? 1 : 0;
59
+ ret = is_qnan(b->cls) ? b : a;
60
break;
61
}
62
} else if (is_qnan(a->cls)) {
63
if (is_snan(b->cls) || !is_qnan(b->cls)) {
64
- which = 0;
65
+ ret = a;
66
break;
67
}
68
} else {
69
- which = 1;
70
+ ret = b;
71
break;
72
}
73
cmp = frac_cmp(a, b);
74
if (cmp == 0) {
75
cmp = a->sign < b->sign;
76
}
77
- which = cmp > 0 ? 0 : 1;
78
+ ret = cmp > 0 ? a : b;
79
break;
80
default:
81
g_assert_not_reached();
82
}
83
84
- if (which) {
85
- a = b;
86
+ if (is_snan(ret->cls)) {
87
+ parts_silence_nan(ret, s);
88
}
89
- if (is_snan(a->cls)) {
90
- parts_silence_nan(a, s);
91
- }
92
- return a;
93
+ return ret;
36
}
94
}
37
95
38
-static inline bool kvm_arm_steal_time_supported(void)
96
static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
39
-{
40
- return false;
41
-}
42
-
43
/*
44
* These functions should never actually be called without KVM support.
45
*/
46
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/kvm64.c
49
+++ b/target/arm/kvm64.c
50
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
51
52
void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
53
{
54
- bool has_steal_time = kvm_arm_steal_time_supported();
55
+ bool has_steal_time = kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
56
57
if (cpu->kvm_steal_time == ON_OFF_AUTO_AUTO) {
58
if (!has_steal_time || !arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
59
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void)
60
return kvm_check_extension(kvm_state, KVM_CAP_ARM_SVE);
61
}
62
63
-bool kvm_arm_steal_time_supported(void)
64
-{
65
- return kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
66
-}
67
-
68
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
69
70
uint32_t kvm_arm_sve_get_vls(CPUState *cs)
71
--
97
--
72
2.34.1
98
2.34.1
73
99
74
100
1
From: Chao Du <duchao@eswincomputing.com>
1
From: Leif Lindholm <quic_llindhol@quicinc.com>
2
2
3
The KVM_CAP_SET_GUEST_DEBUG is probed during kvm_init().
3
I'm migrating to Qualcomm's new open source email infrastructure, so
4
gdbserver will fail to start if the CAP is not supported.
4
update my email address, and update the mailmap to match.
5
So there is no need to make another probe here, as other targets do.
6
5
7
Signed-off-by: Chao Du <duchao@eswincomputing.com>
6
Signed-off-by: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
9
Message-Id: <20231025070726.22689-1-duchao@eswincomputing.com>
8
Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Gavin Shan <gshan@redhat.com>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Message-id: 20241205114047.1125842-1-leif.lindholm@oss.qualcomm.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
13
---
16
target/arm/kvm64.c | 28 +++++++---------------------
14
MAINTAINERS | 2 +-
17
1 file changed, 7 insertions(+), 21 deletions(-)
15
.mailmap | 5 +++--
16
2 files changed, 4 insertions(+), 3 deletions(-)
18
17
19
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
18
diff --git a/MAINTAINERS b/MAINTAINERS
20
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/kvm64.c
20
--- a/MAINTAINERS
22
+++ b/target/arm/kvm64.c
21
+++ b/MAINTAINERS
23
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
24
#include "hw/acpi/acpi.h"
23
SBSA-REF
25
#include "hw/acpi/ghes.h"
24
M: Radoslaw Biernacki <rad@semihalf.com>
26
25
M: Peter Maydell <peter.maydell@linaro.org>
27
-static bool have_guest_debug;
26
-R: Leif Lindholm <quic_llindhol@quicinc.com>
28
27
+R: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
29
void kvm_arm_init_debug(KVMState *s)
28
R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
30
{
29
L: qemu-arm@nongnu.org
31
- have_guest_debug = kvm_check_extension(s,
30
S: Maintained
32
- KVM_CAP_SET_GUEST_DEBUG);
31
diff --git a/.mailmap b/.mailmap
33
-
32
index XXXXXXX..XXXXXXX 100644
34
max_hw_wps = kvm_check_extension(s, KVM_CAP_GUEST_DEBUG_HW_WPS);
33
--- a/.mailmap
35
hw_watchpoints = g_array_sized_new(true, true,
34
+++ b/.mailmap
36
sizeof(HWWatchpoint), max_hw_wps);
35
@@ -XXX,XX +XXX,XX @@ Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
37
@@ -XXX,XX +XXX,XX @@ static const uint32_t brk_insn = 0xd4200000;
36
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
38
37
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
39
int kvm_arch_insert_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
38
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
40
{
39
-Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
41
- if (have_guest_debug) {
40
-Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
42
- if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
41
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
43
- cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
42
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
44
- return -EINVAL;
43
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
45
- }
44
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
46
- return 0;
45
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
47
- } else {
46
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
48
- error_report("guest debug not supported on this kernel");
49
+ if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
50
+ cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
51
return -EINVAL;
52
}
53
+ return 0;
54
}
55
56
int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
57
{
58
static uint32_t brk;
59
60
- if (have_guest_debug) {
61
- if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk, 4, 0) ||
62
- brk != brk_insn ||
63
- cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
64
- return -EINVAL;
65
- }
66
- return 0;
67
- } else {
68
- error_report("guest debug not supported on this kernel");
69
+ if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk, 4, 0) ||
70
+ brk != brk_insn ||
71
+ cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
72
return -EINVAL;
73
}
74
+ return 0;
75
}
76
77
/* See v8 ARM ARM D7.2.27 ESR_ELx, Exception Syndrome Register
78
--
47
--
79
2.34.1
48
2.34.1
80
49
81
50
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
From: Vikram Garhwal <vikram.garhwal@bytedance.com>
2
2
3
Both MemoryRegion and Error types are forward declared
3
Previously, the maintainer role was paused due to an inactive email address. Commit id:
4
in "qemu/typedefs.h".
4
c009d715721861984c4987bcc78b7ee183e86d75.
5
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Signed-off-by: Vikram Garhwal <vikram.garhwal@bytedance.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
8
Reviewed-by: Gavin Shan <gshan@redhat.com>
8
Message-id: 20241204184205.12952-1-vikram.garhwal@bytedance.com
9
Message-id: 20231123183518.64569-3-philmd@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/kvm_arm.h | 2 --
11
MAINTAINERS | 2 ++
13
1 file changed, 2 deletions(-)
12
1 file changed, 2 insertions(+)
14
13
15
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
14
diff --git a/MAINTAINERS b/MAINTAINERS
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/kvm_arm.h
16
--- a/MAINTAINERS
18
+++ b/target/arm/kvm_arm.h
17
+++ b/MAINTAINERS
19
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/fuzz-sb16-test.c
20
#define QEMU_KVM_ARM_H
19
21
20
Xilinx CAN
22
#include "sysemu/kvm.h"
21
M: Francisco Iglesias <francisco.iglesias@amd.com>
23
-#include "exec/memory.h"
22
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
24
-#include "qemu/error-report.h"
23
S: Maintained
25
24
F: hw/net/can/xlnx-*
26
#define KVM_ARM_VGIC_V2 (1 << 0)
25
F: include/hw/net/xlnx-*
27
#define KVM_ARM_VGIC_V3 (1 << 1)
26
@@ -XXX,XX +XXX,XX @@ F: include/hw/rx/
27
CAN bus subsystem and hardware
28
M: Pavel Pisa <pisa@cmp.felk.cvut.cz>
29
M: Francisco Iglesias <francisco.iglesias@amd.com>
30
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
31
S: Maintained
32
W: https://canbus.pages.fel.cvut.cz/
33
F: net/can/*
28
--
34
--
29
2.34.1
35
2.34.1
30
31