First arm pullreq of the cycle; this is mostly my softfloat NaN
handling series. (Lots more in my to-review queue, but I don't
like pullreqs growing too close to a hundred patches at a time :-))

thanks
-- PMM

The following changes since commit 97f2796a3736ed37a1b85dc1c76a6c45b829dd17:

  Open 10.0 development tree (2024-12-10 17:41:17 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20241211

for you to fetch changes up to 1abe28d519239eea5cf9620bb13149423e5665f8:

  MAINTAINERS: Add correct email address for Vikram Garhwal (2024-12-11 15:31:09 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/net/lan9118: Extract PHY model, reuse with imx_fec, fix bugs
 * fpu: Make muladd NaN handling runtime-selected, not compile-time
 * fpu: Make default NaN pattern runtime-selected, not compile-time
 * fpu: Minor NaN-related cleanups
 * MAINTAINERS: email address updates

----------------------------------------------------------------
Bernhard Beschow (5):
      hw/net/lan9118: Extract lan9118_phy
      hw/net/lan9118_phy: Reuse in imx_fec and consolidate implementations
      hw/net/lan9118_phy: Fix off-by-one error in MII_ANLPAR register
      hw/net/lan9118_phy: Reuse MII constants
      hw/net/lan9118_phy: Add missing 100 mbps full duplex advertisement

Leif Lindholm (1):
      MAINTAINERS: update email address for Leif Lindholm

Peter Maydell (54):
      fpu: handle raising Invalid for infzero in pick_nan_muladd
      fpu: Check for default_nan_mode before calling pickNaNMulAdd
      softfloat: Allow runtime choice of inf * 0 + NaN result
      tests/fp: Explicitly set inf-zero-nan rule
      target/arm: Set FloatInfZeroNaNRule explicitly
      target/s390: Set FloatInfZeroNaNRule explicitly
      target/ppc: Set FloatInfZeroNaNRule explicitly
      target/mips: Set FloatInfZeroNaNRule explicitly
      target/sparc: Set FloatInfZeroNaNRule explicitly
      target/xtensa: Set FloatInfZeroNaNRule explicitly
      target/x86: Set FloatInfZeroNaNRule explicitly
      target/loongarch: Set FloatInfZeroNaNRule explicitly
      target/hppa: Set FloatInfZeroNaNRule explicitly
      softfloat: Pass have_snan to pickNaNMulAdd
      softfloat: Allow runtime choice of NaN propagation for muladd
      tests/fp: Explicitly set 3-NaN propagation rule
      target/arm: Set Float3NaNPropRule explicitly
      target/loongarch: Set Float3NaNPropRule explicitly
      target/ppc: Set Float3NaNPropRule explicitly
      target/s390x: Set Float3NaNPropRule explicitly
      target/sparc: Set Float3NaNPropRule explicitly
      target/mips: Set Float3NaNPropRule explicitly
      target/xtensa: Set Float3NaNPropRule explicitly
      target/i386: Set Float3NaNPropRule explicitly
      target/hppa: Set Float3NaNPropRule explicitly
      fpu: Remove use_first_nan field from float_status
      target/m68k: Don't pass NULL float_status to floatx80_default_nan()
      softfloat: Create floatx80 default NaN from parts64_default_nan
      target/loongarch: Use normal float_status in fclass_s and fclass_d helpers
      target/m68k: In frem helper, initialize local float_status from env->fp_status
      target/m68k: Init local float_status from env fp_status in gdb get/set reg
      target/sparc: Initialize local scratch float_status from env->fp_status
      target/ppc: Use env->fp_status in helper_compute_fprf functions
      fpu: Allow runtime choice of default NaN value
      tests/fp: Set default NaN pattern explicitly
      target/microblaze: Set default NaN pattern explicitly
      target/i386: Set default NaN pattern explicitly
      target/hppa: Set default NaN pattern explicitly
      target/alpha: Set default NaN pattern explicitly
      target/arm: Set default NaN pattern explicitly
      target/loongarch: Set default NaN pattern explicitly
      target/m68k: Set default NaN pattern explicitly
      target/mips: Set default NaN pattern explicitly
      target/openrisc: Set default NaN pattern explicitly
      target/ppc: Set default NaN pattern explicitly
      target/sh4: Set default NaN pattern explicitly
      target/rx: Set default NaN pattern explicitly
      target/s390x: Set default NaN pattern explicitly
      target/sparc: Set default NaN pattern explicitly
      target/xtensa: Set default NaN pattern explicitly
      target/hexagon: Set default NaN pattern explicitly
      target/riscv: Set default NaN pattern explicitly
      target/tricore: Set default NaN pattern explicitly
      fpu: Remove default handling for dnan_pattern

Richard Henderson (11):
      target/arm: Copy entire float_status in is_ebf
      softfloat: Inline pickNaNMulAdd
      softfloat: Use goto for default nan case in pick_nan_muladd
      softfloat: Remove which from parts_pick_nan_muladd
      softfloat: Pad array size in pick_nan_muladd
      softfloat: Move propagateFloatx80NaN to softfloat.c
      softfloat: Use parts_pick_nan in propagateFloatx80NaN
      softfloat: Inline pickNaN
      softfloat: Share code between parts_pick_nan cases
      softfloat: Sink frac_cmp in parts_pick_nan until needed
      softfloat: Replace WHICH with RET in parts_pick_nan

Vikram Garhwal (1):
      MAINTAINERS: Add correct email address for Vikram Garhwal

 MAINTAINERS | 4 +-
 include/fpu/softfloat-helpers.h | 38 +++-
 include/fpu/softfloat-types.h | 89 +++++++-
 include/hw/net/imx_fec.h | 9 +-
 include/hw/net/lan9118_phy.h | 37 ++++
 include/hw/net/mii.h | 6 +
 target/mips/fpu_helper.h | 20 ++
 target/sparc/helper.h | 4 +-
 fpu/softfloat.c | 19 ++
 hw/net/imx_fec.c | 146 ++------------
 hw/net/lan9118.c | 137 ++-----------
 hw/net/lan9118_phy.c | 222 ++++++++++++++++++++
 linux-user/arm/nwfpe/fpa11.c | 5 +
 target/alpha/cpu.c | 2 +
 target/arm/cpu.c | 10 +
 target/arm/tcg/vec_helper.c | 20 +-
 target/hexagon/cpu.c | 2 +
 target/hppa/fpu_helper.c | 12 ++
 target/i386/tcg/fpu_helper.c | 12 ++
 target/loongarch/tcg/fpu_helper.c | 14 +-
 target/m68k/cpu.c | 14 +-
 target/m68k/fpu_helper.c | 6 +-
 target/m68k/helper.c | 6 +-
 target/microblaze/cpu.c | 2 +
 target/mips/msa.c | 10 +
 target/openrisc/cpu.c | 2 +
 target/ppc/cpu_init.c | 19 ++
 target/ppc/fpu_helper.c | 3 +-
 target/riscv/cpu.c | 2 +
 target/rx/cpu.c | 2 +
 target/s390x/cpu.c | 5 +
 target/sh4/cpu.c | 2 +
 target/sparc/cpu.c | 6 +
 target/sparc/fop_helper.c | 8 +-
 target/sparc/translate.c | 4 +-
 target/tricore/helper.c | 2 +
 target/xtensa/cpu.c | 4 +
 target/xtensa/fpu_helper.c | 3 +-
 tests/fp/fp-bench.c | 7 +
 tests/fp/fp-test-log2.c | 1 +
 tests/fp/fp-test.c | 7 +
 fpu/softfloat-parts.c.inc | 152 +++++++++++---
 fpu/softfloat-specialize.c.inc | 412 ++------------------------------------
 .mailmap | 5 +-
 hw/net/Kconfig | 5 +
 hw/net/meson.build | 1 +
 hw/net/trace-events | 10 +-
 47 files changed, 778 insertions(+), 730 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c

From: Bernhard Beschow <shentey@gmail.com>

A very similar implementation of the same device exists in imx_fec. Prepare for
a common implementation by extracting a device model into its own files.

Some migration state has been moved into the new device model which breaks
migration compatibility for the following machines:
* smdkc210
* realview-*
* vexpress-*
* kzm
* mps2-*

While breaking migration ABI, fix the size of the MII registers to be 16 bit,
as defined by IEEE 802.3u.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-2-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/lan9118_phy.h | 37 ++++
 hw/net/lan9118.c | 137 +++++-----------------------
 hw/net/lan9118_phy.c | 169 +++++++++++++++++++++++++++++
 hw/net/Kconfig | 4 +
 hw/net/meson.build | 1 +
 5 files changed, 233 insertions(+), 115 deletions(-)
 create mode 100644 include/hw/net/lan9118_phy.h
 create mode 100644 hw/net/lan9118_phy.c

diff --git a/include/hw/net/lan9118_phy.h b/include/hw/net/lan9118_phy.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/lan9118_phy.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC LAN9118 PHY emulation
+ *
+ * Copyright (c) 2009 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_NET_LAN9118_PHY_H
+#define HW_NET_LAN9118_PHY_H
+
+#include "qom/object.h"
+#include "hw/sysbus.h"
+
+#define TYPE_LAN9118_PHY "lan9118-phy"
+OBJECT_DECLARE_SIMPLE_TYPE(Lan9118PhyState, LAN9118_PHY)
+
+typedef struct Lan9118PhyState {
+    SysBusDevice parent_obj;
+
+    uint16_t status;
+    uint16_t control;
+    uint16_t advertise;
+    uint16_t ints;
+    uint16_t int_mask;
+    qemu_irq irq;
+    bool link_down;
+} Lan9118PhyState;
+
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down);
+void lan9118_phy_reset(Lan9118PhyState *s);
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg);
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val);
+
+#endif
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -XXX,XX +XXX,XX @@
 #include "net/net.h"
 #include "net/eth.h"
 #include "hw/irq.h"
+#include "hw/net/lan9118_phy.h"
 #include "hw/net/lan9118.h"
 #include "hw/ptimer.h"
 #include "hw/qdev-properties.h"
@@ -XXX,XX +XXX,XX @@ do { printf("lan9118: " fmt , ## __VA_ARGS__); } while (0)
 #define MAC_CR_RXEN 0x00000004
 #define MAC_CR_RESERVED 0x7f404213
 
-#define PHY_INT_ENERGYON 0x80
-#define PHY_INT_AUTONEG_COMPLETE 0x40
-#define PHY_INT_FAULT 0x20
-#define PHY_INT_DOWN 0x10
-#define PHY_INT_AUTONEG_LP 0x08
-#define PHY_INT_PARFAULT 0x04
-#define PHY_INT_AUTONEG_PAGE 0x02
-
 #define GPT_TIMER_EN 0x20000000
 
 /*
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
     uint32_t mac_mii_data;
     uint32_t mac_flow;
 
-    uint32_t phy_status;
-    uint32_t phy_control;
-    uint32_t phy_advertise;
-    uint32_t phy_int;
-    uint32_t phy_int_mask;
+    Lan9118PhyState mii;
+    IRQState mii_irq;
 
     int32_t eeprom_writable;
     uint8_t eeprom[128];
@@ -XXX,XX +XXX,XX @@ struct lan9118_state {
 
 static const VMStateDescription vmstate_lan9118 = {
     .name = "lan9118",
-    .version_id = 2,
-    .minimum_version_id = 1,
+    .version_id = 3,
+    .minimum_version_id = 3,
     .fields = (const VMStateField[]) {
         VMSTATE_PTIMER(timer, lan9118_state),
         VMSTATE_UINT32(irq_cfg, lan9118_state),
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118 = {
         VMSTATE_UINT32(mac_mii_acc, lan9118_state),
         VMSTATE_UINT32(mac_mii_data, lan9118_state),
         VMSTATE_UINT32(mac_flow, lan9118_state),
-        VMSTATE_UINT32(phy_status, lan9118_state),
-        VMSTATE_UINT32(phy_control, lan9118_state),
-        VMSTATE_UINT32(phy_advertise, lan9118_state),
-        VMSTATE_UINT32(phy_int, lan9118_state),
-        VMSTATE_UINT32(phy_int_mask, lan9118_state),
         VMSTATE_INT32(eeprom_writable, lan9118_state),
         VMSTATE_UINT8_ARRAY(eeprom, lan9118_state, 128),
         VMSTATE_INT32(tx_fifo_size, lan9118_state),
@@ -XXX,XX +XXX,XX @@ static void lan9118_reload_eeprom(lan9118_state *s)
     lan9118_mac_changed(s);
 }
 
-static void phy_update_irq(lan9118_state *s)
+static void lan9118_update_irq(void *opaque, int n, int level)
 {
-    if (s->phy_int & s->phy_int_mask) {
+    lan9118_state *s = opaque;
+
+    if (level) {
         s->int_sts |= PHY_INT;
     } else {
         s->int_sts &= ~PHY_INT;
@@ -XXX,XX +XXX,XX @@ static void phy_update_irq(lan9118_state *s)
     lan9118_update(s);
 }
 
-static void phy_update_link(lan9118_state *s)
-{
-    /* Autonegotiation status mirrors link status. */
-    if (qemu_get_queue(s->nic)->link_down) {
-        s->phy_status &= ~0x0024;
-        s->phy_int |= PHY_INT_DOWN;
-    } else {
-        s->phy_status |= 0x0024;
-        s->phy_int |= PHY_INT_ENERGYON;
-        s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
-    }
-    phy_update_irq(s);
-}
-
 static void lan9118_set_link(NetClientState *nc)
 {
-    phy_update_link(qemu_get_nic_opaque(nc));
-}
-
-static void phy_reset(lan9118_state *s)
-{
-    s->phy_status = 0x7809;
-    s->phy_control = 0x3000;
-    s->phy_advertise = 0x01e1;
-    s->phy_int_mask = 0;
-    s->phy_int = 0;
-    phy_update_link(s);
+    lan9118_phy_update_link(&LAN9118(qemu_get_nic_opaque(nc))->mii,
+                            nc->link_down);
 }
 
 static void lan9118_reset(DeviceState *d)
@@ -XXX,XX +XXX,XX @@ static void lan9118_reset(DeviceState *d)
     s->read_word_n = 0;
     s->write_word_n = 0;
 
-    phy_reset(s);
-
     s->eeprom_writable = 0;
     lan9118_reload_eeprom(s);
 }
@@ -XXX,XX +XXX,XX @@ static void do_tx_packet(lan9118_state *s)
     uint32_t status;
 
     /* FIXME: Honor TX disable, and allow queueing of packets. */
-    if (s->phy_control & 0x4000) {
+    if (s->mii.control & 0x4000) {
         /* This assumes the receive routine doesn't touch the VLANClient. */
         qemu_receive_packet(qemu_get_queue(s->nic), s->txp->data, s->txp->len);
     } else {
@@ -XXX,XX +XXX,XX @@ static void tx_fifo_push(lan9118_state *s, uint32_t val)
         }
     }
 }
 
-static uint32_t do_phy_read(lan9118_state *s, int reg)
-{
-    uint32_t val;
-
-    switch (reg) {
-    case 0: /* Basic Control */
-        return s->phy_control;
-    case 1: /* Basic Status */
-        return s->phy_status;
-    case 2: /* ID1 */
-        return 0x0007;
-    case 3: /* ID2 */
-        return 0xc0d1;
-    case 4: /* Auto-neg advertisement */
-        return s->phy_advertise;
-    case 5: /* Auto-neg Link Partner Ability */
-        return 0x0f71;
-    case 6: /* Auto-neg Expansion */
-        return 1;
-        /* TODO 17, 18, 27, 29, 30, 31 */
-    case 29: /* Interrupt source. */
-        val = s->phy_int;
-        s->phy_int = 0;
-        phy_update_irq(s);
-        return val;
-    case 30: /* Interrupt mask */
-        return s->phy_int_mask;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "do_phy_read: PHY read reg %d\n", reg);
-        return 0;
-    }
-}
-
-static void do_phy_write(lan9118_state *s, int reg, uint32_t val)
-{
-    switch (reg) {
-    case 0: /* Basic Control */
-        if (val & 0x8000) {
-            phy_reset(s);
-            break;
-        }
-        s->phy_control = val & 0x7980;
-        /* Complete autonegotiation immediately. */
-        if (val & 0x1000) {
-            s->phy_status |= 0x0020;
-        }
-        break;
-    case 4: /* Auto-neg advertisement */
-        s->phy_advertise = (val & 0x2d7f) | 0x80;
-        break;
-        /* TODO 17, 18, 27, 31 */
-    case 30: /* Interrupt mask */
-        s->phy_int_mask = val & 0xff;
-        phy_update_irq(s);
-        break;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "do_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
-    }
-}
-
 static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
 {
     switch (reg) {
@@ -XXX,XX +XXX,XX @@ static void do_mac_write(lan9118_state *s, int reg, uint32_t val)
         if (val & 2) {
             DPRINTF("PHY write %d = 0x%04x\n",
                     (val >> 6) & 0x1f, s->mac_mii_data);
-            do_phy_write(s, (val >> 6) & 0x1f, s->mac_mii_data);
+            lan9118_phy_write(&s->mii, (val >> 6) & 0x1f, s->mac_mii_data);
         } else {
-            s->mac_mii_data = do_phy_read(s, (val >> 6) & 0x1f);
+            s->mac_mii_data = lan9118_phy_read(&s->mii, (val >> 6) & 0x1f);
             DPRINTF("PHY read %d = 0x%04x\n",
                     (val >> 6) & 0x1f, s->mac_mii_data);
         }
@@ -XXX,XX +XXX,XX @@ static void lan9118_writel(void *opaque, hwaddr offset,
         break;
     case CSR_PMT_CTRL:
         if (val & 0x400) {
-            phy_reset(s);
+            lan9118_phy_reset(&s->mii);
         }
         s->pmt_ctrl &= ~0x34e;
         s->pmt_ctrl |= (val & 0x34e);
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
     const MemoryRegionOps *mem_ops =
         s->mode_16bit ? &lan9118_16bit_mem_ops : &lan9118_mem_ops;
 
+    qemu_init_irq(&s->mii_irq, lan9118_update_irq, s, 0);
+    object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
+    if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
+        return;
+    }
+    qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
+
     memory_region_init_io(&s->mmio, OBJECT(dev), mem_ops, s,
                           "lan9118-mmio", 0x100);
     sysbus_init_mmio(sbd, &s->mmio);
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC LAN9118 PHY emulation
+ *
+ * Copyright (c) 2009 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This code is licensed under the GNU GPL v2
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/net/lan9118_phy.h"
+#include "hw/irq.h"
+#include "hw/resettable.h"
+#include "migration/vmstate.h"
+#include "qemu/log.h"
+
+#define PHY_INT_ENERGYON            (1 << 7)
+#define PHY_INT_AUTONEG_COMPLETE    (1 << 6)
+#define PHY_INT_FAULT               (1 << 5)
+#define PHY_INT_DOWN                (1 << 4)
+#define PHY_INT_AUTONEG_LP          (1 << 3)
+#define PHY_INT_PARFAULT            (1 << 2)
+#define PHY_INT_AUTONEG_PAGE        (1 << 1)
+
+static void lan9118_phy_update_irq(Lan9118PhyState *s)
+{
+    qemu_set_irq(s->irq, !!(s->ints & s->int_mask));
+}
+
+uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
+{
+    uint16_t val;
+
+    switch (reg) {
+    case 0: /* Basic Control */
+        return s->control;
+    case 1: /* Basic Status */
+        return s->status;
+    case 2: /* ID1 */
+        return 0x0007;
+    case 3: /* ID2 */
+        return 0xc0d1;
+    case 4: /* Auto-neg advertisement */
+        return s->advertise;
+    case 5: /* Auto-neg Link Partner Ability */
+        return 0x0f71;
+    case 6: /* Auto-neg Expansion */
+        return 1;
+        /* TODO 17, 18, 27, 29, 30, 31 */
+    case 29: /* Interrupt source. */
+        val = s->ints;
+        s->ints = 0;
+        lan9118_phy_update_irq(s);
+        return val;
+    case 30: /* Interrupt mask */
+        return s->int_mask;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "lan9118_phy_read: PHY read reg %d\n", reg);
+        return 0;
+    }
+}
+
+void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
+{
+    switch (reg) {
+    case 0: /* Basic Control */
+        if (val & 0x8000) {
+            lan9118_phy_reset(s);
+            break;
+        }
+        s->control = val & 0x7980;
+        /* Complete autonegotiation immediately. */
+        if (val & 0x1000) {
+            s->status |= 0x0020;
+        }
+        break;
+    case 4: /* Auto-neg advertisement */
+        s->advertise = (val & 0x2d7f) | 0x80;
+        break;
+        /* TODO 17, 18, 27, 31 */
+    case 30: /* Interrupt mask */
+        s->int_mask = val & 0xff;
+        lan9118_phy_update_irq(s);
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
+    }
+}
+
+void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
+{
+    s->link_down = link_down;
+
+    /* Autonegotiation status mirrors link status. */
+    if (link_down) {
+        s->status &= ~0x0024;
+        s->ints |= PHY_INT_DOWN;
+    } else {
+        s->status |= 0x0024;
+        s->ints |= PHY_INT_ENERGYON;
+        s->ints |= PHY_INT_AUTONEG_COMPLETE;
+    }
+    lan9118_phy_update_irq(s);
+}
+
+void lan9118_phy_reset(Lan9118PhyState *s)
+{
+    s->control = 0x3000;
+    s->status = 0x7809;
+    s->advertise = 0x01e1;
+    s->int_mask = 0;
+    s->ints = 0;
+    lan9118_phy_update_link(s, s->link_down);
+}
+
+static void lan9118_phy_reset_hold(Object *obj, ResetType type)
+{
+    Lan9118PhyState *s = LAN9118_PHY(obj);
+
+    lan9118_phy_reset(s);
+}
+
+static void lan9118_phy_init(Object *obj)
+{
+    Lan9118PhyState *s = LAN9118_PHY(obj);
+
+    qdev_init_gpio_out(DEVICE(s), &s->irq, 1);
+}
+
+static const VMStateDescription vmstate_lan9118_phy = {
+    .name = "lan9118-phy",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (const VMStateField[]) {
+        VMSTATE_UINT16(control, Lan9118PhyState),
+        VMSTATE_UINT16(status, Lan9118PhyState),
+        VMSTATE_UINT16(advertise, Lan9118PhyState),
+        VMSTATE_UINT16(ints, Lan9118PhyState),
+        VMSTATE_UINT16(int_mask, Lan9118PhyState),
+        VMSTATE_BOOL(link_down, Lan9118PhyState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void lan9118_phy_class_init(ObjectClass *klass, void *data)
+{
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    rc->phases.hold = lan9118_phy_reset_hold;
+    dc->vmsd = &vmstate_lan9118_phy;
+}
+
+static const TypeInfo types[] = {
+    {
+        .name          = TYPE_LAN9118_PHY,
+        .parent        = TYPE_SYS_BUS_DEVICE,
+        .instance_size = sizeof(Lan9118PhyState),
+        .instance_init = lan9118_phy_init,
+        .class_init    = lan9118_phy_class_init,
+    }
+};
+
+DEFINE_TYPES(types)
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/Kconfig
+++ b/hw/net/Kconfig
@@ -XXX,XX +XXX,XX @@ config VMXNET3_PCI
 config SMC91C111
     bool
 
+config LAN9118_PHY
+    bool
+
 config LAN9118
     bool
+    select LAN9118_PHY
     select PTIMER
 
 config NE2000_ISA
diff --git a/hw/net/meson.build b/hw/net/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/meson.build
+++ b/hw/net/meson.build
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_VMXNET3_PCI', if_true: files('vmxnet3.c'))
 
 system_ss.add(when: 'CONFIG_SMC91C111', if_true: files('smc91c111.c'))
 system_ss.add(when: 'CONFIG_LAN9118', if_true: files('lan9118.c'))
+system_ss.add(when: 'CONFIG_LAN9118_PHY', if_true: files('lan9118_phy.c'))
 system_ss.add(when: 'CONFIG_NE2000_ISA', if_true: files('ne2000-isa.c'))
 system_ss.add(when: 'CONFIG_OPENCORES_ETH', if_true: files('opencores_eth.c'))
 system_ss.add(when: 'CONFIG_XGMAC', if_true: files('xgmac.c'))
-- 
2.34.1

From: Bernhard Beschow <shentey@gmail.com>

imx_fec models the same PHY as lan9118_phy. The code is almost the same with
imx_fec having more logging and tracing. Merge these improvements into
lan9118_phy and reuse in imx_fec to fix the code duplication.

Some migration state now resides in the new device model which breaks migration
compatibility for the following machines:
* imx25-pdk
* sabrelite
* mcimx7d-sabre
* mcimx6ul-evk

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-3-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/imx_fec.h | 9 ++-
 hw/net/imx_fec.c | 146 ++++-----------------------
 hw/net/lan9118_phy.c | 82 ++++++++++++++------
 hw/net/Kconfig | 1 +
 hw/net/trace-events | 10 +--
 5 files changed, 85 insertions(+), 163 deletions(-)

diff --git a/include/hw/net/imx_fec.h b/include/hw/net/imx_fec.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/net/imx_fec.h
+++ b/include/hw/net/imx_fec.h
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(IMXFECState, IMX_FEC)
 #define TYPE_IMX_ENET "imx.enet"
 
 #include "hw/sysbus.h"
+#include "hw/net/lan9118_phy.h"
+#include "hw/irq.h"
 #include "net/net.h"
 
 #define ENET_EIR 1
@@ -XXX,XX +XXX,XX @@ struct IMXFECState {
     uint32_t tx_descriptor[ENET_TX_RING_NUM];
     uint32_t tx_ring_num;
 
-    uint32_t phy_status;
-    uint32_t phy_control;
-    uint32_t phy_advertise;
-    uint32_t phy_int;
-    uint32_t phy_int_mask;
+    Lan9118PhyState mii;
+    IRQState mii_irq;
     uint32_t phy_num;
     bool phy_connected;
     struct IMXFECState *phy_consumer;
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth_txdescs = {
 
 static const VMStateDescription vmstate_imx_eth = {
     .name = TYPE_IMX_FEC,
-    .version_id = 2,
-    .minimum_version_id = 2,
+    .version_id = 3,
+    .minimum_version_id = 3,
     .fields = (const VMStateField[]) {
         VMSTATE_UINT32_ARRAY(regs, IMXFECState, ENET_MAX),
         VMSTATE_UINT32(rx_descriptor, IMXFECState),
         VMSTATE_UINT32(tx_descriptor[0], IMXFECState),
-        VMSTATE_UINT32(phy_status, IMXFECState),
-        VMSTATE_UINT32(phy_control, IMXFECState),
-        VMSTATE_UINT32(phy_advertise, IMXFECState),
-        VMSTATE_UINT32(phy_int, IMXFECState),
-        VMSTATE_UINT32(phy_int_mask, IMXFECState),
         VMSTATE_END_OF_LIST()
     },
     .subsections = (const VMStateDescription * const []) {
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx_eth = {
     },
 };
 
-#define PHY_INT_ENERGYON (1 << 7)
-#define PHY_INT_AUTONEG_COMPLETE (1 << 6)
-#define PHY_INT_FAULT (1 << 5)
-#define PHY_INT_DOWN (1 << 4)
-#define PHY_INT_AUTONEG_LP (1 << 3)
-#define PHY_INT_PARFAULT (1 << 2)
-#define PHY_INT_AUTONEG_PAGE (1 << 1)
-
 static void imx_eth_update(IMXFECState *s);
 
 /*
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
  * For now we don't handle any GPIO/interrupt line, so the OS will
  * have to poll for the PHY status.
  */
-static void imx_phy_update_irq(IMXFECState *s)
+static void imx_phy_update_irq(void *opaque, int n, int level)
 {
-    imx_eth_update(s);
-}
-
-static void imx_phy_update_link(IMXFECState *s)
-{
-    /* Autonegotiation status mirrors link status. */
-    if (qemu_get_queue(s->nic)->link_down) {
-        trace_imx_phy_update_link("down");
-        s->phy_status &= ~0x0024;
-        s->phy_int |= PHY_INT_DOWN;
-    } else {
-        trace_imx_phy_update_link("up");
-        s->phy_status |= 0x0024;
-        s->phy_int |= PHY_INT_ENERGYON;
-        s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
-    }
-    imx_phy_update_irq(s);
+    imx_eth_update(opaque);
 }
 
 static void imx_eth_set_link(NetClientState *nc)
 {
-    imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
-}
-
-static void imx_phy_reset(IMXFECState *s)
-{
-    trace_imx_phy_reset();
-
-    s->phy_status = 0x7809;
-    s->phy_control = 0x3000;
-    s->phy_advertise = 0x01e1;
-    s->phy_int_mask = 0;
-    s->phy_int = 0;
-    imx_phy_update_link(s);
+    lan9118_phy_update_link(&IMX_FEC(qemu_get_nic_opaque(nc))->mii,
+                            nc->link_down);
 }
 
 static uint32_t imx_phy_read(IMXFECState *s, int reg)
 {
-    uint32_t val;
     uint32_t phy = reg / 32;
 
     if (!s->phy_connected) {
@@ -XXX,XX +XXX,XX @@ static uint32_t imx_phy_read(IMXFECState *s, int reg)
 
     reg %= 32;
 
-    switch (reg) {
-    case 0: /* Basic Control */
-        val = s->phy_control;
-        break;
-    case 1: /* Basic Status */
-        val = s->phy_status;
-        break;
-    case 2: /* ID1 */
-        val = 0x0007;
-        break;
-    case 3: /* ID2 */
-        val = 0xc0d1;
-        break;
-    case 4: /* Auto-neg advertisement */
-        val = s->phy_advertise;
-        break;
-    case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0f71;
-        break;
-    case 6: /* Auto-neg Expansion */
-        val = 1;
-        break;
-    case 29: /* Interrupt source. */
-        val = s->phy_int;
-        s->phy_int = 0;
-        imx_phy_update_irq(s);
-        break;
-    case 30: /* Interrupt mask */
-        val = s->phy_int_mask;
-        break;
-    case 17:
-    case 18:
-    case 27:
-    case 31:
-        qemu_log_mask(LOG_UNIMP, "[%s.phy]%s: reg %d not implemented\n",
-                      TYPE_IMX_FEC, __func__, reg);
-        val = 0;
-        break;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
-                      TYPE_IMX_FEC, __func__, reg);
-        val = 0;
-        break;
-    }
-
-    trace_imx_phy_read(val, phy, reg);
-
-    return val;
+    return lan9118_phy_read(&s->mii, reg);
 }
 
 static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
@@ -XXX,XX +XXX,XX @@ static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
 
     reg %= 32;
 
-    trace_imx_phy_write(val, phy, reg);
-
-    switch (reg) {
-    case 0: /* Basic Control */
-        if (val & 0x8000) {
-            imx_phy_reset(s);
-        } else {
-            s->phy_control = val & 0x7980;
-            /* Complete autonegotiation immediately. */
-            if (val & 0x1000) {
-                s->phy_status |= 0x0020;
-            }
-        }
-        break;
-    case 4: /* Auto-neg advertisement */
-        s->phy_advertise = (val & 0x2d7f) | 0x80;
-        break;
-    case 30: /* Interrupt mask */
-        s->phy_int_mask = val & 0xff;
-        imx_phy_update_irq(s);
-        break;
-    case 17:
-    case 18:
-    case 27:
-    case 31:
-        qemu_log_mask(LOG_UNIMP, "[%s.phy)%s: reg %d not implemented\n",
-                      TYPE_IMX_FEC, __func__, reg);
-        break;
-    default:
-        qemu_log_mask(LOG_GUEST_ERROR, "[%s.phy]%s: Bad address at offset %d\n",
-                      TYPE_IMX_FEC, __func__, reg);
-        break;
-    }
+    lan9118_phy_write(&s->mii, reg, val);
 }
 
 static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
 
     s->rx_descriptor = 0;
     memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
-
-    /* We also reset the PHY */
-    imx_phy_reset(s);
 }
 
 static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
@@ -XXX,XX +XXX,XX @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq[0]);
     sysbus_init_irq(sbd, &s->irq[1]);
 
+    qemu_init_irq(&s->mii_irq, imx_phy_update_irq, s, 0);
+    object_initialize_child(OBJECT(s), "mii", &s->mii, TYPE_LAN9118_PHY);
+    if (!sysbus_realize_and_unref(SYS_BUS_DEVICE(&s->mii), errp)) {
+        return;
+    }
+    qdev_connect_gpio_out(DEVICE(&s->mii), 0, &s->mii_irq);
+
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@
  * Copyright (c) 2009 CodeSourcery, LLC.
  * Written by Paul Brook
  *
+ * Copyright (c) 2013 Jean-Christophe Dubois. <jcd@tribudubois.net>
+ *
  * This code is licensed under the GNU GPL v2
  *
  * Contributions after 2012-01-13 are licensed under the terms of the
@@ -XXX,XX +XXX,XX @@
 #include "hw/resettable.h"
 #include "migration/vmstate.h"
 #include "qemu/log.h"
+#include "trace.h"
 
 #define PHY_INT_ENERGYON (1 << 7)
 #define PHY_INT_AUTONEG_COMPLETE (1 << 6)
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
 
     switch (reg) {
     case 0: /* Basic Control */
-        return s->control;
+        val = s->control;
+        break;
     case 1: /* Basic Status */
-        return s->status;
+        val = s->status;
+        break;
     case 2: /* ID1 */
-        return 0x0007;
+        val = 0x0007;
+        break;
     case 3: /* ID2 */
-        return 0xc0d1;
+        val = 0xc0d1;
+        break;
     case 4: /* Auto-neg advertisement */
-        return s->advertise;
+        val = s->advertise;
+        break;
     case 5: /* Auto-neg Link Partner Ability */
-        return 0x0f71;
+        val = 0x0f71;
+        break;
     case 6: /* Auto-neg Expansion */
-        return 1;
-        /* TODO 17, 18, 27, 29, 30, 31 */
+        val = 1;
+        break;
     case 29: /* Interrupt source. */
         val = s->ints;
         s->ints = 0;
         lan9118_phy_update_irq(s);
-        return val;
+        break;
     case 30: /* Interrupt mask */
-        return s->int_mask;
+        val = s->int_mask;
+        break;
+    case 17:
+    case 18:
+    case 27:
+    case 31:
+        qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
+                      __func__, reg);
+        val = 0;
+        break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "lan9118_phy_read: PHY read reg %d\n", reg);
-        return 0;
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
+                      __func__, reg);
+        val = 0;
+        break;
     }
+
+    trace_lan9118_phy_read(val, reg);
+
+    return val;
 }
 
 void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
 {
+    trace_lan9118_phy_write(val, reg);
+
     switch (reg) {
     case 0: /* Basic Control */
         if (val & 0x8000) {
             lan9118_phy_reset(s);
-            break;
-        }
-        s->control = val & 0x7980;
-        /* Complete autonegotiation immediately. */
-        if (val & 0x1000) {
-            s->status |= 0x0020;
+        } else {
+            s->control = val & 0x7980;
+            /* Complete autonegotiation immediately. */
+            if (val & 0x1000) {
+                s->status |= 0x0020;
+            }
         }
         break;
     case 4: /* Auto-neg advertisement */
         s->advertise = (val & 0x2d7f) | 0x80;
         break;
-        /* TODO 17, 18, 27, 31 */
     case 30: /* Interrupt mask */
         s->int_mask = val & 0xff;
         lan9118_phy_update_irq(s);
         break;
+    case 17:
+    case 18:
+    case 27:
+    case 31:
+        qemu_log_mask(LOG_UNIMP, "%s: reg %d not implemented\n",
+                      __func__, reg);
+        break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "lan9118_phy_write: PHY write reg %d = 0x%04x\n", reg, val);
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address at offset %d\n",
+                      __func__, reg);
+        break;
     }
 }
 
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
 
     /* Autonegotiation status mirrors link status. */
     if (link_down) {
+        trace_lan9118_phy_update_link("down");
         s->status &= ~0x0024;
         s->ints |= PHY_INT_DOWN;
     } else {
+        trace_lan9118_phy_update_link("up");
         s->status |= 0x0024;
         s->ints |= PHY_INT_ENERGYON;
         s->ints |= PHY_INT_AUTONEG_COMPLETE;
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
 
 void lan9118_phy_reset(Lan9118PhyState *s)
 {
+    trace_lan9118_phy_reset();
+
     s->control = 0x3000;
     s->status = 0x7809;
     s->advertise = 0x01e1;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_phy = {
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (const VMStateField[]) {
-        VMSTATE_UINT16(control, Lan9118PhyState),
         VMSTATE_UINT16(status, Lan9118PhyState),
+        VMSTATE_UINT16(control, Lan9118PhyState),
         VMSTATE_UINT16(advertise, Lan9118PhyState),
         VMSTATE_UINT16(ints, Lan9118PhyState),
         VMSTATE_UINT16(int_mask, Lan9118PhyState),
diff --git a/hw/net/Kconfig b/hw/net/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/Kconfig
+++ b/hw/net/Kconfig
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_SUN8I_EMAC
 
 config IMX_FEC
     bool
+    select LAN9118_PHY
 
 config CADENCE
     bool
diff --git a/hw/net/trace-events b/hw/net/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -XXX,XX +XXX,XX @@ allwinner_sun8i_emac_set_link(bool active) "Set link: active=%u"
 allwinner_sun8i_emac_read(uint64_t offset, uint64_t val) "MMIO read: offset=0x%" PRIx64 " value=0x%" PRIx64
 allwinner_sun8i_emac_write(uint64_t offset, uint64_t val) "MMIO write: offset=0x%" PRIx64 " value=0x%" PRIx64
 
+# lan9118_phy.c
+lan9118_phy_read(uint16_t val, int reg) "[0x%02x] -> 0x%04" PRIx16
+lan9118_phy_write(uint16_t val, int reg) "[0x%02x] <- 0x%04" PRIx16
+lan9118_phy_update_link(const char *s) "%s"
+lan9118_phy_reset(void) ""
+
 # lance.c
 lance_mem_readw(uint64_t addr, uint32_t ret) "addr=0x%"PRIx64"val=0x%04x"
 lance_mem_writew(uint64_t addr, uint32_t val) "addr=0x%"PRIx64"val=0x%04x"
@@ -XXX,XX +XXX,XX @@ i82596_set_multicast(uint16_t count) "Added %d multicast entries"
 i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"
 
 # imx_fec.c
-imx_phy_read(uint32_t val, int phy, int reg) "0x%04"PRIx32" <= phy[%d].reg[%d]"
 imx_phy_read_num(int phy, int configured) "read request from unconfigured phy %d (configured %d)"
-imx_phy_write(uint32_t val, int phy, int reg) "0x%04"PRIx32" => phy[%d].reg[%d]"
 imx_phy_write_num(int phy, int configured) "write request to unconfigured phy %d (configured %d)"
-imx_phy_update_link(const char *s) "%s"
-imx_phy_reset(void) ""
 imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
 imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
 imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
-- 
2.34.1

From: Bernhard Beschow <shentey@gmail.com>

Turns 0x70 into 0xe0 (== 0x70 << 1) which adds the missing MII_ANLPAR_TX and
fixes the MSB of selector field to be zero, as specified in the datasheet.

Fixes: 2a424990170b "LAN9118 emulation"
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241102125724.532843-4-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/lan9118_phy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118_phy.c
+++ b/hw/net/lan9118_phy.c
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
         val = s->advertise;
         break;
     case 5: /* Auto-neg Link Partner Ability */
-        val = 0x0f71;
+        val = 0x0fe1;
         break;
     case 6: /* Auto-neg Expansion */
         val = 1;
-- 
2.34.1

1
From: Akihiko Odaki <akihiko.odaki@daynix.com>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
hvf did not advance PC when raising an exception for most unhandled
3
Prefer named constants over magic values for better readability.
4
system registers, but it mistakenly advanced PC when raising an
5
exception for GICv3 registers.
6
4
7
Cc: qemu-stable@nongnu.org
8
Fixes: a2260983c655 ("hvf: arm: Add support for GICv3")
9
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
10
Message-id: 20240716-pmu-v3-4-8c7c1858a227@daynix.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
7
Tested-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20241102125724.532843-5-shentey@gmail.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
target/arm/hvf/hvf.c | 1 +
11
include/hw/net/mii.h | 6 +++++
15
1 file changed, 1 insertion(+)
12
hw/net/lan9118_phy.c | 63 ++++++++++++++++++++++++++++----------------
13
2 files changed, 46 insertions(+), 23 deletions(-)
16
14
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
15
diff --git a/include/hw/net/mii.h b/include/hw/net/mii.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/hvf/hvf.c
17
--- a/include/hw/net/mii.h
20
+++ b/target/arm/hvf/hvf.c
18
+++ b/include/hw/net/mii.h
21
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
19
@@ -XXX,XX +XXX,XX @@
22
/* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
20
#define MII_BMSR_JABBER (1 << 1) /* Jabber detected */
23
if (!hvf_sysreg_read_cp(cpu, reg, &val)) {
21
#define MII_BMSR_EXTCAP (1 << 0) /* Ext-reg capability */
24
hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
22
25
+ return 1;
23
+#define MII_ANAR_RFAULT (1 << 13) /* Say we can detect faults */
24
#define MII_ANAR_PAUSE_ASYM (1 << 11) /* Try for asymmetric pause */
25
#define MII_ANAR_PAUSE (1 << 10) /* Try for pause */
26
#define MII_ANAR_TXFD (1 << 8)
27
@@ -XXX,XX +XXX,XX @@
28
#define MII_ANAR_10FD (1 << 6)
29
#define MII_ANAR_10 (1 << 5)
30
#define MII_ANAR_CSMACD (1 << 0)
31
+#define MII_ANAR_SELECT (0x001f) /* Selector bits */
32
33
#define MII_ANLPAR_ACK (1 << 14)
34
#define MII_ANLPAR_PAUSEASY (1 << 11) /* can pause asymmetrically */
35
@@ -XXX,XX +XXX,XX @@
36
#define RTL8201CP_PHYID1 0x0000
37
#define RTL8201CP_PHYID2 0x8201
38
39
+/* SMSC LAN9118 */
40
+#define SMSCLAN9118_PHYID1 0x0007
41
+#define SMSCLAN9118_PHYID2 0xc0d1
42
+
43
/* RealTek 8211E */
44
#define RTL8211E_PHYID1 0x001c
45
#define RTL8211E_PHYID2 0xc915
46
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/net/lan9118_phy.c
49
+++ b/hw/net/lan9118_phy.c
50
@@ -XXX,XX +XXX,XX @@
51
52
#include "qemu/osdep.h"
53
#include "hw/net/lan9118_phy.h"
54
+#include "hw/net/mii.h"
55
#include "hw/irq.h"
56
#include "hw/resettable.h"
57
#include "migration/vmstate.h"
58
@@ -XXX,XX +XXX,XX @@ uint16_t lan9118_phy_read(Lan9118PhyState *s, int reg)
59
uint16_t val;
60
61
switch (reg) {
62
- case 0: /* Basic Control */
63
+ case MII_BMCR:
64
val = s->control;
65
break;
66
- case 1: /* Basic Status */
67
+ case MII_BMSR:
68
val = s->status;
69
break;
70
- case 2: /* ID1 */
71
- val = 0x0007;
72
+ case MII_PHYID1:
73
+ val = SMSCLAN9118_PHYID1;
74
break;
75
- case 3: /* ID2 */
76
- val = 0xc0d1;
77
+ case MII_PHYID2:
78
+ val = SMSCLAN9118_PHYID2;
79
break;
80
- case 4: /* Auto-neg advertisement */
81
+ case MII_ANAR:
82
val = s->advertise;
83
break;
84
- case 5: /* Auto-neg Link Partner Ability */
85
- val = 0x0fe1;
86
+ case MII_ANLPAR:
87
+ val = MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
88
+ MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
89
+ MII_ANLPAR_10 | MII_ANLPAR_CSMACD;
90
break;
91
- case 6: /* Auto-neg Expansion */
92
- val = 1;
93
+ case MII_ANER:
94
+ val = MII_ANER_NWAY;
95
break;
96
case 29: /* Interrupt source. */
97
val = s->ints;
98
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
99
trace_lan9118_phy_write(val, reg);
100
101
switch (reg) {
102
- case 0: /* Basic Control */
103
- if (val & 0x8000) {
104
+ case MII_BMCR:
105
+ if (val & MII_BMCR_RESET) {
106
lan9118_phy_reset(s);
107
} else {
108
- s->control = val & 0x7980;
109
+ s->control = val & (MII_BMCR_LOOPBACK | MII_BMCR_SPEED100 |
110
+ MII_BMCR_AUTOEN | MII_BMCR_PDOWN | MII_BMCR_FD |
111
+ MII_BMCR_CTST);
112
/* Complete autonegotiation immediately. */
113
- if (val & 0x1000) {
114
- s->status |= 0x0020;
115
+ if (val & MII_BMCR_AUTOEN) {
116
+ s->status |= MII_BMSR_AN_COMP;
117
}
26
}
118
}
27
break;
119
break;
28
case SYSREG_DBGBVR0_EL1:
120
- case 4: /* Auto-neg advertisement */
121
- s->advertise = (val & 0x2d7f) | 0x80;
122
+ case MII_ANAR:
123
+ s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
124
+ MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
125
+ MII_ANAR_SELECT))
126
+ | MII_ANAR_TX;
127
break;
128
case 30: /* Interrupt mask */
129
s->int_mask = val & 0xff;
130
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_update_link(Lan9118PhyState *s, bool link_down)
131
/* Autonegotiation status mirrors link status. */
132
if (link_down) {
133
trace_lan9118_phy_update_link("down");
134
- s->status &= ~0x0024;
135
+ s->status &= ~(MII_BMSR_AN_COMP | MII_BMSR_LINK_ST);
136
s->ints |= PHY_INT_DOWN;
137
} else {
138
trace_lan9118_phy_update_link("up");
139
- s->status |= 0x0024;
140
+ s->status |= MII_BMSR_AN_COMP | MII_BMSR_LINK_ST;
141
s->ints |= PHY_INT_ENERGYON;
142
s->ints |= PHY_INT_AUTONEG_COMPLETE;
143
}
144
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_reset(Lan9118PhyState *s)
145
{
146
trace_lan9118_phy_reset();
147
148
- s->control = 0x3000;
149
- s->status = 0x7809;
150
- s->advertise = 0x01e1;
151
+ s->control = MII_BMCR_AUTOEN | MII_BMCR_SPEED100;
152
+ s->status = MII_BMSR_100TX_FD
153
+ | MII_BMSR_100TX_HD
154
+ | MII_BMSR_10T_FD
155
+ | MII_BMSR_10T_HD
156
+ | MII_BMSR_AUTONEG
157
+ | MII_BMSR_EXTCAP;
158
+ s->advertise = MII_ANAR_TXFD
159
+ | MII_ANAR_TX
160
+ | MII_ANAR_10FD
161
+ | MII_ANAR_10
162
+ | MII_ANAR_CSMACD;
163
s->int_mask = 0;
164
s->ints = 0;
165
lan9118_phy_update_link(s, s->link_down);
29
--
166
--
30
2.34.1
167
2.34.1
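A quick way to see that the MII constant conversion above is behaviour-preserving is to OR the named bits back together: they reproduce the old magic value. A minimal compile-time sketch, assuming the standard 802.3 ANLPAR bit positions that hw/net/mii.h defines:

    #include "qemu/osdep.h"
    #include "hw/net/mii.h"

    /* The replacement for the old "val = 0x0fe1;" encodes the same bits */
    QEMU_BUILD_BUG_ON((MII_ANLPAR_PAUSEASY | MII_ANLPAR_PAUSE | MII_ANLPAR_T4 |
                       MII_ANLPAR_TXFD | MII_ANLPAR_TX | MII_ANLPAR_10FD |
                       MII_ANLPAR_10 | MII_ANLPAR_CSMACD) != 0x0fe1);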
1
From: Akihiko Odaki <akihiko.odaki@daynix.com>
1
From: Bernhard Beschow <shentey@gmail.com>
2
2
3
Asahi Linux supports KVM but lacks PMU support.
3
The real device advertises this mode and the device model already advertises
4
100 mbps half duplex and 10 mbps full+half duplex. So advertise this mode to
5
make the model more realistic.
4
6
5
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
7
Message-id: 20240716-pmu-v3-1-8c7c1858a227@daynix.com
9
Tested-by: Guenter Roeck <linux@roeck-us.net>
10
Message-id: 20241102125724.532843-6-shentey@gmail.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
12
---
10
tests/qtest/arm-cpu-features.c | 13 ++++++++-----
13
hw/net/lan9118_phy.c | 4 ++--
11
1 file changed, 8 insertions(+), 5 deletions(-)
14
1 file changed, 2 insertions(+), 2 deletions(-)
12
15
13
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
16
diff --git a/hw/net/lan9118_phy.c b/hw/net/lan9118_phy.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/tests/qtest/arm-cpu-features.c
18
--- a/hw/net/lan9118_phy.c
16
+++ b/tests/qtest/arm-cpu-features.c
19
+++ b/hw/net/lan9118_phy.c
17
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
20
@@ -XXX,XX +XXX,XX @@ void lan9118_phy_write(Lan9118PhyState *s, int reg, uint16_t val)
18
assert_set_feature(qts, "host", "kvm-no-adjvtime", false);
21
break;
19
22
case MII_ANAR:
20
if (g_str_equal(qtest_get_arch(), "aarch64")) {
23
s->advertise = (val & (MII_ANAR_RFAULT | MII_ANAR_PAUSE_ASYM |
21
+ bool kvm_supports_pmu;
24
- MII_ANAR_PAUSE | MII_ANAR_10FD | MII_ANAR_10 |
22
bool kvm_supports_steal_time;
25
- MII_ANAR_SELECT))
23
bool kvm_supports_sve;
26
+ MII_ANAR_PAUSE | MII_ANAR_TXFD | MII_ANAR_10FD |
24
char max_name[8], name[8];
27
+ MII_ANAR_10 | MII_ANAR_SELECT))
25
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
28
| MII_ANAR_TX;
26
29
break;
27
assert_has_feature_enabled(qts, "host", "aarch64");
30
case 30: /* Interrupt mask */
28
29
- /* Enabling and disabling pmu should always work. */
30
- assert_has_feature_enabled(qts, "host", "pmu");
31
- assert_set_feature(qts, "host", "pmu", false);
32
- assert_set_feature(qts, "host", "pmu", true);
33
-
34
/*
35
* Some features would be enabled by default, but they're disabled
36
* because this instance of KVM doesn't support them. Test that the
37
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
38
assert_has_feature(qts, "host", "sve");
39
40
resp = do_query_no_props(qts, "host");
41
+ kvm_supports_pmu = resp_get_feature(resp, "pmu");
42
kvm_supports_steal_time = resp_get_feature(resp, "kvm-steal-time");
43
kvm_supports_sve = resp_get_feature(resp, "sve");
44
vls = resp_get_sve_vls(resp);
45
qobject_unref(resp);
46
47
+ if (kvm_supports_pmu) {
48
+ /* If we have pmu then we should be able to toggle it. */
49
+ assert_set_feature(qts, "host", "pmu", false);
50
+ assert_set_feature(qts, "host", "pmu", true);
51
+ }
52
+
53
if (kvm_supports_steal_time) {
54
/* If we have steal-time then we should be able to toggle it. */
55
assert_set_feature(qts, "host", "kvm-steal-time", false);
56
--
31
--
57
2.34.1
32
2.34.1
58
59
diff view generated by jsdifflib
1
From: Mostafa Saleh <smostafa@google.com>
1
For IEEE fused multiply-add, the (0 * inf) + NaN case should raise
2
Invalid for the multiplication of 0 by infinity. Currently we handle
3
this in the per-architecture ifdef ladder in pickNaNMulAdd().
4
However, since this isn't really architecture specific we can hoist
5
it up to the generic code.
2
6
3
When nested translation is requested, do the following:
7
For the cases where the infzero test in pickNaNMulAdd was
4
- Translate stage-1 table address IPA into PA through stage-2.
8
returning 2, we can delete the check entirely and allow the
5
- Translate stage-1 table walk output (IPA) through stage-2.
9
code to fall into the normal pick-a-NaN handling, because this
6
- Create a single TLB entry from stage-1 and stage-2 translations
10
will return 2 anyway (input 'c' being the only NaN in this case).
7
using logic introduced before.
11
For the cases where infzero was returning 3 to indicate "return
12
the default NaN", we must retain that "return 3".
8
13
9
smmu_ptw() has a new argument SMMUState which include the TLB as
14
For Arm, this looks like it might be a behaviour change because we
10
stage-1 table address can be cached in there.
15
used to set float_flag_invalid | float_flag_invalid_imz only if C is
16
a quiet NaN. However, it is not, because Arm target code never looks
17
at float_flag_invalid_imz, and for the (0 * inf) + SNaN case we
18
already raised float_flag_invalid via the "abc_mask &
19
float_cmask_snan" check in pick_nan_muladd.
11
20
12
Also in smmu_ptw(), a separate path is used for nesting to simplify the
21
For any target architecture using the "default implementation" at the
13
code, although some logic can be combined.
22
bottom of the ifdef, this is a behaviour change but will be fixing a
23
bug (where we failed to raise the Invalid exception for (0 * inf +
24
QNaN). The architectures using the default case are:
25
* hppa
26
* i386
27
* sh4
28
* tricore
14
29
15
With nested translation, the class of a translation fault can be different
30
The x86, Tricore and SH4 CPU architecture manuals are clear that this
16
from the class of the translation, as faults from translating stage-1
31
should have raised Invalid; HPPA is a bit vaguer but still seems
17
tables are considered as CLASS_TT and not CLASS_IN, a new member
32
clear enough.
18
"is_ipa_descriptor" added to "SMMUPTWEventInfo" to differ faults
19
from walking stage 1 translation table and faults from translating
20
an IPA for a transaction.
21
33
22
Signed-off-by: Mostafa Saleh <smostafa@google.com>
23
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
24
Reviewed-by: Eric Auger <eric.auger@redhat.com>
25
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
26
Message-id: 20240715084519.1189624-12-smostafa@google.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20241202131347.498124-2-peter.maydell@linaro.org
28
---
37
---
29
include/hw/arm/smmu-common.h | 7 ++--
38
fpu/softfloat-parts.c.inc | 13 +++++++------
30
hw/arm/smmu-common.c | 74 +++++++++++++++++++++++++++++++-----
39
fpu/softfloat-specialize.c.inc | 29 +----------------------------
31
hw/arm/smmuv3.c | 14 +++++++
40
2 files changed, 8 insertions(+), 34 deletions(-)
32
3 files changed, 82 insertions(+), 13 deletions(-)
33
41
34
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
42
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
35
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
36
--- a/include/hw/arm/smmu-common.h
44
--- a/fpu/softfloat-parts.c.inc
37
+++ b/include/hw/arm/smmu-common.h
45
+++ b/fpu/softfloat-parts.c.inc
38
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUPTWEventInfo {
46
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
39
SMMUStage stage;
47
int ab_mask, int abc_mask)
40
SMMUPTWEventType type;
48
{
41
dma_addr_t addr; /* fetched address that induced an abort, if any */
49
int which;
42
+ bool is_ipa_descriptor; /* src for fault in nested translation. */
50
+ bool infzero = (ab_mask == float_cmask_infzero);
43
} SMMUPTWEventInfo;
51
44
52
if (unlikely(abc_mask & float_cmask_snan)) {
45
typedef struct SMMUTransTableInfo {
53
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
46
@@ -XXX,XX +XXX,XX @@ static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
54
}
47
* smmu_ptw - Perform the page table walk for a given iova / access flags
55
48
* pair, according to @cfg translation config
56
- which = pickNaNMulAdd(a->cls, b->cls, c->cls,
49
*/
57
- ab_mask == float_cmask_infzero, s);
50
-int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
58
+ if (infzero) {
51
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
59
+ /* This is (0 * inf) + NaN or (inf * 0) + NaN */
52
-
60
+ float_raise(float_flag_invalid | float_flag_invalid_imz, s);
53
+int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
54
+ IOMMUAccessFlags perm, SMMUTLBEntry *tlbe,
55
+ SMMUPTWEventInfo *info);
56
57
/*
58
* smmu_translate - Look for a translation in TLB, if not, do a PTW.
59
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/arm/smmu-common.c
62
+++ b/hw/arm/smmu-common.c
63
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
64
return NULL;
65
}
66
67
+/* Translate stage-1 table address using stage-2 page table. */
68
+static inline int translate_table_addr_ipa(SMMUState *bs,
69
+ dma_addr_t *table_addr,
70
+ SMMUTransCfg *cfg,
71
+ SMMUPTWEventInfo *info)
72
+{
73
+ dma_addr_t addr = *table_addr;
74
+ SMMUTLBEntry *cached_entry;
75
+ int asid;
76
+
77
+ /*
78
+ * The translation table walks performed from TTB0 or TTB1 are always
79
+ * performed in IPA space if stage 2 translations are enabled.
80
+ */
81
+ asid = cfg->asid;
82
+ cfg->stage = SMMU_STAGE_2;
83
+ cfg->asid = -1;
84
+ cached_entry = smmu_translate(bs, cfg, addr, IOMMU_RO, info);
85
+ cfg->asid = asid;
86
+ cfg->stage = SMMU_NESTED;
87
+
88
+ if (cached_entry) {
89
+ *table_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
90
+ return 0;
91
+ }
61
+ }
92
+
62
+
93
+ info->stage = SMMU_STAGE_2;
63
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
94
+ info->addr = addr;
64
95
+ info->is_ipa_descriptor = true;
65
if (s->default_nan_mode || which == 3) {
96
+ return -EINVAL;
66
- /*
97
+}
67
- * Note that this check is after pickNaNMulAdd so that function
68
- * has an opportunity to set the Invalid flag for infzero.
69
- */
70
parts_default_nan(a, s);
71
return a;
72
}
73
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
74
index XXXXXXX..XXXXXXX 100644
75
--- a/fpu/softfloat-specialize.c.inc
76
+++ b/fpu/softfloat-specialize.c.inc
77
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
78
* the default NaN
79
*/
80
if (infzero && is_qnan(c_cls)) {
81
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
82
return 3;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
86
* case sets InvalidOp and returns the default NaN
87
*/
88
if (infzero) {
89
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
90
return 3;
91
}
92
/* Prefer sNaN over qNaN, in the a, b, c order. */
93
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
94
* For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
95
* case sets InvalidOp and returns the input value 'c'
96
*/
97
- if (infzero) {
98
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
99
- return 2;
100
- }
101
/* Prefer sNaN over qNaN, in the c, a, b order. */
102
if (is_snan(c_cls)) {
103
return 2;
104
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
105
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
106
* case sets InvalidOp and returns the input value 'c'
107
*/
108
- if (infzero) {
109
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
110
- return 2;
111
- }
98
+
112
+
99
/**
113
/* Prefer sNaN over qNaN, in the c, a, b order. */
100
* smmu_ptw_64_s1 - VMSAv8-64 Walk of the page tables for a given IOVA
114
if (is_snan(c_cls)) {
101
+ * @bs: smmu state which includes TLB instance
115
return 2;
102
* @cfg: translation config
116
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
103
* @iova: iova to translate
117
* to return an input NaN if we have one (ie c) rather than generating
104
* @perm: access type
118
* a default NaN
105
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
119
*/
106
* Upon success, @tlbe is filled with translated_addr and entry
120
- if (infzero) {
107
* permission rights.
121
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
108
*/
122
- return 2;
109
-static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
123
- }
110
+static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
124
111
dma_addr_t iova, IOMMUAccessFlags perm,
125
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
112
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
126
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
113
{
127
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
114
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
128
return 1;
115
goto error;
116
}
117
baseaddr = get_table_pte_address(pte, granule_sz);
118
+ if (cfg->stage == SMMU_NESTED) {
119
+ if (translate_table_addr_ipa(bs, &baseaddr, cfg, info)) {
120
+ goto error;
121
+ }
122
+ }
123
level++;
124
continue;
125
} else if (is_page_pte(pte, level)) {
126
@@ -XXX,XX +XXX,XX @@ error:
127
* combine S1 and S2 TLB entries into a single entry.
128
* As a result the S1 entry is overridden with combined data.
129
*/
130
-static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
131
- SMMUTLBEntry *tlbe_s2,
132
- dma_addr_t iova,
133
- SMMUTransCfg *cfg)
134
+static void combine_tlb(SMMUTLBEntry *tlbe, SMMUTLBEntry *tlbe_s2,
135
+ dma_addr_t iova, SMMUTransCfg *cfg)
136
{
137
if (tlbe_s2->entry.addr_mask < tlbe->entry.addr_mask) {
138
tlbe->entry.addr_mask = tlbe_s2->entry.addr_mask;
139
@@ -XXX,XX +XXX,XX @@ static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
140
/**
141
* smmu_ptw - Walk the page tables for an IOVA, according to @cfg
142
*
143
+ * @bs: smmu state which includes TLB instance
144
* @cfg: translation configuration
145
* @iova: iova to translate
146
* @perm: tentative access type
147
@@ -XXX,XX +XXX,XX @@ static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
148
*
149
* return 0 on success
150
*/
151
-int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
152
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
153
+int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
154
+ IOMMUAccessFlags perm, SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
155
{
156
+ int ret;
157
+ SMMUTLBEntry tlbe_s2;
158
+ dma_addr_t ipa;
159
+
160
if (cfg->stage == SMMU_STAGE_1) {
161
- return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
162
+ return smmu_ptw_64_s1(bs, cfg, iova, perm, tlbe, info);
163
} else if (cfg->stage == SMMU_STAGE_2) {
164
/*
165
* If bypassing stage 1(or unimplemented), the input address is passed
166
@@ -XXX,XX +XXX,XX @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
167
return smmu_ptw_64_s2(cfg, iova, perm, tlbe, info);
168
}
129
}
169
130
#elif defined(TARGET_RISCV)
170
- g_assert_not_reached();
131
- /* For RISC-V, InvalidOp is set when multiplicands are Inf and zero */
171
+ /* SMMU_NESTED. */
132
- if (infzero) {
172
+ ret = smmu_ptw_64_s1(bs, cfg, iova, perm, tlbe, info);
133
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
173
+ if (ret) {
134
- }
174
+ return ret;
135
return 3; /* default NaN */
175
+ }
136
#elif defined(TARGET_S390X)
176
+
137
if (infzero) {
177
+ ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
138
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
178
+ ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
139
return 3;
179
+ if (ret) {
180
+ return ret;
181
+ }
182
+
183
+ combine_tlb(tlbe, &tlbe_s2, iova, cfg);
184
+ return 0;
185
}
186
187
SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
188
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
189
}
140
}
190
141
191
cached_entry = g_new0(SMMUTLBEntry, 1);
142
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
192
- status = smmu_ptw(cfg, addr, flag, cached_entry, info);
143
return 2;
193
+ status = smmu_ptw(bs, cfg, addr, flag, cached_entry, info);
144
}
194
if (status) {
145
#elif defined(TARGET_SPARC)
195
g_free(cached_entry);
146
- /* For (inf,0,nan) return c. */
196
return NULL;
147
- if (infzero) {
197
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
148
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
198
index XXXXXXX..XXXXXXX 100644
149
- return 2;
199
--- a/hw/arm/smmuv3.c
150
- }
200
+++ b/hw/arm/smmuv3.c
151
/* Prefer SNaN over QNaN, order C, B, A. */
201
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
152
if (is_snan(c_cls)) {
202
if (!cached_entry) {
153
return 2;
203
/* All faults from PTW has S2 field. */
154
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
204
event->u.f_walk_eabt.s2 = (ptw_info.stage == SMMU_STAGE_2);
155
* For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
205
+ /*
156
* an input NaN if we have one (ie c).
206
+ * Fault class is set as follows based on "class" input to
157
*/
207
+ * the function and to "ptw_info" from "smmu_translate()"
158
- if (infzero) {
208
+ * For stage-1:
159
- float_raise(float_flag_invalid | float_flag_invalid_imz, status);
209
+ * - EABT => CLASS_TT (hardcoded)
160
- return 2;
210
+ * - other events => CLASS_IN (input to function)
161
- }
211
+ * For stage-2 => CLASS_IN (input to function)
162
if (status->use_first_nan) {
212
+ * For nested, for all events:
163
if (is_nan(a_cls)) {
213
+ * - CD fetch => CLASS_CD (input to function)
164
return 0;
214
+ * - walking stage 1 translation table => CLASS_TT (from
215
+ * is_ipa_descriptor or input in case of TTBx)
216
+ * - s2 translation => CLASS_IN (input to function)
217
+ */
218
+ class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
219
switch (ptw_info.type) {
220
case SMMU_PTW_ERR_WALK_EABT:
221
event->type = SMMU_EVT_F_WALK_EABT;
222
--
165
--
223
2.34.1
166
2.34.1
224
225
1
If the target sets default_nan_mode then we're always going to return
2
the default NaN, and pickNaNMulAdd() no longer has any side effects.
3
For consistency with pickNaN(), check for default_nan_mode before
4
calling pickNaNMulAdd().
1
5
6
When we convert pickNaNMulAdd() to allow runtime selection of the NaN
7
propagation rule, this means we won't have to make the targets which
8
use default_nan_mode also set a propagation rule.
9
10
Since RiscV always uses default_nan_mode, this allows us to remove
11
its ifdef case from pickNaNMulAdd().
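A minimal sketch of the consequence, assuming only the existing softfloat headers and helpers (the function below is illustrative, not part of the patch): a status running in default-NaN mode gets the default NaN from 0 * Inf + NaN without pickNaNMulAdd() ever being consulted, so such a target needs no propagation rule.

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"
    #include "fpu/softfloat-helpers.h"

    static float64 muladd_with_default_nan_mode(void)
    {
        float_status st = { };

        set_default_nan_mode(true, &st);
        /* 0 * Inf + NaN: Invalid is raised for the inf * 0 part and the
         * result is the default NaN; pickNaNMulAdd() is never called. */
        return float64_muladd(float64_zero, float64_infinity,
                              float64_default_nan(&st), 0, &st);
    }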
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20241202131347.498124-3-peter.maydell@linaro.org
16
---
17
fpu/softfloat-parts.c.inc | 8 ++++++--
18
fpu/softfloat-specialize.c.inc | 9 +++++++--
19
2 files changed, 13 insertions(+), 4 deletions(-)
20
21
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
22
index XXXXXXX..XXXXXXX 100644
23
--- a/fpu/softfloat-parts.c.inc
24
+++ b/fpu/softfloat-parts.c.inc
25
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
26
float_raise(float_flag_invalid | float_flag_invalid_imz, s);
27
}
28
29
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
30
+ if (s->default_nan_mode) {
31
+ which = 3;
32
+ } else {
33
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
34
+ }
35
36
- if (s->default_nan_mode || which == 3) {
37
+ if (which == 3) {
38
parts_default_nan(a, s);
39
return a;
40
}
41
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
42
index XXXXXXX..XXXXXXX 100644
43
--- a/fpu/softfloat-specialize.c.inc
44
+++ b/fpu/softfloat-specialize.c.inc
45
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
46
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
47
bool infzero, float_status *status)
48
{
49
+ /*
50
+ * We guarantee not to require the target to tell us how to
51
+ * pick a NaN if we're always returning the default NaN.
52
+ * But if we're not in default-NaN mode then the target must
53
+ * specify.
54
+ */
55
+ assert(!status->default_nan_mode);
56
#if defined(TARGET_ARM)
57
/* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
58
* the default NaN
59
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
60
} else {
61
return 1;
62
}
63
-#elif defined(TARGET_RISCV)
64
- return 3; /* default NaN */
65
#elif defined(TARGET_S390X)
66
if (infzero) {
67
return 3;
68
--
69
2.34.1
1
From: Mostafa Saleh <smostafa@google.com>
1
IEEE 754 does not define a fixed rule for what NaN to return in
2
2
the case of a fused multiply-add of inf * 0 + NaN. Different
3
Soon, smmuv3_do_translate() will be used to translate the CD and the
3
architectures thus do different things:
4
TTBx, instead of re-writting the same logic to convert the returned
4
* some return the default NaN
5
cached entry to an address, add a new macro CACHED_ENTRY_TO_ADDR.
5
* some return the input NaN
6
6
* Arm returns the default NaN if the input NaN is quiet,
7
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
and the input NaN if it is signalling
8
Signed-off-by: Mostafa Saleh <smostafa@google.com>
8
9
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
9
We want to make this logic be runtime selected rather than
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
hardcoded into the binary, because:
11
Message-id: 20240715084519.1189624-8-smostafa@google.com
11
* this will let us have multiple targets in one QEMU binary
12
* the Arm FEAT_AFP architectural feature includes letting
13
the guest select a NaN propagation rule at runtime
14
15
In this commit we add an enum for the propagation rule, the field in
16
float_status, and the corresponding getters and setters. We change
17
pickNaNMulAdd to honour this, but because all targets still leave
18
this field at its default 0 value, the fallback logic will pick the
19
rule type with the old ifdef ladder.
20
21
Note that four architectures both use the muladd softfloat functions
22
and did not have a branch of the ifdef ladder to specify their
23
behaviour (and so were ending up with the "default" case, probably
24
wrongly): i386, HPPA, SH4 and Tricore. SH4 and Tricore both set
25
default_nan_mode, and so will never get into pickNaNMulAdd(). For
26
HPPA and i386 we retain the same behaviour as the old default-case,
27
which is to not ever return the default NaN. This might not be
28
correct but it is not a behaviour change.
29
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
32
Message-id: 20241202131347.498124-4-peter.maydell@linaro.org
13
---
33
---
14
include/hw/arm/smmu-common.h | 3 +++
34
include/fpu/softfloat-helpers.h | 11 ++++
15
hw/arm/smmuv3.c | 3 +--
35
include/fpu/softfloat-types.h | 23 +++++++++
16
2 files changed, 4 insertions(+), 2 deletions(-)
36
fpu/softfloat-specialize.c.inc | 91 ++++++++++++++++++++++-----------
17
37
3 files changed, 95 insertions(+), 30 deletions(-)
18
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
38
39
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
19
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/smmu-common.h
41
--- a/include/fpu/softfloat-helpers.h
21
+++ b/include/hw/arm/smmu-common.h
42
+++ b/include/fpu/softfloat-helpers.h
22
@@ -XXX,XX +XXX,XX @@
43
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
23
#define VMSA_IDXMSK(isz, strd, lvl) ((1ULL << \
44
status->float_2nan_prop_rule = rule;
24
VMSA_BIT_LVL(isz, strd, lvl)) - 1)
45
}
25
46
26
+#define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
47
+static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
27
+ ((addr) & (ent)->entry.addr_mask))
48
+ float_status *status)
49
+{
50
+ status->float_infzeronan_rule = rule;
51
+}
52
+
53
static inline void set_flush_to_zero(bool val, float_status *status)
54
{
55
status->flush_to_zero = val;
56
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
57
return status->float_2nan_prop_rule;
58
}
59
60
+static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
61
+{
62
+ return status->float_infzeronan_rule;
63
+}
64
+
65
static inline bool get_flush_to_zero(float_status *status)
66
{
67
return status->flush_to_zero;
68
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
69
index XXXXXXX..XXXXXXX 100644
70
--- a/include/fpu/softfloat-types.h
71
+++ b/include/fpu/softfloat-types.h
72
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
73
float_2nan_prop_x87,
74
} Float2NaNPropRule;
75
76
+/*
77
+ * Rule for result of fused multiply-add 0 * Inf + NaN.
78
+ * This must be a NaN, but implementations differ on whether this
79
+ * is the input NaN or the default NaN.
80
+ *
81
+ * You don't need to set this if default_nan_mode is enabled.
82
+ * When not in default-NaN mode, it is an error for the target
83
+ * not to set the rule in float_status if it uses muladd, and we
84
+ * will assert if we need to handle an input NaN and no rule was
85
+ * selected.
86
+ */
87
+typedef enum __attribute__((__packed__)) {
88
+ /* No propagation rule specified */
89
+ float_infzeronan_none = 0,
90
+ /* Result is never the default NaN (so always the input NaN) */
91
+ float_infzeronan_dnan_never,
92
+ /* Result is always the default NaN */
93
+ float_infzeronan_dnan_always,
94
+ /* Result is the default NaN if the input NaN is quiet */
95
+ float_infzeronan_dnan_if_qnan,
96
+} FloatInfZeroNaNRule;
28
+
97
+
29
/*
98
/*
30
* Page table walk error types
99
* Floating Point Status. Individual architectures may maintain
31
*/
100
* several versions of float_status for different functions. The
32
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
101
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
102
FloatRoundMode float_rounding_mode;
103
FloatX80RoundPrec floatx80_rounding_precision;
104
Float2NaNPropRule float_2nan_prop_rule;
105
+ FloatInfZeroNaNRule float_infzeronan_rule;
106
bool tininess_before_rounding;
107
/* should denormalised results go to zero and set the inexact flag? */
108
bool flush_to_zero;
109
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
33
index XXXXXXX..XXXXXXX 100644
110
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/smmuv3.c
111
--- a/fpu/softfloat-specialize.c.inc
35
+++ b/hw/arm/smmuv3.c
112
+++ b/fpu/softfloat-specialize.c.inc
36
@@ -XXX,XX +XXX,XX @@ epilogue:
113
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
37
switch (status) {
114
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
38
case SMMU_TRANS_SUCCESS:
115
bool infzero, float_status *status)
39
entry.perm = cached_entry->entry.perm;
116
{
40
- entry.translated_addr = cached_entry->entry.translated_addr +
117
+ FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
41
- (addr & cached_entry->entry.addr_mask);
118
+
42
+ entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
119
/*
43
entry.addr_mask = cached_entry->entry.addr_mask;
120
* We guarantee not to require the target to tell us how to
44
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
121
* pick a NaN if we're always returning the default NaN.
45
entry.translated_addr, entry.perm,
122
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
123
* specify.
124
*/
125
assert(!status->default_nan_mode);
126
+
127
+ if (rule == float_infzeronan_none) {
128
+ /*
129
+ * Temporarily fall back to ifdef ladder
130
+ */
131
#if defined(TARGET_ARM)
132
- /* For ARM, the (inf,zero,qnan) case sets InvalidOp and returns
133
- * the default NaN
134
- */
135
- if (infzero && is_qnan(c_cls)) {
136
- return 3;
137
+ /*
138
+ * For ARM, the (inf,zero,qnan) case returns the default NaN,
139
+ * but (inf,zero,snan) returns the input NaN.
140
+ */
141
+ rule = float_infzeronan_dnan_if_qnan;
142
+#elif defined(TARGET_MIPS)
143
+ if (snan_bit_is_one(status)) {
144
+ /*
145
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
146
+ * case sets InvalidOp and returns the default NaN
147
+ */
148
+ rule = float_infzeronan_dnan_always;
149
+ } else {
150
+ /*
151
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
152
+ * case sets InvalidOp and returns the input value 'c'
153
+ */
154
+ rule = float_infzeronan_dnan_never;
155
+ }
156
+#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
157
+ defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
158
+ defined(TARGET_I386) || defined(TARGET_LOONGARCH)
159
+ /*
160
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
161
+ * case sets InvalidOp and returns the input value 'c'
162
+ */
163
+ /*
164
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
165
+ * to return an input NaN if we have one (ie c) rather than generating
166
+ * a default NaN
167
+ */
168
+ rule = float_infzeronan_dnan_never;
169
+#elif defined(TARGET_S390X)
170
+ rule = float_infzeronan_dnan_always;
171
+#endif
172
}
173
174
+ if (infzero) {
175
+ /*
176
+ * Inf * 0 + NaN -- some implementations return the default NaN here,
177
+ * and some return the input NaN.
178
+ */
179
+ switch (rule) {
180
+ case float_infzeronan_dnan_never:
181
+ return 2;
182
+ case float_infzeronan_dnan_always:
183
+ return 3;
184
+ case float_infzeronan_dnan_if_qnan:
185
+ return is_qnan(c_cls) ? 3 : 2;
186
+ default:
187
+ g_assert_not_reached();
188
+ }
189
+ }
190
+
191
+#if defined(TARGET_ARM)
192
+
193
/* This looks different from the ARM ARM pseudocode, because the ARM ARM
194
* puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
195
*/
196
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
197
}
198
#elif defined(TARGET_MIPS)
199
if (snan_bit_is_one(status)) {
200
- /*
201
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
202
- * case sets InvalidOp and returns the default NaN
203
- */
204
- if (infzero) {
205
- return 3;
206
- }
207
/* Prefer sNaN over qNaN, in the a, b, c order. */
208
if (is_snan(a_cls)) {
209
return 0;
210
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
211
return 2;
212
}
213
} else {
214
- /*
215
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
216
- * case sets InvalidOp and returns the input value 'c'
217
- */
218
/* Prefer sNaN over qNaN, in the c, a, b order. */
219
if (is_snan(c_cls)) {
220
return 2;
221
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
222
}
223
}
224
#elif defined(TARGET_LOONGARCH64)
225
- /*
226
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
227
- * case sets InvalidOp and returns the input value 'c'
228
- */
229
-
230
/* Prefer sNaN over qNaN, in the c, a, b order. */
231
if (is_snan(c_cls)) {
232
return 2;
233
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
234
return 1;
235
}
236
#elif defined(TARGET_PPC)
237
- /* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
238
- * to return an input NaN if we have one (ie c) rather than generating
239
- * a default NaN
240
- */
241
-
242
/* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
243
* otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
244
*/
245
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
246
return 1;
247
}
248
#elif defined(TARGET_S390X)
249
- if (infzero) {
250
- return 3;
251
- }
252
-
253
if (is_snan(a_cls)) {
254
return 0;
255
} else if (is_snan(b_cls)) {
46
--
256
--
47
2.34.1
257
2.34.1
48
49
1
Explicitly set a rule in the softfloat tests for the inf-zero-nan
2
muladd special case. In meson.build we put -DTARGET_ARM in fpcflags,
3
and so we should select here the Arm rule of
4
float_infzeronan_dnan_if_qnan.
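A minimal standalone check of the behaviour being selected, assuming only the softfloat APIs added earlier in this series (the helper name is illustrative, not part of fp-test): with the Arm-style rule, 0 * Inf + QNaN yields the default NaN and raises Invalid, while an SNaN third operand would be propagated (quietened) instead.

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"
    #include "fpu/softfloat-helpers.h"

    static bool infzeronan_rule_behaves_like_arm(void)
    {
        float_status st = { };
        float64 r;

        set_float_2nan_prop_rule(float_2nan_prop_s_ab, &st);
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &st);

        /* 0 * Inf + QNaN -> default NaN, with Invalid raised */
        r = float64_muladd(float64_zero, float64_infinity,
                           float64_default_nan(&st), 0, &st);
        return float64_val(r) == float64_val(float64_default_nan(&st)) &&
               (get_float_exception_flags(&st) & float_flag_invalid);
    }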
1
5
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20241202131347.498124-5-peter.maydell@linaro.org
9
---
10
tests/fp/fp-bench.c | 5 +++++
11
tests/fp/fp-test.c | 5 +++++
12
2 files changed, 10 insertions(+)
13
14
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/fp/fp-bench.c
17
+++ b/tests/fp/fp-bench.c
18
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
19
{
20
bench_func_t f;
21
22
+ /*
23
+ * These implementation-defined choices for various things IEEE
24
+ * doesn't specify match those used by the Arm architecture.
25
+ */
26
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
27
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
28
29
f = bench_funcs[operation][precision];
30
g_assert(f);
31
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/tests/fp/fp-test.c
34
+++ b/tests/fp/fp-test.c
35
@@ -XXX,XX +XXX,XX @@ void run_test(void)
36
{
37
unsigned int i;
38
39
+ /*
40
+ * These implementation-defined choices for various things IEEE
41
+ * doesn't specify match those used by the Arm architecture.
42
+ */
43
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
44
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
45
46
genCases_setLevel(test_level);
47
verCases_maxErrorCount = n_max_errors;
48
--
49
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the Arm target,
2
so we can remove the ifdef from pickNaNMulAdd().
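The same one-line pattern repeats in the per-target patches that follow; a minimal sketch, using the Arm field env->vfp.fp_status as the example (each target applies it to its own float_status fields, typically at reset or realize time):

    #include "qemu/osdep.h"
    #include "cpu.h"                       /* CPUARMState */
    #include "fpu/softfloat-helpers.h"

    /* Select the rule once, wherever the target (re)initialises the
     * float_status it uses for fused multiply-add. */
    static void select_arm_infzeronan_rule(CPUARMState *env)
    {
        set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan,
                                  &env->vfp.fp_status);
    }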
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-6-peter.maydell@linaro.org
7
---
8
target/arm/cpu.c | 3 +++
9
fpu/softfloat-specialize.c.inc | 8 +-------
10
2 files changed, 4 insertions(+), 7 deletions(-)
11
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
15
+++ b/target/arm/cpu.c
16
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
17
* * tininess-before-rounding
18
* * 2-input NaN propagation prefers SNaN over QNaN, and then
19
* operand A over operand B (see FPProcessNaNs() pseudocode)
20
+ * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
21
+ * and the input NaN if it is signalling
22
*/
23
static void arm_set_default_fp_behaviours(float_status *s)
24
{
25
set_float_detect_tininess(float_tininess_before_rounding, s);
26
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
27
+ set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
28
}
29
30
static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
31
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
32
index XXXXXXX..XXXXXXX 100644
33
--- a/fpu/softfloat-specialize.c.inc
34
+++ b/fpu/softfloat-specialize.c.inc
35
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
36
/*
37
* Temporarily fall back to ifdef ladder
38
*/
39
-#if defined(TARGET_ARM)
40
- /*
41
- * For ARM, the (inf,zero,qnan) case returns the default NaN,
42
- * but (inf,zero,snan) returns the input NaN.
43
- */
44
- rule = float_infzeronan_dnan_if_qnan;
45
-#elif defined(TARGET_MIPS)
46
+#if defined(TARGET_MIPS)
47
if (snan_bit_is_one(status)) {
48
/*
49
* For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
50
--
51
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for s390, so we
2
can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-7-peter.maydell@linaro.org
7
---
8
target/s390x/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 2 insertions(+), 2 deletions(-)
11
12
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/s390x/cpu.c
15
+++ b/target/s390x/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
17
set_float_detect_tininess(float_tininess_before_rounding,
18
&env->fpu_status);
19
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
20
+ set_float_infzeronan_rule(float_infzeronan_dnan_always,
21
+ &env->fpu_status);
22
/* fall through */
23
case RESET_TYPE_S390_CPU_NORMAL:
24
env->psw.mask &= ~PSW_MASK_RI;
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
* a default NaN
31
*/
32
rule = float_infzeronan_dnan_never;
33
-#elif defined(TARGET_S390X)
34
- rule = float_infzeronan_dnan_always;
35
#endif
36
}
37
38
--
39
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the PPC target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-8-peter.maydell@linaro.org
7
---
8
target/ppc/cpu_init.c | 7 +++++++
9
fpu/softfloat-specialize.c.inc | 7 +------
10
2 files changed, 8 insertions(+), 6 deletions(-)
11
12
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/ppc/cpu_init.c
15
+++ b/target/ppc/cpu_init.c
16
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
17
*/
18
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
19
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
20
+ /*
21
+ * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
22
+ * to return an input NaN if we have one (ie c) rather than generating
23
+ * a default NaN
24
+ */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
26
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);
27
28
for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
29
ppc_spr_t *spr = &env->spr_cb[i];
30
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
31
index XXXXXXX..XXXXXXX 100644
32
--- a/fpu/softfloat-specialize.c.inc
33
+++ b/fpu/softfloat-specialize.c.inc
34
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
35
*/
36
rule = float_infzeronan_dnan_never;
37
}
38
-#elif defined(TARGET_PPC) || defined(TARGET_SPARC) || \
39
+#elif defined(TARGET_SPARC) || \
40
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
41
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
42
/*
43
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
44
* case sets InvalidOp and returns the input value 'c'
45
*/
46
- /*
47
- * For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
48
- * to return an input NaN if we have one (ie c) rather than generating
49
- * a default NaN
50
- */
51
rule = float_infzeronan_dnan_never;
52
#endif
53
}
54
--
55
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the MIPS target,
2
so we can remove the ifdef from pickNaNMulAdd().
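A minimal sketch of the selection being made, with names as in the hunk below; because restore_snan_bit_mode() is re-run when the guest updates FCR31 (via restore_fp_status()), the rule tracks the NAN2008 bit at runtime:

    /* Body excerpt of restore_snan_bit_mode(), per the hunk below (sketch) */
    bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
    FloatInfZeroNaNRule izn_rule;

    izn_rule = nan2008 ? float_infzeronan_dnan_never
                       : float_infzeronan_dnan_always;
    set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);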
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-9-peter.maydell@linaro.org
7
---
8
target/mips/fpu_helper.h | 9 +++++++++
9
target/mips/msa.c | 4 ++++
10
fpu/softfloat-specialize.c.inc | 16 +---------------
11
3 files changed, 14 insertions(+), 15 deletions(-)
12
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/mips/fpu_helper.h
16
+++ b/target/mips/fpu_helper.h
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_flush_mode(CPUMIPSState *env)
18
static inline void restore_snan_bit_mode(CPUMIPSState *env)
19
{
20
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
21
+ FloatInfZeroNaNRule izn_rule;
22
23
/*
24
* With nan2008, SNaNs are silenced in the usual way.
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
26
*/
27
set_snan_bit_is_one(!nan2008, &env->active_fpu.fp_status);
28
set_default_nan_mode(!nan2008, &env->active_fpu.fp_status);
29
+ /*
30
+ * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
31
+ * case sets InvalidOp and returns the default NaN.
32
+ * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
33
+ * case sets InvalidOp and returns the input value 'c'.
34
+ */
35
+ izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
36
+ set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
37
}
38
39
static inline void restore_fp_status(CPUMIPSState *env)
40
diff --git a/target/mips/msa.c b/target/mips/msa.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/mips/msa.c
43
+++ b/target/mips/msa.c
44
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
45
46
/* set proper signanling bit meaning ("1" means "quiet") */
47
set_snan_bit_is_one(0, &env->active_tc.msa_fp_status);
48
+
49
+ /* Inf * 0 + NaN returns the input NaN */
50
+ set_float_infzeronan_rule(float_infzeronan_dnan_never,
51
+ &env->active_tc.msa_fp_status);
52
}
53
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
54
index XXXXXXX..XXXXXXX 100644
55
--- a/fpu/softfloat-specialize.c.inc
56
+++ b/fpu/softfloat-specialize.c.inc
57
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
58
/*
59
* Temporarily fall back to ifdef ladder
60
*/
61
-#if defined(TARGET_MIPS)
62
- if (snan_bit_is_one(status)) {
63
- /*
64
- * For MIPS systems that conform to IEEE754-1985, the (inf,zero,nan)
65
- * case sets InvalidOp and returns the default NaN
66
- */
67
- rule = float_infzeronan_dnan_always;
68
- } else {
69
- /*
70
- * For MIPS systems that conform to IEEE754-2008, the (inf,zero,nan)
71
- * case sets InvalidOp and returns the input value 'c'
72
- */
73
- rule = float_infzeronan_dnan_never;
74
- }
75
-#elif defined(TARGET_SPARC) || \
76
+#if defined(TARGET_SPARC) || \
77
defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
78
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
79
/*
80
--
81
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the SPARC target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-10-peter.maydell@linaro.org
7
---
8
target/sparc/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 3 +--
10
2 files changed, 3 insertions(+), 2 deletions(-)
11
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sparc/cpu.c
15
+++ b/target/sparc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
17
* the CPU state struct so it won't get zeroed on reset.
18
*/
19
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
20
+ /* For inf * 0 + NaN, return the input NaN */
21
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
22
23
cpu_exec_realizefn(cs, &local_err);
24
if (local_err != NULL) {
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
/*
31
* Temporarily fall back to ifdef ladder
32
*/
33
-#if defined(TARGET_SPARC) || \
34
- defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
35
+#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
36
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
37
/*
38
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
39
--
40
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the xtensa target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-11-peter.maydell@linaro.org
7
---
8
target/xtensa/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 +-
10
2 files changed, 3 insertions(+), 1 deletion(-)
11
12
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/xtensa/cpu.c
15
+++ b/target/xtensa/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
17
reset_mmu(env);
18
cs->halted = env->runstall;
19
#endif
20
+ /* For inf * 0 + NaN, return the input NaN */
21
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
22
set_no_signaling_nans(!dfpu, &env->fp_status);
23
xtensa_use_first_nan(env, !dfpu);
24
}
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
/*
31
* Temporarily fall back to ifdef ladder
32
*/
33
-#if defined(TARGET_XTENSA) || defined(TARGET_HPPA) || \
34
+#if defined(TARGET_HPPA) || \
35
defined(TARGET_I386) || defined(TARGET_LOONGARCH)
36
/*
37
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
38
--
39
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the x86 target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-12-peter.maydell@linaro.org
6
---
7
target/i386/tcg/fpu_helper.c | 7 +++++++
8
fpu/softfloat-specialize.c.inc | 2 +-
9
2 files changed, 8 insertions(+), 1 deletion(-)
10
11
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/i386/tcg/fpu_helper.c
14
+++ b/target/i386/tcg/fpu_helper.c
15
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
16
*/
17
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->mmx_status);
18
set_float_2nan_prop_rule(float_2nan_prop_x87, &env->sse_status);
19
+ /*
20
+ * Only SSE has multiply-add instructions. In the SDM Section 14.5.2
21
+ * "Fused-Multiply-ADD (FMA) Numeric Behavior" the NaN handling is
22
+ * specified -- for 0 * inf + NaN the input NaN is selected, and if
23
+ * there are multiple input NaNs they are selected in the order a, b, c.
24
+ */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
26
}
27
28
static inline uint8_t save_exception_flags(CPUX86State *env)
29
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
31
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
34
* Temporarily fall back to ifdef ladder
35
*/
36
#if defined(TARGET_HPPA) || \
37
- defined(TARGET_I386) || defined(TARGET_LOONGARCH)
38
+ defined(TARGET_LOONGARCH)
39
/*
40
* For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
41
* case sets InvalidOp and returns the input value 'c'
42
--
43
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the loongarch target.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20241202131347.498124-13-peter.maydell@linaro.org
6
---
7
target/loongarch/tcg/fpu_helper.c | 5 +++++
8
fpu/softfloat-specialize.c.inc | 7 +------
9
2 files changed, 6 insertions(+), 6 deletions(-)
10
11
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/loongarch/tcg/fpu_helper.c
14
+++ b/target/loongarch/tcg/fpu_helper.c
15
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
16
&env->fp_status);
17
set_flush_to_zero(0, &env->fp_status);
18
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
19
+ /*
20
+ * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
21
+ * case sets InvalidOp and returns the input value 'c'
22
+ */
23
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
24
}
25
26
int ieee_ex_to_loongarch(int xcpt)
27
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
28
index XXXXXXX..XXXXXXX 100644
29
--- a/fpu/softfloat-specialize.c.inc
30
+++ b/fpu/softfloat-specialize.c.inc
31
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
32
/*
33
* Temporarily fall back to ifdef ladder
34
*/
35
-#if defined(TARGET_HPPA) || \
36
- defined(TARGET_LOONGARCH)
37
- /*
38
- * For LoongArch systems that conform to IEEE754-2008, the (inf,zero,nan)
39
- * case sets InvalidOp and returns the input value 'c'
40
- */
41
+#if defined(TARGET_HPPA)
42
rule = float_infzeronan_dnan_never;
43
#endif
44
}
45
--
46
2.34.1
1
Set the FloatInfZeroNaNRule explicitly for the HPPA target,
2
so we can remove the ifdef from pickNaNMulAdd().
1
3
4
As this is the last target to be converted to explicitly setting
5
the rule, we can remove the fallback code in pickNaNMulAdd()
6
entirely.
7
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20241202131347.498124-14-peter.maydell@linaro.org
11
---
12
target/hppa/fpu_helper.c | 2 ++
13
fpu/softfloat-specialize.c.inc | 13 +------------
14
2 files changed, 3 insertions(+), 12 deletions(-)
15
16
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/hppa/fpu_helper.c
19
+++ b/target/hppa/fpu_helper.c
20
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
21
* HPPA does not implement a CPU reset method at all...
22
*/
23
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
24
+ /* For inf * 0 + NaN, return the input NaN */
25
+ set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
26
}
27
28
void cpu_hppa_loaded_fr0(CPUHPPAState *env)
29
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
30
index XXXXXXX..XXXXXXX 100644
31
--- a/fpu/softfloat-specialize.c.inc
32
+++ b/fpu/softfloat-specialize.c.inc
33
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
34
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
35
bool infzero, float_status *status)
36
{
37
- FloatInfZeroNaNRule rule = status->float_infzeronan_rule;
38
-
39
/*
40
* We guarantee not to require the target to tell us how to
41
* pick a NaN if we're always returning the default NaN.
42
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
43
*/
44
assert(!status->default_nan_mode);
45
46
- if (rule == float_infzeronan_none) {
47
- /*
48
- * Temporarily fall back to ifdef ladder
49
- */
50
-#if defined(TARGET_HPPA)
51
- rule = float_infzeronan_dnan_never;
52
-#endif
53
- }
54
-
55
if (infzero) {
56
/*
57
* Inf * 0 + NaN -- some implementations return the default NaN here,
58
* and some return the input NaN.
59
*/
60
- switch (rule) {
61
+ switch (status->float_infzeronan_rule) {
62
case float_infzeronan_dnan_never:
63
return 2;
64
case float_infzeronan_dnan_always:
65
--
66
2.34.1
1
The new implementation of pickNaNMulAdd() will find it convenient
2
to know whether at least one of the three arguments to the muladd
3
was a signaling NaN. We already calculate that in the caller,
4
so pass it in as a new bool have_snan.
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-15-peter.maydell@linaro.org
9
---
10
fpu/softfloat-parts.c.inc | 5 +++--
11
fpu/softfloat-specialize.c.inc | 2 +-
12
2 files changed, 4 insertions(+), 3 deletions(-)
13
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
15
index XXXXXXX..XXXXXXX 100644
16
--- a/fpu/softfloat-parts.c.inc
17
+++ b/fpu/softfloat-parts.c.inc
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
19
{
20
int which;
21
bool infzero = (ab_mask == float_cmask_infzero);
22
+ bool have_snan = (abc_mask & float_cmask_snan);
23
24
- if (unlikely(abc_mask & float_cmask_snan)) {
25
+ if (unlikely(have_snan)) {
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
27
}
28
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
30
if (s->default_nan_mode) {
31
which = 3;
32
} else {
33
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, s);
34
+ which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
35
}
36
37
if (which == 3) {
38
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
39
index XXXXXXX..XXXXXXX 100644
40
--- a/fpu/softfloat-specialize.c.inc
41
+++ b/fpu/softfloat-specialize.c.inc
42
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
43
| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
44
*----------------------------------------------------------------------------*/
45
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
46
- bool infzero, float_status *status)
47
+ bool infzero, bool have_snan, float_status *status)
48
{
49
/*
50
* We guarantee not to require the target to tell us how to
51
--
52
2.34.1
1
From: Mostafa Saleh <smostafa@google.com>
1
IEEE 754 does not define a fixed rule for which NaN to pick as the
2
2
result if both operands of a 3-operand fused multiply-add operation
3
According to ARM SMMU architecture specification (ARM IHI 0070 F.b),
3
are NaNs. As a result different architectures have ended up with
4
In "5.2 Stream Table Entry":
4
different rules for propagating NaNs.
5
[51:6] S1ContextPtr
5
6
If Config[1] == 1 (stage 2 enabled), this pointer is an IPA translated by
6
QEMU currently hardcodes the NaN propagation logic into the binary
7
stage 2 and the programmed value must be within the range of the IAS.
7
because pickNaNMulAdd() has an ifdef ladder for different targets.
8
8
We want to make the propagation rule instead be selectable at
9
In "5.4.1 CD notes":
9
runtime, because:
10
The translation table walks performed from TTB0 or TTB1 are always performed
10
* this will let us have multiple targets in one QEMU binary
11
in IPA space if stage 2 translations are enabled.
11
* the Arm FEAT_AFP architectural feature includes letting
12
12
the guest select a NaN propagation rule at runtime
13
This patch implements translation of the S1 context descriptor pointer and
13
14
TTBx base addresses through the S2 stage (IPA -> PA)
14
In this commit we add an enum for the propagation rule, the field in
15
15
float_status, and the corresponding getters and setters. We change
16
smmuv3_do_translate() is updated to take an extra argument, the translation
16
pickNaNMulAdd to honour this, but because all targets still leave
17
class; this is useful to:
17
this field at its default 0 value, the fallback logic will pick the
18
- Decide whether a translation is stage-2 only or use the STE config.
18
rule type with the old ifdef ladder.
19
- Populate the class in case of faults, WALK_EABT is left unchanged
19
20
for stage-1 as it is always IN, while stage-2 would match the
20
It's valid not to set a propagation rule if default_nan_mode is
21
used class (TT, IN, CD), this will change slightly when the ptw
21
enabled, because in that case there's no need to pick a NaN; all the
22
supports nested translation as it can also issue TT event with
22
callers of pickNaNMulAdd() catch this case and skip calling it.
23
class IN.
23
24
25
In case for stage-2 only translation, used in the context of nested
26
translation, the stage and asid are saved and restored before and
27
after calling smmu_translate().
28
29
Translating CD or TTBx can fail for the following reasons:
30
1) Large address size: This is described in
31
(3.4.3 Address sizes of SMMU-originated accesses)
32
- For CD ptr larger than IAS, for SMMUv3.1, it can trigger either
33
C_BAD_STE or Translation fault, we implement the latter as it
34
requires no extra code.
35
- For TTBx, if larger than the effective stage 1 output address size, it
36
triggers C_BAD_CD.
37
38
2) Faults from PTWs (7.3 Event records)
39
- F_ADDR_SIZE: large address size after first level causes stage 2 Address
40
Size fault (Also in 3.4.3 Address sizes of SMMU-originated accesses)
41
- F_PERMISSION: Same as an address translation. However, when
42
CLASS == CD, the access is implicitly Data and a read.
43
- F_ACCESS: Same as an address translation.
44
- F_TRANSLATION: Same as an address translation.
45
- F_WALK_EABT: Same as an address translation.
46
These are already implemented in the PTW logic, so no extra handling
47
required.
48
49
As the iova is not known in the CD and TTBx translation contexts, setting
50
the InputAddr was removed from "smmuv3_do_translate" and is instead set
51
from "smmuv3_translate" with the new function "smmuv3_fixup_event"
52
53
Signed-off-by: Mostafa Saleh <smostafa@google.com>
54
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
55
Reviewed-by: Eric Auger <eric.auger@redhat.com>
56
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
57
Message-id: 20240715084519.1189624-9-smostafa@google.com
58
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-id: 20241202131347.498124-16-peter.maydell@linaro.org
59
---
27
---
60
hw/arm/smmuv3.c | 120 +++++++++++++++++++++++++++++++++++++++++-------
28
include/fpu/softfloat-helpers.h | 11 +++
61
1 file changed, 103 insertions(+), 17 deletions(-)
29
include/fpu/softfloat-types.h | 55 +++++++++++
62
30
fpu/softfloat-specialize.c.inc | 167 ++++++++------------------------
63
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
31
3 files changed, 107 insertions(+), 126 deletions(-)
32
33
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
64
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/smmuv3.c
35
--- a/include/fpu/softfloat-helpers.h
66
+++ b/hw/arm/smmuv3.c
36
+++ b/include/fpu/softfloat-helpers.h
67
@@ -XXX,XX +XXX,XX @@ static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
37
@@ -XXX,XX +XXX,XX @@ static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
68
38
status->float_2nan_prop_rule = rule;
69
}
39
}
70
40
71
+static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
41
+static inline void set_float_3nan_prop_rule(Float3NaNPropRule rule,
72
+ SMMUTransCfg *cfg,
42
+ float_status *status)
73
+ SMMUEventInfo *event,
43
+{
74
+ IOMMUAccessFlags flag,
44
+ status->float_3nan_prop_rule = rule;
75
+ SMMUTLBEntry **out_entry,
45
+}
76
+ SMMUTranslationClass class);
46
+
77
/* @ssid > 0 not supported yet */
47
static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
78
-static int smmu_get_cd(SMMUv3State *s, STE *ste, uint32_t ssid,
48
float_status *status)
79
- CD *buf, SMMUEventInfo *event)
80
+static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
81
+ uint32_t ssid, CD *buf, SMMUEventInfo *event)
82
{
49
{
83
dma_addr_t addr = STE_CTXPTR(ste);
50
@@ -XXX,XX +XXX,XX @@ static inline Float2NaNPropRule get_float_2nan_prop_rule(float_status *status)
84
int ret, i;
51
return status->float_2nan_prop_rule;
85
+ SMMUTranslationStatus status;
52
}
86
+ SMMUTLBEntry *entry;
53
87
54
+static inline Float3NaNPropRule get_float_3nan_prop_rule(float_status *status)
88
trace_smmuv3_get_cd(addr);
55
+{
89
+
56
+ return status->float_3nan_prop_rule;
90
+ if (cfg->stage == SMMU_NESTED) {
57
+}
91
+ status = smmuv3_do_translate(s, addr, cfg, event,
58
+
92
+ IOMMU_RO, &entry, SMMU_CLASS_CD);
59
static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status)
93
+
60
{
94
+ /* Same PTW faults are reported but with CLASS = CD. */
61
return status->float_infzeronan_rule;
95
+ if (status != SMMU_TRANS_SUCCESS) {
62
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
96
+ return -EINVAL;
63
index XXXXXXX..XXXXXXX 100644
97
+ }
64
--- a/include/fpu/softfloat-types.h
98
+
65
+++ b/include/fpu/softfloat-types.h
99
+ addr = CACHED_ENTRY_TO_ADDR(entry, addr);
66
@@ -XXX,XX +XXX,XX @@ this code that are retained.
67
#ifndef SOFTFLOAT_TYPES_H
68
#define SOFTFLOAT_TYPES_H
69
70
+#include "hw/registerfields.h"
71
+
72
/*
73
* Software IEC/IEEE floating-point types.
74
*/
75
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
76
float_2nan_prop_x87,
77
} Float2NaNPropRule;
78
79
+/*
80
+ * 3-input NaN propagation rule, for fused multiply-add. Individual
81
+ * architectures have different rules for which input NaN is
82
+ * propagated to the output when there is more than one NaN on the
83
+ * input.
84
+ *
85
+ * If default_nan_mode is enabled then it is valid not to set a NaN
86
+ * propagation rule, because the softfloat code guarantees not to try
87
+ * to pick a NaN to propagate in default NaN mode. When not in
88
+ * default-NaN mode, it is an error for the target not to set the rule
89
+ * in float_status if it uses a muladd, and we will assert if we need
90
+ * to handle an input NaN and no rule was selected.
91
+ *
92
+ * The naming scheme for Float3NaNPropRule values is:
93
+ * float_3nan_prop_s_abc:
94
+ * = "Prefer SNaN over QNaN, then operand A over B over C"
95
+ * float_3nan_prop_abc:
96
+ * = "Prefer A over B over C regardless of SNaN vs QNAN"
97
+ *
98
+ * For QEMU, the multiply-add operation is A * B + C.
99
+ */
100
+
101
+/*
102
+ * We set the Float3NaNPropRule enum values up so we can select the
103
+ * right value in pickNaNMulAdd in a data driven way.
104
+ */
105
+FIELD(3NAN, 1ST, 0, 2) /* which operand is most preferred ? */
106
+FIELD(3NAN, 2ND, 2, 2) /* which operand is next most preferred ? */
107
+FIELD(3NAN, 3RD, 4, 2) /* which operand is least preferred ? */
108
+FIELD(3NAN, SNAN, 6, 1) /* do we prefer SNaN over QNaN ? */
109
+
110
+#define PROPRULE(X, Y, Z) \
111
+ ((X << R_3NAN_1ST_SHIFT) | (Y << R_3NAN_2ND_SHIFT) | (Z << R_3NAN_3RD_SHIFT))
112
+
113
+typedef enum __attribute__((__packed__)) {
114
+ float_3nan_prop_none = 0, /* No propagation rule specified */
115
+ float_3nan_prop_abc = PROPRULE(0, 1, 2),
116
+ float_3nan_prop_acb = PROPRULE(0, 2, 1),
117
+ float_3nan_prop_bac = PROPRULE(1, 0, 2),
118
+ float_3nan_prop_bca = PROPRULE(1, 2, 0),
119
+ float_3nan_prop_cab = PROPRULE(2, 0, 1),
120
+ float_3nan_prop_cba = PROPRULE(2, 1, 0),
121
+ float_3nan_prop_s_abc = float_3nan_prop_abc | R_3NAN_SNAN_MASK,
122
+ float_3nan_prop_s_acb = float_3nan_prop_acb | R_3NAN_SNAN_MASK,
123
+ float_3nan_prop_s_bac = float_3nan_prop_bac | R_3NAN_SNAN_MASK,
124
+ float_3nan_prop_s_bca = float_3nan_prop_bca | R_3NAN_SNAN_MASK,
125
+ float_3nan_prop_s_cab = float_3nan_prop_cab | R_3NAN_SNAN_MASK,
126
+ float_3nan_prop_s_cba = float_3nan_prop_cba | R_3NAN_SNAN_MASK,
127
+} Float3NaNPropRule;
128
+
129
+#undef PROPRULE
130
+
131
/*
132
* Rule for result of fused multiply-add 0 * Inf + NaN.
133
* This must be a NaN, but implementations differ on whether this
134
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
135
FloatRoundMode float_rounding_mode;
136
FloatX80RoundPrec floatx80_rounding_precision;
137
Float2NaNPropRule float_2nan_prop_rule;
138
+ Float3NaNPropRule float_3nan_prop_rule;
139
FloatInfZeroNaNRule float_infzeronan_rule;
140
bool tininess_before_rounding;
141
/* should denormalised results go to zero and set the inexact flag? */
142
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
143
index XXXXXXX..XXXXXXX 100644
144
--- a/fpu/softfloat-specialize.c.inc
145
+++ b/fpu/softfloat-specialize.c.inc
146
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
147
static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
148
bool infzero, bool have_snan, float_status *status)
149
{
150
+ FloatClass cls[3] = { a_cls, b_cls, c_cls };
151
+ Float3NaNPropRule rule = status->float_3nan_prop_rule;
152
+ int which;
153
+
154
/*
155
* We guarantee not to require the target to tell us how to
156
* pick a NaN if we're always returning the default NaN.
157
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
158
}
159
}
160
161
+ if (rule == float_3nan_prop_none) {
162
#if defined(TARGET_ARM)
163
-
164
- /* This looks different from the ARM ARM pseudocode, because the ARM ARM
165
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b.
166
- */
167
- if (is_snan(c_cls)) {
168
- return 2;
169
- } else if (is_snan(a_cls)) {
170
- return 0;
171
- } else if (is_snan(b_cls)) {
172
- return 1;
173
- } else if (is_qnan(c_cls)) {
174
- return 2;
175
- } else if (is_qnan(a_cls)) {
176
- return 0;
177
- } else {
178
- return 1;
179
- }
180
+ /*
181
+ * This looks different from the ARM ARM pseudocode, because the ARM ARM
182
+ * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
183
+ */
184
+ rule = float_3nan_prop_s_cab;
185
#elif defined(TARGET_MIPS)
186
- if (snan_bit_is_one(status)) {
187
- /* Prefer sNaN over qNaN, in the a, b, c order. */
188
- if (is_snan(a_cls)) {
189
- return 0;
190
- } else if (is_snan(b_cls)) {
191
- return 1;
192
- } else if (is_snan(c_cls)) {
193
- return 2;
194
- } else if (is_qnan(a_cls)) {
195
- return 0;
196
- } else if (is_qnan(b_cls)) {
197
- return 1;
198
+ if (snan_bit_is_one(status)) {
199
+ rule = float_3nan_prop_s_abc;
200
} else {
201
- return 2;
202
+ rule = float_3nan_prop_s_cab;
203
}
204
- } else {
205
- /* Prefer sNaN over qNaN, in the c, a, b order. */
206
- if (is_snan(c_cls)) {
207
- return 2;
208
- } else if (is_snan(a_cls)) {
209
- return 0;
210
- } else if (is_snan(b_cls)) {
211
- return 1;
212
- } else if (is_qnan(c_cls)) {
213
- return 2;
214
- } else if (is_qnan(a_cls)) {
215
- return 0;
216
- } else {
217
- return 1;
218
- }
219
- }
220
#elif defined(TARGET_LOONGARCH64)
221
- /* Prefer sNaN over qNaN, in the c, a, b order. */
222
- if (is_snan(c_cls)) {
223
- return 2;
224
- } else if (is_snan(a_cls)) {
225
- return 0;
226
- } else if (is_snan(b_cls)) {
227
- return 1;
228
- } else if (is_qnan(c_cls)) {
229
- return 2;
230
- } else if (is_qnan(a_cls)) {
231
- return 0;
232
- } else {
233
- return 1;
234
- }
235
+ rule = float_3nan_prop_s_cab;
236
#elif defined(TARGET_PPC)
237
- /* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
238
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
239
- */
240
- if (is_nan(a_cls)) {
241
- return 0;
242
- } else if (is_nan(c_cls)) {
243
- return 2;
244
- } else {
245
- return 1;
246
- }
247
+ /*
248
+ * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
249
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
250
+ */
251
+ rule = float_3nan_prop_acb;
252
#elif defined(TARGET_S390X)
253
- if (is_snan(a_cls)) {
254
- return 0;
255
- } else if (is_snan(b_cls)) {
256
- return 1;
257
- } else if (is_snan(c_cls)) {
258
- return 2;
259
- } else if (is_qnan(a_cls)) {
260
- return 0;
261
- } else if (is_qnan(b_cls)) {
262
- return 1;
263
- } else {
264
- return 2;
265
- }
266
+ rule = float_3nan_prop_s_abc;
267
#elif defined(TARGET_SPARC)
268
- /* Prefer SNaN over QNaN, order C, B, A. */
269
- if (is_snan(c_cls)) {
270
- return 2;
271
- } else if (is_snan(b_cls)) {
272
- return 1;
273
- } else if (is_snan(a_cls)) {
274
- return 0;
275
- } else if (is_qnan(c_cls)) {
276
- return 2;
277
- } else if (is_qnan(b_cls)) {
278
- return 1;
279
- } else {
280
- return 0;
281
- }
282
+ rule = float_3nan_prop_s_cba;
283
#elif defined(TARGET_XTENSA)
284
- /*
285
- * For Xtensa, the (inf,zero,nan) case sets InvalidOp and returns
286
- * an input NaN if we have one (ie c).
287
- */
288
- if (status->use_first_nan) {
289
- if (is_nan(a_cls)) {
290
- return 0;
291
- } else if (is_nan(b_cls)) {
292
- return 1;
293
+ if (status->use_first_nan) {
294
+ rule = float_3nan_prop_abc;
295
} else {
296
- return 2;
297
+ rule = float_3nan_prop_cba;
298
}
299
- } else {
300
- if (is_nan(c_cls)) {
301
- return 2;
302
- } else if (is_nan(b_cls)) {
303
- return 1;
304
- } else {
305
- return 0;
306
- }
307
- }
308
#else
309
- /* A default implementation: prefer a to b to c.
310
- * This is unlikely to actually match any real implementation.
311
- */
312
- if (is_nan(a_cls)) {
313
- return 0;
314
- } else if (is_nan(b_cls)) {
315
- return 1;
316
- } else {
317
- return 2;
318
- }
319
+ rule = float_3nan_prop_abc;
320
#endif
100
+ }
321
+ }
101
+
322
+
102
/* TODO: guarantee 64-bit single-copy atomicity */
323
+ assert(rule != float_3nan_prop_none);
103
ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
324
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
104
MEMTXATTRS_UNSPECIFIED);
325
+ /* We have at least one SNaN input and should prefer it */
105
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
326
+ do {
106
return 0;
327
+ which = rule & R_3NAN_1ST_MASK;
328
+ rule >>= R_3NAN_1ST_LENGTH;
329
+ } while (!is_snan(cls[which]));
330
+ } else {
331
+ do {
332
+ which = rule & R_3NAN_1ST_MASK;
333
+ rule >>= R_3NAN_1ST_LENGTH;
334
+ } while (!is_nan(cls[which]));
335
+ }
336
+ return which;
107
}
337
}
108
338
109
-static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
339
/*----------------------------------------------------------------------------
110
+static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
111
+ CD *cd, SMMUEventInfo *event)
112
{
113
int ret = -EINVAL;
114
int i;
115
+ SMMUTranslationStatus status;
116
+ SMMUTLBEntry *entry;
117
118
if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
119
goto bad_cd;
120
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
121
122
tt->tsz = tsz;
123
tt->ttb = CD_TTB(cd, i);
124
+
125
if (tt->ttb & ~(MAKE_64BIT_MASK(0, cfg->oas))) {
126
goto bad_cd;
127
}
128
+
129
+ /* Translate the TTBx, from IPA to PA if nesting is enabled. */
130
+ if (cfg->stage == SMMU_NESTED) {
131
+ status = smmuv3_do_translate(s, tt->ttb, cfg, event, IOMMU_RO,
132
+ &entry, SMMU_CLASS_TT);
133
+ /*
134
+ * Same PTW faults are reported but with CLASS = TT.
135
+ * If TTBx is larger than the effective stage 1 output address
136
+ * size, it reports C_BAD_CD, which is handled by the above case.
137
+ */
138
+ if (status != SMMU_TRANS_SUCCESS) {
139
+ return -EINVAL;
140
+ }
141
+ tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
142
+ }
143
+
144
tt->had = CD_HAD(cd, i);
145
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
146
}
147
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
148
return 0;
149
}
150
151
- ret = smmu_get_cd(s, &ste, 0 /* ssid */, &cd, event);
152
+ ret = smmu_get_cd(s, &ste, cfg, 0 /* ssid */, &cd, event);
153
if (ret) {
154
return ret;
155
}
156
157
- return decode_cd(cfg, &cd, event);
158
+ return decode_cd(s, cfg, &cd, event);
159
}
160
161
/**
162
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
163
SMMUTransCfg *cfg,
164
SMMUEventInfo *event,
165
IOMMUAccessFlags flag,
166
- SMMUTLBEntry **out_entry)
167
+ SMMUTLBEntry **out_entry,
168
+ SMMUTranslationClass class)
169
{
170
SMMUPTWEventInfo ptw_info = {};
171
SMMUState *bs = ARM_SMMU(s);
172
SMMUTLBEntry *cached_entry = NULL;
173
+ int asid, stage;
174
+ bool desc_s2_translation = class != SMMU_CLASS_IN;
175
+
176
+ /*
177
+ * The function uses the argument class to identify which stage is used:
178
+ * - CLASS = IN: Means an input translation, determine the stage from STE.
179
+ * - CLASS = CD: Means the addr is an IPA of the CD, and it would be
180
+ * translated using the stage-2.
181
+ * - CLASS = TT: Means the addr is an IPA of the stage-1 translation table
182
+ * and it would be translated using the stage-2.
183
+ * For the last 2 cases instead of having intrusive changes in the common
184
+ * logic, we modify the cfg to be a stage-2 translation only in case of
185
+ * nested, and then restore it after.
186
+ */
187
+ if (desc_s2_translation) {
188
+ asid = cfg->asid;
189
+ stage = cfg->stage;
190
+ cfg->asid = -1;
191
+ cfg->stage = SMMU_STAGE_2;
192
+ }
193
194
cached_entry = smmu_translate(bs, cfg, addr, flag, &ptw_info);
195
+
196
+ if (desc_s2_translation) {
197
+ cfg->asid = asid;
198
+ cfg->stage = stage;
199
+ }
200
+
201
if (!cached_entry) {
202
/* All faults from PTW has S2 field. */
203
event->u.f_walk_eabt.s2 = (ptw_info.stage == SMMU_STAGE_2);
204
switch (ptw_info.type) {
205
case SMMU_PTW_ERR_WALK_EABT:
206
event->type = SMMU_EVT_F_WALK_EABT;
207
- event->u.f_walk_eabt.addr = addr;
208
event->u.f_walk_eabt.rnw = flag & 0x1;
209
event->u.f_walk_eabt.class = (ptw_info.stage == SMMU_STAGE_2) ?
210
- SMMU_CLASS_IN : SMMU_CLASS_TT;
211
+ class : SMMU_CLASS_TT;
212
event->u.f_walk_eabt.addr2 = ptw_info.addr;
213
break;
214
case SMMU_PTW_ERR_TRANSLATION:
215
if (PTW_RECORD_FAULT(cfg)) {
216
event->type = SMMU_EVT_F_TRANSLATION;
217
- event->u.f_translation.addr = addr;
218
event->u.f_translation.addr2 = ptw_info.addr;
219
- event->u.f_translation.class = SMMU_CLASS_IN;
220
+ event->u.f_translation.class = class;
221
event->u.f_translation.rnw = flag & 0x1;
222
}
223
break;
224
case SMMU_PTW_ERR_ADDR_SIZE:
225
if (PTW_RECORD_FAULT(cfg)) {
226
event->type = SMMU_EVT_F_ADDR_SIZE;
227
- event->u.f_addr_size.addr = addr;
228
event->u.f_addr_size.addr2 = ptw_info.addr;
229
- event->u.f_addr_size.class = SMMU_CLASS_IN;
230
+ event->u.f_addr_size.class = class;
231
event->u.f_addr_size.rnw = flag & 0x1;
232
}
233
break;
234
case SMMU_PTW_ERR_ACCESS:
235
if (PTW_RECORD_FAULT(cfg)) {
236
event->type = SMMU_EVT_F_ACCESS;
237
- event->u.f_access.addr = addr;
238
event->u.f_access.addr2 = ptw_info.addr;
239
- event->u.f_access.class = SMMU_CLASS_IN;
240
+ event->u.f_access.class = class;
241
event->u.f_access.rnw = flag & 0x1;
242
}
243
break;
244
case SMMU_PTW_ERR_PERMISSION:
245
if (PTW_RECORD_FAULT(cfg)) {
246
event->type = SMMU_EVT_F_PERMISSION;
247
- event->u.f_permission.addr = addr;
248
event->u.f_permission.addr2 = ptw_info.addr;
249
- event->u.f_permission.class = SMMU_CLASS_IN;
250
+ event->u.f_permission.class = class;
251
event->u.f_permission.rnw = flag & 0x1;
252
}
253
break;
254
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
255
return SMMU_TRANS_SUCCESS;
256
}
257
258
+/*
259
+ * Sets the InputAddr for an SMMU_TRANS_ERROR, as it can't be
260
+ * set from all contexts, as smmuv3_get_config() can return
261
+ * translation faults in case of nested translation (for CD
262
+ * and TTBx). But in that case the iova is not known.
263
+ */
264
+static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
265
+{
266
+ switch (event->type) {
267
+ case SMMU_EVT_F_WALK_EABT:
268
+ case SMMU_EVT_F_TRANSLATION:
269
+ case SMMU_EVT_F_ADDR_SIZE:
270
+ case SMMU_EVT_F_ACCESS:
271
+ case SMMU_EVT_F_PERMISSION:
272
+ event->u.f_walk_eabt.addr = iova;
273
+ break;
274
+ default:
275
+ break;
276
+ }
277
+}
278
+
279
/* Entry point to SMMU, does everything. */
280
static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
281
IOMMUAccessFlags flag, int iommu_idx)
282
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
283
goto epilogue;
284
}
285
286
- status = smmuv3_do_translate(s, addr, cfg, &event, flag, &cached_entry);
287
+ status = smmuv3_do_translate(s, addr, cfg, &event, flag,
288
+ &cached_entry, SMMU_CLASS_IN);
289
290
epilogue:
291
qemu_mutex_unlock(&s->mutex);
292
@@ -XXX,XX +XXX,XX @@ epilogue:
293
entry.perm);
294
break;
295
case SMMU_TRANS_ERROR:
296
+ smmuv3_fixup_event(&event, addr);
297
qemu_log_mask(LOG_GUEST_ERROR,
298
"%s translation failed for iova=0x%"PRIx64" (%s)\n",
299
mr->parent_obj.name, addr, smmu_event_string(event.type));
300
--
340
--
301
2.34.1
341
2.34.1
302
303
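To make the data-driven selection above easier to follow, here is a standalone sketch of how a Float3NaNPropRule value decodes to an operand index. It mirrors the FIELD(3NAN, ...) layout from the patch (bits [1:0] = most preferred operand, [3:2] = next, [5:4] = least, bit 6 = prefer an SNaN); it is an illustration only, not code from the series:

#include <stdbool.h>
#include <stdio.h>

/*
 * Bit layout mirroring the FIELD(3NAN, ...) definitions in the patch:
 * bits [1:0] = most preferred operand, [3:2] = next, [5:4] = least,
 * bit 6 = "prefer an SNaN if one is present".
 */
#define PROPRULE(x, y, z)  (((x) << 0) | ((y) << 2) | ((z) << 4))
#define RULE_SNAN          (1 << 6)

/* e.g. the Arm rule: SNaN first, then operand C, then A, then B */
#define PROP_S_CAB         (PROPRULE(2, 0, 1) | RULE_SNAN)

/*
 * Which operand (0 = a, 1 = b, 2 = c) does the rule pick?
 * Precondition (as in pickNaNMulAdd): at least one operand is a NaN.
 */
static int pick_operand(unsigned rule, const bool is_nan[3], const bool is_snan[3])
{
    int which;

    if ((rule & RULE_SNAN) && (is_snan[0] || is_snan[1] || is_snan[2])) {
        do {
            which = rule & 3;   /* next most-preferred operand */
            rule >>= 2;
        } while (!is_snan[which]);
    } else {
        do {
            which = rule & 3;
            rule >>= 2;
        } while (!is_nan[which]);
    }
    return which;
}

int main(void)
{
    /* a = QNaN, b = number, c = SNaN: the Arm rule picks c (index 2) */
    bool is_nan[3]  = { true, false, true };
    bool is_snan[3] = { false, false, true };
    printf("picked operand %d\n", pick_operand(PROP_S_CAB, is_nan, is_snan));
    return 0;
}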
New patch
1
Explicitly set a rule in the softfloat tests for propagating NaNs in
2
the muladd case. In meson.build we put -DTARGET_ARM in fpcflags, and
3
so we should select here the Arm rule of float_3nan_prop_s_cab.
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20241202131347.498124-17-peter.maydell@linaro.org
8
---
9
tests/fp/fp-bench.c | 1 +
10
tests/fp/fp-test.c | 1 +
11
2 files changed, 2 insertions(+)
12
13
diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tests/fp/fp-bench.c
16
+++ b/tests/fp/fp-bench.c
17
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
18
* doesn't specify match those used by the Arm architecture.
19
*/
20
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
21
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
22
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
23
24
f = bench_funcs[operation][precision];
25
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/tests/fp/fp-test.c
28
+++ b/tests/fp/fp-test.c
29
@@ -XXX,XX +XXX,XX @@ void run_test(void)
30
* doesn't specify match those used by the Arm architecture.
31
*/
32
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
33
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
34
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);
35
36
genCases_setLevel(test_level);
37
--
38
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for Arm, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-18-peter.maydell@linaro.org
7
---
8
target/arm/cpu.c | 5 +++++
9
fpu/softfloat-specialize.c.inc | 8 +-------
10
2 files changed, 6 insertions(+), 7 deletions(-)
11
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
15
+++ b/target/arm/cpu.c
16
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
17
* * tininess-before-rounding
18
* * 2-input NaN propagation prefers SNaN over QNaN, and then
19
* operand A over operand B (see FPProcessNaNs() pseudocode)
20
+ * * 3-input NaN propagation prefers SNaN over QNaN, and then
21
+ * operand C over A over B (see FPProcessNaNs3() pseudocode,
22
+ * but note that for QEMU muladd is a * b + c, whereas for
23
+ * the pseudocode function the arguments are in the order c, a, b).
24
* * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
25
* and the input NaN if it is signalling
26
*/
27
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
28
{
29
set_float_detect_tininess(float_tininess_before_rounding, s);
30
set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
31
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
32
set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
33
}
34
35
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
36
index XXXXXXX..XXXXXXX 100644
37
--- a/fpu/softfloat-specialize.c.inc
38
+++ b/fpu/softfloat-specialize.c.inc
39
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
40
}
41
42
if (rule == float_3nan_prop_none) {
43
-#if defined(TARGET_ARM)
44
- /*
45
- * This looks different from the ARM ARM pseudocode, because the ARM ARM
46
- * puts the operands to a fused mac operation (a*b)+c in the order c,a,b
47
- */
48
- rule = float_3nan_prop_s_cab;
49
-#elif defined(TARGET_MIPS)
50
+#if defined(TARGET_MIPS)
51
if (snan_bit_is_one(status)) {
52
rule = float_3nan_prop_s_abc;
53
} else {
54
--
55
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for loongarch, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-19-peter.maydell@linaro.org
7
---
8
target/loongarch/tcg/fpu_helper.c | 1 +
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 1 insertion(+), 2 deletions(-)
11
12
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/loongarch/tcg/fpu_helper.c
15
+++ b/target/loongarch/tcg/fpu_helper.c
16
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
17
* case sets InvalidOp and returns the input value 'c'
18
*/
19
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
20
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
21
}
22
23
int ieee_ex_to_loongarch(int xcpt)
24
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
index XXXXXXX..XXXXXXX 100644
26
--- a/fpu/softfloat-specialize.c.inc
27
+++ b/fpu/softfloat-specialize.c.inc
28
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
29
} else {
30
rule = float_3nan_prop_s_cab;
31
}
32
-#elif defined(TARGET_LOONGARCH64)
33
- rule = float_3nan_prop_s_cab;
34
#elif defined(TARGET_PPC)
35
/*
36
* If fRA is a NaN return it; otherwise if fRB is a NaN return it;
37
--
38
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for PPC, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-20-peter.maydell@linaro.org
7
---
8
target/ppc/cpu_init.c | 8 ++++++++
9
fpu/softfloat-specialize.c.inc | 6 ------
10
2 files changed, 8 insertions(+), 6 deletions(-)
11
12
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/ppc/cpu_init.c
15
+++ b/target/ppc/cpu_init.c
16
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
17
*/
18
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
19
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->vec_status);
20
+ /*
21
+ * NaN propagation for fused multiply-add:
22
+ * if fRA is a NaN return it; otherwise if fRB is a NaN return it;
23
+ * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
24
+ * whereas QEMU labels the operands as (a * b) + c.
25
+ */
26
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->fp_status);
27
+ set_float_3nan_prop_rule(float_3nan_prop_acb, &env->vec_status);
28
/*
29
* For PPC, the (inf,zero,qnan) case sets InvalidOp, but we prefer
30
* to return an input NaN if we have one (ie c) rather than generating
31
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
32
index XXXXXXX..XXXXXXX 100644
33
--- a/fpu/softfloat-specialize.c.inc
34
+++ b/fpu/softfloat-specialize.c.inc
35
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
36
} else {
37
rule = float_3nan_prop_s_cab;
38
}
39
-#elif defined(TARGET_PPC)
40
- /*
41
- * If fRA is a NaN return it; otherwise if fRB is a NaN return it;
42
- * otherwise return fRC. Note that muladd on PPC is (fRA * fRC) + frB
43
- */
44
- rule = float_3nan_prop_acb;
45
#elif defined(TARGET_S390X)
46
rule = float_3nan_prop_s_abc;
47
#elif defined(TARGET_SPARC)
48
--
49
2.34.1
diff view generated by jsdifflib
New patch
1
Set the Float3NaNPropRule explicitly for s390x, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-21-peter.maydell@linaro.org
7
---
8
target/s390x/cpu.c | 1 +
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 1 insertion(+), 2 deletions(-)
11
12
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/s390x/cpu.c
15
+++ b/target/s390x/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
17
set_float_detect_tininess(float_tininess_before_rounding,
18
&env->fpu_status);
19
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fpu_status);
20
+ set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
21
set_float_infzeronan_rule(float_infzeronan_dnan_always,
22
&env->fpu_status);
23
/* fall through */
24
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
25
index XXXXXXX..XXXXXXX 100644
26
--- a/fpu/softfloat-specialize.c.inc
27
+++ b/fpu/softfloat-specialize.c.inc
28
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
29
} else {
30
rule = float_3nan_prop_s_cab;
31
}
32
-#elif defined(TARGET_S390X)
33
- rule = float_3nan_prop_s_abc;
34
#elif defined(TARGET_SPARC)
35
rule = float_3nan_prop_s_cba;
36
#elif defined(TARGET_XTENSA)
37
--
38
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for SPARC, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-22-peter.maydell@linaro.org
7
---
8
target/sparc/cpu.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 2 --
10
2 files changed, 2 insertions(+), 2 deletions(-)
11
12
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/sparc/cpu.c
15
+++ b/target/sparc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
17
* the CPU state struct so it won't get zeroed on reset.
18
*/
19
set_float_2nan_prop_rule(float_2nan_prop_s_ba, &env->fp_status);
20
+ /* For fused-multiply add, prefer SNaN over QNaN, then C->B->A */
21
+ set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
22
/* For inf * 0 + NaN, return the input NaN */
23
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
24
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
26
index XXXXXXX..XXXXXXX 100644
27
--- a/fpu/softfloat-specialize.c.inc
28
+++ b/fpu/softfloat-specialize.c.inc
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
30
} else {
31
rule = float_3nan_prop_s_cab;
32
}
33
-#elif defined(TARGET_SPARC)
34
- rule = float_3nan_prop_s_cba;
35
#elif defined(TARGET_XTENSA)
36
if (status->use_first_nan) {
37
rule = float_3nan_prop_abc;
38
--
39
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for MIPS, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20241202131347.498124-23-peter.maydell@linaro.org
7
---
8
target/mips/fpu_helper.h | 4 ++++
9
target/mips/msa.c | 3 +++
10
fpu/softfloat-specialize.c.inc | 8 +-------
11
3 files changed, 8 insertions(+), 7 deletions(-)
12
13
diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/mips/fpu_helper.h
16
+++ b/target/mips/fpu_helper.h
17
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
18
{
19
bool nan2008 = env->active_fpu.fcr31 & (1 << FCR31_NAN2008);
20
FloatInfZeroNaNRule izn_rule;
21
+ Float3NaNPropRule nan3_rule;
22
23
/*
24
* With nan2008, SNaNs are silenced in the usual way.
25
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
26
*/
27
izn_rule = nan2008 ? float_infzeronan_dnan_never : float_infzeronan_dnan_always;
28
set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
29
+ nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
30
+ set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
31
+
32
}
33
34
static inline void restore_fp_status(CPUMIPSState *env)
35
diff --git a/target/mips/msa.c b/target/mips/msa.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/mips/msa.c
38
+++ b/target/mips/msa.c
39
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
40
set_float_2nan_prop_rule(float_2nan_prop_s_ab,
41
&env->active_tc.msa_fp_status);
42
43
+ set_float_3nan_prop_rule(float_3nan_prop_s_cab,
44
+ &env->active_tc.msa_fp_status);
45
+
46
/* clear float_status exception flags */
47
set_float_exception_flags(0, &env->active_tc.msa_fp_status);
48
49
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
50
index XXXXXXX..XXXXXXX 100644
51
--- a/fpu/softfloat-specialize.c.inc
52
+++ b/fpu/softfloat-specialize.c.inc
53
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
54
}
55
56
if (rule == float_3nan_prop_none) {
57
-#if defined(TARGET_MIPS)
58
- if (snan_bit_is_one(status)) {
59
- rule = float_3nan_prop_s_abc;
60
- } else {
61
- rule = float_3nan_prop_s_cab;
62
- }
63
-#elif defined(TARGET_XTENSA)
64
+#if defined(TARGET_XTENSA)
65
if (status->use_first_nan) {
66
rule = float_3nan_prop_abc;
67
} else {
68
--
69
2.34.1
1
From: Mostafa Saleh <smostafa@google.com>
1
Set the Float3NaNPropRule explicitly for xtensa, and remove the
2
ifdef from pickNaNMulAdd().
2
3
3
In the next patch, combine_tlb() will be added which combines 2 TLB
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
entries into one for nested translations, which chooses the granule
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
and level from the smallest entry.
6
Message-id: 20241202131347.498124-24-peter.maydell@linaro.org
7
---
8
target/xtensa/fpu_helper.c | 2 ++
9
fpu/softfloat-specialize.c.inc | 8 --------
10
2 files changed, 2 insertions(+), 8 deletions(-)
6
11
7
This means that with nested translation, an entry can be cached with
12
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
8
the granule of stage-2 and not stage-1.
9
10
However, currently, the lookup for an IOVA is done with input stage
11
granule, which is stage-1 for nested configuration, which will not
12
work with the above logic.
13
This patch reworks lookup in that case, so it falls back to stage-2
14
granule if no entry is found using stage-1 granule.
15
16
Also, drop aligning the iova to avoid over-aligning in case the iova
17
is cached with a smaller granule; the TLB lookup will align the iova
18
anyway for each granule and level, and the page table walker doesn't
19
consider the page offset bits.
20
21
Signed-off-by: Mostafa Saleh <smostafa@google.com>
22
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
23
Reviewed-by: Eric Auger <eric.auger@redhat.com>
24
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
25
Message-id: 20240715084519.1189624-10-smostafa@google.com
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
---
28
hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++---------------
29
1 file changed, 43 insertions(+), 21 deletions(-)
30
31
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
32
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/arm/smmu-common.c
14
--- a/target/xtensa/fpu_helper.c
34
+++ b/hw/arm/smmu-common.c
15
+++ b/target/xtensa/fpu_helper.c
35
@@ -XXX,XX +XXX,XX @@ SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
16
@@ -XXX,XX +XXX,XX @@ void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
36
return key;
17
set_use_first_nan(use_first, &env->fp_status);
18
set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
19
&env->fp_status);
20
+ set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
21
+ &env->fp_status);
37
}
22
}
38
23
39
-SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
24
void HELPER(wur_fpu2k_fcr)(CPUXtensaState *env, uint32_t v)
40
- SMMUTransTableInfo *tt, hwaddr iova)
25
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
41
+static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
26
index XXXXXXX..XXXXXXX 100644
42
+ SMMUTransCfg *cfg,
27
--- a/fpu/softfloat-specialize.c.inc
43
+ SMMUTransTableInfo *tt,
28
+++ b/fpu/softfloat-specialize.c.inc
44
+ hwaddr iova)
29
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
45
{
46
uint8_t tg = (tt->granule_sz - 10) / 2;
47
uint8_t inputsize = 64 - tt->tsz;
48
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
49
}
50
level++;
51
}
30
}
52
+ return entry;
31
53
+}
32
if (rule == float_3nan_prop_none) {
54
+
33
-#if defined(TARGET_XTENSA)
55
+/**
34
- if (status->use_first_nan) {
56
+ * smmu_iotlb_lookup - Look up for a TLB entry.
35
- rule = float_3nan_prop_abc;
57
+ * @bs: SMMU state which includes the TLB instance
36
- } else {
58
+ * @cfg: Configuration of the translation
37
- rule = float_3nan_prop_cba;
59
+ * @tt: Translation table info (granule and tsz)
38
- }
60
+ * @iova: IOVA address to lookup
39
-#else
61
+ *
40
rule = float_3nan_prop_abc;
62
+ * returns a valid entry on success, otherwise NULL.
41
-#endif
63
+ * In case of nested translation, tt can be updated to include
64
+ * the granule of the found entry as it might differ from
65
+ * the IOVA granule.
66
+ */
67
+SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
68
+ SMMUTransTableInfo *tt, hwaddr iova)
69
+{
70
+ SMMUTLBEntry *entry = NULL;
71
+
72
+ entry = smmu_iotlb_lookup_all_levels(bs, cfg, tt, iova);
73
+ /*
74
+ * For nested translation also try the s2 granule, as the TLB will insert
75
+ * it if the size of s2 tlb entry was smaller.
76
+ */
77
+ if (!entry && (cfg->stage == SMMU_NESTED) &&
78
+ (cfg->s2cfg.granule_sz != tt->granule_sz)) {
79
+ tt->granule_sz = cfg->s2cfg.granule_sz;
80
+ entry = smmu_iotlb_lookup_all_levels(bs, cfg, tt, iova);
81
+ }
82
83
if (entry) {
84
cfg->iotlb_hits++;
85
@@ -XXX,XX +XXX,XX @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
86
SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
87
IOMMUAccessFlags flag, SMMUPTWEventInfo *info)
88
{
89
- uint64_t page_mask, aligned_addr;
90
SMMUTLBEntry *cached_entry = NULL;
91
SMMUTransTableInfo *tt;
92
int status;
93
94
/*
95
- * Combined attributes used for TLB lookup, as only one stage is supported,
96
- * it will hold attributes based on the enabled stage.
97
+ * Combined attributes used for TLB lookup, holds the attributes for
98
+ * the input stage.
99
*/
100
SMMUTransTableInfo tt_combined;
101
102
- if (cfg->stage == SMMU_STAGE_1) {
103
+ if (cfg->stage == SMMU_STAGE_2) {
104
+ /* Stage2. */
105
+ tt_combined.granule_sz = cfg->s2cfg.granule_sz;
106
+ tt_combined.tsz = cfg->s2cfg.tsz;
107
+ } else {
108
/* Select stage1 translation table. */
109
tt = select_tt(cfg, addr);
110
if (!tt) {
111
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
112
}
113
tt_combined.granule_sz = tt->granule_sz;
114
tt_combined.tsz = tt->tsz;
115
-
116
- } else {
117
- /* Stage2. */
118
- tt_combined.granule_sz = cfg->s2cfg.granule_sz;
119
- tt_combined.tsz = cfg->s2cfg.tsz;
120
}
42
}
121
43
122
- /*
44
assert(rule != float_3nan_prop_none);
123
- * TLB lookup looks for granule and input size for a translation stage,
124
- * as only one stage is supported right now, choose the right values
125
- * from the configuration.
126
- */
127
- page_mask = (1ULL << tt_combined.granule_sz) - 1;
128
- aligned_addr = addr & ~page_mask;
129
-
130
- cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, aligned_addr);
131
+ cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, addr);
132
if (cached_entry) {
133
if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
134
info->type = SMMU_PTW_ERR_PERMISSION;
135
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
136
}
137
138
cached_entry = g_new0(SMMUTLBEntry, 1);
139
- status = smmu_ptw(cfg, aligned_addr, flag, cached_entry, info);
140
+ status = smmu_ptw(cfg, addr, flag, cached_entry, info);
141
if (status) {
142
g_free(cached_entry);
143
return NULL;
144
--
45
--
145
2.34.1
46
2.34.1
146
147
New patch
1
Set the Float3NaNPropRule explicitly for i386. We had no
2
i386-specific behaviour in the old ifdef ladder, so we were using the
3
default "prefer a then b then c" fallback; this is actually the
4
correct per-the-spec handling for i386.
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20241202131347.498124-25-peter.maydell@linaro.org
9
---
10
target/i386/tcg/fpu_helper.c | 1 +
11
1 file changed, 1 insertion(+)
12
13
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/i386/tcg/fpu_helper.c
16
+++ b/target/i386/tcg/fpu_helper.c
17
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
18
* there are multiple input NaNs they are selected in the order a, b, c.
19
*/
20
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
21
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
22
}
23
24
static inline uint8_t save_exception_flags(CPUX86State *env)
25
--
26
2.34.1
New patch
1
Set the Float3NaNPropRule explicitly for HPPA, and remove the
2
ifdef from pickNaNMulAdd().
1
3
4
HPPA is the only target that was using the default branch of the
5
ifdef ladder (other targets either do not use muladd or set
6
default_nan_mode), so we can remove the ifdef fallback entirely now
7
(allowing the "rule not set" case to fall into the default of the
8
switch statement and assert).
9
10
We add a TODO note that the HPPA rule is probably wrong; this is
11
not a behavioural change for this refactoring.
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20241202131347.498124-26-peter.maydell@linaro.org
16
---
17
target/hppa/fpu_helper.c | 8 ++++++++
18
fpu/softfloat-specialize.c.inc | 4 ----
19
2 files changed, 8 insertions(+), 4 deletions(-)
20
21
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/hppa/fpu_helper.c
24
+++ b/target/hppa/fpu_helper.c
25
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
26
* HPPA does not implement a CPU reset method at all...
27
*/
28
set_float_2nan_prop_rule(float_2nan_prop_s_ab, &env->fp_status);
29
+ /*
30
+ * TODO: The HPPA architecture reference only documents its NaN
31
+ * propagation rule for 2-operand operations. Testing on real hardware
32
+ * might be necessary to confirm whether this order for muladd is correct.
33
+ * Not preferring the SNaN is almost certainly incorrect as it diverges
34
+ * from the documented rules for 2-operand operations.
35
+ */
36
+ set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
37
/* For inf * 0 + NaN, return the input NaN */
38
set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
39
}
40
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
41
index XXXXXXX..XXXXXXX 100644
42
--- a/fpu/softfloat-specialize.c.inc
43
+++ b/fpu/softfloat-specialize.c.inc
44
@@ -XXX,XX +XXX,XX @@ static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
45
}
46
}
47
48
- if (rule == float_3nan_prop_none) {
49
- rule = float_3nan_prop_abc;
50
- }
51
-
52
assert(rule != float_3nan_prop_none);
53
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
54
/* We have at least one SNaN input and should prefer it */
55
--
56
2.34.1
New patch
1
The use_first_nan field in float_status was an xtensa-specific way to
2
select at runtime from two different NaN propagation rules. Now that
3
xtensa is using the target-agnostic NaN propagation rule selection
4
that we've just added, we can remove use_first_nan, because there is
5
no longer any code that reads it.
1
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20241202131347.498124-27-peter.maydell@linaro.org
10
---
11
include/fpu/softfloat-helpers.h | 5 -----
12
include/fpu/softfloat-types.h | 1 -
13
target/xtensa/fpu_helper.c | 1 -
14
3 files changed, 7 deletions(-)
15
16
diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/fpu/softfloat-helpers.h
19
+++ b/include/fpu/softfloat-helpers.h
20
@@ -XXX,XX +XXX,XX @@ static inline void set_snan_bit_is_one(bool val, float_status *status)
21
status->snan_bit_is_one = val;
22
}
23
24
-static inline void set_use_first_nan(bool val, float_status *status)
25
-{
26
- status->use_first_nan = val;
27
-}
28
-
29
static inline void set_no_signaling_nans(bool val, float_status *status)
30
{
31
status->no_signaling_nans = val;
32
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/include/fpu/softfloat-types.h
35
+++ b/include/fpu/softfloat-types.h
36
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
37
* softfloat-specialize.inc.c)
38
*/
39
bool snan_bit_is_one;
40
- bool use_first_nan;
41
bool no_signaling_nans;
42
/* should overflowed results subtract re_bias to its exponent? */
43
bool rebias_overflow;
44
diff --git a/target/xtensa/fpu_helper.c b/target/xtensa/fpu_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/xtensa/fpu_helper.c
47
+++ b/target/xtensa/fpu_helper.c
48
@@ -XXX,XX +XXX,XX @@ static const struct {
49
50
void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
51
{
52
- set_use_first_nan(use_first, &env->fp_status);
53
set_float_2nan_prop_rule(use_first ? float_2nan_prop_ab : float_2nan_prop_ba,
54
&env->fp_status);
55
set_float_3nan_prop_rule(use_first ? float_3nan_prop_abc : float_3nan_prop_cba,
56
--
57
2.34.1
New patch
1
Currently m68k_cpu_reset_hold() calls floatx80_default_nan(NULL)
2
to get the NaN bit pattern to reset the FPU registers. This
3
works because it happens that our implementation of
4
floatx80_default_nan() doesn't actually look at the float_status
5
pointer except for TARGET_MIPS. However, this isn't guaranteed,
6
and to be able to remove the ifdef in floatx80_default_nan()
7
we're going to need a real float_status here.
1
8
9
Rearrange m68k_cpu_reset_hold() so that we initialize env->fp_status
10
earlier, and thus can pass it to floatx80_default_nan().
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20241202131347.498124-28-peter.maydell@linaro.org
15
---
16
target/m68k/cpu.c | 12 +++++++-----
17
1 file changed, 7 insertions(+), 5 deletions(-)
18
19
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/m68k/cpu.c
22
+++ b/target/m68k/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
24
CPUState *cs = CPU(obj);
25
M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
26
CPUM68KState *env = cpu_env(cs);
27
- floatx80 nan = floatx80_default_nan(NULL);
28
+ floatx80 nan;
29
int i;
30
31
if (mcc->parent_phases.hold) {
32
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
33
#else
34
cpu_m68k_set_sr(env, SR_S | SR_I);
35
#endif
36
- for (i = 0; i < 8; i++) {
37
- env->fregs[i].d = nan;
38
- }
39
- cpu_m68k_set_fpcr(env, 0);
40
/*
41
* M68000 FAMILY PROGRAMMER'S REFERENCE MANUAL
42
* 3.4 FLOATING-POINT INSTRUCTION DETAILS
43
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
44
* preceding paragraph for nonsignaling NaNs.
45
*/
46
set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
47
+
48
+ nan = floatx80_default_nan(&env->fp_status);
49
+ for (i = 0; i < 8; i++) {
50
+ env->fregs[i].d = nan;
51
+ }
52
+ cpu_m68k_set_fpcr(env, 0);
53
env->fpsr = 0;
54
55
/* TODO: We should set PC from the interrupt vector. */
56
--
57
2.34.1
New patch
1
We create our 128-bit default NaN by calling parts64_default_nan()
2
and then adjusting the result. We can do the same trick for creating
3
the floatx80 default NaN, which lets us drop a target ifdef.
1
4
5
floatx80 is used only by:
6
i386
7
m68k
8
arm nwfpe old floating-point emulation support
9
(which is essentially dead, especially the parts involving floatx80)
10
PPC (only in the xsrqpxp instruction, which just rounds an input
11
value by converting to floatx80 and back, so will never generate
12
the default NaN)
13
14
The floatx80 default NaN as currently implemented is:
15
m68k: sign = 0, exp = 1...1, int = 1, frac = 1....1
16
i386: sign = 1, exp = 1...1, int = 1, frac = 10...0
17
18
These are the same as the parts64_default_nan for these architectures.
19
20
This is technically a possible behaviour change for arm linux-user
21
nwfpe emulation emulation, because the default NaN will now have the
22
sign bit clear. But we were already generating a different floatx80
23
default NaN from the real kernel emulation we are supposedly
24
following, which appears to use an all-bits-1 value:
25
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L267
26
27
This won't affect the only "real" use of the nwfpe emulation, which
28
is ancient binaries that used it as part of the old floating point
29
calling convention; that only uses loads and stores of 32 and 64 bit
30
floats, not any of the floatx80 behaviour the original hardware had.
31
We also get the nwfpe float64 default NaN value wrong:
32
https://elixir.bootlin.com/linux/v6.12/source/arch/arm/nwfpe/softfloat-specialize#L166
33
so if we ever cared about this obscure corner the right fix would be
34
to correct that so nwfpe used its own default-NaN setting rather
35
than the Arm VFP one.
36
37
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
38
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
39
Message-id: 20241202131347.498124-29-peter.maydell@linaro.org
40
---
41
fpu/softfloat-specialize.c.inc | 20 ++++++++++----------
42
1 file changed, 10 insertions(+), 10 deletions(-)
43
44
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
45
index XXXXXXX..XXXXXXX 100644
46
--- a/fpu/softfloat-specialize.c.inc
47
+++ b/fpu/softfloat-specialize.c.inc
48
@@ -XXX,XX +XXX,XX @@ static void parts128_silence_nan(FloatParts128 *p, float_status *status)
49
floatx80 floatx80_default_nan(float_status *status)
50
{
51
floatx80 r;
52
+ /*
53
+ * Extrapolate from the choices made by parts64_default_nan to fill
54
+ * in the floatx80 format. We assume that floatx80's explicit
55
+ * integer bit is always set (this is true for i386 and m68k,
56
+ * which are the only real users of this format).
57
+ */
58
+ FloatParts64 p64;
59
+ parts64_default_nan(&p64, status);
60
61
- /* None of the targets that have snan_bit_is_one use floatx80. */
62
- assert(!snan_bit_is_one(status));
63
-#if defined(TARGET_M68K)
64
- r.low = UINT64_C(0xFFFFFFFFFFFFFFFF);
65
- r.high = 0x7FFF;
66
-#else
67
- /* X86 */
68
- r.low = UINT64_C(0xC000000000000000);
69
- r.high = 0xFFFF;
70
-#endif
71
+ r.high = 0x7FFF | (p64.sign << 15);
72
+ r.low = (1ULL << DECOMPOSED_BINARY_POINT) | p64.frac;
73
return r;
74
}
75
76
--
77
2.34.1
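A quick arithmetic check of the bit patterns quoted in the commit message above, under the assumption that DECOMPOSED_BINARY_POINT is 63 and that the parts64 default-NaN fractions are as described there (i386: sign set, fraction 0b10...0; m68k: sign clear, fraction all ones):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    const int binary_point = 63;   /* assumed value of DECOMPOSED_BINARY_POINT */

    /* i386: high = 0x7FFF | (sign << 15), low = explicit int bit | frac */
    uint16_t i386_high = 0x7FFF | (1 << 15);
    uint64_t i386_low  = (1ULL << binary_point) | 0x4000000000000000ULL;
    assert(i386_high == 0xFFFF);
    assert(i386_low  == 0xC000000000000000ULL);

    /* m68k: sign clear, fraction all-ones */
    uint16_t m68k_high = 0x7FFF | (0 << 15);
    uint64_t m68k_low  = (1ULL << binary_point) | 0x7FFFFFFFFFFFFFFFULL;
    assert(m68k_high == 0x7FFF);
    assert(m68k_low  == 0xFFFFFFFFFFFFFFFFULL);

    return 0;
}

These are exactly the two hard-coded patterns the patch removes, which is why extrapolating from parts64_default_nan() is a no-change refactoring for i386 and m68k.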
New patch
1
In target/loongarch's helper_fclass_s() and helper_fclass_d() we pass
2
a zero-initialized float_status struct to float32_is_quiet_nan() and
3
float64_is_quiet_nan(), with the cryptic comment "for
4
snan_bit_is_one".
1
5
6
This pattern appears to have been copied from target/riscv, where it
7
is used because the functions there do not have ready access to the
8
CPU state struct. The comment presumably refers to the fact that the
9
main reason the is_quiet_nan() functions want the float_state is
10
because they want to know about the snan_bit_is_one config.
11
12
In the loongarch helpers, though, we have the CPU state struct
13
to hand. Use the usual env->fp_status here. This avoids our needing
14
to track that we need to update the initializer of the local
15
float_status structs when the core softfloat code adds new
16
options for targets to configure their behaviour.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20241202131347.498124-30-peter.maydell@linaro.org
21
---
22
target/loongarch/tcg/fpu_helper.c | 6 ++----
23
1 file changed, 2 insertions(+), 4 deletions(-)
24
25
diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/loongarch/tcg/fpu_helper.c
28
+++ b/target/loongarch/tcg/fpu_helper.c
29
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_s(CPULoongArchState *env, uint64_t fj)
30
} else if (float32_is_zero_or_denormal(f)) {
31
return sign ? 1 << 4 : 1 << 8;
32
} else if (float32_is_any_nan(f)) {
33
- float_status s = { }; /* for snan_bit_is_one */
34
- return float32_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
35
+ return float32_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
36
} else {
37
return sign ? 1 << 3 : 1 << 7;
38
}
39
@@ -XXX,XX +XXX,XX @@ uint64_t helper_fclass_d(CPULoongArchState *env, uint64_t fj)
40
} else if (float64_is_zero_or_denormal(f)) {
41
return sign ? 1 << 4 : 1 << 8;
42
} else if (float64_is_any_nan(f)) {
43
- float_status s = { }; /* for snan_bit_is_one */
44
- return float64_is_quiet_nan(f, &s) ? 1 << 1 : 1 << 0;
45
+ return float64_is_quiet_nan(f, &env->fp_status) ? 1 << 1 : 1 << 0;
46
} else {
47
return sign ? 1 << 3 : 1 << 7;
48
}
49
--
50
2.34.1
In the frem helper, we have a local float_status because we want to
execute the floatx80_div() with a custom rounding mode. Instead of
zero-initializing the local float_status and then having to set it up
with the m68k standard behaviour (including the NaN propagation rule
and copying the rounding precision from env->fp_status), initialize
it as a complete copy of env->fp_status. This will avoid our having
to add new code in this function for every new config knob we add
to fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-31-peter.maydell@linaro.org
---
target/m68k/fpu_helper.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/m68k/fpu_helper.c b/target/m68k/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/fpu_helper.c
+++ b/target/m68k/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(frem)(CPUM68KState *env, FPReg *res, FPReg *val0, FPReg *val1)

 fp_rem = floatx80_rem(val1->d, val0->d, &env->fp_status);
 if (!floatx80_is_any_nan(fp_rem)) {
- float_status fp_status = { };
+ /* Use local temporary fp_status to set different rounding mode */
+ float_status fp_status = env->fp_status;
 uint32_t quotient;
 int sign;

 /* Calculate quotient directly using round to nearest mode */
- set_float_2nan_prop_rule(float_2nan_prop_ab, &fp_status);
 set_float_rounding_mode(float_round_nearest_even, &fp_status);
- set_floatx80_rounding_precision(
- get_floatx80_rounding_precision(&env->fp_status), &fp_status);
 fp_quot.d = floatx80_div(val1->d, val0->d, &fp_status);

 sign = extractFloatx80Sign(fp_quot.d);
--
2.34.1

In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we do the conversion
from float64 to floatx80 using a scratch float_status, because we
don't want the conversion to affect the CPU's floating point exception
status. Currently we use a zero-initialized float_status. This will
get steadily more awkward as we add config knobs to float_status
that the target must initialize. Avoid having to add any of that
configuration here by instead initializing our local float_status
from the env->fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-32-peter.maydell@linaro.org
---
target/m68k/helper.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/m68k/helper.c b/target/m68k/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/helper.c
+++ b/target/m68k/helper.c
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_get_reg(CPUState *cs, GByteArray *mem_buf, int n)
 CPUM68KState *env = &cpu->env;

 if (n < 8) {
- float_status s = {};
+ /* Use scratch float_status so any exceptions don't change CPU state */
+ float_status s = env->fp_status;
 return gdb_get_reg64(mem_buf, floatx80_to_float64(env->fregs[n].d, &s));
 }
 switch (n) {
@@ -XXX,XX +XXX,XX @@ static int cf_fpu_gdb_set_reg(CPUState *cs, uint8_t *mem_buf, int n)
 CPUM68KState *env = &cpu->env;

 if (n < 8) {
- float_status s = {};
+ /* Use scratch float_status so any exceptions don't change CPU state */
+ float_status s = env->fp_status;
 env->fregs[n].d = float64_to_floatx80(ldq_be_p(mem_buf), &s);
 return 8;
 }
--
2.34.1

In the helper functions flcmps and flcmpd we use a scratch float_status
so that we don't change the CPU state if the comparison raises any
floating point exception flags. Instead of zero-initializing this
scratch float_status, initialize it as a copy of env->fp_status. This
avoids the need to explicitly initialize settings like the NaN
propagation rule or others we might add to softfloat in future.

To do this we need to pass the CPU env pointer in to the helper.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-33-peter.maydell@linaro.org
---
target/sparc/helper.h | 4 ++--
target/sparc/fop_helper.c | 8 ++++----
target/sparc/translate.c | 4 ++--
3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/sparc/helper.h b/target/sparc/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/helper.h
+++ b/target/sparc/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(fcmpd, TCG_CALL_NO_WG, i32, env, f64, f64)
 DEF_HELPER_FLAGS_3(fcmped, TCG_CALL_NO_WG, i32, env, f64, f64)
 DEF_HELPER_FLAGS_3(fcmpq, TCG_CALL_NO_WG, i32, env, i128, i128)
 DEF_HELPER_FLAGS_3(fcmpeq, TCG_CALL_NO_WG, i32, env, i128, i128)
-DEF_HELPER_FLAGS_2(flcmps, TCG_CALL_NO_RWG_SE, i32, f32, f32)
-DEF_HELPER_FLAGS_2(flcmpd, TCG_CALL_NO_RWG_SE, i32, f64, f64)
+DEF_HELPER_FLAGS_3(flcmps, TCG_CALL_NO_RWG_SE, i32, env, f32, f32)
+DEF_HELPER_FLAGS_3(flcmpd, TCG_CALL_NO_RWG_SE, i32, env, f64, f64)
 DEF_HELPER_2(raise_exception, noreturn, env, int)

 DEF_HELPER_FLAGS_3(faddd, TCG_CALL_NO_WG, f64, env, f64, f64)
diff --git a/target/sparc/fop_helper.c b/target/sparc/fop_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/fop_helper.c
+++ b/target/sparc/fop_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t helper_fcmpeq(CPUSPARCState *env, Int128 src1, Int128 src2)
 return finish_fcmp(env, r, GETPC());
 }

-uint32_t helper_flcmps(float32 src1, float32 src2)
+uint32_t helper_flcmps(CPUSPARCState *env, float32 src1, float32 src2)
 {
 /*
 * FLCMP never raises an exception nor modifies any FSR fields.
 * Perform the comparison with a dummy fp environment.
 */
- float_status discard = { };
+ float_status discard = env->fp_status;
 FloatRelation r;

 set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
@@ -XXX,XX +XXX,XX @@ uint32_t helper_flcmps(float32 src1, float32 src2)
 g_assert_not_reached();
 }

-uint32_t helper_flcmpd(float64 src1, float64 src2)
+uint32_t helper_flcmpd(CPUSPARCState *env, float64 src1, float64 src2)
 {
- float_status discard = { };
+ float_status discard = env->fp_status;
 FloatRelation r;

 set_float_2nan_prop_rule(float_2nan_prop_s_ba, &discard);
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPs(DisasContext *dc, arg_FLCMPs *a)

 src1 = gen_load_fpr_F(dc, a->rs1);
 src2 = gen_load_fpr_F(dc, a->rs2);
- gen_helper_flcmps(cpu_fcc[a->cc], src1, src2);
+ gen_helper_flcmps(cpu_fcc[a->cc], tcg_env, src1, src2);
 return advance_pc(dc);
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_FLCMPd(DisasContext *dc, arg_FLCMPd *a)

 src1 = gen_load_fpr_D(dc, a->rs1);
 src2 = gen_load_fpr_D(dc, a->rs2);
- gen_helper_flcmpd(cpu_fcc[a->cc], src1, src2);
+ gen_helper_flcmpd(cpu_fcc[a->cc], tcg_env, src1, src2);
 return advance_pc(dc);
 }

--
2.34.1

In the helper_compute_fprf functions, we pass a dummy float_status
in to the is_signaling_nan() function. This is unnecessary, because
we have convenient access to the CPU env pointer here and that
is already set up with the correct values for the snan_bit_is_one
and no_signaling_nans config settings. is_signaling_nan() doesn't
ever update the fp_status with any exception flags, so there is
no reason not to use env->fp_status here.

Use env->fp_status instead of the dummy fp_status.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-34-peter.maydell@linaro.org
---
target/ppc/fpu_helper.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void helper_compute_fprf_##tp(CPUPPCState *env, tp arg) \
 } else if (tp##_is_infinity(arg)) { \
 fprf = neg ? 0x09 << FPSCR_FPRF : 0x05 << FPSCR_FPRF; \
 } else { \
- float_status dummy = { }; /* snan_bit_is_one = 0 */ \
- if (tp##_is_signaling_nan(arg, &dummy)) { \
+ if (tp##_is_signaling_nan(arg, &env->fp_status)) { \
 fprf = 0x00 << FPSCR_FPRF; \
 } else { \
 fprf = 0x11 << FPSCR_FPRF; \
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Now that float_status has a bunch of fp parameters,
it is easier to copy an existing structure than create
one from scratch. Begin by copying the structure that
corresponds to the FPSR and make only the adjustments
required for BFloat16 semantics.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/vec_helper.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -XXX,XX +XXX,XX @@ bool is_ebf(CPUARMState *env, float_status *statusp, float_status *oddstatusp)
 * no effect on AArch32 instructions.
 */
 bool ebf = is_a64(env) && env->vfp.fpcr & FPCR_EBF;
- *statusp = (float_status){
- .tininess_before_rounding = float_tininess_before_rounding,
- .float_rounding_mode = float_round_to_odd_inf,
- .flush_to_zero = true,
- .flush_inputs_to_zero = true,
- .default_nan_mode = true,
- };
+
+ *statusp = env->vfp.fp_status;
+ set_default_nan_mode(true, statusp);

 if (ebf) {
- float_status *fpst = &env->vfp.fp_status;
- set_flush_to_zero(get_flush_to_zero(fpst), statusp);
- set_flush_inputs_to_zero(get_flush_inputs_to_zero(fpst), statusp);
- set_float_rounding_mode(get_float_rounding_mode(fpst), statusp);
-
 /* EBF=1 needs to do a step with round-to-odd semantics */
 *oddstatusp = *statusp;
 set_float_rounding_mode(float_round_to_odd, oddstatusp);
+ } else {
+ set_flush_to_zero(true, statusp);
+ set_flush_inputs_to_zero(true, statusp);
+ set_float_rounding_mode(float_round_to_odd_inf, statusp);
 }
-
 return ebf;
 }

--
2.34.1

Currently we hardcode the default NaN value in parts64_default_nan()
using a compile-time ifdef ladder. This is awkward for two cases:
* for single-QEMU-binary we can't hard-code target-specifics like this
* for Arm FEAT_AFP the default NaN value depends on FPCR.AH
  (specifically the sign bit is different)

Add a field to float_status to specify the default NaN value; fall
back to the old ifdef behaviour if these are not set.

The default NaN value is specified by setting a uint8_t to a
pattern corresponding to the sign and upper fraction parts of
the NaN; the lower bits of the fraction are set from bit 0 of
the pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-35-peter.maydell@linaro.org
---
include/fpu/softfloat-helpers.h | 11 +++++++
include/fpu/softfloat-types.h | 10 ++++++
fpu/softfloat-specialize.c.inc | 55 ++++++++++++++++++++-------------
3 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/include/fpu/softfloat-helpers.h b/include/fpu/softfloat-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-helpers.h
+++ b/include/fpu/softfloat-helpers.h
@@ -XXX,XX +XXX,XX @@ static inline void set_float_infzeronan_rule(FloatInfZeroNaNRule rule,
 status->float_infzeronan_rule = rule;
 }

+static inline void set_float_default_nan_pattern(uint8_t dnan_pattern,
+ float_status *status)
+{
+ status->default_nan_pattern = dnan_pattern;
+}
+
 static inline void set_flush_to_zero(bool val, float_status *status)
 {
 status->flush_to_zero = val;
@@ -XXX,XX +XXX,XX @@ static inline FloatInfZeroNaNRule get_float_infzeronan_rule(float_status *status
 return status->float_infzeronan_rule;
 }

+static inline uint8_t get_float_default_nan_pattern(float_status *status)
+{
+ return status->default_nan_pattern;
+}
+
 static inline bool get_flush_to_zero(float_status *status)
 {
 return status->flush_to_zero;
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index XXXXXXX..XXXXXXX 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -XXX,XX +XXX,XX @@ typedef struct float_status {
 /* should denormalised inputs go to zero and set the input_denormal flag? */
 bool flush_inputs_to_zero;
 bool default_nan_mode;
+ /*
+ * The pattern to use for the default NaN. Here the high bit specifies
+ * the default NaN's sign bit, and bits 6..0 specify the high bits of the
+ * fractional part. The low bits of the fractional part are copies of bit 0.
+ * The exponent of the default NaN is (as for any NaN) always all 1s.
+ * Note that a value of 0 here is not a valid NaN. The target must set
+ * this to the correct non-zero value, or we will assert when trying to
+ * create a default NaN.
+ */
+ uint8_t default_nan_pattern;
 /*
 * The flags below are not used on all specializations and may
 * constant fold away (see snan_bit_is_one()/no_signalling_nans() in
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 {
 bool sign = 0;
 uint64_t frac;
+ uint8_t dnan_pattern = status->default_nan_pattern;

+ if (dnan_pattern == 0) {
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
- /* !snan_bit_is_one, set all bits */
- frac = (1ULL << DECOMPOSED_BINARY_POINT) - 1;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
+ /* Sign bit clear, all frac bits set */
+ dnan_pattern = 0b01111111;
+#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
 || defined(TARGET_MICROBLAZE)
- /* !snan_bit_is_one, set sign and msb */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
- sign = 1;
+ /* Sign bit set, most significant frac bit set */
+ dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
- /* snan_bit_is_one, set msb-1. */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 2);
+ /* Sign bit clear, msb-1 frac bit set */
+ dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
- sign = 1;
- frac = ~0ULL;
+ /* Sign bit set, all frac bits set. */
+ dnan_pattern = 0b11111111;
 #else
- /*
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
- * do not have floating-point.
- */
- if (snan_bit_is_one(status)) {
- /* set all bits other than msb */
- frac = (1ULL << (DECOMPOSED_BINARY_POINT - 1)) - 1;
- } else {
- /* set msb */
- frac = 1ULL << (DECOMPOSED_BINARY_POINT - 1);
- }
+ /*
+ * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
+ * S390, SH4, TriCore, and Xtensa. Our other supported targets
+ * do not have floating-point.
+ */
+ if (snan_bit_is_one(status)) {
+ /* sign bit clear, set all frac bits other than msb */
+ dnan_pattern = 0b00111111;
+ } else {
+ /* sign bit clear, set frac msb */
+ dnan_pattern = 0b01000000;
+ }
 #endif
+ }
+ assert(dnan_pattern != 0);
+
+ sign = dnan_pattern >> 7;
+ /*
+ * Place default_nan_pattern [6:0] into bits [62:56],
+ * and replicate bit [0] down into [55:0]
+ */
+ frac = deposit64(0, DECOMPOSED_BINARY_POINT - 7, 7, dnan_pattern);
+ frac = deposit64(frac, 0, DECOMPOSED_BINARY_POINT - 7, -(dnan_pattern & 1));

 *p = (FloatParts64) {
 .cls = float_class_qnan,
--
2.34.1

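As a worked example of the pattern encoding described above (sign bit in bit 7,
upper fraction bits in bits 6..0, lower fraction bits copied from bit 0), here
is a minimal sketch of the expansion for float32. It is illustrative only, not
the QEMU implementation, which works on the decomposed 64-bit fraction with
deposit64():

    #include <stdint.h>

    /* Hypothetical helper: expand a default-NaN pattern byte into a
     * float32 bit pattern (exponent all 1s, as for any NaN). */
    static uint32_t default_nan_f32(uint8_t pattern)
    {
        uint32_t sign = pattern >> 7;
        uint32_t frac = ((uint32_t)(pattern & 0x7f)) << 16;  /* frac bits 22..16 */
        if (pattern & 1) {
            frac |= 0xffff;                                   /* replicate bit 0 */
        }
        return (sign << 31) | (0xffu << 23) | frac;
    }

    /* default_nan_f32(0b01000000) == 0x7FC00000  (Arm, RISC-V, PPC, ...)
     * default_nan_f32(0b01111111) == 0x7FFFFFFF  (SPARC, m68k)
     * default_nan_f32(0b11000000) == 0xFFC00000  (x86, MicroBlaze)       */
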
Set the default NaN pattern explicitly for the tests/fp code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-36-peter.maydell@linaro.org
---
tests/fp/fp-bench.c | 1 +
tests/fp/fp-test-log2.c | 1 +
tests/fp/fp-test.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/tests/fp/fp-bench.c b/tests/fp/fp-bench.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-bench.c
+++ b/tests/fp/fp-bench.c
@@ -XXX,XX +XXX,XX @@ static void run_bench(void)
 set_float_2nan_prop_rule(float_2nan_prop_s_ab, &soft_status);
 set_float_3nan_prop_rule(float_3nan_prop_s_cab, &soft_status);
 set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &soft_status);
+ set_float_default_nan_pattern(0b01000000, &soft_status);

 f = bench_funcs[operation][precision];
 g_assert(f);
diff --git a/tests/fp/fp-test-log2.c b/tests/fp/fp-test-log2.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test-log2.c
+++ b/tests/fp/fp-test-log2.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
 int i;

 set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
+ set_float_default_nan_pattern(0b01000000, &qsf);
 set_float_rounding_mode(float_round_nearest_even, &qsf);

 test.d = 0.0;
diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -XXX,XX +XXX,XX @@ void run_test(void)
 */
 set_float_2nan_prop_rule(float_2nan_prop_s_ab, &qsf);
 set_float_3nan_prop_rule(float_3nan_prop_s_cab, &qsf);
+ set_float_default_nan_pattern(0b01000000, &qsf);
 set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, &qsf);

 genCases_setLevel(test_level);
--
2.34.1

Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-37-peter.maydell@linaro.org
---
target/microblaze/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 +--
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj, ResetType type)
 * this architecture.
 */
 set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+ /* Default NaN: sign bit set, most significant frac bit set */
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);

 #if defined(CONFIG_USER_ONLY)
 /* start in user mode with interrupts enabled. */
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
 /* Sign bit clear, all frac bits set */
 dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64) \
- || defined(TARGET_MICROBLAZE)
+#elif defined(TARGET_I386) || defined(TARGET_X86_64)
 /* Sign bit set, most significant frac bit set */
 dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
--
2.34.1

Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-38-peter.maydell@linaro.org
---
target/i386/tcg/fpu_helper.c | 4 ++++
fpu/softfloat-specialize.c.inc | 3 ---
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_init_fp_statuses(CPUX86State *env)
 */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->sse_status);
 set_float_3nan_prop_rule(float_3nan_prop_abc, &env->sse_status);
+ /* Default NaN: sign bit set, most significant frac bit set */
+ set_float_default_nan_pattern(0b11000000, &env->fp_status);
+ set_float_default_nan_pattern(0b11000000, &env->mmx_status);
+ set_float_default_nan_pattern(0b11000000, &env->sse_status);
 }

 static inline uint8_t save_exception_flags(CPUX86State *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
 /* Sign bit clear, all frac bits set */
 dnan_pattern = 0b01111111;
-#elif defined(TARGET_I386) || defined(TARGET_X86_64)
- /* Sign bit set, most significant frac bit set */
- dnan_pattern = 0b11000000;
 #elif defined(TARGET_HPPA)
 /* Sign bit clear, msb-1 frac bit set */
 dnan_pattern = 0b00100000;
--
2.34.1

Set the default NaN pattern explicitly, and remove the ifdef from
parts64_default_nan().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-39-peter.maydell@linaro.org
---
target/hppa/fpu_helper.c | 2 ++
fpu/softfloat-specialize.c.inc | 3 ---
2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
 set_float_3nan_prop_rule(float_3nan_prop_abc, &env->fp_status);
 /* For inf * 0 + NaN, return the input NaN */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ /* Default NaN: sign bit clear, msb-1 frac bit set */
+ set_float_default_nan_pattern(0b00100000, &env->fp_status);
 }

 void cpu_hppa_loaded_fr0(CPUHPPAState *env)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 #if defined(TARGET_SPARC) || defined(TARGET_M68K)
 /* Sign bit clear, all frac bits set */
 dnan_pattern = 0b01111111;
-#elif defined(TARGET_HPPA)
- /* Sign bit clear, msb-1 frac bit set */
- dnan_pattern = 0b00100000;
 #elif defined(TARGET_HEXAGON)
 /* Sign bit set, all frac bits set. */
 dnan_pattern = 0b11111111;
--
2.34.1

Set the default NaN pattern explicitly for the alpha target.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-40-peter.maydell@linaro.org
---
target/alpha/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_initfn(Object *obj)
 * operand in Fa. That is float_2nan_prop_ba.
 */
 set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+ /* Default NaN: sign bit clear, msb frac bit set */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
 #if defined(CONFIG_USER_ONLY)
 env->flags = ENV_FLAG_PS_USER | ENV_FLAG_FEN;
 cpu_alpha_store_fpcr(env, (uint64_t)(FPCR_INVD | FPCR_DZED | FPCR_OVFD
--
2.34.1

Set the default NaN pattern explicitly for the arm target.
This includes setting it for the old linux-user nwfpe emulation.
For nwfpe, our default doesn't match the real kernel, but we
avoid making a behaviour change in this commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-41-peter.maydell@linaro.org
---
linux-user/arm/nwfpe/fpa11.c | 5 +++++
target/arm/cpu.c | 2 ++
2 files changed, 7 insertions(+)

diff --git a/linux-user/arm/nwfpe/fpa11.c b/linux-user/arm/nwfpe/fpa11.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/nwfpe/fpa11.c
+++ b/linux-user/arm/nwfpe/fpa11.c
@@ -XXX,XX +XXX,XX @@ void resetFPA11(void)
 * this late date.
 */
 set_float_2nan_prop_rule(float_2nan_prop_s_ab, &fpa11->fp_status);
+ /*
+ * Use the same default NaN value as Arm VFP. This doesn't match
+ * the Linux kernel's nwfpe emulation, which uses an all-1s value.
+ */
+ set_float_default_nan_pattern(0b01000000, &fpa11->fp_status);
 }

 void SetRoundingMode(const unsigned int opcode)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
 * the pseudocode function the arguments are in the order c, a, b.
 * * 0 * Inf + NaN returns the default NaN if the input NaN is quiet,
 * and the input NaN if it is signalling
+ * * Default NaN has sign bit clear, msb frac bit set
 */
 static void arm_set_default_fp_behaviours(float_status *s)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_set_default_fp_behaviours(float_status *s)
 set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
 set_float_3nan_prop_rule(float_3nan_prop_s_cab, s);
 set_float_infzeronan_rule(float_infzeronan_dnan_if_qnan, s);
+ set_float_default_nan_pattern(0b01000000, s);
 }

 static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
--
2.34.1

Set the default NaN pattern explicitly for loongarch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-42-peter.maydell@linaro.org
---
target/loongarch/tcg/fpu_helper.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/loongarch/tcg/fpu_helper.c b/target/loongarch/tcg/fpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/tcg/fpu_helper.c
+++ b/target/loongarch/tcg/fpu_helper.c
@@ -XXX,XX +XXX,XX @@ void restore_fp_status(CPULoongArchState *env)
 */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 set_float_3nan_prop_rule(float_3nan_prop_s_cab, &env->fp_status);
+ /* Default NaN: sign bit clear, msb frac bit set */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 int ieee_ex_to_loongarch(int xcpt)
--
2.34.1

Set the default NaN pattern explicitly for m68k.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-43-peter.maydell@linaro.org
---
target/m68k/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj, ResetType type)
 * preceding paragraph for nonsignaling NaNs.
 */
 set_float_2nan_prop_rule(float_2nan_prop_ab, &env->fp_status);
+ /* Default NaN: sign bit clear, all frac bits set */
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);

 nan = floatx80_default_nan(&env->fp_status);
 for (i = 0; i < 8; i++) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 uint8_t dnan_pattern = status->default_nan_pattern;

 if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC) || defined(TARGET_M68K)
+#if defined(TARGET_SPARC)
 /* Sign bit clear, all frac bits set */
 dnan_pattern = 0b01111111;
 #elif defined(TARGET_HEXAGON)
--
2.34.1

Set the default NaN pattern explicitly for MIPS. Note that this
is our only target which currently changes the default NaN
at runtime (which it was previously doing indirectly when it
changed the snan_bit_is_one setting).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-44-peter.maydell@linaro.org
---
target/mips/fpu_helper.h | 7 +++++++
target/mips/msa.c | 3 +++
2 files changed, 10 insertions(+)

diff --git a/target/mips/fpu_helper.h b/target/mips/fpu_helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/fpu_helper.h
+++ b/target/mips/fpu_helper.h
@@ -XXX,XX +XXX,XX @@ static inline void restore_snan_bit_mode(CPUMIPSState *env)
 set_float_infzeronan_rule(izn_rule, &env->active_fpu.fp_status);
 nan3_rule = nan2008 ? float_3nan_prop_s_cab : float_3nan_prop_s_abc;
 set_float_3nan_prop_rule(nan3_rule, &env->active_fpu.fp_status);
+ /*
+ * With nan2008, the default NaN value has the sign bit clear and the
+ * frac msb set; with the older mode, the sign bit is clear, and all
+ * frac bits except the msb are set.
+ */
+ set_float_default_nan_pattern(nan2008 ? 0b01000000 : 0b00111111,
+ &env->active_fpu.fp_status);

 }

diff --git a/target/mips/msa.c b/target/mips/msa.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/msa.c
+++ b/target/mips/msa.c
@@ -XXX,XX +XXX,XX @@ void msa_reset(CPUMIPSState *env)
 /* Inf * 0 + NaN returns the input NaN */
 set_float_infzeronan_rule(float_infzeronan_dnan_never,
 &env->active_tc.msa_fp_status);
+ /* Default NaN: sign bit clear, frac msb set */
+ set_float_default_nan_pattern(0b01000000,
+ &env->active_tc.msa_fp_status);
 }
--
2.34.1

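For reference, the two MIPS patterns above correspond to the following IEEE bit
patterns; this is an illustrative sketch of the expansion, not code from the
patch:

    #include <stdint.h>

    /* nan2008 mode: pattern 0b01000000 -> sign clear, only the frac msb set */
    static const uint32_t mips_default_nan_2008_f32   = 0x7FC00000;
    static const uint64_t mips_default_nan_2008_f64   = 0x7FF8000000000000ULL;

    /* legacy mode: pattern 0b00111111 -> sign clear, all frac bits except msb */
    static const uint32_t mips_default_nan_legacy_f32 = 0x7FBFFFFF;
    static const uint64_t mips_default_nan_legacy_f64 = 0x7FF7FFFFFFFFFFFFULL;
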
Set the default NaN pattern explicitly for openrisc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-45-peter.maydell@linaro.org
---
target/openrisc/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
 */
 set_float_2nan_prop_rule(float_2nan_prop_x87, &cpu->env.fp_status);

+ /* Default NaN: sign bit clear, frac msb set */
+ set_float_default_nan_pattern(0b01000000, &cpu->env.fp_status);

 #ifndef CONFIG_USER_ONLY
 cpu->env.picmr = 0x00000000;
--
2.34.1

Set the default NaN pattern explicitly for ppc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-46-peter.maydell@linaro.org
---
target/ppc/cpu_init.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj, ResetType type)
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->vec_status);

+ /* Default NaN: sign bit clear, set frac msb */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
+ set_float_default_nan_pattern(0b01000000, &env->vec_status);
+
 for (i = 0; i < ARRAY_SIZE(env->spr_cb); i++) {
 ppc_spr_t *spr = &env->spr_cb[i];

--
2.34.1

Set the default NaN pattern explicitly for sh4. Note that sh4
is one of the only three targets (the others being HPPA and
sometimes MIPS) that has snan_bit_is_one set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-47-peter.maydell@linaro.org
---
target/sh4/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_reset_hold(Object *obj, ResetType type)
 set_flush_to_zero(1, &env->fp_status);
 #endif
 set_default_nan_mode(1, &env->fp_status);
+ /* sign bit clear, set all frac bits other than msb */
+ set_float_default_nan_pattern(0b00111111, &env->fp_status);
 }

 static void superh_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
--
2.34.1

Set the default NaN pattern explicitly for rx.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-48-peter.maydell@linaro.org
---
target/rx/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj, ResetType type)
 * then prefer dest over source", which is float_2nan_prop_s_ab.
 */
 set_float_2nan_prop_rule(float_2nan_prop_x87, &env->fp_status);
+ /* Default NaN value: sign bit clear, set frac msb */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
 }

 static ObjectClass *rx_cpu_class_by_name(const char *cpu_model)
--
2.34.1

Set the default NaN pattern explicitly for s390x.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-49-peter.maydell@linaro.org
---
target/s390x/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_reset_hold(Object *obj, ResetType type)
 set_float_3nan_prop_rule(float_3nan_prop_s_abc, &env->fpu_status);
 set_float_infzeronan_rule(float_infzeronan_dnan_always,
 &env->fpu_status);
+ /* Default NaN value: sign bit clear, frac msb set */
+ set_float_default_nan_pattern(0b01000000, &env->fpu_status);
 /* fall through */
 case RESET_TYPE_S390_CPU_NORMAL:
 env->psw.mask &= ~PSW_MASK_RI;
--
2.34.1

Set the default NaN pattern explicitly for SPARC, and remove
the ifdef from parts64_default_nan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-50-peter.maydell@linaro.org
---
target/sparc/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 5 +----
2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_realizefn(DeviceState *dev, Error **errp)
 set_float_3nan_prop_rule(float_3nan_prop_s_cba, &env->fp_status);
 /* For inf * 0 + NaN, return the input NaN */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
+ /* Default NaN value: sign bit clear, all frac bits set */
+ set_float_default_nan_pattern(0b01111111, &env->fp_status);

 cpu_exec_realizefn(cs, &local_err);
 if (local_err != NULL) {
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 uint8_t dnan_pattern = status->default_nan_pattern;

 if (dnan_pattern == 0) {
-#if defined(TARGET_SPARC)
- /* Sign bit clear, all frac bits set */
- dnan_pattern = 0b01111111;
-#elif defined(TARGET_HEXAGON)
+#if defined(TARGET_HEXAGON)
 /* Sign bit set, all frac bits set. */
 dnan_pattern = 0b11111111;
 #else
--
2.34.1

Set the default NaN pattern explicitly for xtensa.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-51-peter.maydell@linaro.org
---
target/xtensa/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
 /* For inf * 0 + NaN, return the input NaN */
 set_float_infzeronan_rule(float_infzeronan_dnan_never, &env->fp_status);
 set_no_signaling_nans(!dfpu, &env->fp_status);
+ /* Default NaN value: sign bit clear, set frac msb */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
 xtensa_use_first_nan(env, !dfpu);
 }

--
2.34.1

Set the default NaN pattern explicitly for hexagon.
Remove the ifdef from parts64_default_nan(); the only
remaining unconverted targets all use the default case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-52-peter.maydell@linaro.org
---
target/hexagon/cpu.c | 2 ++
fpu/softfloat-specialize.c.inc | 5 -----
2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_reset_hold(Object *obj, ResetType type)

 set_default_nan_mode(1, &env->fp_status);
 set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
+ /* Default NaN value: sign bit set, all frac bits set */
+ set_float_default_nan_pattern(0b11111111, &env->fp_status);
 }

 static void hexagon_cpu_disas_set_info(CPUState *s, disassemble_info *info)
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-specialize.c.inc
+++ b/fpu/softfloat-specialize.c.inc
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 uint8_t dnan_pattern = status->default_nan_pattern;

 if (dnan_pattern == 0) {
-#if defined(TARGET_HEXAGON)
- /* Sign bit set, all frac bits set. */
- dnan_pattern = 0b11111111;
-#else
 /*
 * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
 * S390, SH4, TriCore, and Xtensa. Our other supported targets
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
 /* sign bit clear, set frac msb */
 dnan_pattern = 0b01000000;
 }
-#endif
 }
 assert(dnan_pattern != 0);

--
2.34.1

Set the default NaN pattern explicitly for riscv.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241202131347.498124-53-peter.maydell@linaro.org
---
target/riscv/cpu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj, ResetType type)
 cs->exception_index = RISCV_EXCP_NONE;
 env->load_res = -1;
 set_default_nan_mode(1, &env->fp_status);
+ /* Default NaN value: sign bit clear, frac msb set */
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
 env->vill = true;

 #ifndef CONFIG_USER_ONLY
--
2.34.1

1
From: Mostafa Saleh <smostafa@google.com>
1
Set the default NaN pattern explicitly for tricore.
2
2
3
This patch adds support for nested (combined) TLB entries.
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
The main function combine_tlb() is not used here but in the next
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
patches; it is introduced first to keep those later patches simpler.
5
Message-id: 20241202131347.498124-54-peter.maydell@linaro.org
6
---
7
target/tricore/helper.c | 2 ++
8
1 file changed, 2 insertions(+)
6
9
7
Main changes:
10
diff --git a/target/tricore/helper.c b/target/tricore/helper.c
8
1) New field added in the SMMUTLBEntry struct: parent_perm, for
9
nested TLB, holds the stage-2 permission, this can be used to know
10
the origin of a permission fault from a cached entry as caching
11
the “and” of the permissions loses this information.
12
13
SMMUPTWEventInfo is used to hold information about PTW faults so
14
the event can be populated, the value of stage used to be set
15
based on the current stage for TLB permission faults, however
16
with the parent_perm, it is now set based on which perm has
17
the missing permission
18
19
When nesting is not enabled it has the same value as perm which
20
doesn't change the logic.
21
22
2) As combined TLB implementation is used, the combination logic
23
chooses:
24
- tg and level from the entry which has the smallest addr_mask.
25
- Based on that the iova that would be cached is recalculated.
26
- Translated_addr is chosen from stage-2.
27
28
Reviewed-by: Eric Auger <eric.auger@redhat.com>
29
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
30
Signed-off-by: Mostafa Saleh <smostafa@google.com>
31
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
32
Message-id: 20240715084519.1189624-11-smostafa@google.com
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
---
35
include/hw/arm/smmu-common.h | 1 +
36
hw/arm/smmu-common.c | 37 ++++++++++++++++++++++++++++++++----
37
2 files changed, 34 insertions(+), 4 deletions(-)
38
39
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
40
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
41
--- a/include/hw/arm/smmu-common.h
12
--- a/target/tricore/helper.c
42
+++ b/include/hw/arm/smmu-common.h
13
+++ b/target/tricore/helper.c
43
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTLBEntry {
14
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env)
44
IOMMUTLBEntry entry;
15
set_flush_to_zero(1, &env->fp_status);
45
uint8_t level;
16
set_float_detect_tininess(float_tininess_before_rounding, &env->fp_status);
46
uint8_t granule;
17
set_default_nan_mode(1, &env->fp_status);
47
+ IOMMUAccessFlags parent_perm;
18
+ /* Default NaN pattern: sign bit clear, frac msb set */
48
} SMMUTLBEntry;
19
+ set_float_default_nan_pattern(0b01000000, &env->fp_status);
49
50
/* Stage-2 configuration. */
51
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/arm/smmu-common.c
54
+++ b/hw/arm/smmu-common.c
55
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
56
tlbe->entry.translated_addr = gpa;
57
tlbe->entry.iova = iova & ~mask;
58
tlbe->entry.addr_mask = mask;
59
- tlbe->entry.perm = PTE_AP_TO_PERM(ap);
60
+ tlbe->parent_perm = PTE_AP_TO_PERM(ap);
61
+ tlbe->entry.perm = tlbe->parent_perm;
62
tlbe->level = level;
63
tlbe->granule = granule_sz;
64
return 0;
65
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
66
tlbe->entry.translated_addr = gpa;
67
tlbe->entry.iova = ipa & ~mask;
68
tlbe->entry.addr_mask = mask;
69
- tlbe->entry.perm = s2ap;
70
+ tlbe->parent_perm = s2ap;
71
+ tlbe->entry.perm = tlbe->parent_perm;
72
tlbe->level = level;
73
tlbe->granule = granule_sz;
74
return 0;
75
@@ -XXX,XX +XXX,XX @@ error:
76
return -EINVAL;
77
}
20
}
78
21
79
+/*
22
uint32_t psw_read(CPUTriCoreState *env)
80
+ * combine S1 and S2 TLB entries into a single entry.
81
+ * As a result the S1 entry is overridden with combined data.
82
+ */
83
+static void __attribute__((unused)) combine_tlb(SMMUTLBEntry *tlbe,
84
+ SMMUTLBEntry *tlbe_s2,
85
+ dma_addr_t iova,
86
+ SMMUTransCfg *cfg)
87
+{
88
+ if (tlbe_s2->entry.addr_mask < tlbe->entry.addr_mask) {
89
+ tlbe->entry.addr_mask = tlbe_s2->entry.addr_mask;
90
+ tlbe->granule = tlbe_s2->granule;
91
+ tlbe->level = tlbe_s2->level;
92
+ }
93
+
94
+ tlbe->entry.translated_addr = CACHED_ENTRY_TO_ADDR(tlbe_s2,
95
+ tlbe->entry.translated_addr);
96
+
97
+ tlbe->entry.iova = iova & ~tlbe->entry.addr_mask;
98
+ /* parent_perm has s2 perm while perm keeps s1 perm. */
99
+ tlbe->parent_perm = tlbe_s2->entry.perm;
100
+ return;
101
+}
102
+
103
/**
104
* smmu_ptw - Walk the page tables for an IOVA, according to @cfg
105
*
106
@@ -XXX,XX +XXX,XX @@ SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
107
108
cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, addr);
109
if (cached_entry) {
110
- if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
111
+ if ((flag & IOMMU_WO) && !(cached_entry->entry.perm &
112
+ cached_entry->parent_perm & IOMMU_WO)) {
113
info->type = SMMU_PTW_ERR_PERMISSION;
114
- info->stage = cfg->stage;
115
+ info->stage = !(cached_entry->entry.perm & IOMMU_WO) ?
116
+ SMMU_STAGE_1 :
117
+ SMMU_STAGE_2;
118
return NULL;
119
}
120
return cached_entry;
121
--
23
--
122
2.34.1
24
2.34.1
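To restate the permission handling the nested-TLB patch above introduces, here is a small sketch with made-up types (not the QEMU structures): a nested cached entry keeps the stage-1 permission in entry.perm and the stage-2 permission in parent_perm, a write must be allowed by both, and the reported faulting stage is whichever one lacked it.

#include <stdbool.h>

typedef enum { PERM_NONE = 0, PERM_RO = 1, PERM_WO = 2, PERM_RW = 3 } AccessFlags;

typedef struct {
    AccessFlags perm;        /* stage-1 permission (or the only stage) */
    AccessFlags parent_perm; /* stage-2 permission for nested entries  */
} CachedEntry;

/* Returns 0 if the write is allowed, otherwise 1 or 2 for the stage that
 * caused the permission fault. */
static int check_cached_write(const CachedEntry *e)
{
    if (!(e->perm & e->parent_perm & PERM_WO)) {
        return !(e->perm & PERM_WO) ? 1 : 2;
    }
    return 0;
}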
1
In commit c1a1f80518d360b when we added the FEAT_LSE2 relaxations to
1
Now that all our targets have been converted to explicitly specify
2
the alignment requirements for atomic and ordered loads and stores,
2
their pattern for the default NaN value we can remove the remaining
3
we didn't quite get it right for LDAPR/LDAPRH/LDAPRB with no
3
fallback code in parts64_default_nan().
4
immediate offset. These instructions were handled in the old decoder
5
as part of disas_ldst_atomic(), but unlike all the other insns that
6
function decoded (LDADD, LDCLR, etc) these insns are "ordered", not
7
"atomic", so they should be using check_ordered_align() rather than
8
check_atomic_align(). Commit c1a1f80518d360b used
9
check_atomic_align() regardless for everything in
10
disas_ldst_atomic(). We then carried that incorrect check over in
11
the decodetree conversion, where LDAPR/LDAPRH/LDAPRB are now handled
12
by trans_LDAPR().
13
4
14
The effect is that when FEAT_LSE2 is implemented, these instructions
15
don't honour the SCTLR_ELx.nAA bit and will generate alignment
16
faults when they should not.
17
18
(The LDAPR insns with an immediate offset were in disas_ldst_ldapr_stlr()
19
and then in trans_LDAPR_i() and trans_STLR_i(), and have always used
20
the correct check_ordered_align().)
21
22
Use check_ordered_align() in trans_LDAPR().
23
24
Cc: qemu-stable@nongnu.org
25
Fixes: c1a1f80518d360b ("target/arm: Relax ordered/atomic alignment checks for LSE2")
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
28
Message-id: 20240709134504.3500007-3-peter.maydell@linaro.org
7
Message-id: 20241202131347.498124-55-peter.maydell@linaro.org
29
---
8
---
30
target/arm/tcg/translate-a64.c | 2 +-
9
fpu/softfloat-specialize.c.inc | 14 --------------
31
1 file changed, 1 insertion(+), 1 deletion(-)
10
1 file changed, 14 deletions(-)
32
11
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
12
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
34
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
14
--- a/fpu/softfloat-specialize.c.inc
36
+++ b/target/arm/tcg/translate-a64.c
15
+++ b/fpu/softfloat-specialize.c.inc
37
@@ -XXX,XX +XXX,XX @@ static bool trans_LDAPR(DisasContext *s, arg_LDAPR *a)
16
@@ -XXX,XX +XXX,XX @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
38
if (a->rn == 31) {
17
uint64_t frac;
39
gen_check_sp_alignment(s);
18
uint8_t dnan_pattern = status->default_nan_pattern;
40
}
19
41
- mop = check_atomic_align(s, a->rn, a->sz);
20
- if (dnan_pattern == 0) {
42
+ mop = check_ordered_align(s, a->rn, 0, false, a->sz);
21
- /*
43
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, a->rn), false,
22
- * This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
44
a->rn != 31, mop);
23
- * S390, SH4, TriCore, and Xtensa. Our other supported targets
45
/*
24
- * do not have floating-point.
25
- */
26
- if (snan_bit_is_one(status)) {
27
- /* sign bit clear, set all frac bits other than msb */
28
- dnan_pattern = 0b00111111;
29
- } else {
30
- /* sign bit clear, set frac msb */
31
- dnan_pattern = 0b01000000;
32
- }
33
- }
34
assert(dnan_pattern != 0);
35
36
sign = dnan_pattern >> 7;
46
--
37
--
47
2.34.1
38
2.34.1
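To summarise the alignment rule the LDAPR fix above restores, here is an illustrative model only (not QEMU's check_ordered_align()/check_atomic_align() helpers): with FEAT_LSE2, ordered accesses such as LDAPR may be unaligned when SCTLR_ELx.nAA is set, while atomic read-modify-write accesses must always be naturally aligned.

#include <stdbool.h>
#include <stdint.h>

/* Ordered (acquire/release) access: alignment is only enforced when
 * SCTLR_ELx.nAA is clear. */
static bool ordered_access_faults(uint64_t addr, unsigned size_bytes, bool naa)
{
    bool misaligned = (addr & (size_bytes - 1)) != 0;
    return misaligned && !naa;
}

/* Atomic read-modify-write access: natural alignment is always required. */
static bool atomic_access_faults(uint64_t addr, unsigned size_bytes)
{
    return (addr & (size_bytes - 1)) != 0;
}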
1
From: Mostafa Saleh <smostafa@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
smmuv3_translate() does everything from STE/CD parsing to TLB lookup
3
Inline pickNaNMulAdd into its only caller. This makes
4
and PTW.
4
one assert redundant with the immediately preceding IF.
5
5
6
Soon, when nesting is supported, stage-1 data (tt, CD) needs to be
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
translated using stage-2.
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
8
Message-id: 20241203203949.483774-3-richard.henderson@linaro.org
9
Split smmuv3_translate() to 3 functions:
9
[PMM: keep comment from old code in new location]
10
11
- smmu_translate(): in smmu-common.c, which does the TLB lookup, PTW,
12
TLB insertion, all the functions are already there, this just puts
13
them together.
14
This also simplifies the code as it consolidates event generation
15
in case of TLB lookup permission failure or in TT selection.
16
17
- smmuv3_do_translate(): in smmuv3.c, Calls smmu_translate() and does
18
the event population in case of errors.
19
20
- smmuv3_translate(), now calls smmuv3_do_translate() for
21
translation while the rest is the same.
22
23
Also, add stage in trace_smmuv3_translate_success()
24
25
Reviewed-by: Eric Auger <eric.auger@redhat.com>
26
Signed-off-by: Mostafa Saleh <smostafa@google.com>
27
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
28
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
29
Message-id: 20240715084519.1189624-6-smostafa@google.com
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
---
11
---
32
include/hw/arm/smmu-common.h | 8 ++
12
fpu/softfloat-parts.c.inc | 41 +++++++++++++++++++++++++-
33
hw/arm/smmu-common.c | 59 +++++++++++
13
fpu/softfloat-specialize.c.inc | 54 ----------------------------------
34
hw/arm/smmuv3.c | 194 +++++++++++++----------------------
14
2 files changed, 40 insertions(+), 55 deletions(-)
35
hw/arm/trace-events | 2 +-
36
4 files changed, 142 insertions(+), 121 deletions(-)
37
15
38
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
39
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
40
--- a/include/hw/arm/smmu-common.h
18
--- a/fpu/softfloat-parts.c.inc
41
+++ b/include/hw/arm/smmu-common.h
19
+++ b/fpu/softfloat-parts.c.inc
42
@@ -XXX,XX +XXX,XX @@ static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
43
int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
21
}
44
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
22
45
23
if (s->default_nan_mode) {
46
+
24
+ /*
47
+/*
25
+ * We guarantee not to require the target to tell us how to
48
+ * smmu_translate - Look for a translation in TLB, if not, do a PTW.
26
+ * pick a NaN if we're always returning the default NaN.
49
+ * Returns NULL on PTW error or incase of TLB permission errors.
27
+ * But if we're not in default-NaN mode then the target must
50
+ */
28
+ * specify.
51
+SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
29
+ */
52
+ IOMMUAccessFlags flag, SMMUPTWEventInfo *info);
30
which = 3;
53
+
31
+ } else if (infzero) {
54
/**
32
+ /*
55
* select_tt - compute which translation table shall be used according to
33
+ * Inf * 0 + NaN -- some implementations return the
56
* the input iova and translation config and return the TT specific info
34
+ * default NaN here, and some return the input NaN.
57
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
35
+ */
58
index XXXXXXX..XXXXXXX 100644
36
+ switch (s->float_infzeronan_rule) {
59
--- a/hw/arm/smmu-common.c
37
+ case float_infzeronan_dnan_never:
60
+++ b/hw/arm/smmu-common.c
38
+ which = 2;
61
@@ -XXX,XX +XXX,XX @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
62
g_assert_not_reached();
63
}
64
65
+SMMUTLBEntry *smmu_translate(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t addr,
66
+ IOMMUAccessFlags flag, SMMUPTWEventInfo *info)
67
+{
68
+ uint64_t page_mask, aligned_addr;
69
+ SMMUTLBEntry *cached_entry = NULL;
70
+ SMMUTransTableInfo *tt;
71
+ int status;
72
+
73
+ /*
74
+ * Combined attributes used for TLB lookup, as only one stage is supported,
75
+ * it will hold attributes based on the enabled stage.
76
+ */
77
+ SMMUTransTableInfo tt_combined;
78
+
79
+ if (cfg->stage == SMMU_STAGE_1) {
80
+ /* Select stage1 translation table. */
81
+ tt = select_tt(cfg, addr);
82
+ if (!tt) {
83
+ info->type = SMMU_PTW_ERR_TRANSLATION;
84
+ info->stage = SMMU_STAGE_1;
85
+ return NULL;
86
+ }
87
+ tt_combined.granule_sz = tt->granule_sz;
88
+ tt_combined.tsz = tt->tsz;
89
+
90
+ } else {
91
+ /* Stage2. */
92
+ tt_combined.granule_sz = cfg->s2cfg.granule_sz;
93
+ tt_combined.tsz = cfg->s2cfg.tsz;
94
+ }
95
+
96
+ /*
97
+ * TLB lookup looks for granule and input size for a translation stage,
98
+ * as only one stage is supported right now, choose the right values
99
+ * from the configuration.
100
+ */
101
+ page_mask = (1ULL << tt_combined.granule_sz) - 1;
102
+ aligned_addr = addr & ~page_mask;
103
+
104
+ cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, aligned_addr);
105
+ if (cached_entry) {
106
+ if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
107
+ info->type = SMMU_PTW_ERR_PERMISSION;
108
+ info->stage = cfg->stage;
109
+ return NULL;
110
+ }
111
+ return cached_entry;
112
+ }
113
+
114
+ cached_entry = g_new0(SMMUTLBEntry, 1);
115
+ status = smmu_ptw(cfg, aligned_addr, flag, cached_entry, info);
116
+ if (status) {
117
+ g_free(cached_entry);
118
+ return NULL;
119
+ }
120
+ smmu_iotlb_insert(bs, cfg, cached_entry);
121
+ return cached_entry;
122
+}
123
+
124
/**
125
* The bus number is used for lookup when SID based invalidation occurs.
126
* In that case we lazily populate the SMMUPciBus array from the bus hash
127
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/hw/arm/smmuv3.c
130
+++ b/hw/arm/smmuv3.c
131
@@ -XXX,XX +XXX,XX @@ static void smmuv3_flush_config(SMMUDevice *sdev)
132
g_hash_table_remove(bc->configs, sdev);
133
}
134
135
+/* Do translation with TLB lookup. */
136
+static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
137
+ SMMUTransCfg *cfg,
138
+ SMMUEventInfo *event,
139
+ IOMMUAccessFlags flag,
140
+ SMMUTLBEntry **out_entry)
141
+{
142
+ SMMUPTWEventInfo ptw_info = {};
143
+ SMMUState *bs = ARM_SMMU(s);
144
+ SMMUTLBEntry *cached_entry = NULL;
145
+
146
+ cached_entry = smmu_translate(bs, cfg, addr, flag, &ptw_info);
147
+ if (!cached_entry) {
148
+ /* All faults from PTW has S2 field. */
149
+ event->u.f_walk_eabt.s2 = (ptw_info.stage == SMMU_STAGE_2);
150
+ switch (ptw_info.type) {
151
+ case SMMU_PTW_ERR_WALK_EABT:
152
+ event->type = SMMU_EVT_F_WALK_EABT;
153
+ event->u.f_walk_eabt.addr = addr;
154
+ event->u.f_walk_eabt.rnw = flag & 0x1;
155
+ event->u.f_walk_eabt.class = (ptw_info.stage == SMMU_STAGE_2) ?
156
+ SMMU_CLASS_IN : SMMU_CLASS_TT;
157
+ event->u.f_walk_eabt.addr2 = ptw_info.addr;
158
+ break;
39
+ break;
159
+ case SMMU_PTW_ERR_TRANSLATION:
40
+ case float_infzeronan_dnan_always:
160
+ if (PTW_RECORD_FAULT(cfg)) {
41
+ which = 3;
161
+ event->type = SMMU_EVT_F_TRANSLATION;
162
+ event->u.f_translation.addr = addr;
163
+ event->u.f_translation.addr2 = ptw_info.addr;
164
+ event->u.f_translation.class = SMMU_CLASS_IN;
165
+ event->u.f_translation.rnw = flag & 0x1;
166
+ }
167
+ break;
42
+ break;
168
+ case SMMU_PTW_ERR_ADDR_SIZE:
43
+ case float_infzeronan_dnan_if_qnan:
169
+ if (PTW_RECORD_FAULT(cfg)) {
44
+ which = is_qnan(c->cls) ? 3 : 2;
170
+ event->type = SMMU_EVT_F_ADDR_SIZE;
171
+ event->u.f_addr_size.addr = addr;
172
+ event->u.f_addr_size.addr2 = ptw_info.addr;
173
+ event->u.f_addr_size.class = SMMU_CLASS_IN;
174
+ event->u.f_addr_size.rnw = flag & 0x1;
175
+ }
176
+ break;
177
+ case SMMU_PTW_ERR_ACCESS:
178
+ if (PTW_RECORD_FAULT(cfg)) {
179
+ event->type = SMMU_EVT_F_ACCESS;
180
+ event->u.f_access.addr = addr;
181
+ event->u.f_access.addr2 = ptw_info.addr;
182
+ event->u.f_access.class = SMMU_CLASS_IN;
183
+ event->u.f_access.rnw = flag & 0x1;
184
+ }
185
+ break;
186
+ case SMMU_PTW_ERR_PERMISSION:
187
+ if (PTW_RECORD_FAULT(cfg)) {
188
+ event->type = SMMU_EVT_F_PERMISSION;
189
+ event->u.f_permission.addr = addr;
190
+ event->u.f_permission.addr2 = ptw_info.addr;
191
+ event->u.f_permission.class = SMMU_CLASS_IN;
192
+ event->u.f_permission.rnw = flag & 0x1;
193
+ }
194
+ break;
45
+ break;
195
+ default:
46
+ default:
196
+ g_assert_not_reached();
47
+ g_assert_not_reached();
197
+ }
48
+ }
198
+ return SMMU_TRANS_ERROR;
49
} else {
199
+ }
50
- which = pickNaNMulAdd(a->cls, b->cls, c->cls, infzero, have_snan, s);
200
+ *out_entry = cached_entry;
51
+ FloatClass cls[3] = { a->cls, b->cls, c->cls };
201
+ return SMMU_TRANS_SUCCESS;
52
+ Float3NaNPropRule rule = s->float_3nan_prop_rule;
202
+}
203
+
53
+
204
+/* Entry point to SMMU, does everything. */
54
+ assert(rule != float_3nan_prop_none);
205
static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
55
+ if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
206
IOMMUAccessFlags flag, int iommu_idx)
56
+ /* We have at least one SNaN input and should prefer it */
207
{
57
+ do {
208
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
58
+ which = rule & R_3NAN_1ST_MASK;
209
SMMUEventInfo event = {.type = SMMU_EVT_NONE,
59
+ rule >>= R_3NAN_1ST_LENGTH;
210
.sid = sid,
60
+ } while (!is_snan(cls[which]));
211
.inval_ste_allowed = false};
61
+ } else {
212
- SMMUPTWEventInfo ptw_info = {};
62
+ do {
213
SMMUTranslationStatus status;
63
+ which = rule & R_3NAN_1ST_MASK;
214
- SMMUState *bs = ARM_SMMU(s);
64
+ rule >>= R_3NAN_1ST_LENGTH;
215
- uint64_t page_mask, aligned_addr;
65
+ } while (!is_nan(cls[which]));
216
- SMMUTLBEntry *cached_entry = NULL;
66
+ }
217
- SMMUTransTableInfo *tt;
67
}
218
SMMUTransCfg *cfg = NULL;
68
219
IOMMUTLBEntry entry = {
69
if (which == 3) {
220
.target_as = &address_space_memory,
70
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
221
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
71
index XXXXXXX..XXXXXXX 100644
222
.addr_mask = ~(hwaddr)0,
72
--- a/fpu/softfloat-specialize.c.inc
223
.perm = IOMMU_NONE,
73
+++ b/fpu/softfloat-specialize.c.inc
224
};
74
@@ -XXX,XX +XXX,XX @@ static int pickNaN(FloatClass a_cls, FloatClass b_cls,
75
}
76
}
77
78
-/*----------------------------------------------------------------------------
79
-| Select which NaN to propagate for a three-input operation.
80
-| For the moment we assume that no CPU needs the 'larger significand'
81
-| information.
82
-| Return values : 0 : a; 1 : b; 2 : c; 3 : default-NaN
83
-*----------------------------------------------------------------------------*/
84
-static int pickNaNMulAdd(FloatClass a_cls, FloatClass b_cls, FloatClass c_cls,
85
- bool infzero, bool have_snan, float_status *status)
86
-{
87
- FloatClass cls[3] = { a_cls, b_cls, c_cls };
88
- Float3NaNPropRule rule = status->float_3nan_prop_rule;
89
- int which;
90
-
225
- /*
91
- /*
226
- * Combined attributes used for TLB lookup, as only one stage is supported,
92
- * We guarantee not to require the target to tell us how to
227
- * it will hold attributes based on the enabled stage.
93
- * pick a NaN if we're always returning the default NaN.
94
- * But if we're not in default-NaN mode then the target must
95
- * specify.
228
- */
96
- */
229
- SMMUTransTableInfo tt_combined;
97
- assert(!status->default_nan_mode);
230
+ SMMUTLBEntry *cached_entry = NULL;
231
232
qemu_mutex_lock(&s->mutex);
233
234
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
235
goto epilogue;
236
}
237
238
- if (cfg->stage == SMMU_STAGE_1) {
239
- /* Select stage1 translation table. */
240
- tt = select_tt(cfg, addr);
241
- if (!tt) {
242
- if (cfg->record_faults) {
243
- event.type = SMMU_EVT_F_TRANSLATION;
244
- event.u.f_translation.addr = addr;
245
- event.u.f_translation.rnw = flag & 0x1;
246
- }
247
- status = SMMU_TRANS_ERROR;
248
- goto epilogue;
249
- }
250
- tt_combined.granule_sz = tt->granule_sz;
251
- tt_combined.tsz = tt->tsz;
252
-
98
-
253
- } else {
99
- if (infzero) {
254
- /* Stage2. */
100
- /*
255
- tt_combined.granule_sz = cfg->s2cfg.granule_sz;
101
- * Inf * 0 + NaN -- some implementations return the default NaN here,
256
- tt_combined.tsz = cfg->s2cfg.tsz;
102
- * and some return the input NaN.
257
- }
103
- */
258
- /*
104
- switch (status->float_infzeronan_rule) {
259
- * TLB lookup looks for granule and input size for a translation stage,
105
- case float_infzeronan_dnan_never:
260
- * as only one stage is supported right now, choose the right values
106
- return 2;
261
- * from the configuration.
107
- case float_infzeronan_dnan_always:
262
- */
108
- return 3;
263
- page_mask = (1ULL << tt_combined.granule_sz) - 1;
109
- case float_infzeronan_dnan_if_qnan:
264
- aligned_addr = addr & ~page_mask;
110
- return is_qnan(c_cls) ? 3 : 2;
265
-
266
- cached_entry = smmu_iotlb_lookup(bs, cfg, &tt_combined, aligned_addr);
267
- if (cached_entry) {
268
- if ((flag & IOMMU_WO) && !(cached_entry->entry.perm & IOMMU_WO)) {
269
- status = SMMU_TRANS_ERROR;
270
- /*
271
- * We know that the TLB only contains either stage-1 or stage-2 as
272
- * nesting is not supported. So it is sufficient to check the
273
- * translation stage to know the TLB stage for now.
274
- */
275
- event.u.f_walk_eabt.s2 = (cfg->stage == SMMU_STAGE_2);
276
- if (PTW_RECORD_FAULT(cfg)) {
277
- event.type = SMMU_EVT_F_PERMISSION;
278
- event.u.f_permission.addr = addr;
279
- event.u.f_permission.rnw = flag & 0x1;
280
- }
281
- } else {
282
- status = SMMU_TRANS_SUCCESS;
283
- }
284
- goto epilogue;
285
- }
286
-
287
- cached_entry = g_new0(SMMUTLBEntry, 1);
288
-
289
- if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
290
- /* All faults from PTW has S2 field. */
291
- event.u.f_walk_eabt.s2 = (ptw_info.stage == SMMU_STAGE_2);
292
- g_free(cached_entry);
293
- switch (ptw_info.type) {
294
- case SMMU_PTW_ERR_WALK_EABT:
295
- event.type = SMMU_EVT_F_WALK_EABT;
296
- event.u.f_walk_eabt.addr = addr;
297
- event.u.f_walk_eabt.rnw = flag & 0x1;
298
- /* Stage-2 (only) is class IN while stage-1 is class TT */
299
- event.u.f_walk_eabt.class = (ptw_info.stage == SMMU_STAGE_2) ?
300
- SMMU_CLASS_IN : SMMU_CLASS_TT;
301
- event.u.f_walk_eabt.addr2 = ptw_info.addr;
302
- break;
303
- case SMMU_PTW_ERR_TRANSLATION:
304
- if (PTW_RECORD_FAULT(cfg)) {
305
- event.type = SMMU_EVT_F_TRANSLATION;
306
- event.u.f_translation.addr = addr;
307
- event.u.f_translation.addr2 = ptw_info.addr;
308
- event.u.f_translation.class = SMMU_CLASS_IN;
309
- event.u.f_translation.rnw = flag & 0x1;
310
- }
311
- break;
312
- case SMMU_PTW_ERR_ADDR_SIZE:
313
- if (PTW_RECORD_FAULT(cfg)) {
314
- event.type = SMMU_EVT_F_ADDR_SIZE;
315
- event.u.f_addr_size.addr = addr;
316
- event.u.f_addr_size.addr2 = ptw_info.addr;
317
- event.u.f_translation.class = SMMU_CLASS_IN;
318
- event.u.f_addr_size.rnw = flag & 0x1;
319
- }
320
- break;
321
- case SMMU_PTW_ERR_ACCESS:
322
- if (PTW_RECORD_FAULT(cfg)) {
323
- event.type = SMMU_EVT_F_ACCESS;
324
- event.u.f_access.addr = addr;
325
- event.u.f_access.addr2 = ptw_info.addr;
326
- event.u.f_translation.class = SMMU_CLASS_IN;
327
- event.u.f_access.rnw = flag & 0x1;
328
- }
329
- break;
330
- case SMMU_PTW_ERR_PERMISSION:
331
- if (PTW_RECORD_FAULT(cfg)) {
332
- event.type = SMMU_EVT_F_PERMISSION;
333
- event.u.f_permission.addr = addr;
334
- event.u.f_permission.addr2 = ptw_info.addr;
335
- event.u.f_translation.class = SMMU_CLASS_IN;
336
- event.u.f_permission.rnw = flag & 0x1;
337
- }
338
- break;
339
- default:
111
- default:
340
- g_assert_not_reached();
112
- g_assert_not_reached();
341
- }
113
- }
342
- status = SMMU_TRANS_ERROR;
114
- }
115
-
116
- assert(rule != float_3nan_prop_none);
117
- if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
118
- /* We have at least one SNaN input and should prefer it */
119
- do {
120
- which = rule & R_3NAN_1ST_MASK;
121
- rule >>= R_3NAN_1ST_LENGTH;
122
- } while (!is_snan(cls[which]));
343
- } else {
123
- } else {
344
- smmu_iotlb_insert(bs, cfg, cached_entry);
124
- do {
345
- status = SMMU_TRANS_SUCCESS;
125
- which = rule & R_3NAN_1ST_MASK;
126
- rule >>= R_3NAN_1ST_LENGTH;
127
- } while (!is_nan(cls[which]));
346
- }
128
- }
347
+ status = smmuv3_do_translate(s, addr, cfg, &event, flag, &cached_entry);
129
- return which;
348
130
-}
349
epilogue:
131
-
350
qemu_mutex_unlock(&s->mutex);
132
/*----------------------------------------------------------------------------
351
@@ -XXX,XX +XXX,XX @@ epilogue:
133
| Returns 1 if the double-precision floating-point value `a' is a quiet
352
(addr & cached_entry->entry.addr_mask);
134
| NaN; otherwise returns 0.
353
entry.addr_mask = cached_entry->entry.addr_mask;
354
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
355
- entry.translated_addr, entry.perm);
356
+ entry.translated_addr, entry.perm,
357
+ cfg->stage);
358
break;
359
case SMMU_TRANS_DISABLE:
360
entry.perm = flag;
361
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
362
index XXXXXXX..XXXXXXX 100644
363
--- a/hw/arm/trace-events
364
+++ b/hw/arm/trace-events
365
@@ -XXX,XX +XXX,XX @@ smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
366
smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
367
smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x STE bypass iova:0x%"PRIx64" is_write=%d"
368
smmuv3_translate_abort(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x abort on iova:0x%"PRIx64" is_write=%d"
369
-smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=0x%x iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
370
+smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm, int stage) "%s sid=0x%x iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x stage=%d"
371
smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
372
smmuv3_decode_cd(uint32_t oas) "oas=%d"
373
smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz, bool had) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d had:%d"
374
--
135
--
375
2.34.1
136
2.34.1
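As a compact restatement of the runtime Inf * 0 + NaN rule used above (illustrative code, not the softfloat implementation; the enumerator names follow the patch, their definition here is invented): given the addend NaN c, decide whether the operation produces the default NaN or propagates the input NaN.

#include <stdbool.h>

typedef enum {
    float_infzeronan_dnan_never,    /* always propagate the input NaN       */
    float_infzeronan_dnan_always,   /* always return the default NaN        */
    float_infzeronan_dnan_if_qnan,  /* default NaN only if c is a quiet NaN */
} FloatInfZeroNaNRule;

static bool infzero_returns_default_nan(FloatInfZeroNaNRule rule, bool c_is_qnan)
{
    switch (rule) {
    case float_infzeronan_dnan_never:
        return false;
    case float_infzeronan_dnan_always:
        return true;
    case float_infzeronan_dnan_if_qnan:
        return c_is_qnan;
    }
    return false;
}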
1
From: Mostafa Saleh <smostafa@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
SMMUv3 OAS is currently hardcoded in the code to 44 bits, for nested
3
Remove "3" as a special case for which and simply
4
configurations that can be a problem, as stage-2 might be shared with
4
branch to return the desired value.
5
the CPU which might have different PARANGE, and according to SMMU manual
6
ARM IHI 0070F.b:
7
6.3.6 SMMU_IDR5, OAS must match the system physical address size.
8
5
9
This patch doesn't change the SMMU OAS, but refactors the code to
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
make it easier to do that:
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
- Rely everywhere on IDR5 for reading OAS instead of using the
8
Message-id: 20241203203949.483774-4-richard.henderson@linaro.org
12
SMMU_IDR5_OAS macro, so, it is easier just to change IDR5 and
13
it propagates correctly.
14
- Add additional checks when OAS is greater than 48bits.
15
- Remove unused functions/macros: pa_range/MAX_PA.
16
17
Reviewed-by: Eric Auger <eric.auger@redhat.com>
18
Signed-off-by: Mostafa Saleh <smostafa@google.com>
19
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
20
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
21
Message-id: 20240715084519.1189624-19-smostafa@google.com
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
---
10
---
24
hw/arm/smmuv3-internal.h | 13 -------------
11
fpu/softfloat-parts.c.inc | 20 ++++++++++----------
25
hw/arm/smmu-common.c | 7 ++++---
12
1 file changed, 10 insertions(+), 10 deletions(-)
26
hw/arm/smmuv3.c | 35 ++++++++++++++++++++++++++++-------
27
3 files changed, 32 insertions(+), 23 deletions(-)
28
13
29
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
30
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/arm/smmuv3-internal.h
16
--- a/fpu/softfloat-parts.c.inc
32
+++ b/hw/arm/smmuv3-internal.h
17
+++ b/fpu/softfloat-parts.c.inc
33
@@ -XXX,XX +XXX,XX @@ static inline int oas2bits(int oas_field)
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
34
return -1;
19
* But if we're not in default-NaN mode then the target must
35
}
20
* specify.
36
21
*/
37
-static inline int pa_range(STE *ste)
22
- which = 3;
38
-{
23
+ goto default_nan;
39
- int oas_field = MIN(STE_S2PS(ste), SMMU_IDR5_OAS);
24
} else if (infzero) {
40
-
25
/*
41
- if (!STE_S2AA64(ste)) {
26
* Inf * 0 + NaN -- some implementations return the
42
- return 40;
27
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
28
*/
29
switch (s->float_infzeronan_rule) {
30
case float_infzeronan_dnan_never:
31
- which = 2;
32
break;
33
case float_infzeronan_dnan_always:
34
- which = 3;
35
- break;
36
+ goto default_nan;
37
case float_infzeronan_dnan_if_qnan:
38
- which = is_qnan(c->cls) ? 3 : 2;
39
+ if (is_qnan(c->cls)) {
40
+ goto default_nan;
41
+ }
42
break;
43
default:
44
g_assert_not_reached();
45
}
46
+ which = 2;
47
} else {
48
FloatClass cls[3] = { a->cls, b->cls, c->cls };
49
Float3NaNPropRule rule = s->float_3nan_prop_rule;
50
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
51
}
52
}
53
54
- if (which == 3) {
55
- parts_default_nan(a, s);
56
- return a;
43
- }
57
- }
44
-
58
-
45
- return oas2bits(oas_field);
59
switch (which) {
46
-}
60
case 0:
47
-
61
break;
48
-#define MAX_PA(ste) ((1 << pa_range(ste)) - 1)
62
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
49
-
63
parts_silence_nan(a, s);
50
/* CD fields */
64
}
51
65
return a;
52
#define CD_VALID(x) extract32((x)->word[0], 31, 1)
53
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/smmu-common.c
56
+++ b/hw/arm/smmu-common.c
57
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
58
inputsize = 64 - tt->tsz;
59
level = 4 - (inputsize - 4) / stride;
60
indexmask = VMSA_IDXMSK(inputsize, stride, level);
61
- baseaddr = extract64(tt->ttb, 0, 48);
62
+
66
+
63
+ baseaddr = extract64(tt->ttb, 0, cfg->oas);
67
+ default_nan:
64
baseaddr &= ~indexmask;
68
+ parts_default_nan(a, s);
65
69
+ return a;
66
while (level < VMSA_LEVELS) {
67
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
68
* Get the ttb from concatenated structure.
69
* The offset is the idx * size of each ttb(number of ptes * (sizeof(pte))
70
*/
71
- uint64_t baseaddr = extract64(cfg->s2cfg.vttb, 0, 48) + (1 << stride) *
72
- idx * sizeof(uint64_t);
73
+ uint64_t baseaddr = extract64(cfg->s2cfg.vttb, 0, cfg->s2cfg.eff_ps) +
74
+ (1 << stride) * idx * sizeof(uint64_t);
75
dma_addr_t indexmask = VMSA_IDXMSK(inputsize, stride, level);
76
77
baseaddr &= ~indexmask;
78
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/arm/smmuv3.c
81
+++ b/hw/arm/smmuv3.c
82
@@ -XXX,XX +XXX,XX @@ static bool s2t0sz_valid(SMMUTransCfg *cfg)
83
}
84
85
if (cfg->s2cfg.granule_sz == 16) {
86
- return (cfg->s2cfg.tsz >= 64 - oas2bits(SMMU_IDR5_OAS));
87
+ return (cfg->s2cfg.tsz >= 64 - cfg->s2cfg.eff_ps);
88
}
89
90
- return (cfg->s2cfg.tsz >= MAX(64 - oas2bits(SMMU_IDR5_OAS), 16));
91
+ return (cfg->s2cfg.tsz >= MAX(64 - cfg->s2cfg.eff_ps, 16));
92
}
70
}
93
71
94
/*
72
/*
95
@@ -XXX,XX +XXX,XX @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
96
return nr_concat <= VMSA_MAX_S2_CONCAT;
97
}
98
99
-static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
100
+static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
101
+ STE *ste)
102
{
103
+ uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
104
+
105
if (STE_S2AA64(ste) == 0x0) {
106
qemu_log_mask(LOG_UNIMP,
107
"SMMUv3 AArch32 tables not supported\n");
108
@@ -XXX,XX +XXX,XX @@ static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
109
}
110
111
/* For AA64, The effective S2PS size is capped to the OAS. */
112
- cfg->s2cfg.eff_ps = oas2bits(MIN(STE_S2PS(ste), SMMU_IDR5_OAS));
113
+ cfg->s2cfg.eff_ps = oas2bits(MIN(STE_S2PS(ste), oas));
114
+ /*
115
+ * For SMMUv3.1 and later, when OAS == IAS == 52, the stage 2 input
116
+ * range is further limited to 48 bits unless STE.S2TG indicates a
117
+ * 64KB granule.
118
+ */
119
+ if (cfg->s2cfg.granule_sz != 16) {
120
+ cfg->s2cfg.eff_ps = MIN(cfg->s2cfg.eff_ps, 48);
121
+ }
122
/*
123
* It is ILLEGAL for the address in S2TTB to be outside the range
124
* described by the effective S2PS value.
125
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
126
STE *ste, SMMUEventInfo *event)
127
{
128
uint32_t config;
129
+ uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
130
int ret;
131
132
if (!STE_VALID(ste)) {
133
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
134
* Stage-1 OAS defaults to OAS even if not enabled as it would be used
135
* in input address check for stage-2.
136
*/
137
- cfg->oas = oas2bits(SMMU_IDR5_OAS);
138
- ret = decode_ste_s2_cfg(cfg, ste);
139
+ cfg->oas = oas2bits(oas);
140
+ ret = decode_ste_s2_cfg(s, cfg, ste);
141
if (ret) {
142
goto bad_ste;
143
}
144
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
145
int i;
146
SMMUTranslationStatus status;
147
SMMUTLBEntry *entry;
148
+ uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
149
150
if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
151
goto bad_cd;
152
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
153
cfg->aa64 = true;
154
155
cfg->oas = oas2bits(CD_IPS(cd));
156
- cfg->oas = MIN(oas2bits(SMMU_IDR5_OAS), cfg->oas);
157
+ cfg->oas = MIN(oas2bits(oas), cfg->oas);
158
cfg->tbi = CD_TBI(cd);
159
cfg->asid = CD_ASID(cd);
160
cfg->affd = CD_AFFD(cd);
161
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
162
goto bad_cd;
163
}
164
165
+ /*
166
+ * An address greater than 48 bits in size can only be output from a
167
+ * TTD when, in SMMUv3.1 and later, the effective IPS is 52 and a 64KB
168
+ * granule is in use for that translation table
169
+ */
170
+ if (tt->granule_sz != 16) {
171
+ cfg->oas = MIN(cfg->oas, 48);
172
+ }
173
tt->tsz = tsz;
174
tt->ttb = CD_TTB(cd, i);
175
176
--
73
--
177
2.34.1
74
2.34.1
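A sketch of the effective output-address-size clamping described above (illustrative only: oas_field_to_bits() stands in for oas2bits() and assumes the usual Arm PARange encoding; the granule is given as its log2 size, so 16 means a 64KB granule).

/* Map the 3-bit OAS/PS field encoding to an address width in bits. */
static int oas_field_to_bits(int oas_field)
{
    static const int bits[] = { 32, 36, 40, 42, 44, 48, 52 };

    return (oas_field >= 0 && oas_field < 7) ? bits[oas_field] : -1;
}

/* Effective stage-2 output size: S2PS capped to IDR5.OAS, and further
 * capped to 48 bits unless a 64KB granule is in use. */
static int effective_s2_oas(int ste_s2ps, int idr5_oas, int granule_sz_log2)
{
    int field = ste_s2ps < idr5_oas ? ste_s2ps : idr5_oas;
    int eff = oas_field_to_bits(field);

    if (granule_sz_log2 != 16 && eff > 48) {
        eff = 48;
    }
    return eff;
}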
1
From: Mostafa Saleh <smostafa@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Currently, translation stage is represented as an int, where 1 is stage-1 and
3
Assign the pointer return value to 'a' directly,
4
2 is stage-2. When nesting is added, using 3 to represent it would be confusing,
4
rather than going through an intermediary index.
5
so we use an enum instead.
6
5
7
While keeping the same values, this is useful for:
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
- Doing tricks with bit masks, where BIT(0) is stage-1 and BIT(1) is
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
stage-2, and both bits set means nested.
8
Message-id: 20241203203949.483774-5-richard.henderson@linaro.org
10
- Tracing, as stage is printed as int.
11
12
Reviewed-by: Eric Auger <eric.auger@redhat.com>
13
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Signed-off-by: Mostafa Saleh <smostafa@google.com>
15
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
16
Message-id: 20240715084519.1189624-5-smostafa@google.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
10
---
19
include/hw/arm/smmu-common.h | 11 +++++++++--
11
fpu/softfloat-parts.c.inc | 32 ++++++++++----------------------
20
hw/arm/smmu-common.c | 14 +++++++-------
12
1 file changed, 10 insertions(+), 22 deletions(-)
21
hw/arm/smmuv3.c | 17 +++++++++--------
22
3 files changed, 25 insertions(+), 17 deletions(-)
23
13
24
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
14
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/include/hw/arm/smmu-common.h
16
--- a/fpu/softfloat-parts.c.inc
27
+++ b/include/hw/arm/smmu-common.h
17
+++ b/fpu/softfloat-parts.c.inc
28
@@ -XXX,XX +XXX,XX @@ typedef enum {
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
29
SMMU_PTW_ERR_PERMISSION, /* Permission fault */
19
FloatPartsN *c, float_status *s,
30
} SMMUPTWEventType;
20
int ab_mask, int abc_mask)
31
32
+/* SMMU Stage */
33
+typedef enum {
34
+ SMMU_STAGE_1 = 1,
35
+ SMMU_STAGE_2,
36
+ SMMU_NESTED,
37
+} SMMUStage;
38
+
39
typedef struct SMMUPTWEventInfo {
40
- int stage;
41
+ SMMUStage stage;
42
SMMUPTWEventType type;
43
dma_addr_t addr; /* fetched address that induced an abort, if any */
44
} SMMUPTWEventInfo;
45
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUS2Cfg {
46
*/
47
typedef struct SMMUTransCfg {
48
/* Shared fields between stage-1 and stage-2. */
49
- int stage; /* translation stage */
50
+ SMMUStage stage; /* translation stage */
51
bool disabled; /* smmu is disabled */
52
bool bypassed; /* translation is bypassed */
53
bool aborted; /* translation is aborted */
54
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/arm/smmu-common.c
57
+++ b/hw/arm/smmu-common.c
58
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
59
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
60
{
21
{
61
dma_addr_t baseaddr, indexmask;
22
- int which;
62
- int stage = cfg->stage;
23
bool infzero = (ab_mask == float_cmask_infzero);
63
+ SMMUStage stage = cfg->stage;
24
bool have_snan = (abc_mask & float_cmask_snan);
64
SMMUTransTableInfo *tt = select_tt(cfg, iova);
25
+ FloatPartsN *ret;
65
uint8_t level, granule_sz, inputsize, stride;
26
66
27
if (unlikely(have_snan)) {
67
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
28
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
68
info->type = SMMU_PTW_ERR_TRANSLATION;
29
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
69
30
default:
70
error:
31
g_assert_not_reached();
71
- info->stage = 1;
72
+ info->stage = SMMU_STAGE_1;
73
tlbe->entry.perm = IOMMU_NONE;
74
return -EINVAL;
75
}
76
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
77
dma_addr_t ipa, IOMMUAccessFlags perm,
78
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
79
{
80
- const int stage = 2;
81
+ const SMMUStage stage = SMMU_STAGE_2;
82
int granule_sz = cfg->s2cfg.granule_sz;
83
/* ARM DDI0487I.a: Table D8-7. */
84
int inputsize = 64 - cfg->s2cfg.tsz;
85
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
86
error_ipa:
87
info->addr = ipa;
88
error:
89
- info->stage = 2;
90
+ info->stage = SMMU_STAGE_2;
91
tlbe->entry.perm = IOMMU_NONE;
92
return -EINVAL;
93
}
94
@@ -XXX,XX +XXX,XX @@ error:
95
int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
96
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
97
{
98
- if (cfg->stage == 1) {
99
+ if (cfg->stage == SMMU_STAGE_1) {
100
return smmu_ptw_64_s1(cfg, iova, perm, tlbe, info);
101
- } else if (cfg->stage == 2) {
102
+ } else if (cfg->stage == SMMU_STAGE_2) {
103
/*
104
* If bypassing stage 1(or unimplemented), the input address is passed
105
* directly to stage 2 as IPA. If the input address of a transaction
106
@@ -XXX,XX +XXX,XX @@ int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
107
*/
108
if (iova >= (1ULL << cfg->oas)) {
109
info->type = SMMU_PTW_ERR_ADDR_SIZE;
110
- info->stage = 1;
111
+ info->stage = SMMU_STAGE_1;
112
tlbe->entry.perm = IOMMU_NONE;
113
return -EINVAL;
114
}
32
}
115
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
33
- which = 2;
116
index XXXXXXX..XXXXXXX 100644
34
+ ret = c;
117
--- a/hw/arm/smmuv3.c
35
} else {
118
+++ b/hw/arm/smmuv3.c
36
- FloatClass cls[3] = { a->cls, b->cls, c->cls };
119
@@ -XXX,XX +XXX,XX @@
37
+ FloatPartsN *val[3] = { a, b, c };
120
#include "smmuv3-internal.h"
38
Float3NaNPropRule rule = s->float_3nan_prop_rule;
121
#include "smmu-internal.h"
39
122
40
assert(rule != float_3nan_prop_none);
123
-#define PTW_RECORD_FAULT(cfg) (((cfg)->stage == 1) ? (cfg)->record_faults : \
41
if (have_snan && (rule & R_3NAN_SNAN_MASK)) {
124
+#define PTW_RECORD_FAULT(cfg) (((cfg)->stage == SMMU_STAGE_1) ? \
42
/* We have at least one SNaN input and should prefer it */
125
+ (cfg)->record_faults : \
43
do {
126
(cfg)->s2cfg.record_faults)
44
- which = rule & R_3NAN_1ST_MASK;
127
45
+ ret = val[rule & R_3NAN_1ST_MASK];
128
/**
46
rule >>= R_3NAN_1ST_LENGTH;
129
@@ -XXX,XX +XXX,XX @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
47
- } while (!is_snan(cls[which]));
130
48
+ } while (!is_snan(ret->cls));
131
static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
49
} else {
132
{
50
do {
133
- cfg->stage = 2;
51
- which = rule & R_3NAN_1ST_MASK;
134
+ cfg->stage = SMMU_STAGE_2;
52
+ ret = val[rule & R_3NAN_1ST_MASK];
135
53
rule >>= R_3NAN_1ST_LENGTH;
136
if (STE_S2AA64(ste) == 0x0) {
54
- } while (!is_nan(cls[which]));
137
qemu_log_mask(LOG_UNIMP,
55
+ } while (!is_nan(ret->cls));
138
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
56
}
139
140
/* we support only those at the moment */
141
cfg->aa64 = true;
142
- cfg->stage = 1;
143
+ cfg->stage = SMMU_STAGE_1;
144
145
cfg->oas = oas2bits(CD_IPS(cd));
146
cfg->oas = MIN(oas2bits(SMMU_IDR5_OAS), cfg->oas);
147
@@ -XXX,XX +XXX,XX @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
148
return ret;
149
}
57
}
150
58
151
- if (cfg->aborted || cfg->bypassed || (cfg->stage == 2)) {
59
- switch (which) {
152
+ if (cfg->aborted || cfg->bypassed || (cfg->stage == SMMU_STAGE_2)) {
60
- case 0:
153
return 0;
61
- break;
62
- case 1:
63
- a = b;
64
- break;
65
- case 2:
66
- a = c;
67
- break;
68
- default:
69
- g_assert_not_reached();
70
+ if (is_snan(ret->cls)) {
71
+ parts_silence_nan(ret, s);
154
}
72
}
155
73
- if (is_snan(a->cls)) {
156
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
74
- parts_silence_nan(a, s);
157
goto epilogue;
75
- }
158
}
76
- return a;
159
77
+ return ret;
160
- if (cfg->stage == 1) {
78
161
+ if (cfg->stage == SMMU_STAGE_1) {
79
default_nan:
162
/* Select stage1 translation table. */
80
parts_default_nan(a, s);
163
tt = select_tt(cfg, addr);
164
if (!tt) {
165
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
166
* nesting is not supported. So it is sufficient to check the
167
* translation stage to know the TLB stage for now.
168
*/
169
- event.u.f_walk_eabt.s2 = (cfg->stage == 2);
170
+ event.u.f_walk_eabt.s2 = (cfg->stage == SMMU_STAGE_2);
171
if (PTW_RECORD_FAULT(cfg)) {
172
event.type = SMMU_EVT_F_PERMISSION;
173
event.u.f_permission.addr = addr;
174
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
175
176
if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
177
/* All faults from PTW has S2 field. */
178
- event.u.f_walk_eabt.s2 = (ptw_info.stage == 2);
179
+ event.u.f_walk_eabt.s2 = (ptw_info.stage == SMMU_STAGE_2);
180
g_free(cached_entry);
181
switch (ptw_info.type) {
182
case SMMU_PTW_ERR_WALK_EABT:
183
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
184
event.u.f_walk_eabt.addr = addr;
185
event.u.f_walk_eabt.rnw = flag & 0x1;
186
/* Stage-2 (only) is class IN while stage-1 is class TT */
187
- event.u.f_walk_eabt.class = (ptw_info.stage == 2) ?
188
+ event.u.f_walk_eabt.class = (ptw_info.stage == SMMU_STAGE_2) ?
189
SMMU_CLASS_IN : SMMU_CLASS_TT;
190
event.u.f_walk_eabt.addr2 = ptw_info.addr;
191
break;
192
--
81
--
193
2.34.1
82
2.34.1
194
83
195
84
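Since the enum keeps the existing numeric values, the stages behave like bit flags, which is the "tricks with bit masks" point in the commit message above. A minimal sketch (illustrative; the enum mirrors the one added by the patch):

#include <stdbool.h>

typedef enum {
    SMMU_STAGE_1 = 1,
    SMMU_STAGE_2,        /* 2 */
    SMMU_NESTED,         /* 3 == SMMU_STAGE_1 | SMMU_STAGE_2 */
} SMMUStage;

/* True for SMMU_STAGE_2 and SMMU_NESTED alike. */
static inline bool stage_uses_s2(SMMUStage stage)
{
    return stage & SMMU_STAGE_2;
}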
1
From: Daniyal Khan <danikhan632@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We made a copy above because the fp exception flags
3
While all indices into val[] should be in [0-2], the mask
4
are not propagated back to the FPST register, but
4
applied is two bits. To help static analysis see there is
5
then failed to use the copy.
5
no possibility of read beyond the end of the array, pad the
6
array to 4 entries, with the final being (implicitly) NULL.
6
7
7
Cc: qemu-stable@nongnu.org
8
Fixes: 558e956c719 ("target/arm: Implement FMOPA, FMOPS (non-widening)")
9
Signed-off-by: Daniyal Khan <danikhan632@gmail.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 20241203203949.483774-6-richard.henderson@linaro.org
13
Message-id: 20240717060149.204788-2-richard.henderson@linaro.org
14
[rth: Split from a larger patch]
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
17
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
12
---
20
target/arm/tcg/sme_helper.c | 2 +-
13
fpu/softfloat-parts.c.inc | 2 +-
21
1 file changed, 1 insertion(+), 1 deletion(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
22
15
23
diff --git a/target/arm/tcg/sme_helper.c b/target/arm/tcg/sme_helper.c
16
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/tcg/sme_helper.c
18
--- a/fpu/softfloat-parts.c.inc
26
+++ b/target/arm/tcg/sme_helper.c
19
+++ b/fpu/softfloat-parts.c.inc
27
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
20
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
28
if (pb & 1) {
21
}
29
uint32_t *a = vza_row + H1_4(col);
22
ret = c;
30
uint32_t *m = vzm + H1_4(col);
23
} else {
31
- *a = float32_muladd(n, *m, *a, 0, vst);
24
- FloatPartsN *val[3] = { a, b, c };
32
+ *a = float32_muladd(n, *m, *a, 0, &fpst);
25
+ FloatPartsN *val[R_3NAN_1ST_MASK + 1] = { a, b, c };
33
}
26
Float3NaNPropRule rule = s->float_3nan_prop_rule;
34
col += 4;
27
35
pb >>= 4;
28
assert(rule != float_3nan_prop_none);
36
--
29
--
37
2.34.1
30
2.34.1
38
31
39
32
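The point of the FMOPA fix above is that the arithmetic must go through the local float_status copy so that any exception flags it raises stay out of the guest-visible status word. A minimal sketch of that "scratch status" pattern (illustrative, not the actual SME helper; it assumes the QEMU softfloat headers):

#include "qemu/osdep.h"
#include "fpu/softfloat.h"

/* Do a fused multiply-add without letting the raised exception flags
 * propagate back into the caller's float_status. */
static float32 fused_madd_scratch(float32 n, float32 m, float32 a,
                                  const float_status *vst)
{
    float_status fpst = *vst;                  /* private working copy */

    return float32_muladd(n, m, a, 0, &fpst);  /* flags stay in fpst */
}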
1
From: Mostafa Saleh <smostafa@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Everything is in place, consolidate parsing of STE cfg and setting
3
This function is part of the public interface and
4
translation stage.
4
is not "specialized" to any target in any way.
5
5
6
Advertise nesting if stage requested is "nested".
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
8
Message-id: 20241203203949.483774-7-richard.henderson@linaro.org
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Signed-off-by: Mostafa Saleh <smostafa@google.com>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Message-id: 20240715084519.1189624-18-smostafa@google.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
hw/arm/smmuv3.c | 35 ++++++++++++++++++++++++++---------
11
fpu/softfloat.c | 52 ++++++++++++++++++++++++++++++++++
16
1 file changed, 26 insertions(+), 9 deletions(-)
12
fpu/softfloat-specialize.c.inc | 52 ----------------------------------
13
2 files changed, 52 insertions(+), 52 deletions(-)
17
14
18
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
15
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/smmuv3.c
17
--- a/fpu/softfloat.c
21
+++ b/hw/arm/smmuv3.c
18
+++ b/fpu/softfloat.c
22
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
23
/* Based on sys property, the stages supported in smmu will be advertised.*/
20
*zExpPtr = 1 - shiftCount;
24
if (s->stage && !strcmp("2", s->stage)) {
25
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
26
+ } else if (s->stage && !strcmp("nested", s->stage)) {
27
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
28
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
29
} else {
30
s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
31
}
32
@@ -XXX,XX +XXX,XX @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
33
34
static int decode_ste_s2_cfg(SMMUTransCfg *cfg, STE *ste)
35
{
36
- cfg->stage = SMMU_STAGE_2;
37
-
38
if (STE_S2AA64(ste) == 0x0) {
39
qemu_log_mask(LOG_UNIMP,
40
"SMMUv3 AArch32 tables not supported\n");
41
@@ -XXX,XX +XXX,XX @@ bad_ste:
42
return -EINVAL;
43
}
21
}
44
22
45
+static void decode_ste_config(SMMUTransCfg *cfg, uint32_t config)
23
+/*----------------------------------------------------------------------------
24
+| Takes two extended double-precision floating-point values `a' and `b', one
25
+| of which is a NaN, and returns the appropriate NaN result. If either `a' or
26
+| `b' is a signaling NaN, the invalid exception is raised.
27
+*----------------------------------------------------------------------------*/
28
+
29
+floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
46
+{
30
+{
31
+ bool aIsLargerSignificand;
32
+ FloatClass a_cls, b_cls;
47
+
33
+
48
+ if (STE_CFG_ABORT(config)) {
34
+ /* This is not complete, but is good enough for pickNaN. */
49
+ cfg->aborted = true;
35
+ a_cls = (!floatx80_is_any_nan(a)
50
+ return;
36
+ ? float_class_normal
51
+ }
37
+ : floatx80_is_signaling_nan(a, status)
52
+ if (STE_CFG_BYPASS(config)) {
38
+ ? float_class_snan
53
+ cfg->bypassed = true;
39
+ : float_class_qnan);
54
+ return;
40
+ b_cls = (!floatx80_is_any_nan(b)
41
+ ? float_class_normal
42
+ : floatx80_is_signaling_nan(b, status)
43
+ ? float_class_snan
44
+ : float_class_qnan);
45
+
46
+ if (is_snan(a_cls) || is_snan(b_cls)) {
47
+ float_raise(float_flag_invalid, status);
55
+ }
48
+ }
56
+
49
+
57
+ if (STE_CFG_S1_ENABLED(config)) {
50
+ if (status->default_nan_mode) {
58
+ cfg->stage = SMMU_STAGE_1;
51
+ return floatx80_default_nan(status);
59
+ }
52
+ }
60
+
53
+
61
+ if (STE_CFG_S2_ENABLED(config)) {
54
+ if (a.low < b.low) {
62
+ cfg->stage |= SMMU_STAGE_2;
55
+ aIsLargerSignificand = 0;
56
+ } else if (b.low < a.low) {
57
+ aIsLargerSignificand = 1;
58
+ } else {
59
+ aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
60
+ }
61
+
62
+ if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
63
+ if (is_snan(b_cls)) {
64
+ return floatx80_silence_nan(b, status);
65
+ }
66
+ return b;
67
+ } else {
68
+ if (is_snan(a_cls)) {
69
+ return floatx80_silence_nan(a, status);
70
+ }
71
+ return a;
63
+ }
72
+ }
64
+}
73
+}
65
+
74
+
66
/* Returns < 0 in case of invalid STE, 0 otherwise */
75
/*----------------------------------------------------------------------------
67
static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
76
| Takes an abstract floating-point value having sign `zSign', exponent `zExp',
68
STE *ste, SMMUEventInfo *event)
77
| and extended significand formed by the concatenation of `zSig0' and `zSig1',
69
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
78
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
70
79
index XXXXXXX..XXXXXXX 100644
71
config = STE_CONFIG(ste);
80
--- a/fpu/softfloat-specialize.c.inc
72
81
+++ b/fpu/softfloat-specialize.c.inc
73
- if (STE_CFG_ABORT(config)) {
82
@@ -XXX,XX +XXX,XX @@ floatx80 floatx80_silence_nan(floatx80 a, float_status *status)
74
- cfg->aborted = true;
83
return a;
75
- return 0;
84
}
85
86
-/*----------------------------------------------------------------------------
87
-| Takes two extended double-precision floating-point values `a' and `b', one
88
-| of which is a NaN, and returns the appropriate NaN result. If either `a' or
89
-| `b' is a signaling NaN, the invalid exception is raised.
90
-*----------------------------------------------------------------------------*/
91
-
92
-floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
93
-{
94
- bool aIsLargerSignificand;
95
- FloatClass a_cls, b_cls;
96
-
97
- /* This is not complete, but is good enough for pickNaN. */
98
- a_cls = (!floatx80_is_any_nan(a)
99
- ? float_class_normal
100
- : floatx80_is_signaling_nan(a, status)
101
- ? float_class_snan
102
- : float_class_qnan);
103
- b_cls = (!floatx80_is_any_nan(b)
104
- ? float_class_normal
105
- : floatx80_is_signaling_nan(b, status)
106
- ? float_class_snan
107
- : float_class_qnan);
108
-
109
- if (is_snan(a_cls) || is_snan(b_cls)) {
110
- float_raise(float_flag_invalid, status);
76
- }
111
- }
77
+ decode_ste_config(cfg, config);
112
-
78
113
- if (status->default_nan_mode) {
79
- if (STE_CFG_BYPASS(config)) {
114
- return floatx80_default_nan(status);
80
- cfg->bypassed = true;
115
- }
81
+ if (cfg->aborted || cfg->bypassed) {
116
-
82
return 0;
117
- if (a.low < b.low) {
83
}
118
- aIsLargerSignificand = 0;
84
119
- } else if (b.low < a.low) {
85
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
120
- aIsLargerSignificand = 1;
86
121
- } else {
87
/* we support only those at the moment */
122
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
88
cfg->aa64 = true;
123
- }
89
- cfg->stage = SMMU_STAGE_1;
124
-
90
125
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
91
cfg->oas = oas2bits(CD_IPS(cd));
126
- if (is_snan(b_cls)) {
92
cfg->oas = MIN(oas2bits(SMMU_IDR5_OAS), cfg->oas);
127
- return floatx80_silence_nan(b, status);
128
- }
129
- return b;
130
- } else {
131
- if (is_snan(a_cls)) {
132
- return floatx80_silence_nan(a, status);
133
- }
134
- return a;
135
- }
136
-}
137
-
138
/*----------------------------------------------------------------------------
139
| Returns 1 if the quadruple-precision floating-point value `a' is a quiet
140
| NaN; otherwise returns 0.
93
--
141
--
94
2.34.1
142
2.34.1
95
96
1
From: Richard Henderson <richard.henderson@linaro.org>

Unpacking and repacking the parts may be slightly more work
than we did before, but we get to reuse more code.  For a
code path handling exceptional values, this is an improvement.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241203203949.483774-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat.c | 43 +++++--------------------------------------
 1 file changed, 5 insertions(+), 38 deletions(-)

diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c

From: Mostafa Saleh <smostafa@google.com>

ASID and VMID used to be uint16_t in the translation config; however,
in other contexts they can be int, where -1 is used on TLB invalidation
to mean "all" (don't care).
When stage-2 support was added, asid was set to -1 in stage-2 configs
and vmid to -1 in stage-1 configs. That meant they were actually stored
as 65536, which was not an issue because nesting was not supported and
no command or lookup used both.

With nesting, it is critical to get this right, as translation must be
tagged correctly with ASID/VMID, and ASID=-1 means stage-2.
Represent ASID/VMID everywhere as int.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240715084519.1189624-7-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 14 +++++++-------
 hw/arm/smmu-common.c | 10 +++++-----
 hw/arm/smmuv3.c | 4 ++--
 hw/arm/trace-events | 18 +++++++++---------
 4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
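[Editor's note: a small, self-contained sketch of why the signed int with -1 as
a "don't care" value matters; this is illustration only, not the QEMU code, and
every name below is invented.]

/*
 * Illustrative sketch: with a signed int, -1 can act as a wildcard in
 * invalidation requests.  With uint16_t, -1 silently becomes a large
 * positive value and never matches anything.
 */
#include <stdio.h>
#include <stdbool.h>

typedef struct ExampleKey {
    int asid;   /* -1 means "don't care" in an invalidation request */
    int vmid;   /* -1 means "don't care" in an invalidation request */
} ExampleKey;

/* Return true if an invalidation request (possibly with wildcards)
 * matches a cached entry. */
static bool example_inval_matches(ExampleKey req, ExampleKey entry)
{
    return (req.asid == -1 || req.asid == entry.asid) &&
           (req.vmid == -1 || req.vmid == entry.vmid);
}

int main(void)
{
    ExampleKey cached = { .asid = 5, .vmid = 2 };
    ExampleKey inval_all_asids = { .asid = -1, .vmid = 2 };

    printf("matches: %d\n", example_inval_matches(inval_all_asids, cached));
    return 0;
}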
32
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUS2Cfg {
19
@@ -XXX,XX +XXX,XX @@ void normalizeFloatx80Subnormal(uint64_t aSig, int32_t *zExpPtr,
33
bool record_faults; /* Record fault events (S2R) */
20
34
uint8_t granule_sz; /* Granule page shift (based on S2TG) */
21
floatx80 propagateFloatx80NaN(floatx80 a, floatx80 b, float_status *status)
35
uint8_t eff_ps; /* Effective PA output range (based on S2PS) */
22
{
36
- uint16_t vmid; /* Virtual Machine ID (S2VMID) */
23
- bool aIsLargerSignificand;
37
+ int vmid; /* Virtual Machine ID (S2VMID) */
24
- FloatClass a_cls, b_cls;
38
uint64_t vttb; /* Address of translation table base (S2TTB) */
25
+ FloatParts128 pa, pb, *pr;
39
} SMMUS2Cfg;
26
40
27
- /* This is not complete, but is good enough for pickNaN. */
41
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTransCfg {
28
- a_cls = (!floatx80_is_any_nan(a)
42
uint64_t ttb; /* TT base address */
29
- ? float_class_normal
43
uint8_t oas; /* output address width */
30
- : floatx80_is_signaling_nan(a, status)
44
uint8_t tbi; /* Top Byte Ignore */
31
- ? float_class_snan
45
- uint16_t asid;
32
- : float_class_qnan);
46
+ int asid;
33
- b_cls = (!floatx80_is_any_nan(b)
47
SMMUTransTableInfo tt[2];
34
- ? float_class_normal
48
/* Used by stage-2 only. */
35
- : floatx80_is_signaling_nan(b, status)
49
struct SMMUS2Cfg s2cfg;
36
- ? float_class_snan
50
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUPciBus {
37
- : float_class_qnan);
51
38
-
52
typedef struct SMMUIOTLBKey {
39
- if (is_snan(a_cls) || is_snan(b_cls)) {
53
uint64_t iova;
40
- float_raise(float_flag_invalid, status);
54
- uint16_t asid;
41
- }
55
- uint16_t vmid;
42
-
56
+ int asid;
43
- if (status->default_nan_mode) {
57
+ int vmid;
44
+ if (!floatx80_unpack_canonical(&pa, a, status) ||
58
uint8_t tg;
45
+ !floatx80_unpack_canonical(&pb, b, status)) {
59
uint8_t level;
46
return floatx80_default_nan(status);
60
} SMMUIOTLBKey;
47
}
61
@@ -XXX,XX +XXX,XX @@ SMMUDevice *smmu_find_sdev(SMMUState *s, uint32_t sid);
48
62
SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
49
- if (a.low < b.low) {
63
SMMUTransTableInfo *tt, hwaddr iova);
50
- aIsLargerSignificand = 0;
64
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
51
- } else if (b.low < a.low) {
65
-SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint16_t vmid, uint64_t iova,
52
- aIsLargerSignificand = 1;
66
+SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
53
- } else {
67
uint8_t tg, uint8_t level);
54
- aIsLargerSignificand = (a.high < b.high) ? 1 : 0;
68
void smmu_iotlb_inv_all(SMMUState *s);
55
- }
69
-void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
56
-
70
-void smmu_iotlb_inv_vmid(SMMUState *s, uint16_t vmid);
57
- if (pickNaN(a_cls, b_cls, aIsLargerSignificand, status)) {
71
+void smmu_iotlb_inv_asid(SMMUState *s, int asid);
58
- if (is_snan(b_cls)) {
72
+void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
59
- return floatx80_silence_nan(b, status);
73
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
60
- }
74
uint8_t tg, uint64_t num_pages, uint8_t ttl);
61
- return b;
75
62
- } else {
76
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
63
- if (is_snan(a_cls)) {
77
index XXXXXXX..XXXXXXX 100644
64
- return floatx80_silence_nan(a, status);
78
--- a/hw/arm/smmu-common.c
65
- }
79
+++ b/hw/arm/smmu-common.c
66
- return a;
80
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
67
- }
81
(k1->vmid == k2->vmid);
68
+ pr = parts_pick_nan(&pa, &pb, status);
69
+ return floatx80_round_pack_canonical(pr, status);
82
}
70
}
83
71
84
-SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint16_t vmid, uint64_t iova,
72
/*----------------------------------------------------------------------------
85
+SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
86
uint8_t tg, uint8_t level)
87
{
88
SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
89
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_all(SMMUState *s)
90
static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
91
gpointer user_data)
92
{
93
- uint16_t asid = *(uint16_t *)user_data;
94
+ int asid = *(int *)user_data;
95
SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
96
97
return SMMU_IOTLB_ASID(*iotlb_key) == asid;
98
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
99
static gboolean smmu_hash_remove_by_vmid(gpointer key, gpointer value,
100
gpointer user_data)
101
{
102
- uint16_t vmid = *(uint16_t *)user_data;
103
+ int vmid = *(int *)user_data;
104
SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
105
106
return SMMU_IOTLB_VMID(*iotlb_key) == vmid;
107
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
108
&info);
109
}
110
111
-void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
112
+void smmu_iotlb_inv_asid(SMMUState *s, int asid)
113
{
114
trace_smmu_iotlb_inv_asid(asid);
115
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
116
}
117
118
-void smmu_iotlb_inv_vmid(SMMUState *s, uint16_t vmid)
119
+void smmu_iotlb_inv_vmid(SMMUState *s, int vmid)
120
{
121
trace_smmu_iotlb_inv_vmid(vmid);
122
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid, &vmid);
123
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/hw/arm/smmuv3.c
126
+++ b/hw/arm/smmuv3.c
127
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
128
}
129
case SMMU_CMD_TLBI_NH_ASID:
130
{
131
- uint16_t asid = CMD_ASID(&cmd);
132
+ int asid = CMD_ASID(&cmd);
133
134
if (!STAGE1_SUPPORTED(s)) {
135
cmd_error = SMMU_CERROR_ILL;
136
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
137
break;
138
case SMMU_CMD_TLBI_S12_VMALL:
139
{
140
- uint16_t vmid = CMD_VMID(&cmd);
141
+ int vmid = CMD_VMID(&cmd);
142
143
if (!STAGE2_SUPPORTED(s)) {
144
cmd_error = SMMU_CERROR_ILL;
145
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
146
index XXXXXXX..XXXXXXX 100644
147
--- a/hw/arm/trace-events
148
+++ b/hw/arm/trace-events
149
@@ -XXX,XX +XXX,XX @@ smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint6
150
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
151
smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
152
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
153
-smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
154
-smmu_iotlb_inv_vmid(uint16_t vmid) "IOTLB invalidate vmid=%d"
155
-smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
156
+smmu_iotlb_inv_asid(int asid) "IOTLB invalidate asid=%d"
157
+smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
158
+smmu_iotlb_inv_iova(int asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
159
smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
160
-smmu_iotlb_lookup_hit(uint16_t asid, uint16_t vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
161
-smmu_iotlb_lookup_miss(uint16_t asid, uint16_t vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
162
-smmu_iotlb_insert(uint16_t asid, uint16_t vmid, uint64_t addr, uint8_t tg, uint8_t level) "IOTLB ++ asid=%d vmid=%d addr=0x%"PRIx64" tg=%d level=%d"
163
+smmu_iotlb_lookup_hit(int asid, int vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
164
+smmu_iotlb_lookup_miss(int asid, int vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
165
+smmu_iotlb_insert(int asid, int vmid, uint64_t addr, uint8_t tg, uint8_t level) "IOTLB ++ asid=%d vmid=%d addr=0x%"PRIx64" tg=%d level=%d"
166
167
# smmuv3.c
168
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
169
@@ -XXX,XX +XXX,XX @@ smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t p
170
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
171
smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
172
smmuv3_cmdq_tlbi_nh(void) ""
173
-smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
174
-smmuv3_cmdq_tlbi_s12_vmid(uint16_t vmid) "vmid=%d"
175
+smmuv3_cmdq_tlbi_nh_asid(int asid) "asid=%d"
176
+smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
177
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
178
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
179
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
180
-smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint16_t vmid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
181
+smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
182
183
# strongarm.c
184
strongarm_uart_update_parameters(const char *label, int speed, char parity, int data_bits, int stop_bits) "%s speed=%d parity=%c data=%d stop=%d"
185
--
73
--
186
2.34.1
74
2.34.1
187
188
1
From: Richard Henderson <richard.henderson@linaro.org>

Inline pickNaN into its only caller.  This makes one assert
redundant with the immediately preceding IF.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 82 +++++++++++++++++++++++++----
 fpu/softfloat-specialize.c.inc | 96 ----------------------------------
 2 files changed, 73 insertions(+), 105 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc

From: Mostafa Saleh <smostafa@google.com>

IOMMUTLBEvent only understands IOVA: for stage-1-only or stage-2-only
SMMU instances we consider the input address to be the IOVA, but when
nesting is used we can't mix stage-1 and stage-2 addresses, so for
nesting only stage-1 addresses are considered the IOVA and notified.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240715084519.1189624-16-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 39 +++++++++++++++++++++++++--------------
 hw/arm/trace-events | 2 +-
 2 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
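[Editor's note: for readers unfamiliar with the two-NaN propagation rules
mentioned above, here is a hedged, standalone sketch of how a runtime-selected
rule can pick between two operands.  The enum and helper names are invented
for illustration and are not the softfloat API.]

#include <stdio.h>
#include <stdbool.h>

typedef enum { EX_NORMAL, EX_QNAN, EX_SNAN } ExClass;
typedef enum { EX_PROP_AB, EX_PROP_BA, EX_PROP_S_AB } ExRule;

static bool ex_is_nan(ExClass c)  { return c != EX_NORMAL; }
static bool ex_is_snan(ExClass c) { return c == EX_SNAN; }

/* Return 0 to propagate operand A, 1 to propagate operand B. */
static int ex_pick_nan(ExClass a, ExClass b, ExRule rule)
{
    switch (rule) {
    case EX_PROP_AB:
        return ex_is_nan(a) ? 0 : 1;
    case EX_PROP_BA:
        return ex_is_nan(b) ? 1 : 0;
    case EX_PROP_S_AB:
        /* Prefer a signaling NaN, A first; otherwise fall back to A-first. */
        if (ex_is_snan(a)) {
            return 0;
        }
        if (ex_is_snan(b)) {
            return 1;
        }
        return ex_is_nan(a) ? 0 : 1;
    }
    return 0;
}

int main(void)
{
    printf("%d\n", ex_pick_nan(EX_QNAN, EX_SNAN, EX_PROP_S_AB)); /* -> 1 */
    return 0;
}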
23
@@ -XXX,XX +XXX,XX @@ epilogue:
19
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
24
* @iova: iova
20
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
25
* @tg: translation granule (if communicated through range invalidation)
21
float_status *s)
26
* @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
27
+ * @stage: Which stage(1 or 2) is used
28
*/
29
static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
30
IOMMUNotifier *n,
31
int asid, int vmid,
32
dma_addr_t iova, uint8_t tg,
33
- uint64_t num_pages)
34
+ uint64_t num_pages, int stage)
35
{
22
{
36
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
23
+ int cmp, which;
37
+ SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
38
+ SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
39
IOMMUTLBEvent event;
40
uint8_t granule;
41
- SMMUv3State *s = sdev->smmu;
42
+
24
+
43
+ if (!cfg) {
25
if (is_snan(a->cls) || is_snan(b->cls)) {
44
+ return;
26
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
27
}
28
29
if (s->default_nan_mode) {
30
parts_default_nan(a, s);
31
- } else {
32
- int cmp = frac_cmp(a, b);
33
- if (cmp == 0) {
34
- cmp = a->sign < b->sign;
35
- }
36
+ return a;
37
+ }
38
39
- if (pickNaN(a->cls, b->cls, cmp > 0, s)) {
40
- a = b;
41
- }
42
+ cmp = frac_cmp(a, b);
43
+ if (cmp == 0) {
44
+ cmp = a->sign < b->sign;
45
+ }
45
+ }
46
+
46
+
47
+ /*
47
+ switch (s->float_2nan_prop_rule) {
48
+ * stage is passed from TLB invalidation commands which can be either
48
+ case float_2nan_prop_s_ab:
49
+ * stage-1 or stage-2.
49
if (is_snan(a->cls)) {
50
+ * However, IOMMUTLBEvent only understands IOVA, for stage-1 or stage-2
50
- parts_silence_nan(a, s);
51
+ * SMMU instances we consider the input address as the IOVA, but when
51
+ which = 0;
52
+ * nesting is used, we can't mix stage-1 and stage-2 addresses, so for
52
+ } else if (is_snan(b->cls)) {
53
+ * nesting only stage-1 is considered the IOVA and would be notified.
53
+ which = 1;
54
+ */
54
+ } else if (is_qnan(a->cls)) {
55
+ if ((stage == SMMU_STAGE_2) && (cfg->stage == SMMU_NESTED))
55
+ which = 0;
56
+ return;
56
+ } else {
57
57
+ which = 1;
58
if (!tg) {
59
- SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
60
- SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
61
SMMUTransTableInfo *tt;
62
63
- if (!cfg) {
64
- return;
65
- }
66
-
67
if (asid >= 0 && cfg->asid != asid) {
68
return;
69
}
58
}
70
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
59
+ break;
71
return;
60
+ case float_2nan_prop_s_ba:
72
}
61
+ if (is_snan(b->cls)) {
73
62
+ which = 1;
74
- if (STAGE1_SUPPORTED(s)) {
63
+ } else if (is_snan(a->cls)) {
75
+ if (stage == SMMU_STAGE_1) {
64
+ which = 0;
76
tt = select_tt(cfg, iova);
65
+ } else if (is_qnan(b->cls)) {
77
if (!tt) {
66
+ which = 1;
78
return;
67
+ } else {
79
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
68
+ which = 0;
80
/* invalidate an asid/vmid/iova range tuple in all mr's */
69
+ }
81
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
70
+ break;
82
dma_addr_t iova, uint8_t tg,
71
+ case float_2nan_prop_ab:
83
- uint64_t num_pages)
72
+ which = is_nan(a->cls) ? 0 : 1;
84
+ uint64_t num_pages, int stage)
73
+ break;
85
{
74
+ case float_2nan_prop_ba:
86
SMMUDevice *sdev;
75
+ which = is_nan(b->cls) ? 1 : 0;
87
76
+ break;
88
@@ -XXX,XX +XXX,XX @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
77
+ case float_2nan_prop_x87:
89
IOMMUNotifier *n;
78
+ /*
90
79
+ * This implements x87 NaN propagation rules:
91
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, vmid,
80
+ * SNaN + QNaN => return the QNaN
92
- iova, tg, num_pages);
81
+ * two SNaNs => return the one with the larger significand, silenced
93
+ iova, tg, num_pages, stage);
82
+ * two QNaNs => return the one with the larger significand
94
83
+ * SNaN and a non-NaN => return the SNaN, silenced
95
IOMMU_NOTIFIER_FOREACH(n, mr) {
84
+ * QNaN and a non-NaN => return the QNaN
96
- smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages);
85
+ *
97
+ smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages, stage);
86
+ * If we get down to comparing significands and they are the same,
98
}
87
+ * return the NaN with the positive sign bit (if any).
88
+ */
89
+ if (is_snan(a->cls)) {
90
+ if (is_snan(b->cls)) {
91
+ which = cmp > 0 ? 0 : 1;
92
+ } else {
93
+ which = is_qnan(b->cls) ? 1 : 0;
94
+ }
95
+ } else if (is_qnan(a->cls)) {
96
+ if (is_snan(b->cls) || !is_qnan(b->cls)) {
97
+ which = 0;
98
+ } else {
99
+ which = cmp > 0 ? 0 : 1;
100
+ }
101
+ } else {
102
+ which = 1;
103
+ }
104
+ break;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
108
+
109
+ if (which) {
110
+ a = b;
111
+ }
112
+ if (is_snan(a->cls)) {
113
+ parts_silence_nan(a, s);
114
}
115
return a;
116
}
117
diff --git a/fpu/softfloat-specialize.c.inc b/fpu/softfloat-specialize.c.inc
118
index XXXXXXX..XXXXXXX 100644
119
--- a/fpu/softfloat-specialize.c.inc
120
+++ b/fpu/softfloat-specialize.c.inc
121
@@ -XXX,XX +XXX,XX @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
99
}
122
}
100
}
123
}
101
@@ -XXX,XX +XXX,XX @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
124
102
125
-/*----------------------------------------------------------------------------
103
if (!tg) {
126
-| Select which NaN to propagate for a two-input operation.
104
trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf, stage);
127
-| IEEE754 doesn't specify all the details of this, so the
105
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1);
128
-| algorithm is target-specific.
106
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage);
129
-| The routine is passed various bits of information about the
107
if (stage == SMMU_STAGE_1) {
130
-| two NaNs and should return 0 to select NaN a and 1 for NaN b.
108
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
131
-| Note that signalling NaNs are always squashed to quiet NaNs
109
} else {
132
-| by the caller, by calling floatXX_silence_nan() before
110
@@ -XXX,XX +XXX,XX @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
133
-| returning them.
111
num_pages = (mask + 1) >> granule;
134
-|
112
trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages,
135
-| aIsLargerSignificand is only valid if both a and b are NaNs
113
ttl, leaf, stage);
136
-| of some kind, and is true if a has the larger significand,
114
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages);
137
-| or if both a and b have the same significand but a is
115
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages, stage);
138
-| positive but b is negative. It is only needed for the x87
116
if (stage == SMMU_STAGE_1) {
139
-| tie-break rule.
117
smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
140
-*----------------------------------------------------------------------------*/
118
} else {
141
-
119
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
142
-static int pickNaN(FloatClass a_cls, FloatClass b_cls,
120
index XXXXXXX..XXXXXXX 100644
143
- bool aIsLargerSignificand, float_status *status)
121
--- a/hw/arm/trace-events
144
-{
122
+++ b/hw/arm/trace-events
145
- /*
123
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
146
- * We guarantee not to require the target to tell us how to
124
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
147
- * pick a NaN if we're always returning the default NaN.
125
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
148
- * But if we're not in default-NaN mode then the target must
126
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
149
- * specify via set_float_2nan_prop_rule().
127
-smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
150
- */
128
+smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
151
- assert(!status->default_nan_mode);
129
152
-
130
# strongarm.c
153
- switch (status->float_2nan_prop_rule) {
131
strongarm_uart_update_parameters(const char *label, int speed, char parity, int data_bits, int stop_bits) "%s speed=%d parity=%c data=%d stop=%d"
154
- case float_2nan_prop_s_ab:
155
- if (is_snan(a_cls)) {
156
- return 0;
157
- } else if (is_snan(b_cls)) {
158
- return 1;
159
- } else if (is_qnan(a_cls)) {
160
- return 0;
161
- } else {
162
- return 1;
163
- }
164
- break;
165
- case float_2nan_prop_s_ba:
166
- if (is_snan(b_cls)) {
167
- return 1;
168
- } else if (is_snan(a_cls)) {
169
- return 0;
170
- } else if (is_qnan(b_cls)) {
171
- return 1;
172
- } else {
173
- return 0;
174
- }
175
- break;
176
- case float_2nan_prop_ab:
177
- if (is_nan(a_cls)) {
178
- return 0;
179
- } else {
180
- return 1;
181
- }
182
- break;
183
- case float_2nan_prop_ba:
184
- if (is_nan(b_cls)) {
185
- return 1;
186
- } else {
187
- return 0;
188
- }
189
- break;
190
- case float_2nan_prop_x87:
191
- /*
192
- * This implements x87 NaN propagation rules:
193
- * SNaN + QNaN => return the QNaN
194
- * two SNaNs => return the one with the larger significand, silenced
195
- * two QNaNs => return the one with the larger significand
196
- * SNaN and a non-NaN => return the SNaN, silenced
197
- * QNaN and a non-NaN => return the QNaN
198
- *
199
- * If we get down to comparing significands and they are the same,
200
- * return the NaN with the positive sign bit (if any).
201
- */
202
- if (is_snan(a_cls)) {
203
- if (is_snan(b_cls)) {
204
- return aIsLargerSignificand ? 0 : 1;
205
- }
206
- return is_qnan(b_cls) ? 1 : 0;
207
- } else if (is_qnan(a_cls)) {
208
- if (is_snan(b_cls) || !is_qnan(b_cls)) {
209
- return 0;
210
- } else {
211
- return aIsLargerSignificand ? 0 : 1;
212
- }
213
- } else {
214
- return 1;
215
- }
216
- default:
217
- g_assert_not_reached();
218
- }
219
-}
220
-
221
/*----------------------------------------------------------------------------
222
| Returns 1 if the double-precision floating-point value `a' is a quiet
223
| NaN; otherwise returns 0.
132
--
224
--
133
2.34.1
225
2.34.1
134
226
135
227
1
From: Richard Henderson <richard.henderson@linaro.org>

Remember if there was an SNaN, and use that to simplify
float_2nan_prop_s_{ab,ba} to only the snan component.
Then, fall through to the corresponding
float_2nan_prop_{ab,ba} case to handle any remaining
nans, which must be quiet.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc

From: Mostafa Saleh <smostafa@google.com>

Previously, to check whether faults are enabled, it was sufficient to
look at the current stage of translation and check the corresponding
record_faults flag.

However, with nesting it is possible for a stage-1 (nested) translation
to trigger a stage-2 fault, so we check SMMUPTWEventInfo instead, as it
has the correct stage set from the page table walk.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240715084519.1189624-17-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
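[Editor's note: a minimal sketch of the idea in the commit message above, with
invented type and field names: the decision to record a fault is taken from
the stage reported by the walker, not from the configured translation stage.]

#include <stdio.h>
#include <stdbool.h>

typedef enum { EX_STAGE_1 = 1, EX_STAGE_2 = 2 } ExStage;

typedef struct {
    bool s1_record_faults;  /* from the stage-1 context descriptor */
    bool s2_record_faults;  /* from the stage-2 configuration */
} ExCfg;

typedef struct {
    ExStage stage;          /* stage that actually faulted, set by the walker */
} ExWalkInfo;

/* With nesting, a stage-1 walk can fault at stage 2, so look at the
 * faulting stage reported by the walker. */
static bool ex_record_fault(const ExWalkInfo *info, const ExCfg *cfg)
{
    return (info->stage == EX_STAGE_1 && cfg->s1_record_faults) ||
           (info->stage == EX_STAGE_2 && cfg->s2_record_faults);
}

int main(void)
{
    ExCfg cfg = { .s1_record_faults = true, .s2_record_faults = false };
    ExWalkInfo info = { .stage = EX_STAGE_2 };
    printf("record: %d\n", ex_record_fault(&info, &cfg));  /* -> 0 */
    return 0;
}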
25
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static void partsN(return_nan)(FloatPartsN *a, float_status *s)
26
#include "smmuv3-internal.h"
22
static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
27
#include "smmu-internal.h"
23
float_status *s)
28
24
{
29
-#define PTW_RECORD_FAULT(cfg) (((cfg)->stage == SMMU_STAGE_1) ? \
25
+ bool have_snan = false;
30
- (cfg)->record_faults : \
26
int cmp, which;
31
- (cfg)->s2cfg.record_faults)
27
32
+#define PTW_RECORD_FAULT(ptw_info, cfg) (((ptw_info).stage == SMMU_STAGE_1 && \
28
if (is_snan(a->cls) || is_snan(b->cls)) {
33
+ (cfg)->record_faults) || \
29
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
34
+ ((ptw_info).stage == SMMU_STAGE_2 && \
30
+ have_snan = true;
35
+ (cfg)->s2cfg.record_faults))
31
}
36
32
37
/**
33
if (s->default_nan_mode) {
38
* smmuv3_trigger_irq - pulse @irq if enabled and update
34
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
39
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
35
40
event->u.f_walk_eabt.addr2 = ptw_info.addr;
36
switch (s->float_2nan_prop_rule) {
41
break;
37
case float_2nan_prop_s_ab:
42
case SMMU_PTW_ERR_TRANSLATION:
38
- if (is_snan(a->cls)) {
43
- if (PTW_RECORD_FAULT(cfg)) {
39
- which = 0;
44
+ if (PTW_RECORD_FAULT(ptw_info, cfg)) {
40
- } else if (is_snan(b->cls)) {
45
event->type = SMMU_EVT_F_TRANSLATION;
41
- which = 1;
46
event->u.f_translation.addr2 = ptw_info.addr;
42
- } else if (is_qnan(a->cls)) {
47
event->u.f_translation.class = class;
43
- which = 0;
48
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
44
- } else {
49
}
45
- which = 1;
50
break;
46
+ if (have_snan) {
51
case SMMU_PTW_ERR_ADDR_SIZE:
47
+ which = is_snan(a->cls) ? 0 : 1;
52
- if (PTW_RECORD_FAULT(cfg)) {
48
+ break;
53
+ if (PTW_RECORD_FAULT(ptw_info, cfg)) {
49
}
54
event->type = SMMU_EVT_F_ADDR_SIZE;
50
- break;
55
event->u.f_addr_size.addr2 = ptw_info.addr;
51
- case float_2nan_prop_s_ba:
56
event->u.f_addr_size.class = class;
52
- if (is_snan(b->cls)) {
57
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
53
- which = 1;
58
}
54
- } else if (is_snan(a->cls)) {
59
break;
55
- which = 0;
60
case SMMU_PTW_ERR_ACCESS:
56
- } else if (is_qnan(b->cls)) {
61
- if (PTW_RECORD_FAULT(cfg)) {
57
- which = 1;
62
+ if (PTW_RECORD_FAULT(ptw_info, cfg)) {
58
- } else {
63
event->type = SMMU_EVT_F_ACCESS;
59
- which = 0;
64
event->u.f_access.addr2 = ptw_info.addr;
60
- }
65
event->u.f_access.class = class;
61
- break;
66
@@ -XXX,XX +XXX,XX @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
62
+ /* fall through */
67
}
63
case float_2nan_prop_ab:
68
break;
64
which = is_nan(a->cls) ? 0 : 1;
69
case SMMU_PTW_ERR_PERMISSION:
65
break;
70
- if (PTW_RECORD_FAULT(cfg)) {
66
+ case float_2nan_prop_s_ba:
71
+ if (PTW_RECORD_FAULT(ptw_info, cfg)) {
67
+ if (have_snan) {
72
event->type = SMMU_EVT_F_PERMISSION;
68
+ which = is_snan(b->cls) ? 1 : 0;
73
event->u.f_permission.addr2 = ptw_info.addr;
69
+ break;
74
event->u.f_permission.class = class;
70
+ }
71
+ /* fall through */
72
case float_2nan_prop_ba:
73
which = is_nan(b->cls) ? 1 : 0;
74
break;
75
--
75
--
76
2.34.1
76
2.34.1
77
78
1
From: Richard Henderson <richard.henderson@linaro.org>

Move the fractional comparison to the end of the
float_2nan_prop_x87 case.  This is not required for
any other 2nan propagation rule.  Reorganize the
x87 case itself to break out of the switch when the
fractional comparison is not required.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20241203203949.483774-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc

From: Mostafa Saleh <smostafa@google.com>

For the following events (ARM IHI 0070 F.b - 7.3 Event records):
 - F_TRANSLATION
 - F_ACCESS
 - F_PERMISSION
 - F_ADDR_SIZE

If fault occurs at stage 2, S2 == 1 and:
 - If translating an IPA for a transaction (whether by input to a
   stage 2-only configuration, or after successful stage 1 translation),
   CLASS == IN, and IPA is provided.

At the moment only CLASS == IN is used, which indicates input
translation.

However, this was not implemented correctly, as for stage 2 the code
only sets the S2 bit but not the IPA.

This field has the same bits as FetchAddr in F_WALK_EABT, which is
populated correctly, so we don't change that.
The setting of this field should be done from the walker, as the IPA
address wouldn't be known in case of nesting.

For stage 1, the spec says:
 If fault occurs at stage 1, S2 == 0 and:
 CLASS == IN, IPA is UNKNOWN.

So there is no need to set it for stage 1, as ptw_info is initialised
to zero in smmuv3_translate().

Fixes: e703f7076a "hw/arm/smmuv3: Add page table walk for stage-2"
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Message-id: 20240715084519.1189624-3-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 10 ++++++----
 hw/arm/smmuv3.c | 4 ++++
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
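[Editor's note: as an aside on the x87 rule discussed above, here is a small
illustrative sketch of the larger-significand tie-break with a preference for
the positive-signed NaN.  The struct is a stand-in, not the softfloat
FloatParts layout, and the helper name is invented.]

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t frac;  /* NaN payload / significand bits */
    bool sign;      /* true if negative */
} ExNaN;

/* Return 0 to keep A, 1 to keep B, when both operands are NaNs of the
 * same kind. */
static int ex_x87_tiebreak(ExNaN a, ExNaN b)
{
    if (a.frac != b.frac) {
        return a.frac > b.frac ? 0 : 1;   /* keep the larger significand */
    }
    if (a.sign != b.sign) {
        return a.sign ? 1 : 0;            /* equal: prefer the positive NaN */
    }
    return 1;                             /* same sign either way: keep B */
}

int main(void)
{
    ExNaN a = { .frac = 0x10, .sign = true };
    ExNaN b = { .frac = 0x10, .sign = false };
    printf("keep: %c\n", ex_x87_tiebreak(a, b) ? 'b' : 'a');  /* -> b */
    return 0;
}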
47
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
21
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
48
*/
22
return a;
49
if (ipa >= (1ULL << inputsize)) {
50
info->type = SMMU_PTW_ERR_TRANSLATION;
51
- goto error;
52
+ goto error_ipa;
53
}
23
}
54
24
55
while (level < VMSA_LEVELS) {
25
- cmp = frac_cmp(a, b);
56
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
26
- if (cmp == 0) {
27
- cmp = a->sign < b->sign;
28
- }
29
-
30
switch (s->float_2nan_prop_rule) {
31
case float_2nan_prop_s_ab:
32
if (have_snan) {
33
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
34
* return the NaN with the positive sign bit (if any).
57
*/
35
*/
58
if (!PTE_AF(pte) && !cfg->s2cfg.affd) {
36
if (is_snan(a->cls)) {
59
info->type = SMMU_PTW_ERR_ACCESS;
37
- if (is_snan(b->cls)) {
60
- goto error;
38
- which = cmp > 0 ? 0 : 1;
61
+ goto error_ipa;
39
- } else {
40
+ if (!is_snan(b->cls)) {
41
which = is_qnan(b->cls) ? 1 : 0;
42
+ break;
43
}
44
} else if (is_qnan(a->cls)) {
45
if (is_snan(b->cls) || !is_qnan(b->cls)) {
46
which = 0;
47
- } else {
48
- which = cmp > 0 ? 0 : 1;
49
+ break;
50
}
51
} else {
52
which = 1;
53
+ break;
62
}
54
}
63
55
+ cmp = frac_cmp(a, b);
64
s2ap = PTE_AP(pte);
56
+ if (cmp == 0) {
65
if (is_permission_fault_s2(s2ap, perm)) {
57
+ cmp = a->sign < b->sign;
66
info->type = SMMU_PTW_ERR_PERMISSION;
58
+ }
67
- goto error;
59
+ which = cmp > 0 ? 0 : 1;
68
+ goto error_ipa;
60
break;
69
}
61
default:
70
62
g_assert_not_reached();
71
/*
72
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
73
*/
74
if (gpa >= (1ULL << cfg->s2cfg.eff_ps)) {
75
info->type = SMMU_PTW_ERR_ADDR_SIZE;
76
- goto error;
77
+ goto error_ipa;
78
}
79
80
tlbe->entry.translated_addr = gpa;
81
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
82
}
83
info->type = SMMU_PTW_ERR_TRANSLATION;
84
85
+error_ipa:
86
+ info->addr = ipa;
87
error:
88
info->stage = 2;
89
tlbe->entry.perm = IOMMU_NONE;
90
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/hw/arm/smmuv3.c
93
+++ b/hw/arm/smmuv3.c
94
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
95
if (PTW_RECORD_FAULT(cfg)) {
96
event.type = SMMU_EVT_F_TRANSLATION;
97
event.u.f_translation.addr = addr;
98
+ event.u.f_translation.addr2 = ptw_info.addr;
99
event.u.f_translation.rnw = flag & 0x1;
100
}
101
break;
102
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
103
if (PTW_RECORD_FAULT(cfg)) {
104
event.type = SMMU_EVT_F_ADDR_SIZE;
105
event.u.f_addr_size.addr = addr;
106
+ event.u.f_addr_size.addr2 = ptw_info.addr;
107
event.u.f_addr_size.rnw = flag & 0x1;
108
}
109
break;
110
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
111
if (PTW_RECORD_FAULT(cfg)) {
112
event.type = SMMU_EVT_F_ACCESS;
113
event.u.f_access.addr = addr;
114
+ event.u.f_access.addr2 = ptw_info.addr;
115
event.u.f_access.rnw = flag & 0x1;
116
}
117
break;
118
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
119
if (PTW_RECORD_FAULT(cfg)) {
120
event.type = SMMU_EVT_F_PERMISSION;
121
event.u.f_permission.addr = addr;
122
+ event.u.f_permission.addr2 = ptw_info.addr;
123
event.u.f_permission.rnw = flag & 0x1;
124
}
125
break;
126
--
63
--
127
2.34.1
64
2.34.1
128
129
1
From: Richard Henderson <richard.henderson@linaro.org>

Replace the "index" selecting between A and B with a result variable
of the proper type.  This improves clarity within the function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241203203949.483774-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 fpu/softfloat-parts.c.inc | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/fpu/softfloat-parts.c.inc
+++ b/fpu/softfloat-parts.c.inc

From: Mostafa Saleh <smostafa@google.com>

Some commands need rework for nesting, as they used to assume S1
and S2 are mutually exclusive:

 - CMD_TLBI_NH_ASID: Consider VMID if stage-2 is supported
 - CMD_TLBI_NH_ALL: Consider VMID if stage-2 is supported, otherwise
   invalidate everything; this required a new VMID invalidation
   function for stage-1 only (ASID >= 0)

Also, rework trace events to reflect the new implementation.

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240715084519.1189624-15-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 1 +
 hw/arm/smmu-common.c | 16 ++++++++++++++++
 hw/arm/smmuv3.c | 28 ++++++++++++++++++++++++--
 hw/arm/trace-events | 4 +++-
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
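[Editor's note: a minimal sketch of the VMID selection described above,
assuming a hypothetical stage2_supported flag; illustration only, not the
QEMU implementation.]

#include <stdio.h>
#include <stdbool.h>

/* Match on the VMID carried by the command only when stage-2 is
 * supported; otherwise fall back to -1, the value used to tag
 * stage-1 only entries. */
static int ex_tlbi_vmid(bool stage2_supported, int cmd_vmid)
{
    return stage2_supported ? cmd_vmid : -1;
}

int main(void)
{
    printf("%d\n", ex_tlbi_vmid(true, 3));   /* -> 3  */
    printf("%d\n", ex_tlbi_vmid(false, 3));  /* -> -1 */
    return 0;
}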
30
@@ -XXX,XX +XXX,XX @@ SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
18
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
31
void smmu_iotlb_inv_all(SMMUState *s);
19
float_status *s)
32
void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
33
void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
34
+void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
35
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
36
uint8_t tg, uint64_t num_pages, uint8_t ttl);
37
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
38
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/arm/smmu-common.c
41
+++ b/hw/arm/smmu-common.c
42
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_vmid(gpointer key, gpointer value,
43
return SMMU_IOTLB_VMID(*iotlb_key) == vmid;
44
}
45
46
+static gboolean smmu_hash_remove_by_vmid_s1(gpointer key, gpointer value,
47
+ gpointer user_data)
48
+{
49
+ int vmid = *(int *)user_data;
50
+ SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
51
+
52
+ return (SMMU_IOTLB_VMID(*iotlb_key) == vmid) &&
53
+ (SMMU_IOTLB_ASID(*iotlb_key) >= 0);
54
+}
55
+
56
static gboolean smmu_hash_remove_by_asid_vmid_iova(gpointer key, gpointer value,
57
gpointer user_data)
58
{
20
{
59
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_vmid(SMMUState *s, int vmid)
21
bool have_snan = false;
60
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid, &vmid);
22
- int cmp, which;
61
}
23
+ FloatPartsN *ret;
62
24
+ int cmp;
63
+inline void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
25
64
+{
26
if (is_snan(a->cls) || is_snan(b->cls)) {
65
+ trace_smmu_iotlb_inv_vmid_s1(vmid);
27
float_raise(float_flag_invalid | float_flag_invalid_snan, s);
66
+ g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid_s1, &vmid);
28
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
67
+}
29
switch (s->float_2nan_prop_rule) {
68
+
30
case float_2nan_prop_s_ab:
69
/* VMSAv8-64 Translation */
31
if (have_snan) {
70
32
- which = is_snan(a->cls) ? 0 : 1;
71
/**
33
+ ret = is_snan(a->cls) ? a : b;
72
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
34
break;
73
index XXXXXXX..XXXXXXX 100644
35
}
74
--- a/hw/arm/smmuv3.c
36
/* fall through */
75
+++ b/hw/arm/smmuv3.c
37
case float_2nan_prop_ab:
76
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
38
- which = is_nan(a->cls) ? 0 : 1;
77
case SMMU_CMD_TLBI_NH_ASID:
39
+ ret = is_nan(a->cls) ? a : b;
78
{
40
break;
79
int asid = CMD_ASID(&cmd);
41
case float_2nan_prop_s_ba:
80
+ int vmid = -1;
42
if (have_snan) {
81
43
- which = is_snan(b->cls) ? 1 : 0;
82
if (!STAGE1_SUPPORTED(s)) {
44
+ ret = is_snan(b->cls) ? b : a;
83
cmd_error = SMMU_CERROR_ILL;
45
break;
46
}
47
/* fall through */
48
case float_2nan_prop_ba:
49
- which = is_nan(b->cls) ? 1 : 0;
50
+ ret = is_nan(b->cls) ? b : a;
51
break;
52
case float_2nan_prop_x87:
53
/*
54
@@ -XXX,XX +XXX,XX @@ static FloatPartsN *partsN(pick_nan)(FloatPartsN *a, FloatPartsN *b,
55
*/
56
if (is_snan(a->cls)) {
57
if (!is_snan(b->cls)) {
58
- which = is_qnan(b->cls) ? 1 : 0;
59
+ ret = is_qnan(b->cls) ? b : a;
84
break;
60
break;
85
}
61
}
86
62
} else if (is_qnan(a->cls)) {
87
+ /*
63
if (is_snan(b->cls) || !is_qnan(b->cls)) {
88
+ * VMID is only matched when stage 2 is supported, otherwise set it
64
- which = 0;
89
+ * to -1 as the value used for stage-1 only VMIDs.
65
+ ret = a;
90
+ */
66
break;
91
+ if (STAGE2_SUPPORTED(s)) {
67
}
92
+ vmid = CMD_VMID(&cmd);
68
} else {
93
+ }
69
- which = 1;
94
+
70
+ ret = b;
95
trace_smmuv3_cmdq_tlbi_nh_asid(asid);
96
smmu_inv_notifiers_all(&s->smmu_state);
97
- smmu_iotlb_inv_asid_vmid(bs, asid, -1);
98
+ smmu_iotlb_inv_asid_vmid(bs, asid, vmid);
99
break;
71
break;
100
}
72
}
101
case SMMU_CMD_TLBI_NH_ALL:
73
cmp = frac_cmp(a, b);
102
+ {
74
if (cmp == 0) {
103
+ int vmid = -1;
75
cmp = a->sign < b->sign;
104
+
76
}
105
if (!STAGE1_SUPPORTED(s)) {
77
- which = cmp > 0 ? 0 : 1;
106
cmd_error = SMMU_CERROR_ILL;
78
+ ret = cmp > 0 ? a : b;
107
break;
79
break;
108
}
80
default:
109
+
81
g_assert_not_reached();
110
+ /*
82
}
111
+ * If stage-2 is supported, invalidate for this VMID only, otherwise
83
112
+ * invalidate the whole thing.
84
- if (which) {
113
+ */
85
- a = b;
114
+ if (STAGE2_SUPPORTED(s)) {
86
+ if (is_snan(ret->cls)) {
115
+ vmid = CMD_VMID(&cmd);
87
+ parts_silence_nan(ret, s);
116
+ trace_smmuv3_cmdq_tlbi_nh(vmid);
88
}
117
+ smmu_iotlb_inv_vmid_s1(bs, vmid);
89
- if (is_snan(a->cls)) {
118
+ break;
90
- parts_silence_nan(a, s);
119
+ }
91
- }
120
QEMU_FALLTHROUGH;
92
- return a;
121
+ }
93
+ return ret;
122
case SMMU_CMD_TLBI_NSNH_ALL:
94
}
123
- trace_smmuv3_cmdq_tlbi_nh();
95
124
+ trace_smmuv3_cmdq_tlbi_nsnh();
96
static FloatPartsN *partsN(pick_nan_muladd)(FloatPartsN *a, FloatPartsN *b,
125
smmu_inv_notifiers_all(&s->smmu_state);
126
smmu_iotlb_inv_all(bs);
127
break;
128
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
129
index XXXXXXX..XXXXXXX 100644
130
--- a/hw/arm/trace-events
131
+++ b/hw/arm/trace-events
132
@@ -XXX,XX +XXX,XX @@ smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "base
133
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
134
smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
135
smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
136
+smmu_iotlb_inv_vmid_s1(int vmid) "IOTLB invalidate vmid=%d"
137
smmu_iotlb_inv_iova(int asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
138
smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
139
smmu_iotlb_lookup_hit(int asid, int vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
140
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
141
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
142
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
143
smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
144
-smmuv3_cmdq_tlbi_nh(void) ""
145
+smmuv3_cmdq_tlbi_nh(int vmid) "vmid=%d"
146
+smmuv3_cmdq_tlbi_nsnh(void) ""
147
smmuv3_cmdq_tlbi_nh_asid(int asid) "asid=%d"
148
smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
149
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
150
--
97
--
151
2.34.1
98
2.34.1
152
99
153
100
1
From: Leif Lindholm <quic_llindhol@quicinc.com>

I'm migrating to Qualcomm's new open source email infrastructure, so
update my email address, and update the mailmap to match.

Signed-off-by: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241205114047.1125842-1-leif.lindholm@oss.qualcomm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 +-
 .mailmap | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS

From: Mostafa Saleh <smostafa@google.com>

The SMMUv3 spec (ARM IHI 0070 F.b - 7.3 Event records) defines the
class of event faults as:

 CLASS: The class of the operation that caused the fault:
 - 0b00: CD, CD fetch.
 - 0b01: TTD, Stage 1 translation table fetch.
 - 0b10: IN, Input address

However, this value was not set and was left as 0, which means CD and
not IN (0b10).

Another problem was that the stage-2 class is considered IN, not TT,
for EABT, according to the spec:
 Translation of an IPA after successful stage 1 translation (or,
 in stage 2-only configuration, an input IPA)
 - S2 == 1 (stage 2), CLASS == IN (Input to stage)

This would change soon when nested translations are supported.

While at it, add an enum for class as it would be used for nesting.
However, at the moment stage-1 and stage-2 use the same class values,
except for EABT.

Fixes: 9bde7f0674 "hw/arm/smmuv3: Implement translate callback"
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20240715084519.1189624-4-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 6 ++++++
 hw/arm/smmuv3.c | 8 +++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
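[Editor's note: to make the CLASS selection above concrete, here is a small
sketch with invented names; the enum mirrors the three classes quoted from
the spec, and the helper is not the QEMU code.]

#include <stdio.h>

typedef enum {
    EX_CLASS_CD,   /* 0b00: CD fetch */
    EX_CLASS_TT,   /* 0b01: stage-1 translation table fetch */
    EX_CLASS_IN,   /* 0b10: input address */
} ExClass;

/* For an external-abort style event, a stage-2 walk is reported as
 * class IN, while a stage-1 walk is reported as class TT. */
static ExClass ex_eabt_class(int faulting_stage)
{
    return faulting_stage == 2 ? EX_CLASS_IN : EX_CLASS_TT;
}

int main(void)
{
    printf("%d %d\n", ex_eabt_class(1), ex_eabt_class(2));  /* -> 1 2 */
    return 0;
}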
41
@@ -XXX,XX +XXX,XX @@ typedef enum SMMUTranslationStatus {
22
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
42
SMMU_TRANS_SUCCESS,
23
SBSA-REF
43
} SMMUTranslationStatus;
24
M: Radoslaw Biernacki <rad@semihalf.com>
44
25
M: Peter Maydell <peter.maydell@linaro.org>
45
+typedef enum SMMUTranslationClass {
26
-R: Leif Lindholm <quic_llindhol@quicinc.com>
46
+ SMMU_CLASS_CD,
27
+R: Leif Lindholm <leif.lindholm@oss.qualcomm.com>
47
+ SMMU_CLASS_TT,
28
R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
48
+ SMMU_CLASS_IN,
29
L: qemu-arm@nongnu.org
49
+} SMMUTranslationClass;
30
S: Maintained
50
+
31
diff --git a/.mailmap b/.mailmap
51
/* MMIO Registers */
52
53
REG32(IDR0, 0x0)
54
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
55
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/arm/smmuv3.c
33
--- a/.mailmap
57
+++ b/hw/arm/smmuv3.c
34
+++ b/.mailmap
58
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
35
@@ -XXX,XX +XXX,XX @@ Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
59
event.type = SMMU_EVT_F_WALK_EABT;
36
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
60
event.u.f_walk_eabt.addr = addr;
37
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
61
event.u.f_walk_eabt.rnw = flag & 0x1;
38
Juan Quintela <quintela@trasno.org> <quintela@redhat.com>
62
- event.u.f_walk_eabt.class = 0x1;
39
-Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
63
+ /* Stage-2 (only) is class IN while stage-1 is class TT */
40
-Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
64
+ event.u.f_walk_eabt.class = (ptw_info.stage == 2) ?
41
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <quic_llindhol@quicinc.com>
65
+ SMMU_CLASS_IN : SMMU_CLASS_TT;
42
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif.lindholm@linaro.org>
66
event.u.f_walk_eabt.addr2 = ptw_info.addr;
43
+Leif Lindholm <leif.lindholm@oss.qualcomm.com> <leif@nuviainc.com>
67
break;
44
Luc Michel <luc@lmichel.fr> <luc.michel@git.antfield.fr>
68
case SMMU_PTW_ERR_TRANSLATION:
45
Luc Michel <luc@lmichel.fr> <luc.michel@greensocs.com>
69
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
46
Luc Michel <luc@lmichel.fr> <lmichel@kalray.eu>
70
event.type = SMMU_EVT_F_TRANSLATION;
71
event.u.f_translation.addr = addr;
72
event.u.f_translation.addr2 = ptw_info.addr;
73
+ event.u.f_translation.class = SMMU_CLASS_IN;
74
event.u.f_translation.rnw = flag & 0x1;
75
}
76
break;
77
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
78
event.type = SMMU_EVT_F_ADDR_SIZE;
79
event.u.f_addr_size.addr = addr;
80
event.u.f_addr_size.addr2 = ptw_info.addr;
81
+ event.u.f_translation.class = SMMU_CLASS_IN;
82
event.u.f_addr_size.rnw = flag & 0x1;
83
}
84
break;
85
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
86
event.type = SMMU_EVT_F_ACCESS;
87
event.u.f_access.addr = addr;
88
event.u.f_access.addr2 = ptw_info.addr;
89
+ event.u.f_translation.class = SMMU_CLASS_IN;
90
event.u.f_access.rnw = flag & 0x1;
91
}
92
break;
93
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
94
event.type = SMMU_EVT_F_PERMISSION;
95
event.u.f_permission.addr = addr;
96
event.u.f_permission.addr2 = ptw_info.addr;
97
+ event.u.f_translation.class = SMMU_CLASS_IN;
98
event.u.f_permission.rnw = flag & 0x1;
99
}
100
break;
101
--
47
--
102
2.34.1
48
2.34.1
103
49
104
50
1
From: Vikram Garhwal <vikram.garhwal@bytedance.com>

Previously, the maintainer role was paused due to an inactive email
address; see commit c009d715721861984c4987bcc78b7ee183e86d75.

Signed-off-by: Vikram Garhwal <vikram.garhwal@bytedance.com>
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Message-id: 20241204184205.12952-1-vikram.garhwal@bytedance.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS

From: Mostafa Saleh <smostafa@google.com>

According to the SMMU architecture specification (ARM IHI 0070 F.b),
in "3.4 Address sizes":
 The address output from the translation causes a stage 1 Address Size
 fault if it exceeds the range of the effective IPA size for the given CD.

However, this check was missing.

There is already a similar check for stage-2 against the effective PA.

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Message-id: 20240715084519.1189624-2-smostafa@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
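[Editor's note: a short illustrative sketch of the address-size check described
above, using invented names and a plain bit-width comparison; it is not the
QEMU implementation.]

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Fault if the output address needs more than oas_bits bits
 * (oas_bits is assumed to be less than 64 here). */
static bool ex_addr_size_fault(uint64_t out_addr, unsigned oas_bits)
{
    return out_addr >= (1ULL << oas_bits);
}

int main(void)
{
    printf("%d\n", ex_addr_size_fault(0x1234ULL, 40));   /* -> 0 */
    printf("%d\n", ex_addr_size_fault(1ULL << 44, 40));  /* -> 1 */
    return 0;
}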
25
@@ -XXX,XX +XXX,XX @@ static int smmu_ptw_64_s1(SMMUTransCfg *cfg,
18
@@ -XXX,XX +XXX,XX @@ F: tests/qtest/fuzz-sb16-test.c
26
goto error;
19
27
}
20
Xilinx CAN
28
21
M: Francisco Iglesias <francisco.iglesias@amd.com>
29
+ /*
22
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
30
+ * The address output from the translation causes a stage 1 Address
23
S: Maintained
31
+ * Size fault if it exceeds the range of the effective IPA size for
24
F: hw/net/can/xlnx-*
32
+ * the given CD.
25
F: include/hw/net/xlnx-*
33
+ */
26
@@ -XXX,XX +XXX,XX @@ F: include/hw/rx/
34
+ if (gpa >= (1ULL << cfg->oas)) {
27
CAN bus subsystem and hardware
35
+ info->type = SMMU_PTW_ERR_ADDR_SIZE;
28
M: Pavel Pisa <pisa@cmp.felk.cvut.cz>
36
+ goto error;
29
M: Francisco Iglesias <francisco.iglesias@amd.com>
37
+ }
30
+M: Vikram Garhwal <vikram.garhwal@bytedance.com>
38
+
31
S: Maintained
39
tlbe->entry.translated_addr = gpa;
32
W: https://canbus.pages.fel.cvut.cz/
40
tlbe->entry.iova = iova & ~mask;
33
F: net/can/*
41
tlbe->entry.addr_mask = mask;
42
--
34
--
43
2.34.1
35
2.34.1
44
45