The following changes since commit a97978bcc2d1f650c7d411428806e5b03082b8c7:

  Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210603' into staging (2021-06-03 10:00:35 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210603

for you to fetch changes up to 1c861885894d840235954060050d240259f5340b:

  tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed (2021-06-03 16:43:27 +0100)

----------------------------------------------------------------
target-arm queue:
 * Some not-yet-enabled preliminaries for M-profile MVE support
 * Consistently use "Cortex-Axx", not "Cortex Axx" in docs, comments
 * docs: Fix installation of man pages with Sphinx 4.x
 * Mark LDS{MIN,MAX} as signed operations
 * Fix missing syndrome value for DAIF and PAC check exceptions
 * Implement BFloat16 extensions
 * Refactoring of hvf accelerator code in preparation for aarch64 support
 * Fix some coverity nits in test code

----------------------------------------------------------------
Alexander Graf (12):
      hvf: Move assert_hvf_ok() into common directory
      hvf: Move vcpu thread functions into common directory
      hvf: Move cpu functions into common directory
      hvf: Move hvf internal definitions into common header
      hvf: Make hvf_set_phys_mem() static
      hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
      hvf: Split out common code on vcpu init and destroy
      hvf: Use cpu_synchronize_state()
      hvf: Make synchronize functions static
      hvf: Remove hvf-accel-ops.h
      hvf: Introduce hvf vcpu struct
      hvf: Simplify post reset/init/loadvm hooks

Damien Goutte-Gattat (1):
      docs: Fix installation of man pages with Sphinx 4.x

Jamie Iles (4):
      target/arm: fix missing exception class
      target/arm: fold do_raise_exception into raise_exception
      target/arm: use raise_exception_ra for MTE check failure
      target/arm: use raise_exception_ra for stack limit exception

Peter Maydell (15):
      target/arm: Add isar feature check functions for MVE
      target/arm: Update feature checks for insns which are "MVE or FP"
      target/arm: Move fpsp/fpdp isar check into callers of do_vfp_2op_sp/dp
      target/arm: Add MVE check to VMOV_reg_sp and VMOV_reg_dp
      target/arm: Fix return values in fp_sysreg_checks()
      target/arm: Implement M-profile VPR register
      target/arm: Make FPSCR.LTPSIZE writable for MVE
      target/arm: Allow board models to specify initial NS VTOR
      arm: Consistently use "Cortex-Axx", not "Cortex Axx"
      tests/qtest/bios-tables-test: Check for dup2() failure
      tests/qtest/e1000e-test: Check qemu_recv() succeeded
      tests/qtest/hd-geo-test: Fix checks on mkstemp() return value
      tests/qtest/pflash-cfi02-test: Avoid potential integer overflow
      tests/qtest/tpm-tests: Remove unnecessary NULL checks
      tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed

Richard Henderson (13):
      target/arm: Mark LDS{MIN,MAX} as signed operations
      target/arm: Add isar_feature_{aa32, aa64, aa64_sve}_bf16
      target/arm: Unify unallocated path in disas_fp_1src
      target/arm: Implement scalar float32 to bfloat16 conversion
      target/arm: Implement vector float32 to bfloat16 conversion
      softfpu: Add float_round_to_odd_inf
      target/arm: Implement bfloat16 dot product (vector)
      target/arm: Implement bfloat16 dot product (indexed)
      target/arm: Implement bfloat16 matrix multiply accumulate
      target/arm: Implement bfloat widening fma (vector)
      target/arm: Implement bfloat widening fma (indexed)
      linux-user/aarch64: Enable hwcap bits for bfloat16
      target/arm: Enable BFloat16 extensions

 docs/conf.py                    |   1 +
 docs/system/arm/aspeed.rst      |   4 +-
 docs/system/arm/nuvoton.rst     |   6 +-
 docs/system/arm/sabrelite.rst   |   2 +-
 include/fpu/softfloat-types.h   |   4 +-
 include/hw/arm/allwinner-h3.h   |   2 +-
 include/hw/arm/armv7m.h         |   2 +
 include/hw/core/cpu.h           |   3 +-
 include/sysemu/hvf_int.h        |  58 +++++
 target/arm/cpu.h                |  48 +++-
 target/arm/helper-sve.h         |   4 +
 target/arm/helper.h             |  15 ++
 target/i386/hvf/hvf-accel-ops.h |  23 --
 target/i386/hvf/hvf-i386.h      |  33 +--
 target/i386/hvf/vmx.h           |  24 +-
 target/i386/hvf/x86hvf.h        |   2 -
 target/arm/neon-dp.decode       |   1 +
 target/arm/neon-shared.decode   |  11 +
 target/arm/sve.decode           |  19 +-
 target/arm/vfp.decode           |   2 +
 accel/hvf/hvf-accel-ops.c       | 471 ++++++++++++++++++++++++++++++++++++++++
 accel/hvf/hvf-all.c             |  47 ++++
 hw/arm/armv7m.c                 |   7 +
 hw/arm/aspeed.c                 |   6 +-
 hw/arm/mcimx6ul-evk.c           |   2 +-
 hw/arm/mcimx7d-sabre.c          |   2 +-
 hw/arm/npcm7xx_boards.c         |   4 +-
 hw/arm/sabrelite.c              |   2 +-
 hw/misc/npcm7xx_clk.c           |   2 +-
 linux-user/elfload.c            |   2 +
 target/arm/cpu.c                |  13 ++
 target/arm/cpu64.c              |   3 +
 target/arm/cpu_tcg.c            |   1 +
 target/arm/m_helper.c           |   5 +-
 target/arm/machine.c            |  20 ++
 target/arm/mte_helper.c         |  12 +-
 target/arm/op_helper.c          |  32 ++-
 target/arm/sve_helper.c         |   2 +
 target/arm/translate-a64.c      | 155 +++++++++++--
 target/arm/translate-neon.c     |  91 ++++++++
 target/arm/translate-sve.c      | 112 ++++++++++
 target/arm/translate-vfp.c      | 164 ++++++++++----
 target/arm/vec_helper.c         | 140 +++++++++++-
 target/arm/vfp_helper.c         |  21 +-
 target/i386/hvf/hvf-accel-ops.c | 146 -------------
 target/i386/hvf/hvf.c           | 464 +++++----------------------------------
 target/i386/hvf/x86.c           |  28 +--
 target/i386/hvf/x86_descr.c     |  26 +--
 target/i386/hvf/x86_emu.c       |  62 +++---
 target/i386/hvf/x86_mmu.c       |   4 +-
 target/i386/hvf/x86_task.c      |  12 +-
 target/i386/hvf/x86hvf.c        | 222 +++++++++----------
 tests/qtest/bios-tables-test.c  |   8 +-
 tests/qtest/e1000e-test.c       |   3 +-
 tests/qtest/hd-geo-test.c       |   4 +-
 tests/qtest/pflash-cfi02-test.c |   2 +-
 tests/qtest/tpm-tests.c         |  12 +-
 tests/unit/test-vmstate.c       |   5 +-
 fpu/softfloat-parts.c.inc       |   6 +-
 MAINTAINERS                     |   8 +
 accel/hvf/meson.build           |   7 +
 accel/meson.build               |   1 +
 target/i386/hvf/meson.build     |   1 -
 63 files changed, 1666 insertions(+), 935 deletions(-)
 create mode 100644 include/sysemu/hvf_int.h
 delete mode 100644 target/i386/hvf/hvf-accel-ops.h
 create mode 100644 accel/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/hvf-all.c
 delete mode 100644 target/i386/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/meson.build

Add the isar feature check functions we will need for v8.1M MVE:
 * a check for MVE present: this corresponds to the pseudocode's
   CheckDecodeFaults(ExtType_Mve)
 * a check for the optional floating-point part of MVE: this
   corresponds to CheckDecodeFaults(ExtType_MveFp)
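
(Illustrative sketch only, not part of the patch: a decoder for an
MVE-only insn would use these via the usual dc_isar_feature() pattern;
trans_MVE_EXAMPLE and arg_example are made-up names.

    static bool trans_MVE_EXAMPLE(DisasContext *s, arg_example *a)
    {
        if (!dc_isar_feature(aa32_mve, s)) {
            return false;   /* insn UNDEFs when MVE is not implemented */
        }
        /* ... generate code for the insn ... */
        return true;
    }

)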

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
     }
 }
 
+static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) > 0;
+}
+
+static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
+}
+
 static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
 {
     /*
--
2.20.1

Some v8M instructions are present if either the floating point
extension or MVE is implemented. Update our implementation of them
to check for MVE as well as for FP.

This is all the insns which use CheckDecodeFaults(ExtType_MveOrFp) or
CheckDecodeFaults(ExtType_MveOrDpFp) in their pseudocode, which are
essentially the loads and stores, moves and sysreg accesses, except
for VMOV_reg_sp and VMOV_reg_dp, which we handle in subsequent
patches because they need a refactor to provide a place to put the
new MVE check.
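
(For illustration only: the common shape of the updated checks is an
"either feature" condition,

    /* insn is present if FPv2 *or* MVE is implemented */
    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
        return false;
    }

as repeated throughout the hunks below.)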

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 48 +++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
     /* VMOV scalar to general purpose register */
     TCGv_i32 tmp;
 
-    /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
-    if (a->size == MO_32
-        ? !dc_isar_feature(aa32_fpsp_v2, s)
-        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
+    /*
+     * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
+     * all sizes, whether the CPU has fp or not.
+     */
+    if (!dc_isar_feature(aa32_mve, s)) {
+        if (a->size == MO_32
+            ? !dc_isar_feature(aa32_fpsp_v2, s)
+            : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+            return false;
+        }
     }
 
     /* UNDEF accesses to D16-D31 if they don't exist */
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
     /* VMOV general purpose register to scalar */
     TCGv_i32 tmp;
 
-    /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
-    if (a->size == MO_32
-        ? !dc_isar_feature(aa32_fpsp_v2, s)
-        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
+    /*
+     * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
+     * all sizes, whether the CPU has fp or not.
+     */
+    if (!dc_isar_feature(aa32_mve, s)) {
+        if (a->size == MO_32
+            ? !dc_isar_feature(aa32_fpsp_v2, s)
+            : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+            return false;
+        }
     }
 
     /* UNDEF accesses to D16-D31 if they don't exist */
@@ -XXX,XX +XXX,XX @@ typedef enum FPSysRegCheckResult {
 
 static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
 {
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return FPSysRegCheckFailed;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
 {
     TCGv_i32 tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
 {
     TCGv_i32 tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
      * floating point register. Note that this does not require support
      * for double precision arithmetic.
      */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     uint32_t offset;
     TCGv_i32 addr, tmp;
 
-    if (!dc_isar_feature(aa32_fp16_arith, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     uint32_t offset;
     TCGv_i32 addr, tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
     TCGv_i64 tmp;
 
     /* Note that this does not require support for double arithmetic. */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
     TCGv_i32 addr, tmp;
     int i, n;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
     int i, n;
 
     /* Note that this does not require support for double arithmetic. */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
--
2.20.1

The do_vfp_2op_sp() and do_vfp_2op_dp() functions currently check
whether floating point is supported via the aa32_fpdp_v2 and
aa32_fpsp_v2 isar checks. For v8.1M MVE support, the VMOV_reg trans
functions (but not any of the others) need to update this to also
allow the insn if MVE is implemented. Move the check out of the do_
function and into its callsites (which are all implemented via the
DO_VFP_2OP macro), so we have a place to change the check for the
VMOV insns.
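
(For illustration, not part of the patch: after this change an
instantiation such as DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss,
aa32_fpsp_v2) expands to roughly

    static bool trans_VABS_sp(DisasContext *s, arg_VABS_sp *a)
    {
        if (!dc_isar_feature(aa32_fpsp_v2, s)) {
            return false;
        }
        return do_vfp_2op_sp(s, gen_helper_vfp_abss, a->vd, a->vm);
    }

so each trans function now owns its own feature check.)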

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-4-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i32 f0, fd;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
-        return false;
-    }
+    /* Note that the caller must check the aa32_fpsp_v2 feature. */
 
     if (!dc_isar_feature(aa32_fpshvec, s) &&
         (veclen != 0 || s->vec_stride != 0)) {
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
      */
     TCGv_i32 f0;
 
+    /* Note that the caller must check the aa32_fp16_arith feature */
+
     if (!dc_isar_feature(aa32_fp16_arith, s)) {
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i64 f0, fd;
 
-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
+    /* Note that the caller must check the aa32_fpdp_v2 feature. */
 
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
     return true;
 }
 
-#define DO_VFP_2OP(INSN, PREC, FN) \
+#define DO_VFP_2OP(INSN, PREC, FN, CHECK) \
     static bool trans_##INSN##_##PREC(DisasContext *s, \
                                       arg_##INSN##_##PREC *a) \
     { \
+        if (!dc_isar_feature(CHECK, s)) { \
+            return false; \
+        } \
         return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
     }
 
-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32)
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64)
+DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
+DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
 
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh)
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss)
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd)
+DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
+DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
+DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
 
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh)
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs)
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd)
+DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
+DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
+DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
 
 static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
 {
@@ -XXX,XX +XXX,XX @@ static void gen_VSQRT_dp(TCGv_i64 vd, TCGv_i64 vm)
     gen_helper_vfp_sqrtd(vd, vm, cpu_env);
 }
 
-DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp)
-DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp)
-DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp)
+DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp, aa32_fp16_arith)
+DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp, aa32_fpsp_v2)
+DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp, aa32_fpdp_v2)
 
 static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
 {
--
2.20.1

Split out the handling of VMOV_reg_sp and VMOV_reg_dp so that we can
permit the insns if either FP or MVE are present.
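
(Illustrative note, not part of the patch: the new DO_VFP_VMOV macro
pastes the precision into the feature name, so for PREC == sp the
generated check is effectively

    if (!dc_isar_feature(aa32_fpsp_v2, s) &&
        !dc_isar_feature(aa32_mve, s)) {
        return false;
    }

i.e. the insn is accepted when either FP or MVE is implemented.)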

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-5-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
     return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
 }
 
-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
+#define DO_VFP_VMOV(INSN, PREC, FN) \
+    static bool trans_##INSN##_##PREC(DisasContext *s, \
+                                      arg_##INSN##_##PREC *a) \
+    { \
+        if (!dc_isar_feature(aa32_fp##PREC##_v2, s) && \
+            !dc_isar_feature(aa32_mve, s)) { \
+            return false; \
+        } \
+        return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
+    }
+
+DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
+DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)
 
 DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
 DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
--
2.20.1

The fp_sysreg_checks() function is supposed to be returning an
FPSysRegCheckResult, which is an enum with three possible values.
However, three places in the function "return false" (a hangover from
a previous iteration of the design where the function just returned a
bool). Make these return FPSysRegCheckFailed instead (for no
functional change, since both false and FPSysRegCheckFailed are
zero).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-6-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
         break;
     case ARM_VFP_FPSCR_NZCVQC:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     case ARM_VFP_FPCXT_S:
     case ARM_VFP_FPCXT_NS:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         if (!s->v8m_secure) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     default:
--
2.20.1

If MVE is implemented for an M-profile CPU then it has a VPR
register, which tracks predication information.

Implement the read and write handling of this register, and
the migration of its state.
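
(Illustrative sketch, not part of the patch: given the FIELD()
definitions below, the 32-bit VPR layout is the P0 predicate in bits
[15:0] and the MASK01/MASK23 loop masks in bits [19:16] and [23:20],
so the P0 bits can be read with the usual registerfields.h accessor:

    uint32_t p0 = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);

)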

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-7-peter.maydell@linaro.org
---
 target/arm/cpu.h           |  6 ++++++
 target/arm/machine.c       | 19 +++++++++++++++++++
 target/arm/translate-vfp.c | 38 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 63 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t cpacr[M_REG_NUM_BANKS];
         uint32_t nsacr;
         int ltpsize;
+        uint32_t vpr;
     } v7m;
 
     /* Information associated with an exception about to be taken:
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_FPCCR, ASPEN, 31, 1)
                              R_V7M_FPCCR_UFRDY_MASK | \
                              R_V7M_FPCCR_ASPEN_MASK)
 
+/* v7M VPR bits */
+FIELD(V7M_VPR, P0, 0, 16)
+FIELD(V7M_VPR, MASK01, 16, 4)
+FIELD(V7M_VPR, MASK23, 20, 4)
+
 /*
  * System register ID fields.
  */
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_fp = {
     }
 };
 
+static bool mve_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+
+    return cpu_isar_feature(aa32_mve, cpu);
+}
+
+static const VMStateDescription vmstate_m_mve = {
+    .name = "cpu/m/mve",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = mve_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static const VMStateDescription vmstate_m = {
     .name = "cpu/m",
     .version_id = 4,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
         &vmstate_m_other_sp,
         &vmstate_m_v8m,
         &vmstate_m_fp,
+        &vmstate_m_mve,
         NULL
     }
 };
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
             return FPSysRegCheckFailed;
         }
         break;
+    case ARM_VFP_VPR:
+    case ARM_VFP_P0:
+        if (!dc_isar_feature(aa32_mve, s)) {
+            return FPSysRegCheckFailed;
+        }
+        break;
     default:
         return FPSysRegCheckFailed;
     }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         tcg_temp_free_i32(sfpa);
         break;
     }
+    case ARM_VFP_VPR:
+        /* Behaves as NOP if not privileged */
+        if (IS_USER(s)) {
+            break;
+        }
+        tmp = loadfn(s, opaque);
+        store_cpu_field(tmp, v7m.vpr);
+        break;
+    case ARM_VFP_P0:
+    {
+        TCGv_i32 vpr;
+        tmp = loadfn(s, opaque);
+        vpr = load_cpu_field(v7m.vpr);
+        tcg_gen_deposit_i32(vpr, vpr, tmp,
+                            R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
+        store_cpu_field(vpr, v7m.vpr);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
         tcg_temp_free_i32(fpscr);
         break;
     }
+    case ARM_VFP_VPR:
+        /* Behaves as NOP if not privileged */
+        if (IS_USER(s)) {
+            break;
+        }
+        tmp = load_cpu_field(v7m.vpr);
+        storefn(s, opaque, tmp);
+        break;
+    case ARM_VFP_P0:
+        tmp = load_cpu_field(v7m.vpr);
+        tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
+        storefn(s, opaque, tmp);
+        break;
     default:
         g_assert_not_reached();
     }
--
2.20.1

The M-profile FPSCR has an LTPSIZE field, but if MVE is not
implemented it is read-only and always reads as 4; this is how QEMU
currently handles it.

Make the field writable when MVE is implemented.

We can safely add the field to the MVE migration struct because
currently no CPUs enable MVE and so the migration struct is never
used.
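
(Worked example, for illustration only: LTPSIZE lives in FPSCR bits
[18:16], so with FPCR_LTPSIZE_SHIFT == 16 and FPCR_LTPSIZE_LENGTH == 3
a guest write of 0x00040000 to the FPSCR yields

    extract32(0x00040000, 16, 3) == 4

which is the same "no tail predication" value that the read-only
implementation always returns.)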

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-8-peter.maydell@linaro.org
---
 target/arm/cpu.h        | 3 ++-
 target/arm/machine.c    | 1 +
 target/arm/vfp_helper.c | 9 ++++++---
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t fpdscr[M_REG_NUM_BANKS];
         uint32_t cpacr[M_REG_NUM_BANKS];
         uint32_t nsacr;
-        int ltpsize;
+        uint32_t ltpsize;
         uint32_t vpr;
     } v7m;
 
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
 
 #define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
 #define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
+#define FPCR_LTPSIZE_LENGTH 3
 
 #define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
 #define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_mve = {
     .needed = mve_needed,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
+        VMSTATE_UINT32(env.v7m.ltpsize, ARMCPU),
         VMSTATE_END_OF_LIST()
     },
 };
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t vfp_get_fpscr(CPUARMState *env)
 
 void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
 {
+    ARMCPU *cpu = env_archcpu(env);
+
     /* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
-    if (!cpu_isar_feature(any_fp16, env_archcpu(env))) {
+    if (!cpu_isar_feature(any_fp16, cpu)) {
         val &= ~FPCR_FZ16;
     }
 
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
          * because in v7A no-short-vector-support cores still had to
          * allow Stride/Len to be written with the only effect that
          * some insns are required to UNDEF if the guest sets them.
-         *
-         * TODO: if M-profile MVE implemented, set LTPSIZE.
          */
         env->vfp.vec_len = extract32(val, 16, 3);
         env->vfp.vec_stride = extract32(val, 20, 2);
+    } else if (cpu_isar_feature(aa32_mve, cpu)) {
+        env->v7m.ltpsize = extract32(val, FPCR_LTPSIZE_SHIFT,
+                                     FPCR_LTPSIZE_LENGTH);
     }
 
     if (arm_feature(env, ARM_FEATURE_NEON)) {
--
2.20.1

Currently we allow board models to specify the initial value of the
Secure VTOR register, using an init-svtor property on the TYPE_ARMV7M
object which is plumbed through to the CPU. Allow board models to
also specify the initial value of the Non-secure VTOR via a similar
init-nsvtor property.
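
(Usage sketch, assuming a hypothetical board model: a machine would
set the property on its TYPE_ARMV7M object before realize, e.g.

    /* place the NS vector table at this board's chosen reset address */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-nsvtor", 0x00000000);

just as boards already do for "init-svtor".)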

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-10-peter.maydell@linaro.org
---
 include/hw/arm/armv7m.h |  2 ++
 target/arm/cpu.h        |  2 ++
 hw/arm/armv7m.c         |  7 +++++++
 target/arm/cpu.c        | 10 ++++++++++
 4 files changed, 21 insertions(+)

diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/armv7m.h
+++ b/include/hw/arm/armv7m.h
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(ARMv7MState, ARMV7M)
  *   devices will be automatically layered on top of this view.)
  * + Property "idau": IDAU interface (forwarded to CPU object)
  * + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
+ * + Property "init-nsvtor": non-secure VTOR reset value (forwarded to CPU object)
  * + Property "vfp": enable VFP (forwarded to CPU object)
  * + Property "dsp": enable DSP (forwarded to CPU object)
  * + Property "enable-bitband": expose bitbanded IO
@@ -XXX,XX +XXX,XX @@ struct ARMv7MState {
     MemoryRegion *board_memory;
     Object *idau;
     uint32_t init_svtor;
+    uint32_t init_nsvtor;
     bool enable_bitband;
     bool start_powered_off;
     bool vfp;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
 
     /* For v8M, initial value of the Secure VTOR */
     uint32_t init_svtor;
+    /* For v8M, initial value of the Non-secure VTOR */
+    uint32_t init_nsvtor;
 
     /* [QEMU_]KVM_ARM_TARGET_* constant for this CPU, or
      * QEMU_KVM_ARM_TARGET_NONE if the kernel doesn't support this CPU type.
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
             return;
         }
     }
+    if (object_property_find(OBJECT(s->cpu), "init-nsvtor")) {
59
+ if (!object_property_set_uint(OBJECT(s->cpu), "init-nsvtor",
60
+ s->init_nsvtor, errp)) {
61
+ return;
62
+ }
63
+ }
64
if (object_property_find(OBJECT(s->cpu), "start-powered-off")) {
65
if (!object_property_set_bool(OBJECT(s->cpu), "start-powered-off",
66
s->start_powered_off, errp)) {
67
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
68
MemoryRegion *),
69
DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
70
DEFINE_PROP_UINT32("init-svtor", ARMv7MState, init_svtor, 0),
71
+ DEFINE_PROP_UINT32("init-nsvtor", ARMv7MState, init_nsvtor, 0),
72
DEFINE_PROP_BOOL("enable-bitband", ARMv7MState, enable_bitband, false),
73
DEFINE_PROP_BOOL("start-powered-off", ARMv7MState, start_powered_off,
74
false),
59
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
75
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/cpu.c
77
--- a/target/arm/cpu.c
62
+++ b/target/arm/cpu.c
78
+++ b/target/arm/cpu.c
63
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
79
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
64
* always reset to 4.
80
env->regs[14] = 0xffffffff;
65
*/
81
66
env->v7m.ltpsize = 4;
82
env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
67
+ /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
83
+ env->v7m.vecbase[M_REG_NS] = cpu->init_nsvtor & 0xffffff80;
68
+ env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
84
69
+ env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
85
/* Load the initial SP and PC from offset 0 and 4 in the vector table */
70
}
86
vecbase = env->v7m.vecbase[env->v7m.secure];
71
87
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
72
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
88
&cpu->init_svtor,
89
OBJ_PROP_FLAG_READWRITE);
90
}
91
+ if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
92
+ /*
93
+ * Initial value of the NS VTOR (for cores without the Security
94
+ * extension, this is the only VTOR)
95
+ */
96
+ object_property_add_uint32_ptr(obj, "init-nsvtor",
97
+ &cpu->init_nsvtor,
98
+ OBJ_PROP_FLAG_READWRITE);
99
+ }
100
101
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property);
102
73
--
103
--
74
2.20.1
104
2.20.1
75
105
76
106
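As a usage sketch for the new property: a hypothetical board model could place the Non-secure vector table at a non-zero address before realizing the ARMv7M container. This fragment is not from the series; it assumes an initialized-but-unrealized ARMv7MState called "armv7m" (the property is also settable directly on the CPU object, as the patch plumbs it through):

    /* Hypothetical board-model fragment. */
    object_property_set_uint(OBJECT(armv7m), "init-nsvtor", 0x00200000,
                             &error_abort);
    /* As with init-svtor, the reset code masks with 0xffffff80, so the
     * low 7 bits are discarded and VTOR values are 0x80-aligned. */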
Correct a typo in the name we give the NVIC object.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
---
hw/arm/armv7m.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)

memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);

- object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
+ object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
object_property_add_alias(obj, "num-irq",
OBJECT(&s->nvic), "num-irq");

--
2.20.1

The official punctuation for Arm CPU names uses a hyphen, like
"Cortex-A9". We mostly follow this, but in a few places usage
without the hyphen has crept in. Fix those so we consistently
use the same way of writing the CPU name.

This commit was created with:
git grep -z -l 'Cortex ' | xargs -0 sed -i 's/Cortex /Cortex-/'

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210527095152.10968-1-peter.maydell@linaro.org
---
docs/system/arm/aspeed.rst | 4 ++--
docs/system/arm/nuvoton.rst | 6 +++---
docs/system/arm/sabrelite.rst | 2 +-
include/hw/arm/allwinner-h3.h | 2 +-
hw/arm/aspeed.c | 6 +++---
hw/arm/mcimx6ul-evk.c | 2 +-
hw/arm/mcimx7d-sabre.c | 2 +-
hw/arm/npcm7xx_boards.c | 4 ++--
hw/arm/sabrelite.c | 2 +-
hw/misc/npcm7xx_clk.c | 2 +-
10 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/aspeed.rst
+++ b/docs/system/arm/aspeed.rst
@@ -XXX,XX +XXX,XX @@ The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
Aspeed evaluation boards. They are based on different releases of the
Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
-with dual cores ARM Cortex A7 CPUs (1.2GHz).
+with dual cores ARM Cortex-A7 CPUs (1.2GHz).

The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
etc.
@@ -XXX,XX +XXX,XX @@ AST2500 SoC based machines :

AST2600 SoC based machines :

-- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex-A7)
- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC

Supported devices
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Nuvoton iBMC boards (``npcm750-evb``, ``quanta-gsj``)

The `Nuvoton iBMC`_ chips (NPCM7xx) are a family of ARM-based SoCs that are
designed to be used as Baseboard Management Controllers (BMCs) in various
-servers. They all feature one or two ARM Cortex A9 CPU cores, as well as an
+servers. They all feature one or two ARM Cortex-A9 CPU cores, as well as an
assortment of peripherals targeted for either Enterprise or Data Center /
Hyperscale applications. The former is a superset of the latter, so NPCM750 has
all the peripherals of NPCM730 and more.

.. _Nuvoton iBMC: https://www.nuvoton.com/products/cloud-computing/ibmc/

-The NPCM750 SoC has two Cortex A9 cores and is targeted for the Enterprise
+The NPCM750 SoC has two Cortex-A9 cores and is targeted for the Enterprise
segment. The following machines are based on this chip :

- ``npcm750-evb`` Nuvoton NPCM750 Evaluation board

-The NPCM730 SoC has two Cortex A9 cores and is targeted for Data Center and
+The NPCM730 SoC has two Cortex-A9 cores and is targeted for Data Center and
Hyperscale applications. The following machines are based on this chip :

- ``quanta-gsj`` Quanta GSJ server BMC
diff --git a/docs/system/arm/sabrelite.rst b/docs/system/arm/sabrelite.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/sabrelite.rst
+++ b/docs/system/arm/sabrelite.rst
@@ -XXX,XX +XXX,XX @@ Supported devices

The SABRE Lite machine supports the following devices:

- * Up to 4 Cortex A9 cores
+ * Up to 4 Cortex-A9 cores
* Generic Interrupt Controller
* 1 Clock Controller Module
* 1 System Reset Controller
diff --git a/include/hw/arm/allwinner-h3.h b/include/hw/arm/allwinner-h3.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-h3.h
+++ b/include/hw/arm/allwinner-h3.h
@@ -XXX,XX +XXX,XX @@
*/

/*
- * The Allwinner H3 is a System on Chip containing four ARM Cortex A7
+ * The Allwinner H3 is a System on Chip containing four ARM Cortex-A7
* processor cores. Features and specifications include DDR2/DDR3 memory,
* SD/MMC storage cards, 10/100/1000Mbit Ethernet, USB 2.0, HDMI and
* various I/O modules.
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_ast2600_evb_class_init(ObjectClass *oc, void *data)
MachineClass *mc = MACHINE_CLASS(oc);
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);

- mc->desc = "Aspeed AST2600 EVB (Cortex A7)";
+ mc->desc = "Aspeed AST2600 EVB (Cortex-A7)";
amc->soc_name = "ast2600-a1";
amc->hw_strap1 = AST2600_EVB_HW_STRAP1;
amc->hw_strap2 = AST2600_EVB_HW_STRAP2;
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_tacoma_class_init(ObjectClass *oc, void *data)
MachineClass *mc = MACHINE_CLASS(oc);
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);

- mc->desc = "OpenPOWER Tacoma BMC (Cortex A7)";
+ mc->desc = "OpenPOWER Tacoma BMC (Cortex-A7)";
amc->soc_name = "ast2600-a1";
amc->hw_strap1 = TACOMA_BMC_HW_STRAP1;
amc->hw_strap2 = TACOMA_BMC_HW_STRAP2;
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_rainier_class_init(ObjectClass *oc, void *data)
MachineClass *mc = MACHINE_CLASS(oc);
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);

- mc->desc = "IBM Rainier BMC (Cortex A7)";
+ mc->desc = "IBM Rainier BMC (Cortex-A7)";
amc->soc_name = "ast2600-a1";
amc->hw_strap1 = RAINIER_BMC_HW_STRAP1;
amc->hw_strap2 = RAINIER_BMC_HW_STRAP2;
diff --git a/hw/arm/mcimx6ul-evk.c b/hw/arm/mcimx6ul-evk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mcimx6ul-evk.c
+++ b/hw/arm/mcimx6ul-evk.c
@@ -XXX,XX +XXX,XX @@ static void mcimx6ul_evk_init(MachineState *machine)

static void mcimx6ul_evk_machine_init(MachineClass *mc)
{
- mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex A7)";
+ mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex-A7)";
mc->init = mcimx6ul_evk_init;
mc->max_cpus = FSL_IMX6UL_NUM_CPUS;
mc->default_ram_id = "mcimx6ul-evk.ram";
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mcimx7d-sabre.c
+++ b/hw/arm/mcimx7d-sabre.c
@@ -XXX,XX +XXX,XX @@ static void mcimx7d_sabre_init(MachineState *machine)

static void mcimx7d_sabre_machine_init(MachineClass *mc)
{
- mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex A7)";
+ mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex-A7)";
mc->init = mcimx7d_sabre_init;
mc->max_cpus = FSL_IMX7_NUM_CPUS;
mc->default_ram_id = "mcimx7d-sabre.ram";
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_machine_class_init(ObjectClass *oc, void *data)

npcm7xx_set_soc_type(nmc, TYPE_NPCM750);

- mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex A9)";
+ mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex-A9)";
mc->init = npcm750_evb_init;
mc->default_ram_size = 512 * MiB;
};
@@ -XXX,XX +XXX,XX @@ static void gsj_machine_class_init(ObjectClass *oc, void *data)

npcm7xx_set_soc_type(nmc, TYPE_NPCM730);

- mc->desc = "Quanta GSJ (Cortex A9)";
+ mc->desc = "Quanta GSJ (Cortex-A9)";
mc->init = quanta_gsj_init;
mc->default_ram_size = 512 * MiB;
};
diff --git a/hw/arm/sabrelite.c b/hw/arm/sabrelite.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sabrelite.c
+++ b/hw/arm/sabrelite.c
@@ -XXX,XX +XXX,XX @@ static void sabrelite_init(MachineState *machine)

static void sabrelite_machine_init(MachineClass *mc)
{
- mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex A9)";
+ mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex-A9)";
mc->init = sabrelite_init;
mc->max_cpus = FSL_IMX6_NUM_CPUS;
mc->ignore_memory_transaction_failures = true;
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/npcm7xx_clk.c
+++ b/hw/misc/npcm7xx_clk.c
@@ -XXX,XX +XXX,XX @@
#define NPCM7XX_CLOCK_REF_HZ (25000000)

/* Register Field Definitions */
-#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex A9 Cores */
+#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex-A9 Cores */

#define PLLCON_LOKI BIT(31)
#define PLLCON_LOKS BIT(30)
--
2.20.1
From: Damien Goutte-Gattat <dgouttegattat@incenp.org>

The 4.x branch of Sphinx introduces a breaking change, as generated man
pages are now written to subdirectories corresponding to the manual
section they belong to. This results in `make install` erroring out when
attempting to install the man pages, because they are not where it
expects to find them.

This patch restores the behavior of Sphinx 3.x regarding man pages.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/256
Signed-off-by: Damien Goutte-Gattat <dgouttegattat@incenp.org>
Message-id: 20210503161422.15028-1-dgouttegattat@incenp.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/conf.py | 1 +
1 file changed, 1 insertion(+)

diff --git a/docs/conf.py b/docs/conf.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -XXX,XX +XXX,XX @@
['Stefan Hajnoczi <stefanha@redhat.com>',
'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
]
+man_make_section_directory = False

# -- Options for Texinfo output -------------------------------------------

--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

The operands to tcg_gen_atomic_fetch_s{min,max}_i64 must
be signed, so that the inputs are properly extended.
Zero extend the result afterward, as needed.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/364
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210602020720.47679-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
int o3_opc = extract32(insn, 12, 4);
bool r = extract32(insn, 22, 1);
bool a = extract32(insn, 23, 1);
- TCGv_i64 tcg_rs, clean_addr;
+ TCGv_i64 tcg_rs, tcg_rt, clean_addr;
AtomicThreeOpFn *fn = NULL;
+ MemOp mop = s->be_data | size | MO_ALIGN;

if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
break;
case 004: /* LDSMAX */
fn = tcg_gen_atomic_fetch_smax_i64;
+ mop |= MO_SIGN;
break;
case 005: /* LDSMIN */
fn = tcg_gen_atomic_fetch_smin_i64;
+ mop |= MO_SIGN;
break;
case 006: /* LDUMAX */
fn = tcg_gen_atomic_fetch_umax_i64;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
}

tcg_rs = read_cpu_reg(s, rs, true);
+ tcg_rt = cpu_reg(s, rt);

if (o3_opc == 1) { /* LDCLR */
tcg_gen_not_i64(tcg_rs, tcg_rs);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
/* The tcg atomic primitives are all full barriers. Therefore we
* can ignore the Acquire and Release bits of this instruction.
*/
- fn(cpu_reg(s, rt), clean_addr, tcg_rs, get_mem_index(s),
- s->be_data | size | MO_ALIGN);
+ fn(tcg_rt, clean_addr, tcg_rs, get_mem_index(s), mop);
+
+ if ((mop & MO_SIGN) && size != MO_64) {
+ tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
+ }
}

/*
--
2.20.1
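To see why the 32-bit forms need MO_SIGN, consider a memory word of 0xffffffff and a register operand of 1: compared as unsigned the memory value is larger, but LDSMIN must treat it as -1. A standalone illustration in plain C (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t mem = 0xffffffffu;  /* -1 when interpreted as signed */
        uint32_t rs = 1;
        /* sign-extend both operands, as MO_SIGN now makes the TCG op do */
        int64_t a = (int32_t)mem, b = (int32_t)rs;
        int64_t smin = a < b ? a : b;  /* -1: memory keeps 0xffffffff */
        /* Rt receives the old memory value, zero-extended to 64 bits */
        uint64_t rt = (uint32_t)a;
        printf("stored=%lld rt=%#llx\n", (long long)smin,
               (unsigned long long)rt);
        return 0;
    }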
From: Jamie Iles <jamie@nuviainc.com>

The DAIF and PAC checks used raise_exception_ra to raise an exception
and unwind CPU state but raise_exception_ra is currently designed for
handling data aborts as the syndrome is partially precomputed and
encoded in the TB and then merged in merge_syn_data_abort when handling
the data abort. Using raise_exception_ra for DAIF and PAC checks
results in an empty syndrome being retrieved from data[2] in
restore_state_to_opc and setting ESR to 0. This manifested as:

kvm [571]: Unknown exception class: esr: 0x000000 –
Unknown/Uncategorized

when launching a KVM guest when the host qemu used a CPU supporting
EL2+pointer authentication and enabling pointer authentication in the
guest.

Rework raise_exception_ra such that the state is restored before raising
the exception so that the exception is not clobbered by
restore_state_to_opc.

Fixes: 0d43e1a2d29a ("target/arm: Add PAuth helpers")
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: added comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/op_helper.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
uint32_t target_el, uintptr_t ra)
{
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
- cpu_loop_exit_restore(cs, ra);
+ CPUState *cs = env_cpu(env);
+
+ /*
+ * restore_state_to_opc() will set env->exception.syndrome, so
+ * we must restore CPU state here before setting the syndrome
+ * the caller passed us, and cannot use cpu_loop_exit_restore().
+ */
+ cpu_restore_state(cs, ra, true);
+ raise_exception(env, excp, syndrome, target_el);
}

uint64_t HELPER(neon_tbl)(CPUARMState *env, uint32_t desc,
--
2.20.1
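The fix's ordering constraint can be reduced to a standalone sketch. The stand-in names below model the QEMU internals named in the commit message (restore_state_to_opc clobbering env->exception.syndrome); none of this is the actual implementation:

    #include <stdint.h>

    struct cpu { uint32_t syndrome; };  /* models env->exception.syndrome */

    /* models restore_state_to_opc(): unwinding rewrites the syndrome */
    static void unwind_state(struct cpu *c) { c->syndrome = 0; }

    static void raise_exception_ra_sketch(struct cpu *c, uint32_t syndrome)
    {
        unwind_state(c);         /* restore CPU state first ...          */
        c->syndrome = syndrome;  /* ... so the caller's syndrome survives */
        /* then deliver the exception via the usual path */
    }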
The RAS feature has a block of memory-mapped registers at offset
0x5000 within the PPB. For a "minimal RAS" implementation we provide
no error records and so the only registers that exist in the block
are ERRIIDR and ERRDEVID.

The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
of the "nvic-default" region is actually valid for minimal-RAS,
so the main benefit of providing an explicit implementation of
the register block is more accurate LOG_UNIMP messages, and a
framework for where we could add a real RAS implementation later
if necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
---
include/hw/intc/armv7m_nvic.h | 1 +
hw/intc/armv7m_nvic.c | 56 +++++++++++++++++++++++++++++++++++
2 files changed, 57 insertions(+)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
MemoryRegion sysreg_ns_mem;
MemoryRegion systickmem;
MemoryRegion systick_ns_mem;
+ MemoryRegion ras_mem;
MemoryRegion container;
MemoryRegion defaultmem;

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
.endianness = DEVICE_NATIVE_ENDIAN,
};

+static MemTxResult ras_read(void *opaque, hwaddr addr,
+ uint64_t *data, unsigned size,
+ MemTxAttrs attrs)
+{
+ if (attrs.user) {
+ return MEMTX_ERROR;
+ }
+
+ switch (addr) {
+ case 0xe10: /* ERRIIDR */
+ /* architect field = Arm; product/variant/revision 0 */
+ *data = 0x43b;
+ break;
+ case 0xfc8: /* ERRDEVID */
+ /* Minimal RAS: we implement 0 error record indexes */
+ *data = 0;
+ break;
+ default:
+ qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
+ (uint32_t)addr);
+ *data = 0;
+ break;
+ }
+ return MEMTX_OK;
+}
+
+static MemTxResult ras_write(void *opaque, hwaddr addr,
+ uint64_t value, unsigned size,
+ MemTxAttrs attrs)
+{
+ if (attrs.user) {
+ return MEMTX_ERROR;
+ }
+
+ switch (addr) {
+ default:
+ qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
+ (uint32_t)addr);
+ break;
+ }
+ return MEMTX_OK;
+}
+
+static const MemoryRegionOps ras_ops = {
+ .read_with_attrs = ras_read,
+ .write_with_attrs = ras_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
/*
* Unassigned portions of the PPB space are RAZ/WI for privileged
* accesses, and fault for non-privileged accesses.
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
&s->systick_ns_mem, 1);
}

+ if (cpu_isar_feature(aa32_ras, s->cpu)) {
+ memory_region_init_io(&s->ras_mem, OBJECT(s),
+ &ras_ops, s, "nvic_ras", 0x1000);
+ memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
+ }
+
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
}

--
2.20.1

From: Jamie Iles <jamie@nuviainc.com>

Now that there are no other users of do_raise_exception, fold it into
raise_exception.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/op_helper.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@
#define SIGNBIT (uint32_t)0x80000000
#define SIGNBIT64 ((uint64_t)1 << 63)

-static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
- uint32_t syndrome, uint32_t target_el)
+void raise_exception(CPUARMState *env, uint32_t excp,
+ uint32_t syndrome, uint32_t target_el)
{
CPUState *cs = env_cpu(env);

@@ -XXX,XX +XXX,XX @@ static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
cs->exception_index = excp;
env->exception.syndrome = syndrome;
env->exception.target_el = target_el;
-
- return cs;
-}
-
-void raise_exception(CPUARMState *env, uint32_t excp,
- uint32_t syndrome, uint32_t target_el)
-{
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
cpu_loop_exit(cs);
}

--
2.20.1
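For the minimal RAS block above, a privileged M-profile guest sees the two implemented registers at PPB offset 0x5000, i.e. at 0xe0005000 in the system region. A hypothetical bare-metal guest fragment (addresses derived from the patch; not QEMU code):

    #include <stdint.h>

    #define RAS_BASE 0xe0005000u
    #define ERRIIDR  (*(volatile uint32_t *)(RAS_BASE + 0xe10))
    #define ERRDEVID (*(volatile uint32_t *)(RAS_BASE + 0xfc8))

    uint32_t ras_probe(void)
    {
        uint32_t iidr = ERRIIDR;      /* 0x43b: implementer = Arm, product 0 */
        uint32_t nrecords = ERRDEVID; /* 0: no error records implemented */
        return iidr + nrecords;
    }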
From: Jamie Iles <jamie@nuviainc.com>

Now that raise_exception_ra restores the state before raising the
exception we can use raise_exception_ra to perform the state restore +
exception raising without clobbering the syndrome.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Keep the one line of the comment that is still relevant]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/mte_helper.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,

switch (tcf) {
case 1:
- /*
- * Tag check fail causes a synchronous exception.
- *
- * In restore_state_to_opc, we set the exception syndrome
- * for the load or store operation. Unwind first so we
- * may overwrite that with the syndrome for the tag check.
- */
- cpu_restore_state(env_cpu(env), ra, true);
+ /* Tag check fail causes a synchronous exception. */
env->exception.vaddress = dirty_ptr;

is_write = FIELD_EX32(desc, MTEDESC, WRITE);
syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
is_write, 0x11);
- raise_exception(env, EXCP_DATA_ABORT, syn, exception_target_el(env));
+ raise_exception_ra(env, EXCP_DATA_ABORT, syn,
+ exception_target_el(env), ra);
/* noreturn, but fall through to the assert anyway */

case 0:
--
2.20.1
In v8.1M a new exception return check is added which may cause a NOCP
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
we must check whether access to CP10 from the Security state of the
returning exception is disabled; if it is then we must take a fault.

(Note that for our implementation CPPWR is always RAZ/WI and so can
never cause CP10 accesses to fail.)

The other v8.1M change to this register-clearing code is that if MVE
is implemented VPR must also be cleared, so add a TODO comment to
that effect.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
---
target/arm/m_helper.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
v7m_exception_taken(cpu, excret, true, false);
return;
} else {
- /* Clear s0..s15 and FPSCR */
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+ /* v8.1M adds this NOCP check */
+ bool nsacr_pass = exc_secure ||
+ extract32(env->v7m.nsacr, 10, 1);
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
+ if (!nsacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+ "stackframe: NSACR prevents clearing FPU registers\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ } else if (!cpacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ exc_secure);
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+ "stackframe: CPACR prevents clearing FPU registers\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ }
+ }
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
int i;

for (i = 0; i < 16; i += 2) {
--
2.20.1

From: Jamie Iles <jamie@nuviainc.com>

The sequence cpu_restore_state() + raise_exception() is equivalent to
raise_exception_ra(), so use that instead. (In this case we never
cared about the syndrome value, because M-profile doesn't use the
syndrome; the old code was just written unnecessarily awkwardly.)

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Retain edited version of comment; rewrite commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/m_helper.c | 5 +----
target/arm/op_helper.c | 9 +++------
2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];

if (val < limit) {
- CPUState *cs = env_cpu(env);
-
- cpu_restore_state(cs, GETPC(), true);
- raise_exception(env, EXCP_STKOF, 0, 1);
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
}

if (is_psp) {
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v8m_stackcheck)(CPUARMState *env, uint32_t newvalue)
* raising an exception if the limit is breached.
*/
if (newvalue < v7m_sp_limit(env)) {
- CPUState *cs = env_cpu(env);
-
/*
* Stack limit exceptions are a rare case, so rather than syncing
- * PC/condbits before the call, we use cpu_restore_state() to
- * get them right before raising the exception.
+ * PC/condbits before the call, we use raise_exception_ra() so
+ * that cpu_restore_state() will sort them out.
*/
- cpu_restore_state(cs, GETPC(), true);
- raise_exception(env, EXCP_STKOF, 0, 1);
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
}
}

--
2.20.1
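Both hunks above rely on the same v8M rule: a stack pointer update that would go below MSPLIM/PSPLIM raises a STKOF UsageFault. Reduced to a standalone sketch (illustrative only; the real helpers then call raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC())):

    #include <stdint.h>
    #include <stdbool.h>

    /* v8M stack limit rule: SP may equal the limit but not go below it. */
    static bool v8m_sp_update_ok(uint32_t new_sp, uint32_t splim)
    {
        return new_sp >= splim;
    }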
For v8.1M the architecture mandates that CPUs must provide at
least the "minimal RAS implementation" from the Reliability,
Availability and Serviceability extension. This consists of:
* an ESB instruction which is a NOP
-- since it is in the HINT space we need only add a comment
* an RFSR register which will RAZ/WI
* a RAZ/WI AIRCR.IESB bit
-- the code which handles writes to AIRCR does not allow setting
of RES0 bits, so we already treat this as RAZ/WI; add a comment
noting that this is deliberate
* minimal implementation of the RAS register block at 0xe0005000
-- this will be in a subsequent commit
* setting the ID_PFR0.RAS field to 0b0010
-- we will do this when we add the Cortex-M55 CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
---
target/arm/cpu.h | 14 ++++++++++++++
target/arm/t32.decode | 4 ++++
hw/intc/armv7m_nvic.c | 13 +++++++++++++
3 files changed, 31 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
FIELD(ID_MMFR4, CCIDX, 24, 4)
FIELD(ID_MMFR4, EVT, 28, 4)

+FIELD(ID_PFR0, STATE0, 0, 4)
+FIELD(ID_PFR0, STATE1, 4, 4)
+FIELD(ID_PFR0, STATE2, 8, 4)
+FIELD(ID_PFR0, STATE3, 12, 4)
+FIELD(ID_PFR0, CSV2, 16, 4)
+FIELD(ID_PFR0, AMU, 20, 4)
+FIELD(ID_PFR0, DIT, 24, 4)
+FIELD(ID_PFR0, RAS, 28, 4)
+
FIELD(ID_PFR1, PROGMOD, 0, 4)
FIELD(ID_PFR1, SECURITY, 4, 4)
FIELD(ID_PFR1, MPROGMOD, 8, 4)
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
}

+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
+{
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
+}
+
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101

+ # For M-profile minimal-RAS ESB can be a NOP, which is the
+ # default behaviour since it is in the hint space.
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
+
# The canonical nop ends in 0000 0000, but the whole rest
# of the space is "reserved hint, behaves as nop".
NOP 1111 0011 1010 1111 1000 0000 ---- ----
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
return 0;
}
return cpu->env.v7m.sfar;
+ case 0xf04: /* RFSR */
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
+ goto bad_offset;
+ }
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
+ return 0;
case 0xf34: /* FPCCR */
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
return 0;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
R_V7M_AIRCR_PRIGROUP_SHIFT,
R_V7M_AIRCR_PRIGROUP_LENGTH);
}
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
if (attrs.secure) {
/* These bits are only writable by secure */
cpu->env.v7m.aircr = value &
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
}
break;
}
+ case 0xf04: /* RFSR */
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
+ goto bad_offset;
+ }
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
+ break;
case 0xf34: /* FPCCR */
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
/* Not all bits here are banked. */
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Note that the SVE BFLOAT16 support does not require SVE2,
it is an independent extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
}

+static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
+{
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
+}
+
static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
}

+static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
+}
+
static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
{
/* We always set the AdvSIMD and FP fields identically. */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
}

+static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
+}
+
static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
--
2.20.1
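The FIELD_EX32/FIELD_EX64 feature tests above are plain bit-field reads of the ID registers. Assuming (per the Arm ARM) that BF16 is the four-bit field at bits [23:20] of ID_ISAR6, the aa32 check expands to roughly this sketch (not the QEMU macro itself):

    #include <stdint.h>
    #include <stdbool.h>

    static bool aa32_bf16_present(uint32_t id_isar6)
    {
        return ((id_isar6 >> 20) & 0xf) != 0;  /* ID_ISAR6.BF16 != 0 */
    }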
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
int rd = extract32(insn, 0, 5);

if (mos) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}

switch (opcode) {
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
/* FCVT between half, single and double precision */
int dtype = extract32(opcode, 0, 2);
if (type == 2 || dtype == type) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}
if (!fp_access_check(s)) {
return;
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)

case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}
/* fall through */
case 0x0 ... 0x3:
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
break;
case 3:
if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}

if (!fp_access_check(s)) {
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
handle_fp_1src_half(s, opcode, rd, rn);
break;
default:
- unallocated_encoding(s);
+ goto do_unallocated;
}
break;

default:
+ do_unallocated:
unallocated_encoding(s);
break;
}
--
2.20.1
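The refactoring pattern above, in miniature: funnel every "this encoding does not exist" path to one label so the bail-out lives in a single place, which the following BFCVT patch then extends. A generic sketch with illustrative names (handle_unallocated stands in for unallocated_encoding(s)):

    static void handle_unallocated(void);  /* hypothetical stand-in */

    static void decode_sketch(int opcode, int type)
    {
        switch (opcode) {
        case 0:
            if (type > 1) {
                goto do_unallocated;
            }
            /* ... translate the instruction ... */
            break;
        default:
        do_unallocated:
            handle_unallocated();
            break;
        }
    }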
Implement the new-in-v8.1M VLDR/VSTR variants which directly
read or write FP system registers to memory.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
---
target/arm/vfp.decode | 14 ++++++
target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++++++
2 files changed, 105 insertions(+)

diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp.decode
+++ b/target/arm/vfp.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp

+# M-profile VLDR/VSTR to sysreg
+%vldr_sysreg 22:1 13:3
+%imm7_0x4 0:7 !function=times_4
+
+&vldr_sysreg rn reg imm a w p
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
+
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+
# We split the load/store multiple up into two patterns to avoid
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
# grouping:
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
return true;
}

+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
+{
+ arg_vldr_sysreg *a = opaque;
+ uint32_t offset = a->imm;
+ TCGv_i32 addr;
+
+ if (!a->a) {
+ offset = - offset;
+ }
+
+ addr = load_reg(s, a->rn);
+ if (a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+ gen_helper_v8m_stackcheck(cpu_env, addr);
+ }
+
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
+ MO_UL | MO_ALIGN | s->be_data);
+ tcg_temp_free_i32(value);
+
+ if (a->w) {
+ /* writeback */
+ if (!a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+ store_reg(s, a->rn, addr);
+ } else {
+ tcg_temp_free_i32(addr);
+ }
+}
+
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
+{
+ arg_vldr_sysreg *a = opaque;
+ uint32_t offset = a->imm;
+ TCGv_i32 addr;
+ TCGv_i32 value = tcg_temp_new_i32();
+
+ if (!a->a) {
+ offset = - offset;
+ }
+
+ addr = load_reg(s, a->rn);
+ if (a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+ gen_helper_v8m_stackcheck(cpu_env, addr);
+ }
+
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
+ MO_UL | MO_ALIGN | s->be_data);
+
+ if (a->w) {
+ /* writeback */
+ if (!a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+ store_reg(s, a->rn, addr);
+ } else {
+ tcg_temp_free_i32(addr);
+ }
+ return value;
+}
+
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+ return false;
+ }
+ if (a->rn == 15) {
+ return false;
+ }
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
+}
+
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+ return false;
+ }
+ if (a->rn == 15) {
+ return false;
+ }
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
+}
+
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
{
TCGv_i32 tmp;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This is the 64-bit BFCVT and the 32-bit VCVT{B,T}.BF16.F32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/vfp.decode | 2 ++
target/arm/translate-a64.c | 19 +++++++++++++++++++
target/arm/translate-vfp.c | 24 ++++++++++++++++++++++++
target/arm/vfp_helper.c | 5 +++++
5 files changed, 51 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)

DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
+DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)

DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp.decode
+++ b/target/arm/vfp.decode
@@ -XXX,XX +XXX,XX @@ VCVT_f64_f16 ---- 1110 1.11 0010 .... 1011 t:1 1.0 .... \

# VCVTB and VCVTT to f16: Vd format is always vd_sp;
# Vm format depends on size bit
+VCVT_b16_f32 ---- 1110 1.11 0011 .... 1001 t:1 1.0 .... \
+ vd=%vd_sp vm=%vm_sp
VCVT_f16_f32 ---- 1110 1.11 0011 .... 1010 t:1 1.0 .... \
vd=%vd_sp vm=%vm_sp
VCVT_f16_f64 ---- 1110 1.11 0011 .... 1011 t:1 1.0 .... \
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
case 0x3: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, cpu_env);
goto done;
+ case 0x6: /* BFCVT */
+ gen_fpst = gen_helper_bfcvt;
+ break;
case 0x8: /* FRINTN */
case 0x9: /* FRINTP */
case 0xa: /* FRINTM */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
}
break;

+ case 0x6:
+ switch (type) {
+ case 1: /* BFCVT */
+ if (!dc_isar_feature(aa64_bf16, s)) {
+ goto do_unallocated;
+ }
+ if (!fp_access_check(s)) {
+ return;
+ }
+ handle_fp_1src_single(s, opcode, rd, rn);
+ break;
+ default:
+ goto do_unallocated;
+ }
+ break;
+
default:
do_unallocated:
unallocated_encoding(s);
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
return true;
}

+static bool trans_VCVT_b16_f32(DisasContext *s, arg_VCVT_b16_f32 *a)
+{
+ TCGv_ptr fpst;
+ TCGv_i32 tmp;
+
+ if (!dc_isar_feature(aa32_bf16, s)) {
+ return false;
+ }
+
+ if (!vfp_access_check(s)) {
+ return true;
+ }
+
+ fpst = fpstatus_ptr(FPST_FPCR);
+ tmp = tcg_temp_new_i32();
+
+ vfp_load_reg32(tmp, a->vm);
+ gen_helper_bfcvt(tmp, tmp, fpst);
+ tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
+ tcg_temp_free_ptr(fpst);
+ tcg_temp_free_i32(tmp);
+ return true;
+}
+
static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
{
TCGv_ptr fpst;
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
return float64_to_float32(x, &env->vfp.fp_status);
}

+uint32_t HELPER(bfcvt)(float32 x, void *status)
+{
+ return float32_to_bfloat16(x, status);
+}
+
/*
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
* must always round-to-nearest; the AArch64 ones honour the FPSCR
--
2.20.1
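The core of a float32-to-bfloat16 narrowing like the one float32_to_bfloat16() performs is keeping the top 16 bits of the float32 pattern with round-to-nearest-even; QEMU's softfloat version additionally honours the FP status flags and NaN rules, which this standalone sketch deliberately ignores:

    #include <stdint.h>
    #include <string.h>

    static uint16_t bfcvt_sketch(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof(bits));
        /* add 0x7fff, plus 1 more if the result's LSB is set: ties-to-even */
        uint32_t round = 0x7fffu + ((bits >> 16) & 1);
        return (uint16_t)((bits + round) >> 16);
    }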
Implement the v8.1M VSCCLRM insn, which zeros floating point
registers if there is an active floating point context.
This requires support in write_neon_element32() for the MO_32
element size, so add it.

Because we want to use arm_gen_condlabel(), we need to move
the definition of that function up in translate.c so it is
before the #include of translate-vfp.c.inc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
---
target/arm/cpu.h | 9 ++++
target/arm/m-nocp.decode | 8 +++-
target/arm/translate.c | 21 +++++----
target/arm/translate-vfp.c.inc | 84 ++++++++++++++++++++++++++++++++++
4 files changed, 111 insertions(+), 11 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
}

+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
+{
+ /*
+ * Return true if M-profile state handling insns
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
+ */
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
+}
+
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
{
/* Sadly this is encoded differently for A-profile and M-profile */
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m-nocp.decode
+++ b/target/arm/m-nocp.decode
@@ -XXX,XX +XXX,XX @@
# If the coprocessor is not present or disabled then we will generate
# the NOCP exception; otherwise we let the insn through to the main decode.

+%vd_dp 22:1 12:4
+%vd_sp 12:4 22:1
+
&nocp cp

{
# Special cases which do not take an early NOCP: VLLDM and VLSTM
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
- # TODO: VSCCLRM (new in v8.1M) is similar:
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
+ # VSCCLRM (new in v8.1M) is similar:
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2

NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
a64_translate_init();
}

+/* Generate a label used for skipping this instruction */
+static void arm_gen_condlabel(DisasContext *s)
+{
+ if (!s->condjmp) {
+ s->condlabel = gen_new_label();
+ s->condjmp = 1;
+ }
+}
+
/* Flags for the disas_set_da_iss info argument:
* lower bits hold the Rt register number, higher bits are flags.
*/
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
long off = neon_element_offset(reg, ele, memop);

switch (memop) {
+ case MO_32:
+ tcg_gen_st32_i64(src, cpu_env, off);
+ break;
case MO_64:
tcg_gen_st_i64(src, cpu_env, off);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
s->base.is_jmp = DISAS_UPDATE_EXIT;
}

-/* Generate a label used for skipping this instruction */
-static void arm_gen_condlabel(DisasContext *s)
-{
- if (!s->condjmp) {
- s->condlabel = gen_new_label();
- s->condjmp = 1;
- }
-}
-
/* Skip this instruction if the ARM condition is false */
static void arm_skip_unless(DisasContext *s, uint32_t cond)
{
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
return true;
}

+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)

From: Richard Henderson <richard.henderson@linaro.org>

This is BFCVT{N,T} for both AArch64 AdvSIMD and SVE,
and VCVT.BF16.F32 for AArch32 NEON.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 4 ++++
target/arm/helper.h | 1 +
target/arm/neon-dp.decode | 1 +
target/arm/sve.decode | 2 ++
target/arm/sve_helper.c | 2 ++
target/arm/translate-a64.c | 17 ++++++++++++++
target/arm/translate-neon.c | 45 +++++++++++++++++++++++++++++++++++++
target/arm/translate-sve.c | 16 +++++++++++++
target/arm/vfp_helper.c | 7 ++++++
9 files changed, 95 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_bfcvt, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_bfcvtnt, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
+DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, ptr)

DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
VRINTZ 1111 001 11 . 11 .. 10 .... 0 1011 . . 0 .... @2misc

VCVT_F16_F32 1111 001 11 . 11 .. 10 .... 0 1100 0 . 0 .... @2misc_q0
+VCVT_B16_F32 1111 001 11 . 11 .. 10 .... 0 1100 1 . 0 .... @2misc_q0

VRINTM 1111 001 11 . 11 .. 10 .... 0 1101 . . 0 .... @2misc

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FNMLS_zpzzz 01100101 .. 1 ..... 111 ... ..... ..... @rdn_pg_rm_ra
# SVE floating-point convert precision
FCVT_sh 01100101 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
FCVT_hs 01100101 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
+BFCVT 01100101 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
FCVT_dh 01100101 11 0010 00 101 ... ..... ..... @rd_pg_rn_e0
FCVT_hd 01100101 11 0010 01 101 ... ..... ..... @rd_pg_rn_e0
FCVT_ds 01100101 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
@@ -XXX,XX +XXX,XX @@ RAX1 01000101 00 1 ..... 11110 1 ..... ..... @rd_rn_rm_e0
FCVTXNT_ds 01100100 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
FCVTX_ds 01100101 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
FCVTNT_sh 01100100 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
+BFCVTNT 01100100 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
FCVTLT_hs 01100100 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
FCVTNT_ds 01100100 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
FCVTLT_sd 01100100 11 0010 11 101 ... ..... ..... @rd_pg_rn_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t vfp_float64_to_uint64_rtz(float64 f, float_status *s)

DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
+DO_ZPZ_FP(sve_bfcvt, uint32_t, H1_4, float32_to_bfloat16)
DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
} while (i != 0); \
}

+DO_FCVTNT(sve_bfcvtnt, uint32_t, uint16_t, H1_4, H1_2, float32_to_bfloat16)
DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, , H1_4, float64_to_float32)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
tcg_temp_free_i32(ahp);
}
break;
+ case 0x36: /* BFCVTN, BFCVTN2 */
+ {
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
+ gen_helper_bfcvt_pair(tcg_res[pass], tcg_op, fpst);
+ tcg_temp_free_ptr(fpst);
+ }
+ break;
case 0x56: /* FCVTXN, FCVTXN2 */
/* 64 bit to 32 bit float conversion
* with von Neumann rounding (round to odd)
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
}
handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
return;
+ case 0x36: /* BFCVTN, BFCVTN2 */
+ if (!dc_isar_feature(aa64_bf16, s) || size != 2) {
+ unallocated_encoding(s);
+ return;
+ }
+ if (!fp_access_check(s)) {
+ return;
+ }
+ handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
+ return;
case 0x17: /* FCVTL, FCVTL2 */
if (!fp_access_check(s)) {
return;
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c
+++ b/target/arm/translate-neon.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
return true;
}
151
+static bool trans_VCVT_B16_F32(DisasContext *s, arg_2misc *a)
119
+{
152
+{
120
+ int btmreg, topreg;
153
+ TCGv_ptr fpst;
121
+ TCGv_i64 zero;
154
+ TCGv_i64 tmp;
122
+ TCGv_i32 aspen, sfpa;
155
+ TCGv_i32 dst0, dst1;
123
+
156
+
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
157
+ if (!dc_isar_feature(aa32_bf16, s)) {
125
+ /* Before v8.1M, fall through in decode to NOCP check */
158
+ return false;
126
+ return false;
159
+ }
127
+ }
160
+
128
+
161
+ /* UNDEF accesses to D16-D31 if they don't exist. */
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
162
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
163
+ ((a->vd | a->vm) & 0x10)) {
131
+ unallocated_encoding(s);
164
+ return false;
132
+ return true;
165
+ }
133
+ }
166
+
134
+
167
+ if ((a->vm & 1) || (a->size != 1)) {
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
168
+ return false;
136
+ /* NOP if we have neither FP nor MVE */
137
+ return true;
138
+ }
139
+
140
+ /*
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
142
+ * active floating point context so we must NOP (without doing
143
+ * any lazy state preservation or the NOCP check).
144
+ */
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+    /* Convert to Sreg numbers if the insn specified Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
169
+ }
179
+
170
+
180
+ if (!vfp_access_check(s)) {
171
+ if (!vfp_access_check(s)) {
181
+ return true;
172
+ return true;
182
+ }
173
+ }
183
+
174
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
175
+ fpst = fpstatus_ptr(FPST_STD);
185
+ zero = tcg_const_i64(0);
176
+ tmp = tcg_temp_new_i64();
186
+ if (btmreg & 1) {
177
+ dst0 = tcg_temp_new_i32();
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
178
+ dst1 = tcg_temp_new_i32();
188
+ btmreg++;
179
+
189
+ }
180
+ read_neon_element64(tmp, a->vm, 0, MO_64);
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
181
+ gen_helper_bfcvt_pair(dst0, tmp, fpst);
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
182
+
192
+ }
183
+ read_neon_element64(tmp, a->vm, 1, MO_64);
193
+ if (btmreg == topreg) {
184
+ gen_helper_bfcvt_pair(dst1, tmp, fpst);
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
185
+
195
+ btmreg++;
186
+ write_neon_element32(dst0, a->vd, 0, MO_32);
196
+ }
187
+ write_neon_element32(dst1, a->vd, 1, MO_32);
197
+ assert(btmreg == topreg + 1);
188
+
198
+ /* TODO: when MVE is implemented, zero VPR here */
189
+ tcg_temp_free_i64(tmp);
190
+ tcg_temp_free_i32(dst0);
191
+ tcg_temp_free_i32(dst1);
192
+ tcg_temp_free_ptr(fpst);
199
+ return true;
193
+ return true;
200
+}
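/*
 * (Editor's note, a worked example of the range computation above:
 * for VSCCLRM with a->vd = 0 and a->imm = 8 in the single-precision
 * form (size == 2), btmreg = 0 and topreg = 7, so S0..S7 are zeroed;
 * in the double-precision form (size == 3) the same fields are first
 * scaled to Sreg numbers, giving btmreg = 0 and topreg = 15, i.e.
 * D0..D7.)
 */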
194
+}
201
+
195
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
196
static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
203
{
197
{
204
/*
198
TCGv_ptr fpst;
199
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/target/arm/translate-sve.c
202
+++ b/target/arm/translate-sve.c
203
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_hs(DisasContext *s, arg_rpr_esz *a)
204
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hs);
205
}
206
207
+static bool trans_BFCVT(DisasContext *s, arg_rpr_esz *a)
208
+{
209
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
210
+ return false;
211
+ }
212
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvt);
213
+}
214
+
215
static bool trans_FCVT_dh(DisasContext *s, arg_rpr_esz *a)
216
{
217
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_dh);
218
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTNT_sh(DisasContext *s, arg_rpr_esz *a)
219
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_sh);
220
}
221
222
+static bool trans_BFCVTNT(DisasContext *s, arg_rpr_esz *a)
223
+{
224
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
225
+ return false;
226
+ }
227
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvtnt);
228
+}
229
+
230
static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
231
{
232
if (!dc_isar_feature(aa64_sve2, s)) {
233
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
234
index XXXXXXX..XXXXXXX 100644
235
--- a/target/arm/vfp_helper.c
236
+++ b/target/arm/vfp_helper.c
237
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(bfcvt)(float32 x, void *status)
238
return float32_to_bfloat16(x, status);
239
}
240
241
+uint32_t HELPER(bfcvt_pair)(uint64_t pair, void *status)
242
+{
243
+ bfloat16 lo = float32_to_bfloat16(extract64(pair, 0, 32), status);
244
+ bfloat16 hi = float32_to_bfloat16(extract64(pair, 32, 32), status);
245
+ return deposit32(lo, 16, 16, hi);
246
+}
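/*
 * (Editor's note, illustrative: with pair holding 1.0f = 0x3f800000
 * in the low word and 2.0f = 0x40000000 in the high word, both values
 * convert exactly and the helper returns 0x40003f80 -- bfloat16 1.0
 * in bits [15:0] and bfloat16 2.0 in bits [31:16].)
 */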
247
+
248
/*
249
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
250
* must always round-to-nearest; the AArch64 ones honour the FPSCR
--
2.20.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
it for QEMU as well. A53 was already enabled there.

1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

For Arm BFDOT and BFMMLA, we need a version of round-to-odd
that overflows to infinity, instead of the max normal number.

Cc: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/fpu/softfloat-types.h | 4 +++-
fpu/softfloat-parts.c.inc | 6 ++++--
2 files changed, 7 insertions(+), 3 deletions(-)
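Two notes may help here. Round-to-odd ("von Neumann rounding") forces
the least significant kept bit to 1 whenever any discarded bit was
nonzero; a minimal standalone sketch of the idea (illustrative only --
the real code works on FloatPartsN in fpu/softfloat-parts.c.inc, and
this helper name is invented):

    #include <stdint.h>

    /* Drop 'shift' low bits of 'frac', rounding to odd: if any dropped
     * bit was set, force the LSB of the kept bits to 1. */
    static uint64_t round_shift_to_odd(uint64_t frac, unsigned shift)
    {
        uint64_t kept = frac >> shift;
        if (frac & ((UINT64_C(1) << shift) - 1)) {
            kept |= 1;
        }
        return kept;
    }

The two enum values differ only on overflow: float_round_to_odd
saturates to the largest finite number, while the new
float_round_to_odd_inf overflows to infinity, which is what the
BFDOT/BFMMLA helpers require. On the sbsa-ref side, after this patch a
command line along the lines of

    qemu-system-aarch64 -machine sbsa-ref -cpu cortex-a72 ...

should be accepted; previously only the built-in cortex-a57 was allowed.
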
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
16
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/sbsa-ref.c
18
--- a/include/fpu/softfloat-types.h
20
+++ b/hw/arm/sbsa-ref.c
19
+++ b/include/fpu/softfloat-types.h
21
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
20
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
22
[SBSA_GWDT] = 16,
21
float_round_up = 2,
23
};
22
float_round_to_zero = 3,
24
23
float_round_ties_away = 4,
25
+static const char * const valid_cpus[] = {
24
- /* Not an IEEE rounding mode: round to the closest odd mantissa value */
26
+ ARM_CPU_TYPE_NAME("cortex-a53"),
25
+ /* Not an IEEE rounding mode: round to closest odd, overflow to max */
27
+ ARM_CPU_TYPE_NAME("cortex-a57"),
26
float_round_to_odd = 5,
28
+ ARM_CPU_TYPE_NAME("cortex-a72"),
27
+ /* Not an IEEE rounding mode: round to closest odd, overflow to inf */
29
+};
28
+ float_round_to_odd_inf = 6,
30
+
29
} FloatRoundMode;
31
+static bool cpu_type_valid(const char *cpu)
30
32
+{
31
/*
33
+ int i;
32
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
34
+
33
index XXXXXXX..XXXXXXX 100644
35
+ for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
34
--- a/fpu/softfloat-parts.c.inc
36
+ if (strcmp(cpu, valid_cpus[i]) == 0) {
35
+++ b/fpu/softfloat-parts.c.inc
37
+ return true;
36
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
38
+ }
37
g_assert_not_reached();
39
+ }
40
+ return false;
41
+}
42
+
43
static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
44
{
45
uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
46
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
47
const CPUArchIdList *possible_cpus;
48
int n, sbsa_max_cpus;
49
50
- if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
51
- error_report("sbsa-ref: CPU type other than the built-in "
52
- "cortex-a57 not supported");
53
+ if (!cpu_type_valid(machine->cpu_type)) {
54
+ error_report("mach-virt: CPU type %s not supported", machine->cpu_type);
55
exit(1);
56
}
38
}
57
39
40
+ overflow_norm = false;
41
switch (s->float_rounding_mode) {
42
case float_round_nearest_even:
43
- overflow_norm = false;
44
inc = ((p->frac_lo & roundeven_mask) != frac_lsbm1 ? frac_lsbm1 : 0);
45
break;
46
case float_round_ties_away:
47
- overflow_norm = false;
48
inc = frac_lsbm1;
49
break;
50
case float_round_to_zero:
51
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
52
break;
53
case float_round_to_odd:
54
overflow_norm = true;
55
+ /* fall through */
56
+ case float_round_to_odd_inf:
57
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
58
break;
59
default:
60
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
61
? frac_lsbm1 : 0);
62
break;
63
case float_round_to_odd:
64
+ case float_round_to_odd_inf:
65
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
66
break;
67
default:
--
2.20.1

v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)

Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with todo comments for where we will need to add
support later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
---
target/arm/cpu.h | 13 +++++++++++++
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
2 files changed, 40 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 3 +++
target/arm/neon-shared.decode | 2 ++
target/arm/sve.decode | 3 +++
target/arm/translate-a64.c | 20 ++++++++++++++++++
target/arm/translate-neon.c | 9 ++++++++
target/arm/translate-sve.c | 12 +++++++++++
target/arm/vec_helper.c | 40 +++++++++++++++++++++++++++++++++++
7 files changed, 89 insertions(+)
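As a quick illustration of the new register's semantics (an editor's
sketch, not code from the patch; the helper names here are invented):

    #include <stdint.h>

    #define FPCR_QC          (1u << 27)    /* cumulative saturation */
    #define FPCR_NZCV_MASK   (0xfu << 28)  /* N, Z, C, V flags */
    #define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)

    /* VMRS Rt, FPSCR_nzcvqc: bits [26:0] read as zero. With no MVE,
     * QC is treated as RES0, so effectively only NZCV is returned. */
    static uint32_t read_nzcvqc(uint32_t fpscr)
    {
        return fpscr & FPCR_NZCV_MASK;
    }

    /* VMSR FPSCR_nzcvqc, Rt: bits [26:0] of the FPSCR are unchanged;
     * only the flag bits are copied in (QC again RES0 without MVE). */
    static uint32_t write_nzcvqc(uint32_t fpscr, uint32_t val)
    {
        return (fpscr & ~FPCR_NZCV_MASK) | (val & FPCR_NZCV_MASK);
    }
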
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
--- a/target/arm/helper.h
22
+++ b/target/arm/cpu.h
23
+++ b/target/arm/helper.h
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_ummla_b, TCG_CALL_NO_RWG,
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
25
DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
26
void, ptr, ptr, ptr, ptr, i32)
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
27
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
28
+DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
28
+#define FPCR_C (1 << 29) /* FP carry flag */
29
+ void, ptr, ptr, ptr, ptr, i32)
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
30
+#define FPCR_N (1 << 31) /* FP negative flag */
31
+
30
+
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
31
#ifdef TARGET_AARCH64
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
32
#include "helper-a64.h"
34
33
#include "helper-sve.h"
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
36
{
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
51
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-vfp.c.inc
36
--- a/target/arm/neon-shared.decode
53
+++ b/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/neon-shared.decode
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
38
@@ -XXX,XX +XXX,XX @@ VUDOT 1111 110 00 . 10 .... .... 1101 . q:1 . 1 .... \
55
case ARM_VFP_FPSCR:
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
56
case QEMU_VFP_FPSCR_NZCV:
40
VUSDOT 1111 110 01 . 10 .... .... 1101 . q:1 . 0 .... \
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
42
+VDOT_b16 1111 110 00 . 00 .... .... 1101 . q:1 . 0 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
44
45
# VFM[AS]L
46
VFML 1111 110 0 s:1 . 10 .... .... 1000 . 0 . 1 .... \
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/sve.decode
50
+++ b/target/arm/sve.decode
51
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
52
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
53
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
54
55
+### SVE2 floating-point bfloat16 dot-product
56
+BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
57
+
58
### SVE2 floating-point multiply-add long (indexed)
59
FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
60
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
61
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/translate-a64.c
64
+++ b/target/arm/translate-a64.c
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
66
}
67
feature = dc_isar_feature(aa64_fcma, s);
57
break;
68
break;
58
+ case ARM_VFP_FPSCR_NZCVQC:
69
+ case 0x1f: /* BFDOT */
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
70
+ switch (size) {
60
+ return false;
71
+ case 1:
72
+ feature = dc_isar_feature(aa64_bf16, s);
73
+ break;
74
+ default:
75
+ unallocated_encoding(s);
76
+ return;
61
+ }
77
+ }
62
+ break;
78
+ break;
63
default:
79
default:
64
return FPSysRegCheckFailed;
80
unallocated_encoding(s);
65
}
81
return;
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
tcg_temp_free_i32(tmp);
83
}
68
gen_lookup_tb(s);
84
return;
69
break;
85
70
+ case ARM_VFP_FPSCR_NZCVQC:
86
+ case 0xf: /* BFDOT */
71
+ {
87
+ switch (size) {
72
+ TCGv_i32 fpscr;
88
+ case 1:
73
+ tmp = loadfn(s, opaque);
89
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
74
+ /*
90
+ break;
75
+ * TODO: when we implement MVE, write the QC bit.
91
+ default:
76
+ * For non-MVE, QC is RES0.
92
+ g_assert_not_reached();
77
+ */
93
+ }
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
94
+ return;
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
95
+
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
83
+ tcg_temp_free_i32(tmp);
84
+ break;
85
+ }
86
default:
96
default:
87
g_assert_not_reached();
97
g_assert_not_reached();
88
}
98
}
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
99
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
100
index XXXXXXX..XXXXXXX 100644
91
storefn(s, opaque, tmp);
101
--- a/target/arm/translate-neon.c
92
break;
102
+++ b/target/arm/translate-neon.c
93
+ case ARM_VFP_FPSCR_NZCVQC:
103
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSDOT(DisasContext *s, arg_VUSDOT *a)
94
+ /*
104
gen_helper_gvec_usdot_b);
95
+ * TODO: MVE has a QC bit, which we probably won't store
105
}
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
106
97
+ * we can just fall through to the FPSCR_NZCV case.
107
+static bool trans_VDOT_b16(DisasContext *s, arg_VDOT_b16 *a)
98
+ */
108
+{
99
case QEMU_VFP_FPSCR_NZCV:
109
+ if (!dc_isar_feature(aa32_bf16, s)) {
100
/*
110
+ return false;
101
* Read just NZCV; this is a special case to avoid the
111
+ }
112
+ return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
113
+ gen_helper_gvec_bfdot);
114
+}
115
+
116
static bool trans_VFML(DisasContext *s, arg_VFML *a)
117
{
118
int opr_sz;
119
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate-sve.c
122
+++ b/target/arm/translate-sve.c
123
@@ -XXX,XX +XXX,XX @@ static bool trans_UMMLA(DisasContext *s, arg_rrrr_esz *a)
124
{
125
return do_i8mm_zzzz_ool(s, a, gen_helper_gvec_ummla_b, 0);
126
}
127
+
128
+static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
129
+{
130
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
131
+ return false;
132
+ }
133
+ if (sve_access_check(s)) {
134
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot,
135
+ a->rd, a->rn, a->rm, a->ra, 0);
136
+ }
137
+ return true;
138
+}
139
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/target/arm/vec_helper.c
142
+++ b/target/arm/vec_helper.c
143
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
144
DO_MMLA_B(gvec_smmla_b, do_smmla_b)
145
DO_MMLA_B(gvec_ummla_b, do_ummla_b)
146
DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
147
+
148
+/*
149
+ * BFloat16 Dot Product
150
+ */
151
+
152
+static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
153
+{
154
+ /* FPCR is ignored for BFDOT and BFMMLA. */
155
+ float_status bf_status = {
156
+ .tininess_before_rounding = float_tininess_before_rounding,
157
+ .float_rounding_mode = float_round_to_odd_inf,
158
+ .flush_to_zero = true,
159
+ .flush_inputs_to_zero = true,
160
+ .default_nan_mode = true,
161
+ };
162
+ float32 t1, t2;
163
+
164
+ /*
165
+ * Extract each BFloat16 from the element pair, and shift
166
+ * them such that they become float32.
167
+ */
168
+ t1 = float32_mul(e1 << 16, e2 << 16, &bf_status);
169
+ t2 = float32_mul(e1 & 0xffff0000u, e2 & 0xffff0000u, &bf_status);
170
+ t1 = float32_add(t1, t2, &bf_status);
171
+ t1 = float32_add(sum, t1, &bf_status);
172
+
173
+ return t1;
174
+}
175
+
176
+void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
177
+{
178
+ intptr_t i, opr_sz = simd_oprsz(desc);
179
+ float32 *d = vd, *a = va;
180
+ uint32_t *n = vn, *m = vm;
181
+
182
+ for (i = 0; i < opr_sz / 4; ++i) {
183
+ d[i] = bfdotadd(a[i], n[i], m[i]);
184
+ }
185
+ clear_tail(d, opr_sz, simd_maxsz(desc));
186
+}
--
2.20.1

Factor out the code which handles M-profile lazy FP state preservation
from full_vfp_access_check(); accesses to the FPCXT_NS register are
a special case which need to do just this part (corresponding in the
pseudocode to the PreserveFPState() function), and not the full
set of actions matching the pseudocode ExecuteFPCheck() which
normal FP instructions need to do.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
---
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
1 file changed, 27 insertions(+), 18 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 ++
target/arm/neon-shared.decode | 2 ++
target/arm/sve.decode | 3 +++
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++--------
target/arm/translate-neon.c | 9 ++++++++
target/arm/translate-sve.c | 12 ++++++++++
target/arm/vec_helper.c | 20 +++++++++++++++++
7 files changed, 80 insertions(+), 9 deletions(-)
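The indexed form reuses a single element pair of the second operand
across each 128-bit segment. A rough scalar model of the loop in the
new gvec_bfdot_idx helper (editor's sketch: bfdotadd() is the helper
the series really adds, the rest is simplified and endianness fixups
are omitted):

    /* four float32 lanes (= four bfloat16 pairs) per 128-bit segment */
    int eltspersegment = elements < 4 ? elements : 4;

    for (int i = 0; i < elements; i += eltspersegment) {
        uint32_t m_idx = m[i + index];           /* one bf16 pair, reused */
        for (int j = i; j < i + eltspersegment; j++) {
            d[j] = bfdotadd(a[j], n[j], m_idx);  /* two bf16 multiplies */
        }
    }
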
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.c.inc
22
--- a/target/arm/helper.h
19
+++ b/target/arm/translate-vfp.c.inc
23
+++ b/target/arm/helper.h
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
21
return offs;
25
26
DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
27
void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/neon-shared.decode
36
+++ b/target/arm/neon-shared.decode
37
@@ -XXX,XX +XXX,XX @@ VUSDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
38
vn=%vn_dp vd=%vd_dp
39
VSUDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 1 vm:4 \
40
vn=%vn_dp vd=%vd_dp
41
+VDOT_b16_scal 1111 1110 0 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
42
+ vn=%vn_dp vd=%vd_dp
43
44
%vfml_scalar_q0_rm 0:3 5:1
45
%vfml_scalar_q1_index 5:1 3:1
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/sve.decode
49
+++ b/target/arm/sve.decode
50
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
51
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
52
FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
53
FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
54
+
55
+### SVE2 floating-point bfloat16 dot-product (indexed)
56
+BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
57
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate-a64.c
60
+++ b/target/arm/translate-a64.c
61
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
62
return;
63
}
64
break;
65
- case 0x0f: /* SUDOT, USDOT */
66
- if (is_scalar || (size & 1) || !dc_isar_feature(aa64_i8mm, s)) {
67
+ case 0x0f:
68
+ switch (size) {
69
+ case 0: /* SUDOT */
70
+ case 2: /* USDOT */
71
+ if (is_scalar || !dc_isar_feature(aa64_i8mm, s)) {
72
+ unallocated_encoding(s);
73
+ return;
74
+ }
75
+ break;
76
+ case 1: /* BFDOT */
77
+ if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
78
+ unallocated_encoding(s);
79
+ return;
80
+ }
81
+ break;
82
+ default:
83
unallocated_encoding(s);
84
return;
85
}
86
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
87
u ? gen_helper_gvec_udot_idx_b
88
: gen_helper_gvec_sdot_idx_b);
89
return;
90
- case 0x0f: /* SUDOT, USDOT */
91
- gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
92
- extract32(insn, 23, 1)
93
- ? gen_helper_gvec_usdot_idx_b
94
- : gen_helper_gvec_sudot_idx_b);
95
- return;
96
-
97
+ case 0x0f:
98
+ switch (extract32(insn, 22, 2)) {
99
+ case 0: /* SUDOT */
100
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
101
+ gen_helper_gvec_sudot_idx_b);
102
+ return;
103
+ case 1: /* BFDOT */
104
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
105
+ gen_helper_gvec_bfdot_idx);
106
+ return;
107
+ case 2: /* USDOT */
108
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
109
+ gen_helper_gvec_usdot_idx_b);
110
+ return;
111
+ }
112
+ g_assert_not_reached();
113
case 0x11: /* FCMLA #0 */
114
case 0x13: /* FCMLA #90 */
115
case 0x15: /* FCMLA #180 */
116
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate-neon.c
119
+++ b/target/arm/translate-neon.c
120
@@ -XXX,XX +XXX,XX @@ static bool trans_VSUDOT_scalar(DisasContext *s, arg_VSUDOT_scalar *a)
121
gen_helper_gvec_sudot_idx_b);
22
}
122
}
23
123
24
+/*
124
+static bool trans_VDOT_b16_scal(DisasContext *s, arg_VDOT_b16_scal *a)
25
+ * Generate code for M-profile lazy FP state preservation if needed;
26
+ * this corresponds to the pseudocode PreserveFPState() function.
27
+ */
28
+static void gen_preserve_fp_state(DisasContext *s)
29
+{
125
+{
30
+ if (s->v7m_lspact) {
126
+ if (!dc_isar_feature(aa32_bf16, s)) {
31
+ /*
127
+ return false;
32
+ * Lazy state saving affects external memory and also the NVIC,
33
+ * so we must mark it as an IO operation for icount (and cause
34
+ * this to be the last insn in the TB).
35
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
38
+ gen_io_start();
39
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
41
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
43
+ * then it will clear LSPACT; we don't need to repeat this for
44
+ * any further FP insns in this TB.
45
+ */
46
+ s->v7m_lspact = false;
47
+ }
128
+ }
129
+ return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
130
+ gen_helper_gvec_bfdot_idx);
48
+}
131
+}
49
+
132
+
50
/*
133
static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
51
* Check that VFP access is enabled. If it is, do the necessary
134
{
52
* M-profile lazy-FP handling and then return true.
135
int opr_sz;
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
136
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
54
/* Handle M-profile lazy FP state mechanics */
137
index XXXXXXX..XXXXXXX 100644
55
138
--- a/target/arm/translate-sve.c
56
/* Trigger lazy-state preservation if necessary */
139
+++ b/target/arm/translate-sve.c
57
- if (s->v7m_lspact) {
140
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
58
- /*
141
}
59
- * Lazy state saving affects external memory and also the NVIC,
142
return true;
60
- * so we must mark it as an IO operation for icount (and cause
143
}
61
- * this to be the last insn in the TB).
144
+
62
- */
145
+static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
146
+{
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
147
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
65
- gen_io_start();
148
+ return false;
66
- }
149
+ }
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
150
+ if (sve_access_check(s)) {
68
- /*
151
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot_idx,
69
- * If the preserve_fp_state helper doesn't throw an exception
152
+ a->rd, a->rn, a->rm, a->ra, a->index);
70
- * then it will clear LSPACT; we don't need to repeat this for
153
+ }
71
- * any further FP insns in this TB.
154
+ return true;
72
- */
155
+}
73
- s->v7m_lspact = false;
156
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
74
- }
157
index XXXXXXX..XXXXXXX 100644
75
+ gen_preserve_fp_state(s);
158
--- a/target/arm/vec_helper.c
76
159
+++ b/target/arm/vec_helper.c
77
/* Update ownership of FP context: set FPCCR.S to match current state */
160
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
78
if (s->v8m_fpccr_s_wrong) {
161
}
162
clear_tail(d, opr_sz, simd_maxsz(desc));
163
}
164
+
165
+void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
166
+ void *va, uint32_t desc)
167
+{
168
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
169
+ intptr_t index = simd_data(desc);
170
+ intptr_t elements = opr_sz / 4;
171
+ intptr_t eltspersegment = MIN(16 / 4, elements);
172
+ float32 *d = vd, *a = va;
173
+ uint32_t *n = vn, *m = vm;
174
+
175
+ for (i = 0; i < elements; i += eltspersegment) {
176
+ uint32_t m_idx = m[i + H4(index)];
177
+
178
+ for (j = i; j < i + eltspersegment; j++) {
179
+ d[j] = bfdotadd(a[j], n[j], m_idx);
180
+ }
181
+ }
182
+ clear_tail(d, opr_sz, simd_maxsz(desc));
183
+}
--
2.20.1

In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
the general-purpose registers and APSR. Implement this.

The encoding is a subset of the LDMIA T2 encoding, using what would
be Rn=0b1111 (which UNDEFs for LDMIA).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
---
target/arm/t32.decode | 6 +++++-
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
2 files changed, 43 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

This is BFMMLA for both AArch64 AdvSIMD and SVE,
and VMMLA.BF16 for AArch32 NEON.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 3 +++
target/arm/neon-shared.decode | 2 ++
target/arm/sve.decode | 6 +++--
target/arm/translate-a64.c | 10 +++++++++
target/arm/translate-neon.c | 9 ++++++++
target/arm/translate-sve.c | 12 ++++++++++
target/arm/vec_helper.c | 42 ++++++++++++++++++++++++++++++++++-
7 files changed, 81 insertions(+), 3 deletions(-)
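A concrete decode example (editor's note, matching the trans_CLRM()
logic in the patch below): bit i of the 16-bit 'list' field selects
general-purpose register Ri, bit 15 selects APSR, and bit 13 must be
zero. Some hypothetical values:

    list = 0x8003   clears r0, r1 and APSR
    list = 0x2000   bit 13 set: the insn does not decode (UNDEF)
    list = 0x0000   empty list is UNPREDICTABLE; the patch chooses UNDEF

Clearing APSR is done via the v7m MSR helper with mask 0b1100 and
SYSM = 0, i.e. the same operation as "MSR APSR_nzcvqg, Rn" with a zero
source register.
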
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/t32.decode
22
--- a/target/arm/helper.h
18
+++ b/target/arm/t32.decode
23
+++ b/target/arm/helper.h
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
20
25
DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
26
void, ptr, ptr, ptr, ptr, i32)
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
27
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
28
+DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
31
#ifdef TARGET_AARCH64
32
#include "helper-a64.h"
33
#include "helper-sve.h"
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/neon-shared.decode
37
+++ b/target/arm/neon-shared.decode
38
@@ -XXX,XX +XXX,XX @@ VUMMLA 1111 1100 0.10 .... .... 1100 .1.1 .... \
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
40
VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
42
+VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
44
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
46
vn=%vn_dp vd=%vd_dp size=1
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/sve.decode
50
+++ b/target/arm/sve.decode
51
@@ -XXX,XX +XXX,XX @@ SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
52
USDOT_zzzz 01000100 .. 0 ..... 011 110 ..... ..... @rda_rn_rm
53
54
### SVE2 floating point matrix multiply accumulate
55
-
56
-FMMLA 01100100 .. 1 ..... 111001 ..... ..... @rda_rn_rm
24
+{
57
+{
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
58
+ BFMMLA 01100100 01 1 ..... 111 001 ..... ..... @rda_rn_rm_e0
26
+ CLRM 1110 1000 1001 1111 list:16
59
+ FMMLA 01100100 .. 1 ..... 111 001 ..... ..... @rda_rn_rm
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
28
+}
60
+}
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
61
30
62
### SVE2 Memory Gather Load Group
31
&rfe !extern rn w pu
63
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
64
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
66
--- a/target/arm/translate-a64.c
35
+++ b/target/arm/translate.c
67
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
68
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
37
return do_ldm(s, a, 1);
69
}
70
feature = dc_isar_feature(aa64_fcma, s);
71
break;
72
+ case 0x1d: /* BFMMLA */
73
+ if (size != MO_16 || !is_q) {
74
+ unallocated_encoding(s);
75
+ return;
76
+ }
77
+ feature = dc_isar_feature(aa64_bf16, s);
78
+ break;
79
case 0x1f: /* BFDOT */
80
switch (size) {
81
case 1:
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
83
}
84
return;
85
86
+ case 0xd: /* BFMMLA */
87
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
88
+ return;
89
case 0xf: /* BFDOT */
90
switch (size) {
91
case 1:
92
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate-neon.c
95
+++ b/target/arm/translate-neon.c
96
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSMMLA(DisasContext *s, arg_VUSMMLA *a)
97
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
98
gen_helper_gvec_usmmla_b);
38
}
99
}
39
100
+
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
101
+static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
41
+{
102
+{
42
+ int i;
103
+ if (!dc_isar_feature(aa32_bf16, s)) {
43
+ TCGv_i32 zero;
44
+
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
46
+ return false;
104
+ return false;
47
+ }
105
+ }
106
+ return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
107
+ gen_helper_gvec_bfmmla);
108
+}
109
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/translate-sve.c
112
+++ b/target/arm/translate-sve.c
113
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
114
}
115
return true;
116
}
48
+
117
+
49
+ if (extract32(a->list, 13, 1)) {
118
+static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
119
+{
120
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
50
+ return false;
121
+ return false;
51
+ }
122
+ }
52
+
123
+ if (sve_access_check(s)) {
53
+ if (!a->list) {
124
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfmmla,
54
+ /* UNPREDICTABLE; we choose to UNDEF */
125
+ a->rd, a->rn, a->rm, a->ra, 0);
55
+ return false;
56
+ }
126
+ }
57
+
58
+ zero = tcg_const_i32(0);
59
+ for (i = 0; i < 15; i++) {
60
+ if (extract32(a->list, i, 1)) {
61
+ /* Clear R[i] */
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
63
+ }
64
+ }
65
+ if (extract32(a->list, 15, 1)) {
66
+ /*
67
+ * Clear APSR (by calling the MSR helper with the same argument
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
75
+ return true;
127
+ return true;
76
+}
128
+}
129
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/target/arm/vec_helper.c
132
+++ b/target/arm/vec_helper.c
133
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
134
* Process the entire segment at once, writing back the
135
* results only after we've consumed all of the inputs.
136
*
137
- * Key to indicies by column:
138
+ * Key to indices by column:
139
* i j i j
140
*/
141
sum0 = a[H4(0 + 0)];
142
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
143
}
144
clear_tail(d, opr_sz, simd_maxsz(desc));
145
}
77
+
146
+
78
/*
147
+void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
79
* Branch, branch with link
148
+{
80
*/
149
+ intptr_t s, opr_sz = simd_oprsz(desc);
150
+ float32 *d = vd, *a = va;
151
+ uint32_t *n = vn, *m = vm;
152
+
153
+ for (s = 0; s < opr_sz / 4; s += 4) {
154
+ float32 sum00, sum01, sum10, sum11;
155
+
156
+ /*
157
+ * Process the entire segment at once, writing back the
158
+ * results only after we've consumed all of the inputs.
159
+ *
160
+     * Key to indices by column:
161
+ * i j i k j k
162
+ */
163
+ sum00 = a[s + H4(0 + 0)];
164
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 0)], m[s + H4(0 + 0)]);
165
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 1)], m[s + H4(0 + 1)]);
166
+
167
+ sum01 = a[s + H4(0 + 1)];
168
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 0)], m[s + H4(2 + 0)]);
169
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 1)], m[s + H4(2 + 1)]);
170
+
171
+ sum10 = a[s + H4(2 + 0)];
172
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 0)], m[s + H4(0 + 0)]);
173
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 1)], m[s + H4(0 + 1)]);
174
+
175
+ sum11 = a[s + H4(2 + 1)];
176
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 0)], m[s + H4(2 + 0)]);
177
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 1)], m[s + H4(2 + 1)]);
178
+
179
+ d[s + H4(0 + 0)] = sum00;
180
+ d[s + H4(0 + 1)] = sum01;
181
+ d[s + H4(2 + 0)] = sum10;
182
+ d[s + H4(2 + 1)] = sum11;
183
+ }
184
+ clear_tail(d, opr_sz, simd_maxsz(desc));
185
+}
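/*
 * (Editor's note, illustrative: each group of four float32 lanes above
 * is treated as a 2x2 matrix. sum01, for instance, accumulates into
 * A[0][1] the dot product of row 0 of N with row 1 of M -- the second
 * operand acts transposed -- using two bfdotadd() steps, i.e. four
 * bfloat16 multiplies. The H4() macro only adjusts element order for
 * big-endian hosts.)
 */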
--
2.20.1

From: Vikram Garhwal <fnu.vikram@xilinx.com>

The QTests perform five tests on the Xilinx ZynqMP CAN controller:
they exercise the controller in loopback, sleep and snoop mode, and
they test filtering of incoming CAN messages.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
tests/qtest/meson.build | 1 +
2 files changed, 361 insertions(+)
create mode 100644 tests/qtest/xlnx-can-test.c

From: Richard Henderson <richard.henderson@linaro.org>

This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 3 +++
target/arm/neon-shared.decode | 3 +++
target/arm/sve.decode | 3 +++
target/arm/translate-a64.c | 13 +++++++++----
target/arm/translate-neon.c | 9 +++++++++
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
target/arm/vec_helper.c | 16 ++++++++++++++++
7 files changed, 73 insertions(+), 4 deletions(-)
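BFMLAL{B,T} widens the even (B) or odd (T) bfloat16 elements to
float32 and does a fused multiply-add into the float32 accumulator.
A scalar model of one lane (editor's sketch; fused_muladd() stands in
for softfloat's float32_muladd, and the shift is how a bfloat16 bit
pattern becomes a float32):

    /* sel = 0 for BFMLALB (even elements), 1 for BFMLALT (odd) */
    nn   = (uint32_t)n[2 * i + sel] << 16;  /* widen bf16 -> f32 */
    mm   = (uint32_t)m[2 * i + sel] << 16;
    d[i] = fused_muladd(nn, mm, d[i]);      /* one rounding step */

which is what the new gvec_bfmlal helper's loop does. On the testing
side, the new CAN qtest is registered in tests/qtest/meson.build, so it
should get picked up by the usual aarch64 qtest run (e.g. 'make
check-qtest-aarch64').
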
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
new file mode 100644
21
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX
22
--- a/target/arm/helper.h
21
--- /dev/null
23
+++ b/target/arm/helper.h
22
+++ b/tests/qtest/xlnx-can-test.c
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
23
@@ -XXX,XX +XXX,XX @@
25
DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
24
+/*
26
void, ptr, ptr, ptr, ptr, i32)
25
+ * QTests for the Xilinx ZynqMP CAN controller.
27
26
+ *
28
+DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
27
+ * Copyright (c) 2020 Xilinx Inc.
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
28
+ *
29
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
30
+ *
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
49
+
30
+
50
+#include "qemu/osdep.h"
31
#ifdef TARGET_AARCH64
51
+#include "libqos/libqtest.h"
32
#include "helper-a64.h"
33
#include "helper-sve.h"
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/neon-shared.decode
37
+++ b/target/arm/neon-shared.decode
38
@@ -XXX,XX +XXX,XX @@ VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
39
VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
40
vm=%vm_dp vn=%vn_dp vd=%vd_dp
41
42
+VFMA_b16 1111 110 0 0.11 .... .... 1000 . q:1 . 1 .... \
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
52
+
44
+
53
+/* Base address. */
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
54
+#define CAN0_BASE_ADDR 0xFF060000
46
vn=%vn_dp vd=%vd_dp size=1
55
+#define CAN1_BASE_ADDR 0xFF070000
47
VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
48
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/sve.decode
51
+++ b/target/arm/sve.decode
52
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
53
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
54
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
55
56
+BFMLALB_zzzw 01100100 11 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
57
+BFMLALT_zzzw 01100100 11 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
56
+
58
+
57
+/* Register addresses. */
59
### SVE2 floating-point bfloat16 dot-product
58
+#define R_SRR_OFFSET 0x00
60
BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
59
+#define R_MSR_OFFSET 0x04
61
60
+#define R_SR_OFFSET 0x18
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
61
+#define R_ISR_OFFSET 0x1C
63
index XXXXXXX..XXXXXXX 100644
62
+#define R_ICR_OFFSET 0x24
64
--- a/target/arm/translate-a64.c
63
+#define R_TXID_OFFSET 0x30
65
+++ b/target/arm/translate-a64.c
64
+#define R_TXDLC_OFFSET 0x34
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
65
+#define R_TXDATA1_OFFSET 0x38
67
}
66
+#define R_TXDATA2_OFFSET 0x3C
68
feature = dc_isar_feature(aa64_bf16, s);
67
+#define R_RXID_OFFSET 0x50
69
break;
68
+#define R_RXDLC_OFFSET 0x54
70
- case 0x1f: /* BFDOT */
69
+#define R_RXDATA1_OFFSET 0x58
71
+ case 0x1f:
70
+#define R_RXDATA2_OFFSET 0x5C
72
switch (size) {
71
+#define R_AFR 0x60
73
- case 1:
72
+#define R_AFMR1 0x64
74
+ case 1: /* BFDOT */
73
+#define R_AFIR1 0x68
75
+ case 3: /* BFMLAL{B,T} */
74
+#define R_AFMR2 0x6C
76
feature = dc_isar_feature(aa64_bf16, s);
75
+#define R_AFIR2 0x70
77
break;
76
+#define R_AFMR3 0x74
78
default:
77
+#define R_AFIR3 0x78
79
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
78
+#define R_AFMR4 0x7C
80
case 0xd: /* BFMMLA */
79
+#define R_AFIR4 0x80
81
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
82
return;
83
- case 0xf: /* BFDOT */
84
+ case 0xf:
85
switch (size) {
86
- case 1:
87
+ case 1: /* BFDOT */
88
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
89
break;
90
+ case 3: /* BFMLAL{B,T} */
91
+ gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, false, is_q,
92
+ gen_helper_gvec_bfmlal);
93
+ break;
94
default:
95
g_assert_not_reached();
96
}
97
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
98
index XXXXXXX..XXXXXXX 100644
99
--- a/target/arm/translate-neon.c
100
+++ b/target/arm/translate-neon.c
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
102
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
103
gen_helper_gvec_bfmmla);
104
}
80
+
105
+
81
+/* CAN modes. */
106
+static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
82
+#define CONFIG_MODE 0x00
107
+{
83
+#define NORMAL_MODE 0x00
108
+ if (!dc_isar_feature(aa32_bf16, s)) {
84
+#define LOOPBACK_MODE 0x02
109
+ return false;
85
+#define SNOOP_MODE 0x04
110
+ }
86
+#define SLEEP_MODE 0x01
111
+ return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
87
+#define ENABLE_CAN (1 << 1)
112
+ gen_helper_gvec_bfmlal);
88
+#define STATUS_NORMAL_MODE (1 << 3)
113
+}
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
114
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
90
+#define STATUS_SNOOP_MODE (1 << 12)
115
index XXXXXXX..XXXXXXX 100644
91
+#define STATUS_SLEEP_MODE (1 << 2)
116
--- a/target/arm/translate-sve.c
92
+#define ISR_TXOK (1 << 1)
117
+++ b/target/arm/translate-sve.c
93
+#define ISR_RXOK (1 << 4)
118
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
119
}
120
return true;
121
}
94
+
122
+
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
123
+static bool do_BFMLAL_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
96
+ uint8_t can_timestamp)
97
+{
124
+{
98
+ uint16_t size = 0;
125
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
99
+ uint8_t len = 4;
126
+ return false;
127
+ }
128
+ if (sve_access_check(s)) {
129
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
130
+ unsigned vsz = vec_full_reg_size(s);
100
+
131
+
101
+ while (size < len) {
132
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
133
+ vec_full_reg_offset(s, a->rn),
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
134
+ vec_full_reg_offset(s, a->rm),
104
+ } else {
135
+ vec_full_reg_offset(s, a->ra),
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
136
+ status, vsz, vsz, sel,
106
+ }
137
+ gen_helper_gvec_bfmlal);
107
+
138
+ tcg_temp_free_ptr(status);
108
+ size++;
109
+ }
139
+ }
140
+ return true;
110
+}
141
+}
111
+
142
+
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
143
+static bool trans_BFMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
113
+{
144
+{
114
+ uint32_t int_status;
145
+ return do_BFMLAL_zzzw(s, a, false);
115
+
116
+ /* Read the interrupt on CAN rx. */
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
118
+
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
120
+
121
+ /* Read the RX register data for CAN. */
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
126
+
127
+ /* Clear the RX interrupt. */
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
129
+}
146
+}
130
+
147
+
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
148
+static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
132
+ const uint32_t *buf_tx)
133
+{
149
+{
134
+ uint32_t int_status;
150
+ return do_BFMLAL_zzzw(s, a, true);
151
+}
152
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/vec_helper.c
155
+++ b/target/arm/vec_helper.c
156
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
157
}
158
clear_tail(d, opr_sz, simd_maxsz(desc));
159
}
135
+
160
+
136
+ /* Write the TX register data for CAN. */
161
+void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
162
+ void *stat, uint32_t desc)
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
163
+{
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
165
+ intptr_t sel = simd_data(desc);
166
+ float32 *d = vd, *a = va;
167
+ bfloat16 *n = vn, *m = vm;
141
+
168
+
142
+ /* Read the interrupt on CAN for tx. */
169
+ for (i = 0; i < opr_sz / 4; ++i) {
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
170
+ float32 nn = n[H2(i * 2 + sel)] << 16;
144
+
171
+ float32 mm = m[H2(i * 2 + sel)] << 16;
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
172
+ d[H4(i)] = float32_muladd(nn, mm, a[H4(i)], 0, stat);
146
+
173
+ }
147
+ /* Clear the interrupt for tx. */
174
+ clear_tail(d, opr_sz, simd_maxsz(desc));
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
149
+}
175
+}
150
+
151
+/*
152
+ * This test will be transferring data from CAN0 and CAN1 through canbus. CAN0
153
+ * initiate the data transfer to can-bus, CAN1 receives the data. Test compares
154
+ * the data sent from CAN0 with received on CAN1.
155
+ */
156
+static void test_can_bus(void)
157
+{
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
160
+ uint32_t status = 0;
161
+ uint8_t can_timestamp = 1;
162
+
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
164
+ " -object can-bus,id=canbus0"
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
167
+ );
168
+
169
+ /* Configure the CAN0 and CAN1. */
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
174
+
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
178
+
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
181
+
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
183
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
186
+
187
+ qtest_quit(qts);
188
+}
189
+
190
+/*
191
+ * This test is performing loopback mode on CAN0 and CAN1. Data sent from TX of
192
+ * each CAN0 and CAN1 are compared with RX register data for respective CAN.
193
+ */
194
+static void test_can_loopback(void)
195
+{
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
198
+ uint32_t status = 0;
199
+
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
201
+ " -object can-bus,id=canbus0"
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
204
+ );
205
+
206
+ /* Configure the CAN0 in loopback mode. */
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
210
+
211
+ /* Check here if CAN0 is set in loopback mode. */
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
213
+
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
215
+
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
219
+
220
+ /* Configure the CAN1 in loopback mode. */
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
224
+
225
+ /* Check here if CAN1 is set in loopback mode. */
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
227
+
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
229
+
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
234
+ qtest_quit(qts);
235
+}
236
+
237
+/*
238
+ * Enable filters for CAN1. This will filter incoming messages with ID. In this
239
+ * test message will pass through filter 2.
240
+ */
241
+static void test_can_filter(void)
242
+{
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
245
+ uint32_t status = 0;
246
+ uint8_t can_timestamp = 1;
247
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
285
+ qtest_quit(qts);
286
+}
287
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
289
+static void test_can_sleepmode(void)
290
+{
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
293
+ uint32_t status = 0;
294
+ uint8_t can_timestamp = 1;
295
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
297
+ " -object can-bus,id=canbus0"
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
300
+ );
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+ * Once CAN1 sends data on can-bus. CAN0 should exit sleep mode.
321
+ * Check the CAN0 status now. It should exit the sleep mode and receive the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
331
+ qtest_quit(qts);
332
+}
333
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
373
+{
374
+ g_test_init(&argc, &argv, NULL);
375
+
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
381
+
382
+ return g_test_run();
383
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
+   'xlnx-can-test',
    'migration-test']
 
 qtests_s390x = \
-- 
2.20.1

+void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
+                         void *stat, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc);
+    intptr_t sel = simd_data(desc);
+    float32 *d = vd, *a = va;
+    bfloat16 *n = vn, *m = vm;
+
+    for (i = 0; i < opr_sz / 4; ++i) {
+        float32 nn = n[H2(i * 2 + sel)] << 16;
+        float32 mm = m[H2(i * 2 + sel)] << 16;
+        d[H4(i)] = float32_muladd(nn, mm, a[H4(i)], 0, stat);
+    }
+    clear_tail(d, opr_sz, simd_maxsz(desc));
+}
-- 
2.20.1
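A note on the bf16 arithmetic in the helper above: a bfloat16 value is simply
the top sixteen bits of an IEEE-754 float32, so widening it is a plain 16-bit
left shift of the raw bits, which is exactly what the "<< 16" in gvec_bfmlal
does. A minimal standalone sketch of the same conversion (illustrative only,
not part of the series):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Widen a raw bfloat16 bit pattern to float32: shift into the top half. */
    static float bf16_to_f32(uint16_t bf)
    {
        uint32_t bits = (uint32_t)bf << 16;
        float f;
        memcpy(&f, &bits, sizeof(f)); /* reinterpret the bits, no rounding */
        return f;
    }

    int main(void)
    {
        printf("%f\n", bf16_to_f32(0x3f80)); /* 0x3f80 is bfloat16 1.0 */
        return 0;
    }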
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
Add the code in the SG insn implementation for the new behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
     return true;
 }
 
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+                                   uint32_t addr, uint32_t *spdata)
+{
+    /*
+     * Read a word of data from the stack for the SG instruction,
+     * writing the value into *spdata. If the load succeeds, return
+     * true; otherwise pend an appropriate exception and return false.
+     * (We can't use data load helpers here that throw an exception
+     * because of the context we're called in, which is halfway through
+     * arm_v7m_cpu_do_interrupt().)
+     */
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxAttrs attrs = {};
+    MemTxResult txres;
+    target_ulong page_size;
+    hwaddr physaddr;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    ARMCacheAttrs cacheattrs = {};
+    uint32_t value;
+
+    if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
+                      &attrs, &prot, &page_size, &fi, &cacheattrs)) {
+        /* MPU/SAU lookup failed */
+        if (fi.type == ARMFault_QEMU_SFault) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault during stack word read\n");
+            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+            env->v7m.sfar = addr;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        } else {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...MemManageFault during stack word read\n");
+            env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
+                R_V7M_CFSR_MMARVALID_MASK;
+            env->v7m.mmfar[M_REG_S] = addr;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
+        }
+        return false;
+    }
+    value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
+                              attrs, &txres);
+    if (txres != MEMTX_OK) {
+        /* BusFault trying to read the data */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...BusFault during stack word read\n");
+        env->v7m.cfsr[M_REG_NS] |=
+            (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
+        env->v7m.bfar = addr;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
+        return false;
+    }
+
+    *spdata = value;
+    return true;
+}
+
 static bool v7m_handle_execute_nsc(ARMCPU *cpu)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
      */
     qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
                   ", executing it\n", env->regs[15]);
+
+    if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
+        !arm_v7m_is_handler_mode(env)) {
+        /*
+         * v8.1M exception stack frame integrity check. Note that we
+         * must perform the memory access even if CCR_S.TRD is zero
+         * and we aren't going to check what the data loaded is.
+         */
+        uint32_t spdata, sp;
+
+        /*
+         * We know we are currently NS, so the S stack pointers must be
+         * in other_ss_{psp,msp}, not in regs[13]/other_sp.
+         */
+        sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
+        if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
+            /* Stack access failed and an exception has been pended */
+            return false;
+        }
+
+        if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
+            if (((spdata & ~1) == 0xfefa125a) ||
+                !(env->v7m.control[M_REG_S] & 1)) {
+                goto gen_invep;
+            }
+        }
+    }
+
     env->regs[14] &= ~1;
     env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
     switch_v7m_security_state(env, true);
-- 
2.20.1
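For readers following the TRD check in the patch above: the v8-M integrity
signature has two encodings, 0xFEFA125A and 0xFEFA125B (bit 0 reflects the
stack frame type), which is why the code compares with bit 0 masked off. A
small self-checking sketch of that comparison (illustrative only):

    #include <assert.h>
    #include <stdint.h>

    /* Match either integrity signature encoding by ignoring bit 0. */
    static int is_integrity_sig(uint32_t word)
    {
        return (word & ~1u) == 0xfefa125au;
    }

    int main(void)
    {
        assert(is_integrity_sig(0xfefa125au));
        assert(is_integrity_sig(0xfefa125bu));
        assert(!is_integrity_sig(0xfefa1258u));
        return 0;
    }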
From: Richard Henderson <richard.henderson@linaro.org>

This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h           |  2 ++
 target/arm/neon-shared.decode |  2 ++
 target/arm/sve.decode         |  2 ++
 target/arm/translate-a64.c    | 15 ++++++++++++++-
 target/arm/translate-neon.c   | 10 ++++++++++
 target/arm/translate-sve.c    | 30 ++++++++++++++++++++++++++++++
 target/arm/vec_helper.c       | 22 ++++++++++++++++++++++
 7 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -XXX,XX +XXX,XX @@ VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 0 . 1 index:1 ... \
                      rm=%vfml_scalar_q0_rm vn=%vn_sp vd=%vd_dp q=0
 VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 1 . 1 . rm:3 \
                      index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp q=1
+VFMA_b16_scal 1111 1110 0.11 .... .... 1000 . q:1 . 1 . vm:3 \
+                     index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
 FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
 FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
 FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
+BFMLALB_zzxw 01100100 11 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
+BFMLALT_zzxw 01100100 11 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
 
 ### SVE2 floating-point bfloat16 dot-product (indexed)
 BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
+        size = MO_32;
         break;
     case 1: /* BFDOT */
         if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
             unallocated_encoding(s);
             return;
         }
+        size = MO_32;
+        break;
+    case 3: /* BFMLAL{B,T} */
+        if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
+            unallocated_encoding(s);
+            return;
+        }
+        /* can't set is_fp without other incorrect size checks */
+        size = MO_16;
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    size = MO_32;
     break;
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
                          gen_helper_gvec_usdot_idx_b);
         return;
+    case 3: /* BFMLAL{B,T} */
+        gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, 0, (index << 1) | is_q,
+                          gen_helper_gvec_bfmlal_idx);
+        return;
     }
     g_assert_not_reached();
 case 0x11: /* FCMLA #0 */
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c
+++ b/target/arm/translate-neon.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
     return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
                              gen_helper_gvec_bfmlal);
 }
+
+static bool trans_VFMA_b16_scal(DisasContext *s, arg_VFMA_b16_scal *a)
+{
+    if (!dc_isar_feature(aa32_bf16, s)) {
+        return false;
+    }
+    return do_neon_ddda_fpst(s, 6, a->vd, a->vn, a->vm,
+                             (a->index << 1) | a->q, FPST_STD,
+                             gen_helper_gvec_bfmlal_idx);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
 {
     return do_BFMLAL_zzzw(s, a, true);
 }
+
+static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sel)
+{
+    if (!dc_isar_feature(aa64_sve_bf16, s)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
+        unsigned vsz = vec_full_reg_size(s);
+
+        tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+                           vec_full_reg_offset(s, a->rn),
+                           vec_full_reg_offset(s, a->rm),
+                           vec_full_reg_offset(s, a->ra),
+                           status, vsz, vsz, (a->index << 1) | sel,
+                           gen_helper_gvec_bfmlal_idx);
+        tcg_temp_free_ptr(status);
+    }
+    return true;
+}
+
+static bool trans_BFMLALB_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+    return do_BFMLAL_zzxw(s, a, false);
+}
+
+static bool trans_BFMLALT_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+    return do_BFMLAL_zzxw(s, a, true);
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
     }
     clear_tail(d, opr_sz, simd_maxsz(desc));
 }
+
+void HELPER(gvec_bfmlal_idx)(void *vd, void *vn, void *vm,
+                             void *va, void *stat, uint32_t desc)
+{
+    intptr_t i, j, opr_sz = simd_oprsz(desc);
+    intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1);
+    intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 1, 3);
+    intptr_t elements = opr_sz / 4;
+    intptr_t eltspersegment = MIN(16 / 4, elements);
+    float32 *d = vd, *a = va;
+    bfloat16 *n = vn, *m = vm;
+
+    for (i = 0; i < elements; i += eltspersegment) {
+        float32 m_idx = m[H2(2 * i + index)] << 16;
+
+        for (j = i; j < i + eltspersegment; j++) {
+            float32 n_j = n[H2(2 * j + sel)] << 16;
+            d[H4(j)] = float32_muladd(n_j, m_idx, a[H4(j)], 0, stat);
+        }
+    }
+    clear_tail(d, opr_sz, simd_maxsz(desc));
+}
-- 
2.20.1
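The translators above pack the B/T selector into bit 0 of the simd_data field
and the element index into bits [3:1], as "(a->index << 1) | sel", and the
helper recovers both with extract32(). A small self-checking sketch of that
round trip (plain arithmetic, not patch code):

    #include <assert.h>
    #include <stdint.h>

    /* Same operation as QEMU's extract32(value, start, length). */
    static uint32_t extract_bits(uint32_t value, int start, int length)
    {
        return (value >> start) & ((1u << length) - 1);
    }

    int main(void)
    {
        for (uint32_t index = 0; index < 8; index++) {
            for (uint32_t sel = 0; sel < 2; sel++) {
                uint32_t data = (index << 1) | sel;
                assert(extract_bits(data, 0, 1) == sel);
                assert(extract_bits(data, 1, 3) == index);
            }
        }
        return 0;
    }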
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/elfload.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa64_sve_i8mm, ARM_HWCAP2_A64_SVEI8MM);
     GET_FEATURE_ID(aa64_sve_f32mm, ARM_HWCAP2_A64_SVEF32MM);
     GET_FEATURE_ID(aa64_sve_f64mm, ARM_HWCAP2_A64_SVEF64MM);
+    GET_FEATURE_ID(aa64_sve_bf16, ARM_HWCAP2_A64_SVEBF16);
     GET_FEATURE_ID(aa64_i8mm, ARM_HWCAP2_A64_I8MM);
+    GET_FEATURE_ID(aa64_bf16, ARM_HWCAP2_A64_BF16);
     GET_FEATURE_ID(aa64_rndr, ARM_HWCAP2_A64_RNG);
     GET_FEATURE_ID(aa64_bti, ARM_HWCAP2_A64_BTI);
     GET_FEATURE_ID(aa64_mte, ARM_HWCAP2_A64_MTE);
-- 
2.20.1
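With the hwcap bits above exported, a Linux guest binary can probe for the
feature through the auxiliary vector. A sketch of such a probe (the
HWCAP2_BF16 constant is the arm64 kernel uapi value, assumed here rather than
taken from this series):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP2_BF16
    #define HWCAP2_BF16 (1UL << 14) /* assumed arm64 uapi value */
    #endif

    int main(void)
    {
        unsigned long hwcap2 = getauxval(AT_HWCAP2);
        printf("BF16 %ssupported\n", (hwcap2 & HWCAP2_BF16) ? "" : "not ");
        return 0;
    }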
1
In arm_cpu_realizefn() we check whether the board code disabled EL3
1
From: Richard Henderson <richard.henderson@linaro.org>
2
via the has_el3 CPU object property, which we create if the CPU
3
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
4
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
5
the ID_PFR1 and ID_AA64PFR0 registers.
6
2
7
This codepath was incorrectly being taken for M-profile CPUs, which
3
Disable BF16 again for !have_neon and !have_vfp during realize.
8
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
9
the M-profile Security extension and so should have non-zero values
10
in the ID_PFR1.Security field.
11
4
12
Restrict the handling of the feature flag to A/R-profile cores.
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
6
Message-id: 20210525225817.400336-13-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
17
---
9
---
18
target/arm/cpu.c | 2 +-
10
target/arm/cpu.c | 3 +++
19
1 file changed, 1 insertion(+), 1 deletion(-)
11
target/arm/cpu64.c | 3 +++
12
target/arm/cpu_tcg.c | 1 +
13
3 files changed, 7 insertions(+)
20
14
21
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.c
17
--- a/target/arm/cpu.c
24
+++ b/target/arm/cpu.c
18
+++ b/target/arm/cpu.c
25
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
26
}
20
27
}
21
u = cpu->isar.id_isar6;
28
22
u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
29
- if (!cpu->has_el3) {
23
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
30
+ if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
24
cpu->isar.id_isar6 = u;
31
/* If the has_el3 CPU property is disabled then we need to disable the
25
32
* feature.
26
u = cpu->isar.mvfr0;
33
*/
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
28
29
t = cpu->isar.id_aa64isar1;
30
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 0);
31
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 0);
32
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 0);
33
cpu->isar.id_aa64isar1 = t;
34
35
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
36
u = cpu->isar.id_isar6;
37
u = FIELD_DP32(u, ID_ISAR6, DP, 0);
38
u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
39
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
40
u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
41
cpu->isar.id_isar6 = u;
42
43
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/cpu64.c
46
+++ b/target/arm/cpu64.c
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
48
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
49
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
50
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
51
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
52
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
53
t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
54
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
55
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
56
t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
57
t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
58
t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
59
+ t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
60
t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
61
t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
62
t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
63
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
64
u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
65
u = FIELD_DP32(u, ID_ISAR6, SB, 1);
66
u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
67
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
68
u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
69
cpu->isar.id_isar6 = u;
70
71
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/cpu_tcg.c
74
+++ b/target/arm/cpu_tcg.c
75
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
76
t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
77
t = FIELD_DP32(t, ID_ISAR6, SB, 1);
78
t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
79
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
80
t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
81
cpu->isar.id_isar6 = t;
82
34
--
83
--
35
2.20.1
84
2.20.1
36
85
37
86
diff view generated by jsdifflib
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
The Xilinx ZynqMP CAN controller is developed based on SocketCAN, QEMU CAN bus
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
implementation. Bus connection and socketCAN connection for each CAN module
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
can be set through command lines.
5
prepare for support for multiple architectures, let's start moving common
6
6
code out into its own accel directory.
7
Example for using single CAN:
7
8
-object can-bus,id=canbus0 \
8
This patch moves assert_hvf_ok() and introduces generic build infrastructure.
9
-machine xlnx-zcu102.canbus0=canbus0 \
9
10
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Example for connecting both CAN to same virtual CAN on host machine:
12
Message-id: 20210519202253.76782-2-agraf@csgraf.de
13
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
14
-machine xlnx-zcu102.canbus0=canbus0 \
15
-machine xlnx-zcu102.canbus1=canbus1 \
16
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
17
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1
18
19
To create virtual CAN on the host machine, please check the QEMU CAN docs:
20
https://github.com/qemu/qemu/blob/master/docs/can.txt
21
22
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
23
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
15
---
27
meson.build | 1 +
16
include/sysemu/hvf_int.h | 18 +++++++++++++++
28
hw/net/can/trace.h | 1 +
17
accel/hvf/hvf-all.c | 47 ++++++++++++++++++++++++++++++++++++++++
29
include/hw/net/xlnx-zynqmp-can.h | 78 ++
18
target/i386/hvf/hvf.c | 33 +---------------------------
30
hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
19
MAINTAINERS | 8 +++++++
31
hw/Kconfig | 1 +
20
accel/hvf/meson.build | 6 +++++
32
hw/net/can/meson.build | 1 +
21
accel/meson.build | 1 +
33
hw/net/can/trace-events | 9 +
22
6 files changed, 81 insertions(+), 32 deletions(-)
34
7 files changed, 1252 insertions(+)
23
create mode 100644 include/sysemu/hvf_int.h
35
create mode 100644 hw/net/can/trace.h
24
create mode 100644 accel/hvf/hvf-all.c
36
create mode 100644 include/hw/net/xlnx-zynqmp-can.h
25
create mode 100644 accel/hvf/meson.build
37
create mode 100644 hw/net/can/xlnx-zynqmp-can.c
26
38
create mode 100644 hw/net/can/trace-events
27
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
39
40
diff --git a/meson.build b/meson.build
41
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
43
+++ b/meson.build
44
@@ -XXX,XX +XXX,XX @@ if have_system
45
'hw/misc',
46
'hw/misc/macio',
47
'hw/net',
48
+ 'hw/net/can',
49
'hw/nvram',
50
'hw/pci',
51
'hw/pci-host',
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
53
new file mode 100644
28
new file mode 100644
54
index XXXXXXX..XXXXXXX
29
index XXXXXXX..XXXXXXX
55
--- /dev/null
30
--- /dev/null
56
+++ b/hw/net/can/trace.h
31
+++ b/include/sysemu/hvf_int.h
57
@@ -0,0 +1 @@
32
@@ -XXX,XX +XXX,XX @@
58
+#include "trace/trace-hw_net_can.h"
33
+/*
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
34
+ * QEMU Hypervisor.framework (HVF) support
35
+ *
36
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
37
+ * See the COPYING file in the top-level directory.
38
+ *
39
+ */
40
+
41
+/* header to be included in HVF-specific code */
42
+
43
+#ifndef HVF_INT_H
44
+#define HVF_INT_H
45
+
46
+#include <Hypervisor/hv.h>
47
+
48
+void assert_hvf_ok(hv_return_t ret);
49
+
50
+#endif
51
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
60
new file mode 100644
52
new file mode 100644
61
index XXXXXXX..XXXXXXX
53
index XXXXXXX..XXXXXXX
62
--- /dev/null
54
--- /dev/null
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
55
+++ b/accel/hvf/hvf-all.c
64
@@ -XXX,XX +XXX,XX @@
56
@@ -XXX,XX +XXX,XX @@
65
+/*
57
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
58
+ * QEMU Hypervisor.framework support
67
+ *
59
+ *
68
+ * Copyright (c) 2020 Xilinx Inc.
60
+ * This work is licensed under the terms of the GNU GPL, version 2. See
69
+ *
61
+ * the COPYING file in the top-level directory.
70
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
62
+ *
71
+ *
63
+ * Contributions after 2012-01-13 are licensed under the terms of the
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
64
+ * GNU GPL, version 2 or (at your option) any later version.
73
+ * Pavel Pisa.
74
+ *
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
76
+ * of this software and associated documentation files (the "Software"), to deal
77
+ * in the Software without restriction, including without limitation the rights
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
79
+ * copies of the Software, and to permit persons to whom the Software is
80
+ * furnished to do so, subject to the following conditions:
81
+ *
82
+ * The above copyright notice and this permission notice shall be included in
83
+ * all copies or substantial portions of the Software.
84
+ *
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
91
+ * THE SOFTWARE.
92
+ */
65
+ */
93
+
66
+
94
+#ifndef XLNX_ZYNQMP_CAN_H
67
+#include "qemu/osdep.h"
95
+#define XLNX_ZYNQMP_CAN_H
68
+#include "qemu-common.h"
96
+
69
+#include "qemu/error-report.h"
97
+#include "hw/register.h"
70
+#include "sysemu/hvf.h"
98
+#include "net/can_emu.h"
71
+#include "sysemu/hvf_int.h"
99
+#include "net/can_host.h"
72
+
100
+#include "qemu/fifo32.h"
73
+void assert_hvf_ok(hv_return_t ret)
101
+#include "hw/ptimer.h"
74
+{
102
+#include "hw/qdev-clock.h"
75
+ if (ret == HV_SUCCESS) {
103
+
76
+ return;
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
77
+ }
105
+
78
+
106
+#define XLNX_ZYNQMP_CAN(obj) \
79
+ switch (ret) {
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
80
+ case HV_ERROR:
108
+
81
+ error_report("Error: HV_ERROR");
109
+#define MAX_CAN_CTRLS 2
82
+ break;
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
83
+ case HV_BUSY:
111
+#define MAILBOX_CAPACITY 64
84
+ error_report("Error: HV_BUSY");
112
+#define CAN_TIMER_MAX 0XFFFFUL
85
+ break;
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
86
+ case HV_BAD_ARGUMENT:
114
+
87
+ error_report("Error: HV_BAD_ARGUMENT");
115
+/* Each CAN_FRAME will have 4 * 32bit size. */
88
+ break;
116
+#define CAN_FRAME_SIZE 4
89
+ case HV_NO_RESOURCES:
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
90
+ error_report("Error: HV_NO_RESOURCES");
118
+
91
+ break;
119
+typedef struct XlnxZynqMPCANState {
92
+ case HV_NO_DEVICE:
120
+ SysBusDevice parent_obj;
93
+ error_report("Error: HV_NO_DEVICE");
121
+ MemoryRegion iomem;
94
+ break;
122
+
95
+ case HV_UNSUPPORTED:
123
+ qemu_irq irq;
96
+ error_report("Error: HV_UNSUPPORTED");
124
+
97
+ break;
125
+ CanBusClientState bus_client;
98
+ default:
126
+ CanBusState *canbus;
99
+ error_report("Unknown Error");
127
+
100
+ }
128
+ struct {
101
+
129
+ uint32_t ext_clk_freq;
102
+ abort();
130
+ } cfg;
103
+}
131
+
104
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
105
index XXXXXXX..XXXXXXX 100644
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
106
--- a/target/i386/hvf/hvf.c
134
+
107
+++ b/target/i386/hvf/hvf.c
135
+ Fifo32 rx_fifo;
108
@@ -XXX,XX +XXX,XX @@
136
+ Fifo32 tx_fifo;
109
#include "qemu/error-report.h"
137
+ Fifo32 txhpb_fifo;
110
138
+
111
#include "sysemu/hvf.h"
139
+ ptimer_state *can_timer;
112
+#include "sysemu/hvf_int.h"
140
+} XlnxZynqMPCANState;
113
#include "sysemu/runstate.h"
141
+
114
#include "hvf-i386.h"
142
+#endif
115
#include "vmcs.h"
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
116
@@ -XXX,XX +XXX,XX @@
117
118
HVFState *hvf_state;
119
120
-static void assert_hvf_ok(hv_return_t ret)
121
-{
122
- if (ret == HV_SUCCESS) {
123
- return;
124
- }
125
-
126
- switch (ret) {
127
- case HV_ERROR:
128
- error_report("Error: HV_ERROR");
129
- break;
130
- case HV_BUSY:
131
- error_report("Error: HV_BUSY");
132
- break;
133
- case HV_BAD_ARGUMENT:
134
- error_report("Error: HV_BAD_ARGUMENT");
135
- break;
136
- case HV_NO_RESOURCES:
137
- error_report("Error: HV_NO_RESOURCES");
138
- break;
139
- case HV_NO_DEVICE:
140
- error_report("Error: HV_NO_DEVICE");
141
- break;
142
- case HV_UNSUPPORTED:
143
- error_report("Error: HV_UNSUPPORTED");
144
- break;
145
- default:
146
- error_report("Unknown Error");
147
- }
148
-
149
- abort();
150
-}
151
-
152
/* Memory slots */
153
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
154
{
155
diff --git a/MAINTAINERS b/MAINTAINERS
156
index XXXXXXX..XXXXXXX 100644
157
--- a/MAINTAINERS
158
+++ b/MAINTAINERS
159
@@ -XXX,XX +XXX,XX @@ M: Roman Bolshakov <r.bolshakov@yadro.com>
160
W: https://wiki.qemu.org/Features/HVF
161
S: Maintained
162
F: target/i386/hvf/
163
+
164
+HVF
165
+M: Cameron Esfahani <dirty@apple.com>
166
+M: Roman Bolshakov <r.bolshakov@yadro.com>
167
+W: https://wiki.qemu.org/Features/HVF
168
+S: Maintained
169
+F: accel/hvf/
170
F: include/sysemu/hvf.h
171
+F: include/sysemu/hvf_int.h
172
173
WHPX CPUs
174
M: Sunil Muthuswamy <sunilmut@microsoft.com>
175
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
144
new file mode 100644
176
new file mode 100644
145
index XXXXXXX..XXXXXXX
177
index XXXXXXX..XXXXXXX
146
--- /dev/null
178
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
179
+++ b/accel/hvf/meson.build
148
@@ -XXX,XX +XXX,XX @@
180
@@ -XXX,XX +XXX,XX @@
149
+/*
181
+hvf_ss = ss.source_set()
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
182
+hvf_ss.add(files(
151
+ * This implementation is based on the following datasheet:
183
+ 'hvf-all.c',
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
184
+))
153
+ *
185
+
154
+ * Copyright (c) 2020 Xilinx Inc.
186
+specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
155
+ *
187
diff --git a/accel/meson.build b/accel/meson.build
156
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
411
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
415
+ }
416
+
417
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
584
+}
585
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
587
+{
588
+ qemu_can_frame frame;
589
+ uint32_t data[CAN_FRAME_SIZE];
590
+ int i;
591
+ bool can_tx = tx_ready_check(s);
592
+
593
+ if (!can_tx) {
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
595
+
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
597
+ " transfer.\n", path);
598
+ can_update_irq(s);
599
+ return;
600
+ }
601
+
602
+ while (!fifo32_is_empty(fifo)) {
603
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
604
+ data[i] = fifo32_pop(fifo);
605
+ }
606
+
607
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
608
+ /*
609
+ * Controller is in loopback. In Loopback mode, the CAN core
610
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
611
+ * Any message transmitted is looped back to the RX line and
612
+ * acknowledged. The XlnxZynqMPCAN core receives any message
613
+ * that it transmits.
614
+ */
615
+ if (fifo32_is_full(&s->rx_fifo)) {
616
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
617
+ } else {
618
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
619
+ fifo32_push(&s->rx_fifo, data[i]);
620
+ }
621
+
622
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
623
+ }
624
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, core will do software reset then will enter in config mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint8_t multi_mode;
+
+    /*
+     * Multiple mode set check. This is done to make sure user doesn't set
+     * multiple modes.
+     */
+    multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
+
+    if (multi_mode > 1) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to config"
+                      " several modes simultaneously. One mode will be selected"
+                      " according to their priority: LBACK > SLEEP > SNOOP.\n",
+                      path);
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        /* We are in configuration mode, any mode can be selected. */
+        s->regs[R_MODE_SELECT_REGISTER] = val;
+    } else {
+        bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
+
+        ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
+
+        if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
+                          " LBACK mode without setting CEN bit as 0.\n",
+                          path);
+        } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
+                          " SNOOP mode without setting CEN bit as 0.\n",
+                          path);
+        }
+
+        update_status_register_mode_bits(s);
+    }
+
+    return s->regs[R_MODE_SELECT_REGISTER];
+}
+
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    /* Only allow writes when in config mode. */
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
+    }
+
+    return val;
+}
+
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    /* Only allow writes when in config mode. */
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
+    }
+
+    return val;
+}
+
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
+        ptimer_transaction_begin(s->can_timer);
+        ptimer_set_count(s->can_timer, 0);
+        ptimer_transaction_commit(s->can_timer);
+    }
+
+    return 0;
+}
+
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
+{
+    bool filter_pass = false;
+    uint16_t timestamp = 0;
+
+    /* If no filter is enabled. Message will be stored in FIFO. */
+    if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
+        filter_pass = true;
+    }
+
+    /*
+     * Messages that pass any of the acceptance filters will be stored in
+     * the RX FIFO.
+     */
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
+        uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
+        uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
+        uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
+        uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (!filter_pass) {
+        trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
+        return;
+    }
+
+    /* Store the message in fifo if it passed through any of the filters. */
+    if (filter_pass && frame->can_dlc <= MAX_DLC) {
+
+        if (fifo32_is_full(&s->rx_fifo)) {
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
+        } else {
+            timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
+
+            fifo32_push(&s->rx_fifo, frame->can_id);
+
+            fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
+                                               R_RXFIFO_DLC_DLC_LENGTH,
+                                               frame->can_dlc) |
+                                     deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
+                                               R_RXFIFO_DLC_RXT_LENGTH,
+                                               timestamp));
+
+            /* First 32 bit of the data. */
+            fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
+                                               R_TXFIFO_DATA1_DB3_LENGTH,
+                                               frame->data[0]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
+                                               R_TXFIFO_DATA1_DB2_LENGTH,
+                                               frame->data[1]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
+                                               R_TXFIFO_DATA1_DB1_LENGTH,
+                                               frame->data[2]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
+                                               R_TXFIFO_DATA1_DB0_LENGTH,
+                                               frame->data[3]));
+            /* Last 32 bit of the data. */
+            fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
+                                               R_TXFIFO_DATA2_DB7_LENGTH,
+                                               frame->data[4]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
+                                               R_TXFIFO_DATA2_DB6_LENGTH,
+                                               frame->data[5]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
+                                               R_TXFIFO_DATA2_DB5_LENGTH,
+                                               frame->data[6]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
+                                               R_TXFIFO_DATA2_DB4_LENGTH,
+                                               frame->data[7]));
+
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+            trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
+                                   frame->data[0], frame->data[1],
+                                   frame->data[2], frame->data[3],
+                                   frame->data[4], frame->data[5],
+                                   frame->data[6], frame->data[7]);
+        }
+
+        can_update_irq(s);
+    }
+}
+
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (!fifo32_is_empty(&s->rx_fifo)) {
+        val = fifo32_pop(&s->rx_fifo);
+    } else {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
+    }
+
+    can_update_irq(s);
+    return val;
+}
+
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
+    } else {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
+    }
+}
+
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint32_t reg_idx = (reg->access->addr) / 4;
+    uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
+
+    /* To modify an acceptance filter, the corresponding UAF bit should be '0'. */
+    if (!(s->regs[R_AFR] & (1 << filter_number))) {
+        s->regs[reg_idx] = val;
+
+        trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
+                      " mask is not set as corresponding UAF bit is not 0.\n",
+                      path, filter_number + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint32_t reg_idx = (reg->access->addr) / 4;
+    uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
+
+    if (!(s->regs[R_AFR] & (1 << filter_number))) {
+        s->regs[reg_idx] = val;
+
+        trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
+                      " id is not set as corresponding UAF bit is not 0.\n",
+                      path, filter_number + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
+
+    bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
+                             (reg->access->addr == A_TXHPB_DATA2);
+
+    Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
+
+    if (!fifo32_is_full(f)) {
+        fifo32_push(f, val);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
+    }
+
+    /* Initiate the message send if TX register is written. */
+    if (initiate_transfer &&
+        ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        transfer_fifo(s, f);
+    }
+
+    can_update_irq(s);
+}
+
+static const RegisterAccessInfo can_regs_info[] = {
+    { .name = "SOFTWARE_RESET_REGISTER",
+      .addr = A_SOFTWARE_RESET_REGISTER,
+      .rsvd = 0xfffffffc,
+      .pre_write = can_srr_pre_write,
+    },{ .name = "MODE_SELECT_REGISTER",
+      .addr = A_MODE_SELECT_REGISTER,
+      .rsvd = 0xfffffff8,
+      .pre_write = can_msr_pre_write,
+    },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
+      .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
+      .rsvd = 0xffffff00,
+      .pre_write = can_brpr_pre_write,
+    },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
+      .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
+      .rsvd = 0xfffffe00,
+      .pre_write = can_btr_pre_write,
+    },{ .name = "ERROR_COUNTER_REGISTER",
+      .addr = A_ERROR_COUNTER_REGISTER,
+      .rsvd = 0xffff0000,
+      .ro = 0xffffffff,
+    },{ .name = "ERROR_STATUS_REGISTER",
+      .addr = A_ERROR_STATUS_REGISTER,
+      .rsvd = 0xffffffe0,
+      .w1c = 0x1f,
+    },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
+      .reset = 0x1,
+      .rsvd = 0xffffe000,
+      .ro = 0x1fff,
+    },{ .name = "INTERRUPT_STATUS_REGISTER",
+      .addr = A_INTERRUPT_STATUS_REGISTER,
+      .reset = 0x6000,
+      .rsvd = 0xffff8000,
+      .ro = 0x7fff,
+    },{ .name = "INTERRUPT_ENABLE_REGISTER",
+      .addr = A_INTERRUPT_ENABLE_REGISTER,
+      .rsvd = 0xffff8000,
+      .post_write = can_ier_post_write,
+    },{ .name = "INTERRUPT_CLEAR_REGISTER",
+      .addr = A_INTERRUPT_CLEAR_REGISTER,
+      .rsvd = 0xffff8000,
+      .pre_write = can_icr_pre_write,
+    },{ .name = "TIMESTAMP_REGISTER",
+      .addr = A_TIMESTAMP_REGISTER,
+      .rsvd = 0xfffffffe,
+      .pre_write = can_tcr_pre_write,
+    },{ .name = "WIR", .addr = A_WIR,
+      .reset = 0x3f3f,
+      .rsvd = 0xffff0000,
+    },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
+      .rsvd = 0xfffffff,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
+      .rsvd = 0xfffffff,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
+      .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
+      .post_write = can_tx_post_write,
+    },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
+      .ro = 0xffffffff,
+      .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
+      .rsvd = 0xfff0000,
+      .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
+      .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
+      .post_read = can_rxfifo_pre_read,
+    },{ .name = "AFR", .addr = A_AFR,
+      .rsvd = 0xfffffff0,
+      .post_write = can_filter_enable_post_write,
+    },{ .name = "AFMR1", .addr = A_AFMR1,
+      .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR1", .addr = A_AFIR1,
+      .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR2", .addr = A_AFMR2,
+      .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR2", .addr = A_AFIR2,
+      .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR3", .addr = A_AFMR3,
+      .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR3", .addr = A_AFIR3,
+      .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR4", .addr = A_AFMR4,
+      .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR4", .addr = A_AFIR4,
+      .pre_write = can_filter_id_pre_write,
+    }
+};
+
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
+{
+    /* No action required on the timer rollover. */
+}
+
+static const MemoryRegionOps can_ops = {
+    .read = register_read_memory,
+    .write = register_write_memory,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+};
+
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    unsigned int i;
+
+    for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    ptimer_transaction_begin(s->can_timer);
+    ptimer_set_count(s->can_timer, 0);
+    ptimer_transaction_commit(s->can_timer);
+}
+
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    unsigned int i;
+
+    for (i = 0; i < R_RXFIFO_ID; ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    /*
+     * Reset FIFOs when CAN model is reset. This will clear the fifo writes
+     * done by post_write which gets called from register_reset function,
+     * post_write handle will not be able to trigger tx because CAN will be
+     * disabled when software_reset_register is cleared first.
+     */
+    fifo32_reset(&s->rx_fifo);
+    fifo32_reset(&s->tx_fifo);
+    fifo32_reset(&s->txhpb_fifo);
+}
+
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
+{
+    XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
+                                         bus_client);
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
+                      path);
+        return false;
+    }
+
+    if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
+                      " messages will be discarded.\n", path);
+        return false;
+    }
+
+    return true;
+}
+
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
+                               const qemu_can_frame *buf, size_t buf_size) {
+    XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
+                                         bus_client);
+    const qemu_can_frame *frame = buf;
+
+    if (buf_size <= 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
+                      path);
+        return 0;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
+        /* Snoop Mode: Just keep the data. No response back. */
+        update_rx_fifo(s, frame);
+    } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
+        /*
+         * XlnxZynqMPCAN is in sleep mode. Any data on bus will bring it to
+         * the wake up state.
+         */
+        can_exit_sleep_mode(s);
+        update_rx_fifo(s, frame);
+    } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
+        update_rx_fifo(s, frame);
+    } else {
+        /*
+         * XlnxZynqMPCAN will not participate in normal bus communication
+         * and will not receive any messages transmitted by other CAN nodes.
+         */
+        trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
+    }
+
+    return 1;
+}
+
+static CanBusClientInfo can_xilinx_bus_client_info = {
+    .can_receive = xlnx_zynqmp_can_can_receive,
+    .receive = xlnx_zynqmp_can_receive,
+};
+
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
+                                          CanBusState *bus)
+{
+    s->bus_client.info = &can_xilinx_bus_client_info;
+
+    if (can_bus_insert_client(bus, &s->bus_client) < 0) {
+        return -1;
+    }
+    return 0;
+}
+
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
+
+    if (s->canbus) {
+        if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
+                       " failed.", path);
+            return;
+        }
+    }
+
+    /* Create RX FIFO, TXFIFO, TXHPB storage. */
+    fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
+    fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
+    fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
+
+    /* Allocate a new timer. */
+    s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
+                               PTIMER_POLICY_DEFAULT);
+
+    ptimer_transaction_begin(s->can_timer);
+
+    ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
+    ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
+    ptimer_run(s->can_timer, 0);
+    ptimer_transaction_commit(s->can_timer);
+}
+
+static void xlnx_zynqmp_can_init(Object *obj)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+
+    RegisterInfoArray *reg_array;
+
+    memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
+                       XLNX_ZYNQMP_CAN_R_MAX * 4);
+    reg_array = register_init_block32(DEVICE(obj), can_regs_info,
+                                      ARRAY_SIZE(can_regs_info),
+                                      s->reg_info, s->regs,
+                                      &can_ops,
+                                      XLNX_ZYNQMP_CAN_ERR_DEBUG,
+                                      XLNX_ZYNQMP_CAN_R_MAX * 4);
+
+    memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
+    sysbus_init_mmio(sbd, &s->iomem);
+    sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
+}
+
+static const VMStateDescription vmstate_can = {
+    .name = TYPE_XLNX_ZYNQMP_CAN,
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
+        VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
+        VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
+        VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
+        VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
+        VMSTATE_END_OF_LIST(),
+    }
+};
+
+static Property xlnx_zynqmp_can_properties[] = {
+    DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
+                       CAN_DEFAULT_CLOCK),
+    DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
+                     CanBusState *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+
+    rc->phases.enter = xlnx_zynqmp_can_reset_init;
+    rc->phases.hold = xlnx_zynqmp_can_reset_hold;
+    dc->realize = xlnx_zynqmp_can_realize;
+    device_class_set_props(dc, xlnx_zynqmp_can_properties);
+    dc->vmsd = &vmstate_can;
+}
+
+static const TypeInfo can_info = {
+    .name = TYPE_XLNX_ZYNQMP_CAN,
+    .parent = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(XlnxZynqMPCANState),
+    .class_init = xlnx_zynqmp_can_class_init,
+    .instance_init = xlnx_zynqmp_can_init,
+};
+
+static void can_register_types(void)
+{
+    type_register_static(&can_info);
+}
+
+type_init(can_register_types)
diff --git a/hw/Kconfig b/hw/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
 config XLNX_ZYNQMP
     bool
     select REGISTER
+    select CAN_BUS
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/meson.build
+++ b/hw/net/can/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace-events
@@ -XXX,XX +XXX,XX @@
+# xlnx-zynqmp-can.c
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
--
2.20.1

diff --git a/accel/meson.build b/accel/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/accel/meson.build
+++ b/accel/meson.build
@@ -XXX,XX +XXX,XX @@ specific_ss.add(files('accel-common.c'))
 softmmu_ss.add(files('accel-softmmu.c'))
 user_ss.add(files('accel-user.c'))

+subdir('hvf')
 subdir('qtest')
 subdir('kvm')
 subdir('tcg')
--
2.20.1
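(Editorial aside, not part of the patch above: the acceptance-filter logic in update_rx_fifo() boils down to one masked comparison per filter. A minimal standalone C sketch of that predicate, with hypothetical names:)

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A frame passes one Xilinx-style acceptance filter when the bits of
     * the incoming CAN ID selected by the mask (AFMR) equal the matching
     * bits of the programmed filter ID (AFIR).
     */
    static bool filter_matches(uint32_t afmr, uint32_t afir, uint32_t can_id)
    {
        return (afmr & can_id) == (afmr & afir);
    }

A frame is accepted if any enabled filter matches, or if no filter is enabled at all, which is exactly the structure of the four UAF blocks in the patch.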
From: Alexander Graf <agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves the vCPU thread loop over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-3-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 {target/i386 => accel}/hvf/hvf-accel-ops.h | 0
 {target/i386 => accel}/hvf/hvf-accel-ops.c | 0
 target/i386/hvf/x86hvf.c | 2 +-
 accel/hvf/meson.build | 1 +
 target/i386/hvf/meson.build | 1 -
 5 files changed, 2 insertions(+), 2 deletions(-)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)

diff --git a/target/i386/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.h
rename to accel/hvf/hvf-accel-ops.h
diff --git a/target/i386/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.c
rename to accel/hvf/hvf-accel-ops.c
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@
 #include <Hypervisor/hv.h>
 #include <Hypervisor/hv_vmx.h>

-#include "hvf-accel-ops.h"
+#include "accel/hvf/hvf-accel-ops.h"

 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr)
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/meson.build
+++ b/accel/hvf/meson.build
@@ -XXX,XX +XXX,XX @@
 hvf_ss = ss.source_set()
 hvf_ss.add(files(
   'hvf-all.c',
+  'hvf-accel-ops.c',
 ))

 specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/target/i386/hvf/meson.build b/target/i386/hvf/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/meson.build
+++ b/target/i386/hvf/meson.build
@@ -XXX,XX +XXX,XX @@
 i386_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
   'hvf.c',
-  'hvf-accel-ops.c',
   'x86.c',
   'x86_cpuid.c',
   'x86_decode.c',
--
2.20.1
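(Editorial aside: the "vCPU thread loop" relocated by the patch above is the function body that each guest-CPU host thread executes. A rough, illustrative C sketch only; all names are hypothetical and this is not the QEMU implementation:)

    #include <pthread.h>
    #include <stdatomic.h>

    static atomic_bool stop_requested;

    /* Schematic per-vCPU thread: enter the guest, service the exit, repeat. */
    static void *vcpu_thread_fn(void *arg)
    {
        while (!atomic_load(&stop_requested)) {
            /* 1. run the vCPU via the hypervisor's run call */
            /* 2. decode the VM-exit reason (I/O, MMIO, halt, ...) */
            /* 3. emulate the exit and resume the guest */
        }
        return NULL;
    }

Keeping that loop in accel/hvf/ is what lets the later aarch64 backend reuse it rather than duplicating it.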
From: Havard Skinnemoen <hskinnemoen@google.com>

Dump the collected random data after a randomness test failure.

Note that this relies on the test having called
g_test_set_nonfatal_assertions() so we don't abort immediately on the
assertion failure.

Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: minor commit message tweak]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/npcm7xx_rng-test.c
+++ b/tests/qtest/npcm7xx_rng-test.c
@@ -XXX,XX +XXX,XX @@

 #include "libqtest-single.h"
 #include "qemu/bitops.h"
+#include "qemu-common.h"

 #define RNG_BASE_ADDR 0xf000b000

@@ -XXX,XX +XXX,XX @@
 /* Number of bits to collect for randomness tests. */
 #define TEST_INPUT_BITS (128)

+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
+{
+    if (g_test_failed()) {
+        qemu_hexdump(stderr, "", buf, size);
+    }
+}
+
 static void rng_writeb(unsigned int offset, uint8_t value)
 {
     writeb(RNG_BASE_ADDR + offset, value);
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
     }

     g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
+    dump_buf_if_failed(buf, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
     }

     g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
+    dump_buf_if_failed(buf.c, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
     }

     g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
+    dump_buf_if_failed(buf, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
     }

     g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
+    dump_buf_if_failed(buf.c, sizeof(buf));
 }

 int main(int argc, char **argv)
--
2.20.1
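(Editorial aside on the g_test_set_nonfatal_assertions() interaction described in the commit message above; a minimal, self-contained GLib sketch, not taken from the QEMU tree:)

    #include <glib.h>

    static void test_example(void)
    {
        int value = 41;

        /* With nonfatal assertions, a failed assertion calls g_test_fail()
         * and continues instead of aborting, so the dump below still runs. */
        g_assert_cmpint(value, ==, 42);

        if (g_test_failed()) {
            g_printerr("diagnostics: value was %d\n", value);
        }
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        g_test_set_nonfatal_assertions();
        g_test_add_func("/example/dump-on-failure", test_example);
        return g_test_run();
    }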
From: Alexander Graf <agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves CPU and memory operations over. While at it, make sure
the code is consumable on non-i386 systems.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-4-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf_int.h | 4 +
 target/i386/hvf/hvf-i386.h | 2 -
 target/i386/hvf/x86hvf.h | 2 -
 accel/hvf/hvf-accel-ops.c | 308 ++++++++++++++++++++++++++++++++++++-
 target/i386/hvf/hvf.c | 302 ------------------------------
 5 files changed, 311 insertions(+), 307 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@

 #include <Hypervisor/hv.h>

+void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
+hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
+int hvf_put_registers(CPUState *);
+int hvf_get_registers(CPUState *);

 #endif
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -XXX,XX +XXX,XX @@ struct HVFState {
 };
 extern HVFState *hvf_state;

-void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
-hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);

 #ifdef NEED_CPU_H
 /* Functions exported to host specific mode */
diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.h
+++ b/target/i386/hvf/x86hvf.h
@@ -XXX,XX +XXX,XX @@
 #include "x86_descr.h"

 int hvf_process_events(CPUState *);
-int hvf_put_registers(CPUState *);
-int hvf_get_registers(CPUState *);
 bool hvf_inject_interrupts(CPUState *);
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr);
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
+#include "exec/address-spaces.h"
+#include "exec/exec-all.h"
+#include "sysemu/cpus.h"
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "sysemu/runstate.h"
-#include "target/i386/cpu.h"
 #include "qemu/guest-random.h"

 #include "hvf-accel-ops.h"

+HVFState *hvf_state;
+
+/* Memory slots */
+
+hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
+{
+    hvf_slot *slot;
+    int x;
+    for (x = 0; x < hvf_state->num_slots; ++x) {
+        slot = &hvf_state->slots[x];
+        if (slot->size && start < (slot->start + slot->size) &&
+            (start + size) > slot->start) {
+            return slot;
+        }
+    }
+    return NULL;
+}
+
+struct mac_slot {
+    int present;
+    uint64_t size;
+    uint64_t gpa_start;
+    uint64_t gva;
+};
+
+struct mac_slot mac_slots[32];
+
+static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
+{
+    struct mac_slot *macslot;
+    hv_return_t ret;
+
+    macslot = &mac_slots[slot->slot_id];
+
+    if (macslot->present) {
+        if (macslot->size != slot->size) {
+            macslot->present = 0;
+            ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
+            assert_hvf_ok(ret);
+        }
+    }
+
+    if (!slot->size) {
+        return 0;
+    }
+
+    macslot->present = 1;
+    macslot->gpa_start = slot->start;
+    macslot->size = slot->size;
+    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
+    assert_hvf_ok(ret);
+    return 0;
+}
+
+void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
+{
+    hvf_slot *mem;
+    MemoryRegion *area = section->mr;
+    bool writeable = !area->readonly && !area->rom_device;
+    hv_memory_flags_t flags;
+
+    if (!memory_region_is_ram(area)) {
+        if (writeable) {
+            return;
+        } else if (!memory_region_is_romd(area)) {
+            /*
+             * If the memory device is not in romd_mode, then we actually want
+             * to remove the hvf memory slot so all accesses will trap.
+             */
+            add = false;
+        }
+    }
+
+    mem = hvf_find_overlap_slot(
+            section->offset_within_address_space,
+            int128_get64(section->size));
+
+    if (mem && add) {
+        if (mem->size == int128_get64(section->size) &&
+            mem->start == section->offset_within_address_space &&
+            mem->mem == (memory_region_get_ram_ptr(area) +
+            section->offset_within_region)) {
+            return; /* Same region was attempted to register, go away. */
+        }
+    }
+
+    /* Region needs to be reset. set the size to 0 and remap it. */
+    if (mem) {
+        mem->size = 0;
+        if (do_hvf_set_memory(mem, 0)) {
+            error_report("Failed to reset overlapping slot");
+            abort();
+        }
+    }
+
+    if (!add) {
+        return;
+    }
+
+    if (area->readonly ||
+        (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
+        flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
+    } else {
+        flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
+    }
+
+    /* Now make a new slot. */
+    int x;
+
+    for (x = 0; x < hvf_state->num_slots; ++x) {
+        mem = &hvf_state->slots[x];
+        if (!mem->size) {
+            break;
+        }
+    }
+
+    if (x == hvf_state->num_slots) {
+        error_report("No free slots");
+        abort();
+    }
+
+    mem->size = int128_get64(section->size);
+    mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
+    mem->start = section->offset_within_address_space;
+    mem->region = area;
+
+    if (do_hvf_set_memory(mem, flags)) {
+        error_report("Error registering new memory slot");
+        abort();
+    }
+}
+
+static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
+{
+    if (!cpu->vcpu_dirty) {
+        hvf_get_registers(cpu);
+        cpu->vcpu_dirty = true;
+    }
+}
+
+void hvf_cpu_synchronize_state(CPUState *cpu)
+{
+    if (!cpu->vcpu_dirty) {
+        run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
+    }
+}
+
+static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
+                                              run_on_cpu_data arg)
+{
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+void hvf_cpu_synchronize_post_reset(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
+}
+
+static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
+                                             run_on_cpu_data arg)
+{
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+void hvf_cpu_synchronize_post_init(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+}
+
+static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
+                                              run_on_cpu_data arg)
+{
+    cpu->vcpu_dirty = true;
+}
+
+void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+}
+
+static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
+{
+    hvf_slot *slot;
+
+    slot = hvf_find_overlap_slot(
+            section->offset_within_address_space,
+            int128_get64(section->size));
+
+    /* protect region against writes; begin tracking it */
+    if (on) {
+        slot->flags |= HVF_SLOT_LOG;
+        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+                      HV_MEMORY_READ);
+    /* stop tracking region*/
+    } else {
+        slot->flags &= ~HVF_SLOT_LOG;
+        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+                      HV_MEMORY_READ | HV_MEMORY_WRITE);
+    }
+}
+
+static void hvf_log_start(MemoryListener *listener,
+                          MemoryRegionSection *section, int old, int new)
+{
+    if (old != 0) {
+        return;
+    }
+
+    hvf_set_dirty_tracking(section, 1);
+}
+
+static void hvf_log_stop(MemoryListener *listener,
+                         MemoryRegionSection *section, int old, int new)
+{
+    if (new != 0) {
+        return;
+    }
+
+    hvf_set_dirty_tracking(section, 0);
+}
+
+static void hvf_log_sync(MemoryListener *listener,
+                         MemoryRegionSection *section)
+{
+    /*
+     * sync of dirty pages is handled elsewhere; just make sure we keep
+     * tracking the region.
+     */
+    hvf_set_dirty_tracking(section, 1);
+}
+
+static void hvf_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    hvf_set_phys_mem(section, true);
+}
+
+static void hvf_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    hvf_set_phys_mem(section, false);
+}
+
+static MemoryListener hvf_memory_listener = {
+    .priority = 10,
+    .region_add = hvf_region_add,
+    .region_del = hvf_region_del,
+    .log_start = hvf_log_start,
+    .log_stop = hvf_log_stop,
+    .log_sync = hvf_log_sync,
+};
+
+static void dummy_signal(int sig)
+{
+}
+
+bool hvf_allowed;
+
+static int hvf_accel_init(MachineState *ms)
+{
+    int x;
+    hv_return_t ret;
+    HVFState *s;
+
+    ret = hv_vm_create(HV_VM_DEFAULT);
+    assert_hvf_ok(ret);
+
+    s = g_new0(HVFState, 1);
+
+    s->num_slots = 32;
+    for (x = 0; x < s->num_slots; ++x) {
+        s->slots[x].size = 0;
+        s->slots[x].slot_id = x;
+    }
+
+    hvf_state = s;
+    memory_listener_register(&hvf_memory_listener, &address_space_memory);
+    return 0;
+}
+
+static void hvf_accel_class_init(ObjectClass *oc, void *data)
+{
+    AccelClass *ac = ACCEL_CLASS(oc);
+    ac->name = "HVF";
+    ac->init_machine = hvf_accel_init;
+    ac->allowed = &hvf_allowed;
+}
+
+static const TypeInfo hvf_accel_type = {
+    .name = TYPE_HVF_ACCEL,
+    .parent = TYPE_ACCEL,
+    .class_init = hvf_accel_class_init,
+};
+
+static void hvf_type_init(void)
+{
+    type_register_static(&hvf_accel_type);
+}
+
+type_init(hvf_type_init);
+
 /*
  * The HVF-specific vCPU thread function. This one should only run when the host
  * CPU supports the VMX "unrestricted guest" feature.
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@

 #include "hvf-accel-ops.h"

-HVFState *hvf_state;
-
-/* Memory slots */
-hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
-{
-    hvf_slot *slot;
-    int x;
-    for (x = 0; x < hvf_state->num_slots; ++x) {
-        slot = &hvf_state->slots[x];
-        if (slot->size && start < (slot->start + slot->size) &&
-            (start + size) > slot->start) {
-            return slot;
-        }
-    }
-    return NULL;
-}
-
-struct mac_slot {
-    int present;
-    uint64_t size;
-    uint64_t gpa_start;
-    uint64_t gva;
-};
-
-struct mac_slot mac_slots[32];
-
-static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
-{
-    struct mac_slot *macslot;
-    hv_return_t ret;
-
-    macslot = &mac_slots[slot->slot_id];
-
-    if (macslot->present) {
-        if (macslot->size != slot->size) {
-            macslot->present = 0;
-            ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
-            assert_hvf_ok(ret);
-        }
-    }
-
-    if (!slot->size) {
-        return 0;
-    }
-
-    macslot->present = 1;
-    macslot->gpa_start = slot->start;
-    macslot->size = slot->size;
-    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
-    assert_hvf_ok(ret);
-    return 0;
-}
-
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
-{
-    hvf_slot *mem;
-    MemoryRegion *area = section->mr;
-    bool writeable = !area->readonly && !area->rom_device;
-    hv_memory_flags_t flags;
-
-    if (!memory_region_is_ram(area)) {
-        if (writeable) {
-            return;
-        } else if (!memory_region_is_romd(area)) {
-            /*
-             * If the memory device is not in romd_mode, then we actually want
-             * to remove the hvf memory slot so all accesses will trap.
-             */
-            add = false;
-        }
-    }
-
-    mem = hvf_find_overlap_slot(
-            section->offset_within_address_space,
-            int128_get64(section->size));
-
-    if (mem && add) {
-        if (mem->size == int128_get64(section->size) &&
-            mem->start == section->offset_within_address_space &&
-            mem->mem == (memory_region_get_ram_ptr(area) +
-            section->offset_within_region)) {
-            return; /* Same region was attempted to register, go away. */
-        }
-    }
-
-    /* Region needs to be reset. set the size to 0 and remap it. */
-    if (mem) {
-        mem->size = 0;
-        if (do_hvf_set_memory(mem, 0)) {
-            error_report("Failed to reset overlapping slot");
-            abort();
-        }
-    }
-
-    if (!add) {
-        return;
-    }
-
-    if (area->readonly ||
-        (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
-        flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
-    } else {
-        flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
-    }
-
-    /* Now make a new slot. */
-    int x;
-
-    for (x = 0; x < hvf_state->num_slots; ++x) {
-        mem = &hvf_state->slots[x];
-        if (!mem->size) {
-            break;
-        }
-    }
-
-    if (x == hvf_state->num_slots) {
-        error_report("No free slots");
-        abort();
-    }
-
-    mem->size = int128_get64(section->size);
-    mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
-    mem->start = section->offset_within_address_space;
-    mem->region = area;
-
-    if (do_hvf_set_memory(mem, flags)) {
-        error_report("Error registering new memory slot");
-        abort();
-    }
-}
-
 void vmx_update_tpr(CPUState *cpu)
 {
     /* TODO: need integrate APIC handling */
@@ -XXX,XX +XXX,XX @@ void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer,
     }
 }

-static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
-{
-    if (!cpu->vcpu_dirty) {
-        hvf_get_registers(cpu);
-        cpu->vcpu_dirty = true;
-    }
-}
-
-void hvf_cpu_synchronize_state(CPUState *cpu)
-{
-    if (!cpu->vcpu_dirty) {
-        run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
-    }
-}
-
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
-}
-
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
-                                             run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
-}
-
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    cpu->vcpu_dirty = true;
-}
-
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
-}
-
 static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
 {
     int read, write;
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
     return false;
 }

-static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
-{
-    hvf_slot *slot;
-
-    slot = hvf_find_overlap_slot(
-            section->offset_within_address_space,
-            int128_get64(section->size));
-
-    /* protect region against writes; begin tracking it */
-    if (on) {
-        slot->flags |= HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ);
-    /* stop tracking region*/
-    } else {
-        slot->flags &= ~HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ | HV_MEMORY_WRITE);
-    }
-}
-
-static void hvf_log_start(MemoryListener *listener,
-                          MemoryRegionSection *section, int old, int new)
-{
-    if (old != 0) {
-        return;
-    }
-
-    hvf_set_dirty_tracking(section, 1);
-}
-
-static void hvf_log_stop(MemoryListener *listener,
-                         MemoryRegionSection *section, int old, int new)
-{
-    if (new != 0) {
-        return;
-    }
-
-    hvf_set_dirty_tracking(section, 0);
-}
-
-static void hvf_log_sync(MemoryListener *listener,
-                         MemoryRegionSection *section)
-{
-    /*
-     * sync of dirty pages is handled elsewhere; just make sure we keep
-     * tracking the region.
-     */
-    hvf_set_dirty_tracking(section, 1);
-}
-
-static void hvf_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    hvf_set_phys_mem(section, true);
-}
-
-static void hvf_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    hvf_set_phys_mem(section, false);
-}
-
-static MemoryListener hvf_memory_listener = {
-    .priority = 10,
-    .region_add = hvf_region_add,
-    .region_del = hvf_region_del,
-    .log_start = hvf_log_start,
-    .log_stop = hvf_log_stop,
-    .log_sync = hvf_log_sync,
-};
-
 void hvf_vcpu_destroy(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ void hvf_vcpu_destroy(CPUState *cpu)
     assert_hvf_ok(ret);
 }

-static void dummy_signal(int sig)
-{
-}
-
 static void init_tsc_freq(CPUX86State *env)
 {
     size_t length;
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)

     return ret;
 }
-
-bool hvf_allowed;
-
-static int hvf_accel_init(MachineState *ms)
-{
-    int x;
-    hv_return_t ret;
-    HVFState *s;
-
-    ret = hv_vm_create(HV_VM_DEFAULT);
-    assert_hvf_ok(ret);
-
-    s = g_new0(HVFState, 1);
-
-    s->num_slots = 32;
-    for (x = 0; x < s->num_slots; ++x) {
-        s->slots[x].size = 0;
-        s->slots[x].slot_id = x;
-    }
-
-    hvf_state = s;
-    memory_listener_register(&hvf_memory_listener, &address_space_memory);
-    return 0;
-}
-
-static void hvf_accel_class_init(ObjectClass *oc, void *data)
-{
-    AccelClass *ac = ACCEL_CLASS(oc);
-    ac->name = "HVF";
-    ac->init_machine = hvf_accel_init;
-    ac->allowed = &hvf_allowed;
-}
-
-static const TypeInfo hvf_accel_type = {
-    .name = TYPE_HVF_ACCEL,
-    .parent = TYPE_ACCEL,
-    .class_init = hvf_accel_class_init,
-};
-
-static void hvf_type_init(void)
-{
-    type_register_static(&hvf_accel_type);
-}
-
-type_init(hvf_type_init);
--
2.20.1
diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
argument of type "unsigned int".
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
5
7
6
Reported-by: Euler Robot <euler.robot@huawei.com>
8
This patch moves a few internal struct and constant defines over.
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
9
8
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-5-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
15
---
12
hw/misc/imx6ul_ccm.c | 4 ++--
16
include/sysemu/hvf_int.h | 30 ++++++++++++++++++++++++++++++
13
1 file changed, 2 insertions(+), 2 deletions(-)
17
target/i386/hvf/hvf-i386.h | 31 +------------------------------
18
2 files changed, 31 insertions(+), 30 deletions(-)
14
19
15
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
20
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx6ul_ccm.c
22
--- a/include/sysemu/hvf_int.h
18
+++ b/hw/misc/imx6ul_ccm.c
23
+++ b/include/sysemu/hvf_int.h
19
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
24
@@ -XXX,XX +XXX,XX @@
20
case CCM_CMEOR:
25
21
return "CMEOR";
26
#include <Hypervisor/hv.h>
22
default:
27
23
- sprintf(unknown, "%d ?", reg);
28
+/* hvf_slot flags */
24
+ sprintf(unknown, "%u ?", reg);
29
+#define HVF_SLOT_LOG (1 << 0)
25
return unknown;
30
+
26
}
31
+typedef struct hvf_slot {
27
}
32
+ uint64_t start;
28
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
33
+ uint64_t size;
29
case USB_ANALOG_DIGPROG:
34
+ uint8_t *mem;
30
return "USB_ANALOG_DIGPROG";
35
+ int slot_id;
31
default:
36
+ uint32_t flags;
32
- sprintf(unknown, "%d ?", reg);
37
+ MemoryRegion *region;
33
+ sprintf(unknown, "%u ?", reg);
38
+} hvf_slot;
34
return unknown;
39
+
35
}
40
+typedef struct hvf_vcpu_caps {
36
}
41
+ uint64_t vmx_cap_pinbased;
42
+ uint64_t vmx_cap_procbased;
43
+ uint64_t vmx_cap_procbased2;
44
+ uint64_t vmx_cap_entry;
45
+ uint64_t vmx_cap_exit;
46
+ uint64_t vmx_cap_preemption_timer;
47
+} hvf_vcpu_caps;
48
+
49
+struct HVFState {
50
+ AccelState parent;
51
+ hvf_slot slots[32];
52
+ int num_slots;
53
+
54
+ hvf_vcpu_caps *hvf_caps;
55
+};
56
+extern HVFState *hvf_state;
57
+
58
void hvf_set_phys_mem(MemoryRegionSection *, bool);
59
void assert_hvf_ok(hv_return_t ret);
60
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
61
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/i386/hvf/hvf-i386.h
64
+++ b/target/i386/hvf/hvf-i386.h
65
@@ -XXX,XX +XXX,XX @@
66
67
#include "qemu/accel.h"
68
#include "sysemu/hvf.h"
69
+#include "sysemu/hvf_int.h"
70
#include "cpu.h"
71
#include "x86.h"
72
73
-/* hvf_slot flags */
74
-#define HVF_SLOT_LOG (1 << 0)
75
-
76
-typedef struct hvf_slot {
77
- uint64_t start;
78
- uint64_t size;
79
- uint8_t *mem;
80
- int slot_id;
81
- uint32_t flags;
82
- MemoryRegion *region;
83
-} hvf_slot;
84
-
85
-typedef struct hvf_vcpu_caps {
86
- uint64_t vmx_cap_pinbased;
87
- uint64_t vmx_cap_procbased;
88
- uint64_t vmx_cap_procbased2;
89
- uint64_t vmx_cap_entry;
90
- uint64_t vmx_cap_exit;
91
- uint64_t vmx_cap_preemption_timer;
92
-} hvf_vcpu_caps;
93
-
94
-struct HVFState {
95
- AccelState parent;
96
- hvf_slot slots[32];
97
- int num_slots;
98
-
99
- hvf_vcpu_caps *hvf_caps;
100
-};
101
-extern HVFState *hvf_state;
102
-
103
void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
104
105
#ifdef NEED_CPU_H
37
--
106
--
38
2.20.1
107
2.20.1
39
108
40
109
diff view generated by jsdifflib
1
From: Kunkun Jiang <jiangkunkun@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Accroding to the SMMUv3 spec, the SPAN field of Level1 Stream Table
3
The hvf_set_phys_mem() function is only called within the same file.
4
Descriptor is 5 bits([4:0]).
4
Make it static.
5
5
6
Fixes: 9bde7f0674f(hw/arm/smmuv3: Implement translate callback)
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
8
Message-id: 20210519202253.76782-6-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Acked-by: Eric Auger <eric.auger@redhat.com>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/arm/smmuv3-internal.h | 2 +-
12
include/sysemu/hvf_int.h | 1 -
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
accel/hvf/hvf-accel-ops.c | 2 +-
14
2 files changed, 1 insertion(+), 2 deletions(-)
15
15
16
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
16
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/smmuv3-internal.h
18
--- a/include/sysemu/hvf_int.h
19
+++ b/hw/arm/smmuv3-internal.h
19
+++ b/include/sysemu/hvf_int.h
20
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
20
@@ -XXX,XX +XXX,XX @@ struct HVFState {
21
return hi << 32 | lo;
21
};
22
extern HVFState *hvf_state;
23
24
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
25
void assert_hvf_ok(hv_return_t ret);
26
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
27
int hvf_put_registers(CPUState *);
28
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/accel/hvf/hvf-accel-ops.c
31
+++ b/accel/hvf/hvf-accel-ops.c
32
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
33
return 0;
22
}
34
}
23
35
24
-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
36
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
25
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))
37
+static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
26
38
{
27
#endif
39
hvf_slot *mem;
40
MemoryRegion *area = section->mr;
28
--
41
--
29
2.20.1
42
2.20.1
30
43
31
44
diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
The ARM version of Hypervisor.framework no longer defines these two
4
argument of type "unsigned int".
4
types, so let's just revert to standard ones.
5
5
6
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
8
Message-id: 20210519202253.76782-7-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
12
accel/hvf/hvf-accel-ops.c | 6 +++---
13
hw/misc/imx6_src.c | 2 +-
13
1 file changed, 3 insertions(+), 3 deletions(-)
14
2 files changed, 11 insertions(+), 11 deletions(-)
15
14
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
15
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx6_ccm.c
17
--- a/accel/hvf/hvf-accel-ops.c
19
+++ b/hw/misc/imx6_ccm.c
18
+++ b/accel/hvf/hvf-accel-ops.c
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
19
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
21
case CCM_CMEOR:
20
macslot->present = 1;
22
return "CMEOR";
21
macslot->gpa_start = slot->start;
23
default:
22
macslot->size = slot->size;
24
- sprintf(unknown, "%d ?", reg);
23
- ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
25
+ sprintf(unknown, "%u ?", reg);
24
+ ret = hv_vm_map(slot->mem, slot->start, slot->size, flags);
26
return unknown;
25
assert_hvf_ok(ret);
27
}
26
return 0;
28
}
27
}
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
28
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
30
case USB_ANALOG_DIGPROG:
29
/* protect region against writes; begin tracking it */
31
return "USB_ANALOG_DIGPROG";
30
if (on) {
32
default:
31
slot->flags |= HVF_SLOT_LOG;
33
- sprintf(unknown, "%d ?", reg);
32
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
34
+ sprintf(unknown, "%u ?", reg);
33
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
35
return unknown;
34
HV_MEMORY_READ);
36
}
35
/* stop tracking region*/
37
}
36
} else {
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
37
slot->flags &= ~HVF_SLOT_LOG;
39
freq *= 20;
38
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
40
}
39
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
41
40
HV_MEMORY_READ | HV_MEMORY_WRITE);
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
48
freq = imx6_analog_get_pll2_clk(dev) * 18
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
50
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
53
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
57
freq = imx6_analog_get_pll2_clk(dev) * 18
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
59
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
66
break;
67
}
68
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
75
freq = imx6_analog_get_periph_clk(dev)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
77
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
80
81
return freq;
82
}
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
84
freq = imx6_ccm_get_ahb_clk(dev)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
86
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
89
90
return freq;
91
}
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
93
freq = imx6_ccm_get_ipg_clk(dev)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
95
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
98
99
return freq;
100
}
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
102
break;
103
}
104
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
107
108
return freq;
109
}
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/hw/misc/imx6_src.c
113
+++ b/hw/misc/imx6_src.c
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
115
case SRC_GPR10:
116
return "SRC_GPR10";
117
default:
118
- sprintf(unknown, "%d ?", reg);
119
+ sprintf(unknown, "%u ?", reg);
120
return unknown;
121
}
41
}
122
}
42
}
123
--
43
--
124
2.20.1
44
2.20.1
125
45
126
46
diff view generated by jsdifflib
1
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
1
From: Alexander Graf <agraf@csgraf.de>
2
Private Peripheral Bus range, which includes all of the memory mapped
3
devices and registers that are part of the CPU itself, including the
4
NVIC, systick timer, and debug and trace components like the Data
5
Watchpoint and Trace unit (DWT). Within this large region, the range
6
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
7
registers, systick) and 0xe002e000 to 0xe002efff is its Non-secure
8
alias.
9
2
10
The architecture is clear that within the SCS unimplemented registers
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
11
should be RES0 for privileged accesses and generate BusFault for
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
12
unprivileged accesses, and we currently implement this.
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
13
7
14
It is less clear about how to handle accesses to unimplemented
8
This patch splits the vcpu init and destroy functions into a generic and
15
regions of the wider PPB. Unprivileged accesses should definitely
9
an architecture-specific portion. This also allows us to move the generic
16
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
10
functions into the generic hvf code, removing exported functions.
17
not given as a general rule. However, the register definitions of
18
individual registers for components like the DWT all state that they
19
are RES0 if the relevant component is not implemented, so the
20
simplest way to provide that is to make the whole range RAZ/WI
21
for privileged accesses. (The v7M Arm ARM does say that reserved
22
registers should be UNK/SBZP.)
23
11
24
Expand the container MemoryRegion that the NVIC exposes so that
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
25
it covers the whole PPB space. This means:
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
26
* moving the address that the ARMV7M device maps it to down by
14
Message-id: 20210519202253.76782-8-agraf@csgraf.de
27
0xe000 bytes
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
28
* moving the offsets within the container of all the
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
subregions forward by 0xe000 bytes
17
---
30
* adding a new default MemoryRegion that covers the whole container
18
accel/hvf/hvf-accel-ops.h | 2 --
31
at a lower priority than anything else and which provides the
19
include/sysemu/hvf_int.h | 2 ++
32
RAZ/WI and BusFault behaviour
20
accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
21
target/i386/hvf/hvf.c | 23 ++---------------------
22
4 files changed, 34 insertions(+), 23 deletions(-)
33
23
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
37
---
38
include/hw/intc/armv7m_nvic.h | 1 +
39
hw/arm/armv7m.c | 2 +-
40
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
41
3 files changed, 69 insertions(+), 12 deletions(-)
42
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
44
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
45
--- a/include/hw/intc/armv7m_nvic.h
26
--- a/accel/hvf/hvf-accel-ops.h
46
+++ b/include/hw/intc/armv7m_nvic.h
27
+++ b/accel/hvf/hvf-accel-ops.h
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
28
@@ -XXX,XX +XXX,XX @@
48
MemoryRegion systickmem;
29
49
MemoryRegion systick_ns_mem;
30
#include "sysemu/cpus.h"
50
MemoryRegion container;
31
51
+ MemoryRegion defaultmem;
32
-int hvf_init_vcpu(CPUState *);
52
33
int hvf_vcpu_exec(CPUState *);
53
uint32_t num_irq;
34
void hvf_cpu_synchronize_state(CPUState *);
54
qemu_irq excpout;
35
void hvf_cpu_synchronize_post_reset(CPUState *);
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
36
void hvf_cpu_synchronize_post_init(CPUState *);
37
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
38
-void hvf_vcpu_destroy(CPUState *);
39
40
#endif /* HVF_CPUS_H */
41
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
56
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/armv7m.c
43
--- a/include/sysemu/hvf_int.h
58
+++ b/hw/arm/armv7m.c
44
+++ b/include/sysemu/hvf_int.h
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
45
@@ -XXX,XX +XXX,XX @@ struct HVFState {
60
sysbus_connect_irq(sbd, 0,
46
extern HVFState *hvf_state;
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
47
62
48
void assert_hvf_ok(hv_return_t ret);
63
- memory_region_add_subregion(&s->container, 0xe000e000,
49
+int hvf_arch_init_vcpu(CPUState *cpu);
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
50
+void hvf_arch_vcpu_destroy(CPUState *cpu);
65
sysbus_mmio_get_region(sbd, 0));
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
66
52
int hvf_put_registers(CPUState *);
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
53
int hvf_get_registers(CPUState *);
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
69
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
56
--- a/accel/hvf/hvf-accel-ops.c
71
+++ b/hw/intc/armv7m_nvic.c
57
+++ b/accel/hvf/hvf-accel-ops.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
58
@@ -XXX,XX +XXX,XX @@ static void hvf_type_init(void)
73
.endianness = DEVICE_NATIVE_ENDIAN,
59
74
};
60
type_init(hvf_type_init);
75
61
76
+/*
62
+static void hvf_vcpu_destroy(CPUState *cpu)
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
79
+ */
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
81
+ uint64_t *data, unsigned size,
82
+ MemTxAttrs attrs)
83
+{
63
+{
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
64
+ hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
85
+ (uint32_t)addr);
65
+ assert_hvf_ok(ret);
86
+ if (attrs.user) {
66
+
87
+ return MEMTX_ERROR;
67
+ hvf_arch_vcpu_destroy(cpu);
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
91
+}
68
+}
92
+
69
+
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
70
+static int hvf_init_vcpu(CPUState *cpu)
94
+ uint64_t value, unsigned size,
95
+ MemTxAttrs attrs)
96
+{
71
+{
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
72
+ int r;
98
+ (uint32_t)addr);
73
+
99
+ if (attrs.user) {
74
+ /* init cpu signals */
100
+ return MEMTX_ERROR;
75
+ sigset_t set;
101
+ }
76
+ struct sigaction sigact;
102
+ return MEMTX_OK;
77
+
78
+ memset(&sigact, 0, sizeof(sigact));
79
+ sigact.sa_handler = dummy_signal;
80
+ sigaction(SIG_IPI, &sigact, NULL);
81
+
82
+ pthread_sigmask(SIG_BLOCK, NULL, &set);
83
+ sigdelset(&set, SIG_IPI);
84
+
85
+ r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
86
+ cpu->vcpu_dirty = 1;
87
+ assert_hvf_ok(r);
88
+
89
+ return hvf_arch_init_vcpu(cpu);
103
+}
90
+}
104
+
91
+
105
+static const MemoryRegionOps ppb_default_ops = {
92
/*
106
+ .read_with_attrs = ppb_default_read,
93
* The HVF-specific vCPU thread function. This one should only run when the host
107
+ .write_with_attrs = ppb_default_write,
94
* CPU supports the VMX "unrestricted guest" feature.
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
95
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
109
+ .valid.min_access_size = 1,
96
index XXXXXXX..XXXXXXX 100644
110
+ .valid.max_access_size = 8,
97
--- a/target/i386/hvf/hvf.c
111
+};
98
+++ b/target/i386/hvf/hvf.c
112
+
99
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
113
static int nvic_post_load(void *opaque, int version_id)
100
return false;
101
}
102
103
-void hvf_vcpu_destroy(CPUState *cpu)
104
+void hvf_arch_vcpu_destroy(CPUState *cpu)
114
{
105
{
115
NVICState *s = opaque;
106
X86CPU *x86_cpu = X86_CPU(cpu);
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
107
CPUX86State *env = &x86_cpu->env;
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
108
109
- hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
110
g_free(env->hvf_mmio_buf);
111
- assert_hvf_ok(ret);
112
}
113
114
static void init_tsc_freq(CPUX86State *env)
115
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
116
return env->apic_bus_freq != 0;
117
}
118
119
-int hvf_init_vcpu(CPUState *cpu)
120
+int hvf_arch_init_vcpu(CPUState *cpu)
118
{
121
{
119
NVICState *s = NVIC(dev);
122
-
120
- int regionlen;
123
X86CPU *x86cpu = X86_CPU(cpu);
121
124
CPUX86State *env = &x86cpu->env;
122
/* The armv7m container object will have set our CPU pointer */
125
- int r;
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
126
-
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
127
- /* init cpu signals */
125
M_REG_S));
128
- sigset_t set;
129
- struct sigaction sigact;
130
-
131
- memset(&sigact, 0, sizeof(sigact));
132
- sigact.sa_handler = dummy_signal;
133
- sigaction(SIG_IPI, &sigact, NULL);
134
-
135
- pthread_sigmask(SIG_BLOCK, NULL, &set);
136
- sigdelset(&set, SIG_IPI);
137
138
init_emu();
139
init_decoder();
140
@@ -XXX,XX +XXX,XX @@ int hvf_init_vcpu(CPUState *cpu)
141
}
126
}
142
}
127
143
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
144
- r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
129
+ /*
145
- cpu->vcpu_dirty = 1;
130
+ * This device provides a single sysbus memory region which
146
- assert_hvf_ok(r);
131
+ * represents the whole of the "System PPB" space. This is the
147
-
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
148
if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
133
+ * the System Control Space (system registers), the systick timer,
149
&hvf_state->hvf_caps->vmx_cap_pinbased)) {
134
+ * and for CPUs with the Security extension an NS banked version
150
abort();
135
+ * of all of these.
136
+ *
137
+ * The default behaviour for unimplemented registers/ranges
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
140
+ * access.
141
+ *
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
143
* and looks like this:
144
* 0x004 - ICTR
145
* 0x010 - 0xff - systick
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
147
* generally code determining which banked register to use should
148
* use attrs.secure; code determining actual behaviour of the system
149
* should use env->v7m.secure.
150
+ *
151
+ * The container covers the whole PPB space. Within it the priority
152
+ * of overlapping regions is:
153
+ * - default region (for RAZ/WI and BusFault) : -1
154
+ * - system register regions : 0
155
+ * - systick : 1
156
+ * This is because the systick device is a small block of registers
157
+ * in the middle of the other system control registers.
158
*/
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
161
- /* The system register region goes at the bottom of the priority
162
- * stack as it covers the whole page.
163
- */
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
166
+ "nvic-default", 0x100000);
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
169
"nvic_sysregs", 0x1000);
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
172
173
memory_region_init_io(&s->systickmem, OBJECT(s),
174
&nvic_systick_ops, s,
175
"nvic_systick", 0xe0);
176
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
179
&s->systickmem, 1);
180
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
183
&nvic_sysreg_ns_ops, &s->sysregmem,
184
"nvic_sysregs_ns", 0x1000);
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
188
&nvic_sysreg_ns_ops, &s->systickmem,
189
"nvic_systick_ns", 0xe0);
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
192
&s->systick_ns_mem, 1);
193
}
194
195
--
151
--
196
2.20.1
152
2.20.1
197
153
198
154
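
A quick worked example of the layout change in the NVIC patch above (sketch;
not part of the patch): the container now spans the whole PPB from
0xe0000000, so the SCS subregion sits at container offset 0xe000 rather than
0, and a guest access to e.g. DHCSR at 0xe000edf0 decodes as:

    #include <assert.h>

    int main(void)
    {
        const unsigned long ppb_base = 0xe0000000; /* container base */
        const unsigned long scs_off  = 0xe000;     /* sysregmem offset */
        const unsigned long dhcsr    = 0xe000edf0; /* a debug register in the SCS */

        /* container offset 0xedf0 == SCS offset 0xdf0 */
        assert(dhcsr - ppb_base == scs_off + 0xdf0);
        return 0;
    }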
diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
We should use the printf format specifier "%u" instead of "%d" for
3
There is no reason to call the hvf-specific hvf_cpu_synchronize_state()
4
arguments of type "unsigned int".
4
when we can just use the generic cpu_synchronize_state() instead. This
5
reduces our dependence on internal function definitions and
6
allows us to make hvf_cpu_synchronize_state() static.
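
To show the shape of the generic path being preferred here, a self-contained
sketch of the callback-table dispatch pattern (all *_stub names are invented
for illustration; QEMU's real AccelOps wiring differs in detail):

    #include <stdio.h>

    /* Invented stand-ins for CPUState / AccelOpsClass. */
    typedef struct CPUStateStub { int vcpu_dirty; } CPUStateStub;
    typedef struct AccelOpsStub {
        void (*synchronize_state)(CPUStateStub *cpu);
    } AccelOpsStub;

    /* The accel-specific implementation can stay static ... */
    static void hvf_sync_state_stub(CPUStateStub *cpu)
    {
        cpu->vcpu_dirty = 0;
    }

    /* ... because only this callback table refers to it. */
    static const AccelOpsStub hvf_ops_stub = {
        .synchronize_state = hvf_sync_state_stub,
    };
    static const AccelOpsStub *cpus_accel_stub = &hvf_ops_stub;

    /* Generic entry point: callers depend on this, not on hvf. */
    static void cpu_synchronize_state_stub(CPUStateStub *cpu)
    {
        if (cpus_accel_stub->synchronize_state) {
            cpus_accel_stub->synchronize_state(cpu);
        }
    }

    int main(void)
    {
        CPUStateStub cpu = { .vcpu_dirty = 1 };
        cpu_synchronize_state_stub(&cpu);
        printf("dirty = %d\n", cpu.vcpu_dirty);
        return 0;
    }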
5
7
6
Reported-by: Euler Robot <euler.robot@huawei.com>
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
9
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
10
Message-id: 20210519202253.76782-9-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
hw/misc/imx31_ccm.c | 14 +++++++-------
14
accel/hvf/hvf-accel-ops.h | 1 -
13
hw/misc/imx_ccm.c | 4 ++--
15
accel/hvf/hvf-accel-ops.c | 2 +-
14
2 files changed, 9 insertions(+), 9 deletions(-)
16
target/i386/hvf/x86hvf.c | 9 ++++-----
17
3 files changed, 5 insertions(+), 7 deletions(-)
15
18
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
17
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx31_ccm.c
21
--- a/accel/hvf/hvf-accel-ops.h
19
+++ b/hw/misc/imx31_ccm.c
22
+++ b/accel/hvf/hvf-accel-ops.h
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
23
@@ -XXX,XX +XXX,XX @@
21
case IMX31_CCM_PDR2_REG:
24
#include "sysemu/cpus.h"
22
return "PDR2";
25
23
default:
26
int hvf_vcpu_exec(CPUState *);
24
- sprintf(unknown, "[%d ?]", reg);
27
-void hvf_cpu_synchronize_state(CPUState *);
25
+ sprintf(unknown, "[%u ?]", reg);
28
void hvf_cpu_synchronize_post_reset(CPUState *);
26
return unknown;
29
void hvf_cpu_synchronize_post_init(CPUState *);
30
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
31
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/accel/hvf/hvf-accel-ops.c
34
+++ b/accel/hvf/hvf-accel-ops.c
35
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
27
}
36
}
28
}
37
}
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
38
30
freq = CKIH_FREQ;
39
-void hvf_cpu_synchronize_state(CPUState *cpu)
40
+static void hvf_cpu_synchronize_state(CPUState *cpu)
41
{
42
if (!cpu->vcpu_dirty) {
43
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
44
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/i386/hvf/x86hvf.c
47
+++ b/target/i386/hvf/x86hvf.c
48
@@ -XXX,XX +XXX,XX @@
49
#include "cpu.h"
50
#include "x86_descr.h"
51
#include "x86_decode.h"
52
+#include "sysemu/hw_accel.h"
53
54
#include "hw/i386/apic_internal.h"
55
56
#include <Hypervisor/hv.h>
57
#include <Hypervisor/hv_vmx.h>
58
59
-#include "accel/hvf/hvf-accel-ops.h"
60
-
61
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
62
SegmentCache *qseg, bool is_tr)
63
{
64
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
65
env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
66
67
if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
68
- hvf_cpu_synchronize_state(cpu_state);
69
+ cpu_synchronize_state(cpu_state);
70
do_cpu_init(cpu);
31
}
71
}
32
72
33
- DPRINTF("freq = %d\n", freq);
73
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
34
+ DPRINTF("freq = %u\n", freq);
74
cpu_state->halted = 0;
35
36
return freq;
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
40
imx31_ccm_get_pll_ref_clk(dev));
41
42
- DPRINTF("freq = %d\n", freq);
43
+ DPRINTF("freq = %u\n", freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
48
freq = imx31_ccm_get_mpll_clk(dev);
49
}
75
}
50
76
if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
51
- DPRINTF("freq = %d\n", freq);
77
- hvf_cpu_synchronize_state(cpu_state);
52
+ DPRINTF("freq = %u\n", freq);
78
+ cpu_synchronize_state(cpu_state);
53
79
do_cpu_sipi(cpu);
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
57
freq = imx31_ccm_get_mcu_main_clk(dev)
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
59
60
- DPRINTF("freq = %d\n", freq);
61
+ DPRINTF("freq = %u\n", freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
66
freq = imx31_ccm_get_hclk_clk(dev)
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
68
69
- DPRINTF("freq = %d\n", freq);
70
+ DPRINTF("freq = %u\n", freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
75
break;
76
}
80
}
77
81
if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
82
cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
83
- hvf_cpu_synchronize_state(cpu_state);
80
84
+ cpu_synchronize_state(cpu_state);
81
return freq;
85
apic_handle_tpr_access_report(cpu->apic_state, env->eip,
82
}
86
env->tpr_access_type);
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/misc/imx_ccm.c
86
+++ b/hw/misc/imx_ccm.c
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
88
freq = klass->get_clock_frequency(dev, clock);
89
}
87
}
90
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
93
94
return freq;
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
105
--
88
--
106
2.20.1
89
2.20.1
107
90
108
91
diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
We should use the printf format specifier "%u" instead of "%d" for
3
The hvf accel synchronize functions are only used as input for local
4
arguments of type "unsigned int".
4
callback functions, so we can make them static.
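
For context on why "static" is safe here: the only remaining users are the
callback registrations in the same file. From memory, the registration site
looks roughly like this (AccelOpsClass field names as in QEMU of this era;
treat as a sketch, not a quote):

    static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
    {
        AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);

        /* File-local functions installed as accelerator callbacks. */
        ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
        ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
        ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
    }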
5
5
6
Reported-by: Euler Robot <euler.robot@huawei.com>
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
8
Message-id: 20210519202253.76782-10-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/misc/imx25_ccm.c | 12 ++++++------
12
accel/hvf/hvf-accel-ops.h | 3 ---
13
1 file changed, 6 insertions(+), 6 deletions(-)
13
accel/hvf/hvf-accel-ops.c | 6 +++---
14
2 files changed, 3 insertions(+), 6 deletions(-)
14
15
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
16
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx25_ccm.c
18
--- a/accel/hvf/hvf-accel-ops.h
18
+++ b/hw/misc/imx25_ccm.c
19
+++ b/accel/hvf/hvf-accel-ops.h
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
20
@@ -XXX,XX +XXX,XX @@
20
case IMX25_CCM_LPIMR1_REG:
21
#include "sysemu/cpus.h"
21
return "lpimr1";
22
22
default:
23
int hvf_vcpu_exec(CPUState *);
23
- sprintf(unknown, "[%d ?]", reg);
24
-void hvf_cpu_synchronize_post_reset(CPUState *);
24
+ sprintf(unknown, "[%u ?]", reg);
25
-void hvf_cpu_synchronize_post_init(CPUState *);
25
return unknown;
26
-void hvf_cpu_synchronize_pre_loadvm(CPUState *);
26
}
27
28
#endif /* HVF_CPUS_H */
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/accel/hvf/hvf-accel-ops.c
32
+++ b/accel/hvf/hvf-accel-ops.c
33
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
34
cpu->vcpu_dirty = false;
27
}
35
}
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
36
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
37
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
30
}
38
+static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
31
39
{
32
- DPRINTF("freq = %d\n", freq);
40
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
33
+ DPRINTF("freq = %u\n", freq);
34
35
return freq;
36
}
41
}
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
42
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
38
43
cpu->vcpu_dirty = false;
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
40
41
- DPRINTF("freq = %d\n", freq);
42
+ DPRINTF("freq = %u\n", freq);
43
44
return freq;
45
}
44
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
45
47
freq = imx25_ccm_get_mcu_clk(dev)
46
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
47
+static void hvf_cpu_synchronize_post_init(CPUState *cpu)
49
48
{
50
- DPRINTF("freq = %d\n", freq);
49
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
50
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
51
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
56
52
cpu->vcpu_dirty = true;
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
53
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
54
65
break;
55
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
66
}
56
+static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
67
57
{
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
58
run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
59
}
73
--
60
--
74
2.20.1
61
2.20.1
75
62
76
63
diff view generated by jsdifflib
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
3
We can move the definition of hvf_vcpu_exec() into our internal
4
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
4
hvf header, removing the need for hvf-accel-ops.h.
5
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
5
6
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20210519202253.76782-11-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
11
---
9
MAINTAINERS | 8 ++++++++
12
accel/hvf/hvf-accel-ops.h | 17 -----------------
10
1 file changed, 8 insertions(+)
13
include/sysemu/hvf_int.h | 1 +
14
accel/hvf/hvf-accel-ops.c | 2 --
15
target/i386/hvf/hvf.c | 2 --
16
4 files changed, 1 insertion(+), 21 deletions(-)
17
delete mode 100644 accel/hvf/hvf-accel-ops.h
11
18
12
diff --git a/MAINTAINERS b/MAINTAINERS
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
20
deleted file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- a/accel/hvf/hvf-accel-ops.h
23
+++ /dev/null
24
@@ -XXX,XX +XXX,XX @@
25
-/*
26
- * Accelerator CPUS Interface
27
- *
28
- * Copyright 2020 SUSE LLC
29
- *
30
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
31
- * See the COPYING file in the top-level directory.
32
- */
33
-
34
-#ifndef HVF_CPUS_H
35
-#define HVF_CPUS_H
36
-
37
-#include "sysemu/cpus.h"
38
-
39
-int hvf_vcpu_exec(CPUState *);
40
-
41
-#endif /* HVF_CPUS_H */
42
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
13
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
14
--- a/MAINTAINERS
44
--- a/include/sysemu/hvf_int.h
15
+++ b/MAINTAINERS
45
+++ b/include/sysemu/hvf_int.h
16
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c
46
@@ -XXX,XX +XXX,XX @@ extern HVFState *hvf_state;
17
47
void assert_hvf_ok(hv_return_t ret);
18
Devices
48
int hvf_arch_init_vcpu(CPUState *cpu);
19
-------
49
void hvf_arch_vcpu_destroy(CPUState *cpu);
20
+Xilinx CAN
50
+int hvf_vcpu_exec(CPUState *);
21
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
22
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
52
int hvf_put_registers(CPUState *);
23
+S: Maintained
53
int hvf_get_registers(CPUState *);
24
+F: hw/net/can/xlnx-*
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
25
+F: include/hw/net/xlnx-*
55
index XXXXXXX..XXXXXXX 100644
26
+F: tests/qtest/xlnx-can-test*
56
--- a/accel/hvf/hvf-accel-ops.c
27
+
57
+++ b/accel/hvf/hvf-accel-ops.c
28
EDU
58
@@ -XXX,XX +XXX,XX @@
29
M: Jiri Slaby <jslaby@suse.cz>
59
#include "sysemu/runstate.h"
30
S: Maintained
60
#include "qemu/guest-random.h"
61
62
-#include "hvf-accel-ops.h"
63
-
64
HVFState *hvf_state;
65
66
/* Memory slots */
67
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/i386/hvf/hvf.c
70
+++ b/target/i386/hvf/hvf.c
71
@@ -XXX,XX +XXX,XX @@
72
#include "qemu/accel.h"
73
#include "target/i386/cpu.h"
74
75
-#include "hvf-accel-ops.h"
76
-
77
void vmx_update_tpr(CPUState *cpu)
78
{
79
/* TODO: need integrate APIC handling */
31
--
80
--
32
2.20.1
81
2.20.1
33
82
34
83
diff view generated by jsdifflib
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Connect CAN0 and CAN1 on the ZynqMP.
3
We will need more than a single field for hvf going forward. To keep
4
the global vcpu struct uncluttered, let's allocate a special hvf vcpu
5
struct, similar to how hax does it.
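
For anyone wanting to exercise the new CAN wiring, an illustrative invocation
derived from the link properties added below (editor's sketch, not text from
the patch; the usual kernel/boot options are omitted):

    qemu-system-aarch64 -M xlnx-zcu102 \
        -object can-bus,id=canbus0 \
        -object can-bus,id=canbus1 \
        -machine xlnx-zcu102.canbus0=canbus0 \
        -machine xlnx-zcu102.canbus1=canbus1

A host backend object (e.g. can-host-socketcan on Linux) can then be attached
to either bus via its canbus property.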
4
6
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-12-agraf@csgraf.de
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
16
include/hw/core/cpu.h | 3 +-
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
17
include/sysemu/hvf_int.h | 4 +
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
18
target/i386/hvf/vmx.h | 24 +++--
14
3 files changed, 62 insertions(+)
19
accel/hvf/hvf-accel-ops.c | 8 +-
20
target/i386/hvf/hvf.c | 104 +++++++++---------
21
target/i386/hvf/x86.c | 28 ++---
22
target/i386/hvf/x86_descr.c | 26 ++---
23
target/i386/hvf/x86_emu.c | 62 +++++------
24
target/i386/hvf/x86_mmu.c | 4 +-
25
target/i386/hvf/x86_task.c | 12 +--
26
target/i386/hvf/x86hvf.c | 210 ++++++++++++++++++------------------
27
11 files changed, 248 insertions(+), 237 deletions(-)
15
28
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
29
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
17
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/arm/xlnx-zynqmp.h
31
--- a/include/hw/core/cpu.h
19
+++ b/include/hw/arm/xlnx-zynqmp.h
32
+++ b/include/hw/core/cpu.h
20
@@ -XXX,XX +XXX,XX @@
33
@@ -XXX,XX +XXX,XX @@ struct KVMState;
21
#include "hw/intc/arm_gic.h"
34
struct kvm_run;
22
#include "hw/net/cadence_gem.h"
35
23
#include "hw/char/cadence_uart.h"
36
struct hax_vcpu_state;
24
+#include "hw/net/xlnx-zynqmp-can.h"
37
+struct hvf_vcpu_state;
25
#include "hw/ide/ahci.h"
38
26
#include "hw/sd/sdhci.h"
39
#define TB_JMP_CACHE_BITS 12
27
#include "hw/ssi/xilinx_spips.h"
40
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
28
@@ -XXX,XX +XXX,XX @@
41
@@ -XXX,XX +XXX,XX @@ struct CPUState {
29
#include "hw/cpu/cluster.h"
42
30
#include "target/arm/cpu.h"
43
struct hax_vcpu_state *hax_vcpu;
31
#include "qom/object.h"
44
32
+#include "net/can_emu.h"
45
- int hvf_fd;
33
46
+ struct hvf_vcpu_state *hvf;
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
47
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
48
/* track IOMMUs whose translations we've cached in the TCG TLB */
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
49
GArray *iommu_notifiers;
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
50
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
38
#define XLNX_ZYNQMP_NUM_GEMS 4
51
index XXXXXXX..XXXXXXX 100644
39
#define XLNX_ZYNQMP_NUM_UARTS 2
52
--- a/include/sysemu/hvf_int.h
40
+#define XLNX_ZYNQMP_NUM_CAN 2
53
+++ b/include/sysemu/hvf_int.h
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
54
@@ -XXX,XX +XXX,XX @@ struct HVFState {
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
55
};
61
56
extern HVFState *hvf_state;
62
#endif
57
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
58
+struct hvf_vcpu_state {
64
index XXXXXXX..XXXXXXX 100644
59
+ int fd;
65
--- a/hw/arm/xlnx-zcu102.c
66
+++ b/hw/arm/xlnx-zcu102.c
67
@@ -XXX,XX +XXX,XX @@
68
#include "sysemu/qtest.h"
69
#include "sysemu/device_tree.h"
70
#include "qom/object.h"
71
+#include "net/can_emu.h"
72
73
struct XlnxZCU102 {
74
MachineState parent_obj;
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
76
bool secure;
77
bool virt;
78
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
80
+
81
struct arm_boot_info binfo;
82
};
83
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
86
&error_fatal);
87
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
90
+
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
92
+ OBJECT(s->canbus[i]), &error_fatal);
93
+ g_free(bus_name);
94
+ }
95
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
107
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
112
}
113
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/hw/arm/xlnx-zynqmp.c
118
+++ b/hw/arm/xlnx-zynqmp.c
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
120
21, 22,
121
};
122
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
124
+ 0xFF060000, 0xFF070000,
125
+};
60
+};
126
+
61
+
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
62
void assert_hvf_ok(hv_return_t ret);
128
+ 23, 24,
63
int hvf_arch_init_vcpu(CPUState *cpu);
129
+};
64
void hvf_arch_vcpu_destroy(CPUState *cpu);
65
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/i386/hvf/vmx.h
68
+++ b/target/i386/hvf/vmx.h
69
@@ -XXX,XX +XXX,XX @@
70
#include "vmcs.h"
71
#include "cpu.h"
72
#include "x86.h"
73
+#include "sysemu/hvf.h"
74
+#include "sysemu/hvf_int.h"
75
76
#include "exec/address-spaces.h"
77
78
@@ -XXX,XX +XXX,XX @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
79
uint64_t val;
80
81
/* BUG, should take considering overlap.. */
82
- wreg(cpu->hvf_fd, HV_X86_RIP, rip);
83
+ wreg(cpu->hvf->fd, HV_X86_RIP, rip);
84
env->eip = rip;
85
86
/* after moving forward in rip, we need to clean INTERRUPTABILITY */
87
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
88
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
89
if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
90
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
91
env->hflags &= ~HF_INHIBIT_IRQ_MASK;
92
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
93
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY,
94
val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
95
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
96
}
97
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
98
CPUX86State *env = &x86_cpu->env;
99
100
env->hflags2 &= ~HF2_NMI_MASK;
101
- uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
102
+ uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
103
gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
104
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
105
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
106
}
107
108
static inline void vmx_set_nmi_blocking(CPUState *cpu)
109
@@ -XXX,XX +XXX,XX @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
110
CPUX86State *env = &x86_cpu->env;
111
112
env->hflags2 |= HF2_NMI_MASK;
113
- uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
114
+ uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
115
gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
116
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
117
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
118
}
119
120
static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
121
{
122
uint64_t val;
123
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
124
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
125
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
126
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
127
VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
128
129
}
130
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
131
{
132
133
uint64_t val;
134
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
135
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
136
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
137
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
138
~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
139
}
140
141
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
142
index XXXXXXX..XXXXXXX 100644
143
--- a/accel/hvf/hvf-accel-ops.c
144
+++ b/accel/hvf/hvf-accel-ops.c
145
@@ -XXX,XX +XXX,XX @@ type_init(hvf_type_init);
146
147
static void hvf_vcpu_destroy(CPUState *cpu)
148
{
149
- hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
150
+ hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
151
assert_hvf_ok(ret);
152
153
hvf_arch_vcpu_destroy(cpu);
154
+ g_free(cpu->hvf);
155
+ cpu->hvf = NULL;
156
}
157
158
static int hvf_init_vcpu(CPUState *cpu)
159
{
160
int r;
161
162
+ cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
130
+
163
+
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
164
/* init cpu signals */
132
0xFF160000, 0xFF170000,
165
sigset_t set;
133
};
166
struct sigaction sigact;
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
167
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
135
TYPE_CADENCE_UART);
168
pthread_sigmask(SIG_BLOCK, NULL, &set);
136
}
169
sigdelset(&set, SIG_IPI);
137
170
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
171
- r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
172
+ r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
140
+ TYPE_XLNX_ZYNQMP_CAN);
173
cpu->vcpu_dirty = 1;
141
+ }
174
assert_hvf_ok(r);
142
+
175
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
176
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
144
177
index XXXXXXX..XXXXXXX 100644
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
178
--- a/target/i386/hvf/hvf.c
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
179
+++ b/target/i386/hvf/hvf.c
147
gic_spi[uart_intr[i]]);
180
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
148
}
181
int tpr = cpu_get_apic_tpr(x86_cpu->apic_state) << 4;
149
182
int irr = apic_get_highest_priority_irr(x86_cpu->apic_state);
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
183
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
184
- wreg(cpu->hvf_fd, HV_X86_TPR, tpr);
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
185
+ wreg(cpu->hvf->fd, HV_X86_TPR, tpr);
153
+
186
if (irr == -1) {
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
187
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
155
+ OBJECT(s->canbus[i]), &error_fatal);
188
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
156
+
189
} else {
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
190
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
158
+ if (err) {
191
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
159
+ error_propagate(errp, err);
192
irr >> 4);
160
+ return;
193
}
161
+ }
194
}
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
195
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
196
static void update_apic_tpr(CPUState *cpu)
164
+ gic_spi[can_intr[i]]);
197
{
165
+ }
198
X86CPU *x86_cpu = X86_CPU(cpu);
166
+
199
- int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4;
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
200
+ int tpr = rreg(cpu->hvf->fd, HV_X86_TPR) >> 4;
168
&error_abort);
201
cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
202
}
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
203
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
204
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
205
}
173
MemoryRegion *),
206
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
207
/* set VMCS control fields */
175
+ CanBusState *),
208
- wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
209
+ wvmcs(cpu->hvf->fd, VMCS_PIN_BASED_CTLS,
177
+ CanBusState *),
210
cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased,
178
DEFINE_PROP_END_OF_LIST()
211
VMCS_PIN_BASED_CTLS_EXTINT |
179
};
212
VMCS_PIN_BASED_CTLS_NMI |
180
213
VMCS_PIN_BASED_CTLS_VNMI));
214
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
215
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS,
216
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased,
217
VMCS_PRI_PROC_BASED_CTLS_HLT |
218
VMCS_PRI_PROC_BASED_CTLS_MWAIT |
219
VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET |
220
VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) |
221
VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL);
222
- wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
223
+ wvmcs(cpu->hvf->fd, VMCS_SEC_PROC_BASED_CTLS,
224
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2,
225
VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES));
226
227
- wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
228
+ wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
229
0));
230
- wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
231
+ wvmcs(cpu->hvf->fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
232
233
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
234
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
235
236
x86cpu = X86_CPU(cpu);
237
x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
238
239
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1);
240
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1);
241
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1);
242
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1);
243
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1);
244
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1);
245
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1);
246
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1);
247
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);
248
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1);
249
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
250
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
251
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
252
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
253
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_CSTAR, 1);
254
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FMASK, 1);
255
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FSBASE, 1);
256
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_GSBASE, 1);
257
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_KERNELGSBASE, 1);
258
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_TSC_AUX, 1);
259
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_TSC, 1);
260
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_CS, 1);
261
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_EIP, 1);
262
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_ESP, 1);
263
264
return 0;
265
}
266
@@ -XXX,XX +XXX,XX @@ static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idtvec_in
267
}
268
if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) {
269
env->has_error_code = true;
270
- env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR);
271
+ env->error_code = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_ERROR);
272
}
273
}
274
- if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
275
+ if ((rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
276
VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) {
277
env->hflags2 |= HF2_NMI_MASK;
278
} else {
279
env->hflags2 &= ~HF2_NMI_MASK;
280
}
281
- if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
282
+ if (rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
283
(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
284
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
285
env->hflags |= HF_INHIBIT_IRQ_MASK;
286
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
287
return EXCP_HLT;
288
}
289
290
- hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
291
+ hv_return_t r = hv_vcpu_run(cpu->hvf->fd);
292
assert_hvf_ok(r);
293
294
/* handle VMEXIT */
295
- uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
296
- uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION);
297
- uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
298
+ uint64_t exit_reason = rvmcs(cpu->hvf->fd, VMCS_EXIT_REASON);
299
+ uint64_t exit_qual = rvmcs(cpu->hvf->fd, VMCS_EXIT_QUALIFICATION);
300
+ uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf->fd,
301
VMCS_EXIT_INSTRUCTION_LENGTH);
302
303
- uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
304
+ uint64_t idtvec_info = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
305
306
hvf_store_events(cpu, ins_len, idtvec_info);
307
- rip = rreg(cpu->hvf_fd, HV_X86_RIP);
308
- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
309
+ rip = rreg(cpu->hvf->fd, HV_X86_RIP);
310
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
311
312
qemu_mutex_lock_iothread();
313
314
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
315
case EXIT_REASON_EPT_FAULT:
316
{
317
hvf_slot *slot;
318
- uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
319
+ uint64_t gpa = rvmcs(cpu->hvf->fd, VMCS_GUEST_PHYSICAL_ADDRESS);
320
321
if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) &&
322
((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) {
323
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
324
store_regs(cpu);
325
break;
326
} else if (!string && !in) {
327
- RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX);
328
+ RAX(env) = rreg(cpu->hvf->fd, HV_X86_RAX);
329
hvf_handle_io(env, port, &RAX(env), 1, size, 1);
330
macvm_set_rip(cpu, rip + ins_len);
331
break;
332
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
333
break;
334
}
335
case EXIT_REASON_CPUID: {
336
- uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
337
- uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX);
338
- uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
339
- uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
340
+ uint32_t rax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
341
+ uint32_t rbx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RBX);
342
+ uint32_t rcx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
343
+ uint32_t rdx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
344
345
if (rax == 1) {
346
/* CPUID1.ecx.OSXSAVE needs to know CR4 */
347
- env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
348
+ env->cr[4] = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
349
}
350
hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx);
351
352
- wreg(cpu->hvf_fd, HV_X86_RAX, rax);
353
- wreg(cpu->hvf_fd, HV_X86_RBX, rbx);
354
- wreg(cpu->hvf_fd, HV_X86_RCX, rcx);
355
- wreg(cpu->hvf_fd, HV_X86_RDX, rdx);
356
+ wreg(cpu->hvf->fd, HV_X86_RAX, rax);
357
+ wreg(cpu->hvf->fd, HV_X86_RBX, rbx);
358
+ wreg(cpu->hvf->fd, HV_X86_RCX, rcx);
359
+ wreg(cpu->hvf->fd, HV_X86_RDX, rdx);
360
361
macvm_set_rip(cpu, rip + ins_len);
362
break;
363
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
364
case EXIT_REASON_XSETBV: {
365
X86CPU *x86_cpu = X86_CPU(cpu);
366
CPUX86State *env = &x86_cpu->env;
367
- uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
368
- uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
369
- uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
370
+ uint32_t eax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
371
+ uint32_t ecx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
372
+ uint32_t edx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
373
374
if (ecx) {
375
macvm_set_rip(cpu, rip + ins_len);
376
break;
377
}
378
env->xcr0 = ((uint64_t)edx << 32) | eax;
379
- wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
380
+ wreg(cpu->hvf->fd, HV_X86_XCR0, env->xcr0 | 1);
381
macvm_set_rip(cpu, rip + ins_len);
382
break;
383
}
384
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
385
386
switch (cr) {
387
case 0x0: {
388
- macvm_set_cr0(cpu->hvf_fd, RRX(env, reg));
389
+ macvm_set_cr0(cpu->hvf->fd, RRX(env, reg));
390
break;
391
}
392
case 4: {
393
- macvm_set_cr4(cpu->hvf_fd, RRX(env, reg));
394
+ macvm_set_cr4(cpu->hvf->fd, RRX(env, reg));
395
break;
396
}
397
case 8: {
398
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
399
break;
400
}
401
case EXIT_REASON_TASK_SWITCH: {
402
- uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
403
+ uint64_t vinfo = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
404
x68_segment_selector sel = {.sel = exit_qual & 0xffff};
405
vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
406
vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
407
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
408
break;
409
}
410
case EXIT_REASON_RDPMC:
411
- wreg(cpu->hvf_fd, HV_X86_RAX, 0);
412
- wreg(cpu->hvf_fd, HV_X86_RDX, 0);
413
+ wreg(cpu->hvf->fd, HV_X86_RAX, 0);
414
+ wreg(cpu->hvf->fd, HV_X86_RDX, 0);
415
macvm_set_rip(cpu, rip + ins_len);
416
break;
417
case VMX_REASON_VMCALL:
418
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
419
index XXXXXXX..XXXXXXX 100644
420
--- a/target/i386/hvf/x86.c
421
+++ b/target/i386/hvf/x86.c
422
@@ -XXX,XX +XXX,XX @@ bool x86_read_segment_descriptor(struct CPUState *cpu,
423
}
424
425
if (GDT_SEL == sel.ti) {
426
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
427
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
428
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
429
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
430
} else {
431
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
432
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
433
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
434
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
435
}
436
437
if (sel.index * 8 >= limit) {
438
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
439
uint32_t limit;
440
441
if (GDT_SEL == sel.ti) {
442
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
443
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
444
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
445
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
446
} else {
447
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
448
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
449
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
450
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
451
}
452
453
if (sel.index * 8 >= limit) {
454
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
455
bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
456
int gate)
457
{
458
- target_ulong base = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
459
- uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
460
+ target_ulong base = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_BASE);
+ uint32_t limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_LIMIT);

memset(idt_desc, 0, sizeof(*idt_desc));
if (gate * 8 >= limit) {
@@ -XXX,XX +XXX,XX @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,

bool x86_is_protected(struct CPUState *cpu)
{
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
return cr0 & CR0_PE;
}

@@ -XXX,XX +XXX,XX @@ bool x86_is_v8086(struct CPUState *cpu)

bool x86_is_long_mode(struct CPUState *cpu)
{
- return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
+ return rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
}

bool x86_is_long64_mode(struct CPUState *cpu)
@@ -XXX,XX +XXX,XX @@ bool x86_is_long64_mode(struct CPUState *cpu)

bool x86_is_paging_mode(struct CPUState *cpu)
{
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
return cr0 & CR0_PG;
}

bool x86_is_pae_enabled(struct CPUState *cpu)
{
- uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
+ uint64_t cr4 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
return cr4 & CR4_PAE;
}

diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -XXX,XX +XXX,XX @@ static const struct vmx_segment_field {

uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg)
{
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
}

uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg)
{
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
}

uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
{
- return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
+ return rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
}

x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
{
x68_segment_selector sel;
- sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
+ sel.sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
return sel;
}

void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg)
{
- wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel);
+ wvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector, selector.sel);
}

void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
{
- desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
- desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
- desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
- desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+ desc->sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
+ desc->base = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
+ desc->limit = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
+ desc->ar = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
}

void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
{
const struct vmx_segment_field *sf = &vmx_segment_fields[seg];

- wvmcs(cpu->hvf_fd, sf->base, desc->base);
- wvmcs(cpu->hvf_fd, sf->limit, desc->limit);
- wvmcs(cpu->hvf_fd, sf->selector, desc->sel);
- wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar);
+ wvmcs(cpu->hvf->fd, sf->base, desc->base);
+ wvmcs(cpu->hvf->fd, sf->limit, desc->limit);
+ wvmcs(cpu->hvf->fd, sf->selector, desc->sel);
+ wvmcs(cpu->hvf->fd, sf->ar_bytes, desc->ar);
}

void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc)
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86_emu.c
+++ b/target/i386/hvf/x86_emu.c
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)

switch (msr) {
case MSR_IA32_TSC:
- val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET);
+ val = rdtscp() + rvmcs(cpu->hvf->fd, VMCS_TSC_OFFSET);
break;
case MSR_IA32_APICBASE:
val = cpu_get_apic_base(X86_CPU(cpu)->apic_state);
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)
val = x86_cpu->ucode_rev;
break;
case MSR_EFER:
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER);
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER);
break;
case MSR_FSBASE:
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE);
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE);
break;
case MSR_GSBASE:
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE);
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE);
break;
case MSR_KERNELGSBASE:
- val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE);
+ val = rvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE);
break;
case MSR_STAR:
abort();
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
cpu_set_apic_base(X86_CPU(cpu)->apic_state, data);
break;
case MSR_FSBASE:
- wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE, data);
break;
case MSR_GSBASE:
- wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE, data);
break;
case MSR_KERNELGSBASE:
- wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data);
+ wvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE, data);
break;
case MSR_STAR:
abort();
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
break;
case MSR_EFER:
/*printf("new efer %llx\n", EFER(cpu));*/
- wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER, data);
if (data & MSR_EFER_NXE) {
- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
}
break;
case MSR_MTRRphysBase(0):
@@ -XXX,XX +XXX,XX @@ void load_regs(struct CPUState *cpu)
CPUX86State *env = &x86_cpu->env;

int i = 0;
- RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX);
- RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX);
- RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX);
- RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX);
- RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI);
- RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI);
- RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP);
- RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP);
+ RRX(env, R_EAX) = rreg(cpu->hvf->fd, HV_X86_RAX);
+ RRX(env, R_EBX) = rreg(cpu->hvf->fd, HV_X86_RBX);
+ RRX(env, R_ECX) = rreg(cpu->hvf->fd, HV_X86_RCX);
+ RRX(env, R_EDX) = rreg(cpu->hvf->fd, HV_X86_RDX);
+ RRX(env, R_ESI) = rreg(cpu->hvf->fd, HV_X86_RSI);
+ RRX(env, R_EDI) = rreg(cpu->hvf->fd, HV_X86_RDI);
+ RRX(env, R_ESP) = rreg(cpu->hvf->fd, HV_X86_RSP);
+ RRX(env, R_EBP) = rreg(cpu->hvf->fd, HV_X86_RBP);
for (i = 8; i < 16; i++) {
- RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i);
+ RRX(env, i) = rreg(cpu->hvf->fd, HV_X86_RAX + i);
}

- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
rflags_to_lflags(env);
- env->eip = rreg(cpu->hvf_fd, HV_X86_RIP);
+ env->eip = rreg(cpu->hvf->fd, HV_X86_RIP);
}

void store_regs(struct CPUState *cpu)
@@ -XXX,XX +XXX,XX @@ void store_regs(struct CPUState *cpu)
CPUX86State *env = &x86_cpu->env;

int i = 0;
- wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env));
- wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env));
- wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env));
- wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env));
- wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env));
- wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env));
- wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env));
- wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env));
+ wreg(cpu->hvf->fd, HV_X86_RAX, RAX(env));
+ wreg(cpu->hvf->fd, HV_X86_RBX, RBX(env));
+ wreg(cpu->hvf->fd, HV_X86_RCX, RCX(env));
+ wreg(cpu->hvf->fd, HV_X86_RDX, RDX(env));
+ wreg(cpu->hvf->fd, HV_X86_RSI, RSI(env));
+ wreg(cpu->hvf->fd, HV_X86_RDI, RDI(env));
+ wreg(cpu->hvf->fd, HV_X86_RBP, RBP(env));
+ wreg(cpu->hvf->fd, HV_X86_RSP, RSP(env));
for (i = 8; i < 16; i++) {
- wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
+ wreg(cpu->hvf->fd, HV_X86_RAX + i, RRX(env, i));
}

lflags_to_rflags(env);
- wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
+ wreg(cpu->hvf->fd, HV_X86_RFLAGS, env->eflags);
macvm_set_rip(cpu, env->eip);
}

diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86_mmu.c
+++ b/target/i386/hvf/x86_mmu.c
@@ -XXX,XX +XXX,XX @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
pt->err_code |= MMU_PAGE_PT;
}

- uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+ uint32_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
/* check protection */
if (cr0 & CR0_WP) {
if (pt->write_access && !pte_write_access(pte)) {
@@ -XXX,XX +XXX,XX @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
{
int top_level, level;
bool is_large = false;
- target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
+ target_ulong cr3 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR3);
uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;

memset(pt, 0, sizeof(*pt));
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86_task.c
+++ b/target/i386/hvf/x86_task.c
@@ -XXX,XX +XXX,XX @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
X86CPU *x86_cpu = X86_CPU(cpu);
CPUX86State *env = &x86_cpu->env;

- wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_CR3, tss->cr3);

env->eip = tss->eip;
env->eflags = tss->eflags | 2;
@@ -XXX,XX +XXX,XX @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme

void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
{
- uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
+ uint64_t rip = rreg(cpu->hvf->fd, HV_X86_RIP);
if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
gate_type != VMCS_INTR_T_HWINTR &&
gate_type != VMCS_INTR_T_NMI)) {
- int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH);
+ int ins_len = rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH);
macvm_set_rip(cpu, rip + ins_len);
return;
}
@@ -XXX,XX +XXX,XX @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
//ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc);
VM_PANIC("task_switch_16");

- macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
+ macvm_set_cr0(cpu->hvf->fd, rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0) | CR0_TS);
x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg);
vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR);

store_regs(cpu);

- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
- hv_vcpu_flush(cpu->hvf_fd);
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
+ hv_vcpu_flush(cpu->hvf->fd);
}
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@ void hvf_put_xsave(CPUState *cpu_state)

x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);

- if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+ if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
abort();
}
}
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
CPUX86State *env = &X86_CPU(cpu_state)->env;
struct vmx_segment seg;

- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);

- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);

- /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
+ /* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
vmx_update_tpr(cpu_state);
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);

- macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]);
- macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]);
+ macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]);
+ macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]);

hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
hvf_set_segment(cpu_state, &seg, &env->ldt, false);
vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);

- hv_vcpu_flush(cpu_state->hvf_fd);
+ hv_vcpu_flush(cpu_state->hvf->fd);
}

void hvf_put_msrs(CPUState *cpu_state)
{
CPUX86State *env = &X86_CPU(cpu_state)->env;

- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS,
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS,
env->sysenter_cs);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP,
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP,
env->sysenter_esp);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP,
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP,
env->sysenter_eip);

- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_STAR, env->star);

#ifdef TARGET_X86_64
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_CSTAR, env->cstar);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, env->kernelgsbase);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FMASK, env->fmask);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_LSTAR, env->lstar);
#endif

- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_GSBASE, env->segs[R_GS].base);
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FSBASE, env->segs[R_FS].base);
}

@@ -XXX,XX +XXX,XX @@ void hvf_get_xsave(CPUState *cpu_state)

xsave = X86_CPU(cpu_state)->env.xsave_buf;

- if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+ if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
abort();
}

@@ -XXX,XX +XXX,XX @@ void hvf_get_segments(CPUState *cpu_state)
vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR);
hvf_get_segment(&env->ldt, &seg);

- env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
- env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE);
- env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
- env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE);
+ env->idt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
+ env->idt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE);
+ env->gdt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
+ env->gdt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE);

- env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0);
+ env->cr[0] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR0);
env->cr[2] = 0;
- env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
- env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
+ env->cr[3] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3);
+ env->cr[4] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR4);

- env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
+ env->efer = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER);
}

void hvf_get_msrs(CPUState *cpu_state)
@@ -XXX,XX +XXX,XX @@ void hvf_get_msrs(CPUState *cpu_state)
CPUX86State *env = &X86_CPU(cpu_state)->env;
uint64_t tmp;

- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, &tmp);
env->sysenter_cs = tmp;

- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, &tmp);
env->sysenter_esp = tmp;

- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, &tmp);
env->sysenter_eip = tmp;

- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_STAR, &env->star);

#ifdef TARGET_X86_64
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar);
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask);
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_CSTAR, &env->cstar);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, &env->kernelgsbase);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_FMASK, &env->fmask);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_LSTAR, &env->lstar);
#endif

- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_APICBASE, &tmp);

- env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
+ env->tsc = rdtscp() + rvmcs(cpu_state->hvf->fd, VMCS_TSC_OFFSET);
}

int hvf_put_registers(CPUState *cpu_state)
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)
X86CPU *x86cpu = X86_CPU(cpu_state);
CPUX86State *env = &x86cpu->env;

- wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
- wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
- wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
- wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
- wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
- wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
- wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
- wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
- wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]);
- wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]);
- wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]);
- wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]);
- wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]);
- wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]);
- wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]);
- wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
- wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
- wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
+ wreg(cpu_state->hvf->fd, HV_X86_RAX, env->regs[R_EAX]);
+ wreg(cpu_state->hvf->fd, HV_X86_RBX, env->regs[R_EBX]);
+ wreg(cpu_state->hvf->fd, HV_X86_RCX, env->regs[R_ECX]);
+ wreg(cpu_state->hvf->fd, HV_X86_RDX, env->regs[R_EDX]);
+ wreg(cpu_state->hvf->fd, HV_X86_RBP, env->regs[R_EBP]);
+ wreg(cpu_state->hvf->fd, HV_X86_RSP, env->regs[R_ESP]);
+ wreg(cpu_state->hvf->fd, HV_X86_RSI, env->regs[R_ESI]);
+ wreg(cpu_state->hvf->fd, HV_X86_RDI, env->regs[R_EDI]);
+ wreg(cpu_state->hvf->fd, HV_X86_R8, env->regs[8]);
+ wreg(cpu_state->hvf->fd, HV_X86_R9, env->regs[9]);
+ wreg(cpu_state->hvf->fd, HV_X86_R10, env->regs[10]);
+ wreg(cpu_state->hvf->fd, HV_X86_R11, env->regs[11]);
+ wreg(cpu_state->hvf->fd, HV_X86_R12, env->regs[12]);
+ wreg(cpu_state->hvf->fd, HV_X86_R13, env->regs[13]);
+ wreg(cpu_state->hvf->fd, HV_X86_R14, env->regs[14]);
+ wreg(cpu_state->hvf->fd, HV_X86_R15, env->regs[15]);
+ wreg(cpu_state->hvf->fd, HV_X86_RFLAGS, env->eflags);
+ wreg(cpu_state->hvf->fd, HV_X86_RIP, env->eip);

- wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
+ wreg(cpu_state->hvf->fd, HV_X86_XCR0, env->xcr0);

hvf_put_xsave(cpu_state);

@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)

hvf_put_msrs(cpu_state);

- wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
- wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
- wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
- wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]);
- wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]);
- wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
- wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
- wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR0, env->dr[0]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR1, env->dr[1]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR2, env->dr[2]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR3, env->dr[3]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR4, env->dr[4]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR5, env->dr[5]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR6, env->dr[6]);
+ wreg(cpu_state->hvf->fd, HV_X86_DR7, env->dr[7]);

return 0;
}
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
X86CPU *x86cpu = X86_CPU(cpu_state);
CPUX86State *env = &x86cpu->env;

- env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX);
- env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX);
- env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX);
- env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX);
- env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP);
- env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP);
- env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI);
- env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI);
- env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8);
- env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9);
- env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10);
- env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11);
- env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12);
- env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
- env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
- env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
+ env->regs[R_EAX] = rreg(cpu_state->hvf->fd, HV_X86_RAX);
+ env->regs[R_EBX] = rreg(cpu_state->hvf->fd, HV_X86_RBX);
+ env->regs[R_ECX] = rreg(cpu_state->hvf->fd, HV_X86_RCX);
+ env->regs[R_EDX] = rreg(cpu_state->hvf->fd, HV_X86_RDX);
+ env->regs[R_EBP] = rreg(cpu_state->hvf->fd, HV_X86_RBP);
+ env->regs[R_ESP] = rreg(cpu_state->hvf->fd, HV_X86_RSP);
+ env->regs[R_ESI] = rreg(cpu_state->hvf->fd, HV_X86_RSI);
+ env->regs[R_EDI] = rreg(cpu_state->hvf->fd, HV_X86_RDI);
+ env->regs[8] = rreg(cpu_state->hvf->fd, HV_X86_R8);
+ env->regs[9] = rreg(cpu_state->hvf->fd, HV_X86_R9);
+ env->regs[10] = rreg(cpu_state->hvf->fd, HV_X86_R10);
+ env->regs[11] = rreg(cpu_state->hvf->fd, HV_X86_R11);
+ env->regs[12] = rreg(cpu_state->hvf->fd, HV_X86_R12);
+ env->regs[13] = rreg(cpu_state->hvf->fd, HV_X86_R13);
+ env->regs[14] = rreg(cpu_state->hvf->fd, HV_X86_R14);
+ env->regs[15] = rreg(cpu_state->hvf->fd, HV_X86_R15);

- env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
- env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+ env->eip = rreg(cpu_state->hvf->fd, HV_X86_RIP);

hvf_get_xsave(cpu_state);
- env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
+ env->xcr0 = rreg(cpu_state->hvf->fd, HV_X86_XCR0);

hvf_get_segments(cpu_state);
hvf_get_msrs(cpu_state);

- env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
- env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
- env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
- env->dr[3] = rreg(cpu_state->hvf_fd, HV_X86_DR3);
- env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4);
- env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
- env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
- env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
+ env->dr[0] = rreg(cpu_state->hvf->fd, HV_X86_DR0);
+ env->dr[1] = rreg(cpu_state->hvf->fd, HV_X86_DR1);
+ env->dr[2] = rreg(cpu_state->hvf->fd, HV_X86_DR2);
+ env->dr[3] = rreg(cpu_state->hvf->fd, HV_X86_DR3);
+ env->dr[4] = rreg(cpu_state->hvf->fd, HV_X86_DR4);
+ env->dr[5] = rreg(cpu_state->hvf->fd, HV_X86_DR5);
+ env->dr[6] = rreg(cpu_state->hvf->fd, HV_X86_DR6);
+ env->dr[7] = rreg(cpu_state->hvf->fd, HV_X86_DR7);

x86_update_hflags(env);
return 0;
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
static void vmx_set_int_window_exiting(CPUState *cpu)
{
uint64_t val;
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
}

void vmx_clear_int_window_exiting(CPUState *cpu)
{
uint64_t val;
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
}

@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
uint64_t info = 0;
if (have_event) {
info = vector | intr_type | VMCS_INTR_VALID;
- uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
+ uint64_t reason = rvmcs(cpu_state->hvf->fd, VMCS_EXIT_REASON);
if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
vmx_clear_nmi_blocking(cpu_state);
}
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
info &= ~(1 << 12); /* clear undefined bit */
if (intr_type == VMCS_INTR_T_SWINTR ||
intr_type == VMCS_INTR_T_SWEXCEPTION) {
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
}

if (env->has_error_code) {
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_EXCEPTION_ERROR,
env->error_code);
/* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */
info |= VMCS_INTR_DEL_ERRCODE;
}
/*printf("reinject %lx err %d\n", info, err);*/
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
};
}

@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
} else {
vmx_set_nmi_window_exiting(cpu_state);
}
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
int line = cpu_get_pic_interrupt(&x86cpu->env);
cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
if (line >= 0) {
- wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
+ wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, line |
VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
}
}
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
X86CPU *cpu = X86_CPU(cpu_state);
CPUX86State *env = &cpu->env;

- env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);

if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
cpu_synchronize_state(cpu_state);
--
2.20.1
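The mechanical churn above is all one change: the raw Hypervisor.framework vcpu handle moves from a bare cpu->hvf_fd field to sit behind a per-vcpu struct, so every access becomes cpu->hvf->fd. A simplified sketch of the shape of that change (names abridged for illustration; the real QEMU structs carry more fields):

    /* Sketch only: abridged from the series, not the exact definitions. */
    typedef struct hvf_vcpu_state {
        int fd;                /* the hv_vcpuid_t handle, as before */
        /* room to grow: per-vcpu exit state, aarch64 support, ... */
    } hvf_vcpu_state;

    struct CPUState {
        hvf_vcpu_state *hvf;   /* was: int hvf_fd; allocated at vcpu init */
    };

The indirection costs one pointer chase per access but gives the hvf accelerator a natural home for future per-vcpu state, which the aarch64 work mentioned in the cover letter needs.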
From: Alexander Graf <agraf@csgraf.de>

The hooks we have that call us after reset, init and loadvm really all
just want to say "The reference of all register state is in the QEMU
vcpu struct, please push it".

We already have a working pushing mechanism though called cpu->vcpu_dirty,
so we can just reuse that for all of the above, syncing state properly the
next time we actually execute a vCPU.

This fixes PSCI resets on ARM, as they modify CPU state even after the
post init call has completed, but before we execute the vCPU again.

To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
accel/hvf/hvf-accel-ops.c | 27 +++++++--------------------
target/i386/hvf/x86hvf.c | 5 ++++-
2 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static void hvf_cpu_synchronize_state(CPUState *cpu)
}
}

-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
- run_on_cpu_data arg)
+static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
+ run_on_cpu_data arg)
{
- hvf_put_registers(cpu);
- cpu->vcpu_dirty = false;
+ /* QEMU state is the reference, push it to HVF now and on next entry */
+ cpu->vcpu_dirty = true;
}

static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
- run_on_cpu_data arg)
-{
- hvf_put_registers(cpu);
- cpu->vcpu_dirty = false;
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_cpu_synchronize_post_init(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
- run_on_cpu_data arg)
-{
- cpu->vcpu_dirty = true;
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
{
- run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
}

static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
X86CPU *cpu = X86_CPU(cpu_state);
CPUX86State *env = &cpu->env;

- env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+ if (!cpu_state->vcpu_dirty) {
+ /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */
+ env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+ }

if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
cpu_synchronize_state(cpu_state);
--
2.20.1
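The scheme the commit message describes fits in a few lines. This is an illustrative sketch of the dirty-flag pattern, not the QEMU code itself (Vcpu, synchronize_set_dirty and vcpu_run_once are made-up names):

    #include <stdbool.h>

    typedef struct Vcpu {
        bool vcpu_dirty;   /* QEMU-side register copy is authoritative */
    } Vcpu;

    /* post-reset, post-init and pre-loadvm hooks all collapse to this: */
    static void synchronize_set_dirty(Vcpu *cpu)
    {
        cpu->vcpu_dirty = true;
    }

    /* the run loop then pushes state lazily, right before guest entry: */
    static void vcpu_run_once(Vcpu *cpu)
    {
        if (cpu->vcpu_dirty) {
            /* put_registers(cpu) would copy env into the hypervisor */
            cpu->vcpu_dirty = false;
        }
        /* enter the guest here */
    }

Deferring the push until the next entry is what makes late mutations (like the PSCI reset case) safe: however many times state changes before the vCPU runs, it is pushed exactly once, and always the latest copy.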
Coverity notes that we don't check for dup2() failing. Add some
assertions so that if it does ever happen we get some indication.
(This is similar to how we handle other "don't expect this syscall to
fail" checks in this test code.)

Fixes: Coverity CID 1432346
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-2-peter.maydell@linaro.org
---
tests/qtest/bios-tables-test.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/bios-tables-test.c
+++ b/tests/qtest/bios-tables-test.c
@@ -XXX,XX +XXX,XX @@ static void test_acpi_asl(test_data *data)
exp_sdt->asl_file, sdt->asl_file);
int out = dup(STDOUT_FILENO);
int ret G_GNUC_UNUSED;
+ int dupret;

- dup2(STDERR_FILENO, STDOUT_FILENO);
+ g_assert(out >= 0);
+ dupret = dup2(STDERR_FILENO, STDOUT_FILENO);
+ g_assert(dupret >= 0);
ret = system(diff) ;
- dup2(out, STDOUT_FILENO);
+ dupret = dup2(out, STDOUT_FILENO);
+ g_assert(dupret >= 0);
close(out);
g_free(diff);
--
2.20.1
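The pattern being hardened is the classic save/redirect/restore dance around system(). A standalone sketch under the same assumptions (GLib for the asserts; this is not the QEMU test code itself):

    #include <glib.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int out = dup(STDOUT_FILENO);                      /* save stdout */
        g_assert(out >= 0);
        g_assert(dup2(STDERR_FILENO, STDOUT_FILENO) >= 0); /* redirect */
        g_assert_cmpint(system("echo redirected"), >=, 0); /* goes to stderr */
        g_assert(dup2(out, STDOUT_FILENO) >= 0);           /* restore */
        close(out);
        return 0;
    }

If either dup2() failed silently, the test's later output would go to the wrong stream, so asserting here turns a confusing downstream failure into an immediate, pointed one.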
The e1000e_send_verify() test calls qemu_recv() but doesn't
check that the call succeeded, which annoys Coverity. Add
an explicit test check for the length of the data.

(This is a test check, not a "we assume this syscall always
succeeds", so we use g_assert_cmpint() rather than g_assert().)

Fixes: Coverity CID 1432324
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-3-peter.maydell@linaro.org
---
tests/qtest/e1000e-test.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/e1000e-test.c b/tests/qtest/e1000e-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/e1000e-test.c
+++ b/tests/qtest/e1000e-test.c
@@ -XXX,XX +XXX,XX @@ static void e1000e_send_verify(QE1000E *d, int *test_sockets, QGuestAllocator *a
/* Check data sent to the backend */
ret = qemu_recv(test_sockets[0], &recv_len, sizeof(recv_len), 0);
g_assert_cmpint(ret, == , sizeof(recv_len));
- qemu_recv(test_sockets[0], buffer, 64, 0);
+ ret = qemu_recv(test_sockets[0], buffer, 64, 0);
+ g_assert_cmpint(ret, >=, 5);
g_assert_cmpstr(buffer, == , "TEST");

/* Free test data buffer */
--
2.20.1
Coverity notices that the checks against mkstemp() failing in
create_qcow2_with_mbr() are wrong: mkstemp returns -1 on failure but
the check is just "g_assert(fd)". Fix to use "g_assert(fd >= 0)",
matching the correct check in create_test_img().

Fixes: Coverity CID 1432274
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-4-peter.maydell@linaro.org
---
tests/qtest/hd-geo-test.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/hd-geo-test.c b/tests/qtest/hd-geo-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/hd-geo-test.c
+++ b/tests/qtest/hd-geo-test.c
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
}

fd = mkstemp(raw_path);
- g_assert(fd);
+ g_assert(fd >= 0);
close(fd);

fd = open(raw_path, O_WRONLY);
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
close(fd);

fd = mkstemp(qcow2_path);
- g_assert(fd);
+ g_assert(fd >= 0);
close(fd);

qemu_img_path = getenv("QTEST_QEMU_IMG");
--
2.20.1
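Why "g_assert(fd)" was wrong: mkstemp() reports failure by returning -1, which is nonzero and therefore sails past the assert, while fd 0 (the only value that would trip it) is a perfectly valid descriptor. A minimal demonstration of the corrected check (illustrative only; the path template is hypothetical, not from the test):

    #include <glib.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char path[] = "/tmp/hd-geo-demo.XXXXXX";  /* hypothetical template */
        int fd = mkstemp(path);
        /* g_assert(fd) would only fire on fd == 0; this catches -1: */
        g_assert(fd >= 0);
        close(fd);
        unlink(path);
        return 0;
    }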
Coverity points out that we calculate a 64-bit value using 32-bit
arithmetic; add the cast to force the multiply to be done as 64-bits.
(The overflow will never happen with the current test data.)

Fixes: Coverity CID 1432320
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-5-peter.maydell@linaro.org
---
tests/qtest/pflash-cfi02-test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qtest/pflash-cfi02-test.c b/tests/qtest/pflash-cfi02-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/pflash-cfi02-test.c
+++ b/tests/qtest/pflash-cfi02-test.c
@@ -XXX,XX +XXX,XX @@ static void test_geometry(const void *opaque)

for (int region = 0; region < nb_erase_regions; ++region) {
for (uint32_t i = 0; i < c->nb_blocs[region]; ++i) {
- uint64_t byte_addr = i * c->sector_len[region];
+ uint64_t byte_addr = (uint64_t)i * c->sector_len[region];
g_assert_cmphex(flash_read(c, byte_addr), ==, bank_mask(c));
}
}
--
2.20.1
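The bug pattern: with two 32-bit operands the multiply itself is done in 32 bits and wraps before the result is ever widened to uint64_t; casting one operand first promotes the whole multiplication to 64 bits. A small self-contained demonstration (the values are made up to force the wrap):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t i = 0x10000, sector_len = 0x10000;
        uint64_t wrong = i * sector_len;            /* 32-bit multiply wraps to 0 */
        uint64_t right = (uint64_t)i * sector_len;  /* full 64-bit product */
        printf("wrong=%llu right=%llu\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }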
Coverity points out that in tpm_test_swtpm_migration_test() we
assume that src_tpm_addr and dst_tpm_addr are non-NULL (we
pass them to tpm_util_migration_start_qemu() which will
unconditionally dereference them) but then later explicitly
check them for NULL. Remove the pointless checks.

Fixes: Coverity CID 1432367, 1432359
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-6-peter.maydell@linaro.org
---
tests/qtest/tpm-tests.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tests/qtest/tpm-tests.c b/tests/qtest/tpm-tests.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/tpm-tests.c
+++ b/tests/qtest/tpm-tests.c
@@ -XXX,XX +XXX,XX @@ void tpm_test_swtpm_migration_test(const char *src_tpm_path,
qtest_quit(src_qemu);

tpm_util_swtpm_kill(dst_tpm_pid);
- if (dst_tpm_addr) {
- g_unlink(dst_tpm_addr->u.q_unix.path);
- qapi_free_SocketAddress(dst_tpm_addr);
- }
+ g_unlink(dst_tpm_addr->u.q_unix.path);
+ qapi_free_SocketAddress(dst_tpm_addr);

tpm_util_swtpm_kill(src_tpm_pid);
- if (src_tpm_addr) {
- g_unlink(src_tpm_addr->u.q_unix.path);
- qapi_free_SocketAddress(src_tpm_addr);
- }
+ g_unlink(src_tpm_addr->u.q_unix.path);
+ qapi_free_SocketAddress(src_tpm_addr);
}
--
2.20.1
Coverity complains that we don't check for failures from dup()
and mkstemp(); add asserts that these syscalls succeeded.

Fixes: Coverity CID 1432516, 1432574
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210525134458.6675-7-peter.maydell@linaro.org
---
tests/unit/test-vmstate.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/unit/test-vmstate.c b/tests/unit/test-vmstate.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/unit/test-vmstate.c
+++ b/tests/unit/test-vmstate.c
@@ -XXX,XX +XXX,XX @@ static int temp_fd;
/* Duplicate temp_fd and seek to the beginning of the file */
static QEMUFile *open_test_file(bool write)
{
- int fd = dup(temp_fd);
+ int fd;
QIOChannel *ioc;
QEMUFile *f;

+ fd = dup(temp_fd);
+ g_assert(fd >= 0);
lseek(fd, 0, SEEK_SET);
if (write) {
g_assert_cmpint(ftruncate(fd, 0), ==, 0);
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
g_autofree char *temp_file = g_strdup_printf("%s/vmst.test.XXXXXX",
g_get_tmp_dir());
temp_fd = mkstemp(temp_file);
+ g_assert(temp_fd >= 0);

module_call_init(MODULE_INIT_QOM);
--
2.20.1